Multivariable Optimization using Lagrange Multipliers Example 1
We can use Lagrange multipliers to solve problems where we’re asked to find the maximum or minimum of an objective function subject to equality constraints.
Example: Find the extreme values of \(f(x,y,z) = {x^2} + {y^2} + {z^2}\) subject to both constraints: \(x - y = 1\) and \({y^2} - {z^2} = 1\).
For Lagrange multipliers involving 2 constraint equations, we can use the following equation (memorize):
\[\nabla f = \lambda \nabla g + \mu \nabla h\]
where \(\lambda \) and \(\mu \) are unknown multipliers. Here the constraint functions are \(g(x,y,z) = x - y\) and \(h(x,y,z) = {y^2} - {z^2}\).
We can also set the components of the gradients equal individually, which gives three equations, one for each of the variables x, y, and z.
\[\begin{array}{l}\frac{{\partial f}}{{\partial x}} = \lambda \frac{{\partial g}}{{\partial x}} + \mu \frac{{\partial h}}{{\partial x}}\\\frac{{\partial f}}{{\partial y}} = \lambda \frac{{\partial g}}{{\partial y}} + \mu \frac{{\partial h}}{{\partial y}}\\\frac{{\partial f}}{{\partial z}} = \lambda \frac{{\partial g}}{{\partial z}} + \mu \frac{{\partial h}}{{\partial z}}\end{array}\]
To complete these equations, we need to find all of the partial derivatives.
I will arrange them into a matrix for neatness:
\[\begin{array}{*{20}{c}}{\frac{{\partial f}}{{\partial x}} = 2x}&{\frac{{\partial g}}{{\partial x}} = 1}&{\frac{{\partial h}}{{\partial x}} = 0}\\{\frac{{\partial f}}{{\partial y}} = 2y}&{\frac{{\partial g}}{{\partial y}} =  - 1}&{\frac{{\partial h}}{{\partial y}} = 2y}\\{\frac{{\partial f}}{{\partial z}} = 2z}&{\frac{{\partial g}}{{\partial z}} = 0}&{\frac{{\partial h}}{{\partial z}} =  - 2z}\end{array}\]
Plug these partial derivatives into the three equations.
\[\begin{array}{l}2x = \lambda \\2y =  - \lambda  + 2\mu y\\2z =  - 2\mu z\end{array}\]
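As a sanity check, these partial derivatives (and hence the three equations above) can be reproduced symbolically. This is just a sketch using sympy; the variable names are my own:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

f = x**2 + y**2 + z**2   # objective function
g = x - y                # first constraint function (g = 1)
h = y**2 - z**2          # second constraint function (h = 1)

# Gradient = tuple of partial derivatives with respect to x, y, z
grad = lambda F: tuple(sp.diff(F, v) for v in (x, y, z))

print(grad(f))  # (2*x, 2*y, 2*z)
print(grad(g))  # (1, -1, 0)
print(grad(h))  # (0, 2*y, -2*z)
```

Matching components of \(\nabla f = \lambda \nabla g + \mu \nabla h\) then gives exactly the system above.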
The equations simplify nicely.
From the third equation, \(2z = -2\mu z\) rearranges to \(2z(1 + \mu) = 0\), so either z = 0 or \(\mu = -1\).
There is often more than one order in which you can work through the system of equations, and some branches lead to imaginary numbers.
That’s not valid, and you’ll have to go back and re-solve the equations along the other branch to get an answer.
So let’s first try \(\mu = -1\). The second equation becomes \(2y = -\lambda - 2y\), so \(y = -\lambda/4\), and the first equation gives \(x = \lambda/2\). Plugging these into \(x - y = 1\) gives \(\lambda = 4/3\), so \(x = 2/3\) and \(y = -1/3\).
BUT, when you plug \(y = -1/3\) into \({y^2} - {z^2} = 1\), you get \({z^2} = 1/9 - 1 = -8/9 < 0\), so z would be imaginary!
Imaginary numbers are normally not within the scope of this kind of calculus course, so this branch is invalid and you need to go back and re-solve the equations!
So choose z = 0 instead at the earlier branching step.
When z = 0, plug that into the constraints to find the values of the other variables.
\[\begin{array}{l}{y^2} – {(0)^2} = 1\\y = \pm 1\end{array}\]
Then plug these y values back into the other constraint, \(x - y = 1\), giving two cases:
\[\begin{array}{l}y = 1:\quad x - (1) = 1 \Rightarrow x = 2\\y =  - 1:\quad x - ( - 1) = 1 \Rightarrow x = 0\end{array}\]
So the candidate extreme points are (2, 1, 0) and (0, -1, 0).
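If you want to check the whole case analysis at once, a computer algebra system can solve the full system of five equations. This is a sketch using sympy (the symbol names are my own); filtering out non-real solutions discards the imaginary \(\mu = -1\) branch automatically:

```python
import sympy as sp

x, y, z, lam, mu = sp.symbols('x y z lam mu', real=True)

# The three gradient equations plus the two original constraints
eqs = [
    sp.Eq(2*x, lam),            # f_x = lam*g_x + mu*h_x
    sp.Eq(2*y, -lam + 2*mu*y),  # f_y = lam*g_y + mu*h_y
    sp.Eq(2*z, -2*mu*z),        # f_z = lam*g_z + mu*h_z
    sp.Eq(x - y, 1),            # first constraint
    sp.Eq(y**2 - z**2, 1),      # second constraint
]

sols = sp.solve(eqs, [x, y, z, lam, mu], dict=True)
# Keep only the real solutions (the mu = -1 branch gives imaginary z)
real_sols = [s for s in sols if all(v.is_real for v in s.values())]
points = sorted((s[x], s[y], s[z]) for s in real_sols)
print(points)
```

The two surviving points are exactly the candidates found by hand.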
What are these extreme points? Plug them back into the original objective function, \[f(x,y,z) = {x^2} + {y^2} + {z^2}\] and compare the values: \(f(2,1,0) = 5\) and \(f(0,-1,0) = 1\).
The minimum of f subject to both constraints is \(f(0,-1,0) = 1\).
Be careful with the larger value: the constraint curve here is unbounded (y can be arbitrarily large), and f grows without bound along it, so \(f(2,1,0) = 5\) is not a global maximum; it is only a local extreme value at the other critical point.
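As a purely numerical sanity check, a constrained optimizer can be pointed at the same problem. This is a sketch using scipy.optimize with SLSQP; the starting points are my own choice, picked near each critical point:

```python
from scipy.optimize import minimize

# Objective and the two equality constraints written as fun(p) = 0
f = lambda p: p[0]**2 + p[1]**2 + p[2]**2
constraints = [
    {'type': 'eq', 'fun': lambda p: p[0] - p[1] - 1},        # x - y = 1
    {'type': 'eq', 'fun': lambda p: p[1]**2 - p[2]**2 - 1},  # y^2 - z^2 = 1
]

for start in ([0.2, -0.8, 0.1], [2.2, 1.2, 0.1]):
    res = minimize(f, start, method='SLSQP', constraints=constraints)
    print(res.x, res.fun)
```

Started near (0, -1, 0), the solver should return the constrained minimum value 1; started near (2, 1, 0), it should settle at the other critical point with value 5.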
Try the next example of using Lagrange multipliers for multivariable function optimization.