

ECE Ph.D. Qualifying Exam

Automatic Control (AC)

Question 3: Optimization

August 2017




1. (20 pts) Consider the following linear program,

minimize $ 2x_{1} + x_{2} $,

subject to $ x_{1} + 3x_{2} \geq 6 $

$ 2x_{1} + x_{2} \geq 4 $

$ x_{1} + x_{2} \leq 3 $

$ x_{1} \geq 0 $, $ x_{2} \geq 0 $.

Convert the above linear program into standard form and find an initial basic feasible solution for the program in standard form.

Solution of question 1:
The problem is equivalent to the following standard form, obtained by introducing surplus variables $ x_3, x_4 $ and a slack variable $ x_5 $:
Minimize $ 2x_1+x_2 $
Subject to $ \begin{align*} &x_1+3x_2-x_3=6\\ &2x_1+x_2-x_4=4\\ &x_1+x_2+x_5=3\\ &x_1, x_2, x_3, x_4, x_5 \geq 0 \end{align*} $
so that $ A= \begin{bmatrix} 1 & 3 & -1 & 0 & 0 \\ 2 & 1 & 0 & -1 & 0 \\ 1 & 1 & 0 & 0 & 1 \end{bmatrix} $, $ b=\begin{bmatrix} 6 \\ 4 \\ 3 \end{bmatrix} $.
We take as basis the columns corresponding to $ x_1, x_2, x_5 $: $ B= \begin{bmatrix} 1 & 3 & 0 \\ 2 & 1 & 0 \\ 1 & 1 & 1 \end{bmatrix} \Rightarrow B\begin{bmatrix} x_1 \\ x_2 \\ x_5 \end{bmatrix} =b \Rightarrow \begin{bmatrix} x_1 \\ x_2 \\ x_5 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 0 \\ 2 & 1 & 0 \\ 1 & 1 & 1 \end{bmatrix}^{-1} \begin{bmatrix} 6\\ 4\\ 3 \end{bmatrix} = \begin{bmatrix} \dfrac{6}{5} \\ \dfrac{8}{5} \\ \dfrac{1}{5} \end{bmatrix} $
Hence $ x^T=[\dfrac{6}{5}, \dfrac{8}{5}, 0, 0, \dfrac{1}{5}] $ is an initial basic feasible solution of the standard-form program.
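A quick numerical sanity check of this basic feasible solution (a minimal sketch, assuming NumPy is available; the array and variable names are illustrative):

import numpy as np

# Standard-form data: minimize c^T x subject to A x = b, x >= 0
A = np.array([[1., 3., -1.,  0., 0.],
              [2., 1.,  0., -1., 0.],
              [1., 1.,  0.,  0., 1.]])
b = np.array([6., 4., 3.])

# Basic feasible solution with basis columns (x1, x2, x5)
basis = [0, 1, 4]
x_B = np.linalg.solve(A[:, basis], b)   # -> [1.2, 1.6, 0.2]
x0 = np.zeros(5)
x0[basis] = x_B
print(x0)                               # [1.2 1.6 0.  0.  0.2], all components >= 0
print(np.allclose(A @ x0, b))           # True: the equality constraints are satisfied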



2. (20 pts)

  • (15 pts) Find the largest range of the step-size, $ \alpha $, for which the fixed step gradient descent algorithm is guaranteed to converge to the minimizer of the quadratic function
$ f = \frac{1}{2} x^{T}Qx - b^{T}x $

starting from an arbitrary initial condition $ x^{(0)} \in \mathbb{R}^{n} $, where $ x \in \mathbb{R}^{n} $, and $ Q = Q^{T} > 0 $.

  • (5 pts) Find the largest range of the step size, $ \alpha $, for which the fixed step gradient descent algorithm is guaranteed to converge to the minimizer of the quadratic function
$ f= 6x_{1}^{2}+2x_{2}^{2}-5 $

starting from an arbitrary initial condition $ x^{(0)} \in \mathbb{R}^{n} $.

Solution of question 2:
a) From the optimization textbook (Stanislaw Żak), Lemma 8.3:
For the fixed step gradient descent algorithm on this quadratic, $ \alpha $ should be in the range $ (0,\dfrac{2}{\lambda_{\max}(Q)}) $, where $ \lambda_{\max}(Q) $ is the largest eigenvalue of $ Q $.
b) $ Q=\begin{bmatrix} 12 & 0 \\ 0 & 4 \end{bmatrix} $,
so that $ \lambda_{\max}(Q)=12 \Rightarrow \alpha \in (0, \dfrac{1}{6}) $.
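The bound can be checked numerically. Below is a minimal sketch (assuming NumPy; the helper gradient_descent, the starting point, and the step sizes 0.16 and 0.17, chosen just inside and just outside the range, are all illustrative):

import numpy as np

Q = np.array([[12., 0.], [0., 4.]])   # Hessian of f = 6*x1^2 + 2*x2^2 - 5

def gradient_descent(alpha, x0, steps=200):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x - alpha * (Q @ x)       # gradient of f is Q x (no linear term here)
    return x

x_start = [5.0, -3.0]
print(gradient_descent(0.16, x_start))   # alpha inside (0, 1/6): iterates approach [0, 0]
print(gradient_descent(0.17, x_start))   # alpha outside the range: iterates blow up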



3. (20 pts) Is the function

$ f(x_{1}, x_{2})=\frac{1}{(x_{1}-2)^{2} + (x_{2}+1)^{2}+3} $

locally convex, concave, or neither in the neighborhood of the point $ [2 -1]^{T} $? Justify your answer by giving all the details of your argument.

Solution of question 3:
Let $ t_1=x_1-2 $, $ t_2=x_2+1 $,
so that the local convexity or concavity of $ f(x_1,x_2)=\dfrac{1}{(x_1-2)^2+(x_2+1)^2+3} $ near $ x_1=2, x_2=-1 $ is the same as that of $ g(t_1,t_2)=\dfrac{1}{t_1^2+t_2^2+3} $ near $ t_1=0, t_2=0 $.
$ D^2g(t)=\dfrac{1}{(t_1^2+t_2^2+3)^3}\begin{bmatrix} 6t_1^2-2t_2^2-6 & 8t_1t_2 \\ 8t_1t_2 & 6t_2^2-2t_1^2-6 \end{bmatrix} $, which at $ t_1=t_2=0 $ equals $ \dfrac{1}{27}\begin{bmatrix} -6 & 0 \\ 0 & -6 \end{bmatrix} $.
It is easy to see that $ \begin{bmatrix} -6 & 0 \\ 0 & -6 \end{bmatrix} $ is negative definite.
Hence the function is strictly locally concave in the neighborhood of $ [2, -1]^T $.
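The Hessian at the point can also be checked symbolically; a minimal sketch, assuming SymPy is available:

import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = 1 / ((x1 - 2)**2 + (x2 + 1)**2 + 3)

# Hessian of f evaluated at the point of interest
H = sp.hessian(f, (x1, x2)).subs({x1: 2, x2: -1})
print(H)                          # Matrix([[-2/9, 0], [0, -2/9]])
print(H.is_negative_definite)     # True -> f is strictly locally concave there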


4. (20 pts) Solve the following optimization problem:

optimize $ x_{1}x_{2} $

subject to $ x_{1}+x_{2}+x_{3}=1 $

$ x_{1}+x_{2}-x_{3}=0 $

Solution of question 4:
We form the Lagrangian:
$ l(x,\lambda)=x_1x_2+\lambda_1(x_1+x_2+x_3-1)+\lambda_2(x_1+x_2-x_3) $
$ \begin{cases} \nabla_x l=\begin{bmatrix} x_2+\lambda_1+\lambda_2 \\ x_1+\lambda_1+\lambda_2 \\ \lambda_1-\lambda_2\end{bmatrix}=\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \\ x_1+x_2+x_3-1=0 \\ x_1+x_2-x_3=0 \end{cases} $
The third component gives $ \lambda_1=\lambda_2 $, and the first two give $ x_1=x_2=-2\lambda_1 $. Adding and subtracting the two constraints gives $ x_3=\dfrac{1}{2} $ and $ x_1+x_2=\dfrac{1}{2} $, so $ x_1=x_2=\dfrac{1}{4} $ and $ \lambda_1=\lambda_2=-\dfrac{1}{8} $.
The only critical point is therefore $ x^*=[\dfrac{1}{4}, \dfrac{1}{4}, \dfrac{1}{2}]^T $, with $ x_1x_2=\dfrac{1}{16} $. On the constraint set $ x_1+x_2=\dfrac{1}{2} $ the product $ x_1x_2=x_1(\dfrac{1}{2}-x_1) $ is unbounded below, so $ x^* $ is the maximizer.
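The first-order conditions can be solved symbolically as a cross-check; a minimal sketch, assuming SymPy (the symbol names are illustrative):

import sympy as sp

x1, x2, x3, l1, l2 = sp.symbols('x1 x2 x3 lambda1 lambda2', real=True)
L = x1*x2 + l1*(x1 + x2 + x3 - 1) + l2*(x1 + x2 - x3)

# Stationarity in x plus the two equality constraints
eqs = [sp.diff(L, v) for v in (x1, x2, x3)] + [x1 + x2 + x3 - 1, x1 + x2 - x3]
print(sp.solve(eqs, (x1, x2, x3, l1, l2), dict=True))
# [{x1: 1/4, x2: 1/4, x3: 1/2, lambda1: -1/8, lambda2: -1/8}]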


5. (20 pts) Solve the following optimization problem:

maximize $ 14x_{1}-x_{1}^{2}+6x_{2}-x_{2}^{2}+7 $

subject to $ x_{1}+x_{2} \leq 2 $

$ x_{1}+2x_{2} \leq 3 $

Solution of question 5:
The problem is equivalent to:
Minimize $ (x_1)^2+(x_2)^2-14x_1-6x_2-7 $
Subject to $ x_1+x_2-2\leq 0 $ and $ x_1+2x_2-3\leq 0 $
Form the Lagrangian function
$ l(x,\mu)=(x_1)^2+(x_2)^2-14x_1-6x_2-7+\mu_1(x_1+x_2-2)+\mu_2(x_1+2x_2-3) $
The KKT conditions take the form
$ \nabla_xl(x,\mu)=\begin{bmatrix}2x_1-14+\mu_1+\mu_2 \\ 2x_2-6+\mu_1+2\mu_2\end{bmatrix}=\begin{bmatrix}0 \\ 0\end{bmatrix} $
$ \mu_1(x_1+x_2-2)=0 $
$ \mu_2(x_1+2x_2-3)=0 $
$ \mu_1\geq 0 $, $ \mu_2\geq 0 $
Checking the four combinations of active constraints gives
$ \Rightarrow \begin{cases} \mu_1=0, & \mu_2=0, & x_1=7, & x_2=3 & \text{(infeasible)} \\ \mu_1=0, & \mu_2=4, & x_1=5, & x_2=-1 & \text{(infeasible)} \\ \mu_1=8, & \mu_2=0, & x_1=3, & x_2=-1, & f(x)=-33 \\ \mu_1=20, & \mu_2=-8, & x_1=1, & x_2=1 & (\mu_2<0, \text{ rejected}) \end{cases} $
Since the minimization objective is strictly convex and the constraints are linear, the remaining KKT point is the global minimizer. Hence $ x^T=[3, -1] $ is the maximizer of the original function, with maximum value $ 33 $.
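A numerical cross-check with a general-purpose solver; this is a minimal sketch assuming SciPy is available (method='SLSQP' and the starting point are illustrative choices):

import numpy as np
from scipy.optimize import minimize

# Minimize the negated objective subject to the two linear inequality constraints
obj = lambda x: x[0]**2 + x[1]**2 - 14*x[0] - 6*x[1] - 7
cons = ({'type': 'ineq', 'fun': lambda x: 2 - x[0] - x[1]},      # x1 + x2 <= 2
        {'type': 'ineq', 'fun': lambda x: 3 - x[0] - 2*x[1]})    # x1 + 2*x2 <= 3

res = minimize(obj, x0=[0.0, 0.0], method='SLSQP', constraints=cons)
print(res.x)       # approximately [3, -1]
print(-res.fun)    # approximately 33, the maximum of the original objective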




Back to ECE QE page
