Prof Dr Jörg Fliege Semester 2, 2022/2023
School of Mathematics
University of Southampton
MATH6184
Exercise Sheet 3 Sample Solutions
1. Both algorithms stop after one step at the vertex $x = (3, 5)^T$ of the set of feasible points. At this point, the reduced gradient $g$ is equal to the zero vector: $g = (0, 0)^T$. (Not a very interesting exercise, as I now realise. Please vary the starting point randomly, as in the sketch below, and observe what happens then. Computing two or three iterations is enough.)
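A minimal Matlab sketch of this experiment, assuming the feasible box $0 \leq x_1 \leq 3$, $0 \leq x_2 \leq 5$ suggested by the vertex $(3, 5)^T$; the actual feasible set and the solver routines come from the provided files, so adjust the bounds accordingly.

\begin{verbatim}
% Sketch only: draw a random starting point in the assumed box
% [0,3] x [0,5]; the bounds are an assumption read off the
% vertex (3,5)^T, not taken from the exercise files.
rng('shuffle');                % reseed so each run differs
x0 = [3; 5] .* rand(2, 1);     % componentwise uniform on the box
disp(x0')                      % pass x0 to the provided routines
\end{verbatim}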
2. (a)
\begin{align*}
F_r(x_1, x_2) ={} & (x_1 - 5)^2 - 2x_1 x_2 + 10(x_2 - 10)^2 \\
& + r\max\{0, -x_1\}^2 + r\max\{0, x_1 - 3\}^2 + r\max\{0, -x_2\}^2 + r\max\{0, x_2 - 5\}^2
\end{align*}
(A Matlab transcription is sketched at the end of this exercise.)
(b) For $x_1 > 3$ and $x_2 > 5$, we have
\[
F_r(x_1, x_2) = (x_1 - 5)^2 - 2x_1 x_2 + 10(x_2 - 10)^2 + r(x_1 - 3)^2 + r(x_2 - 5)^2,
\]
and computing the gradient
\[
\nabla F_r(x_1, x_2) = \begin{pmatrix} 2(x_1 - 5) - 2x_2 + 2r(x_1 - 3) \\ -2x_1 + 20(x_2 - 10) + 2r(x_2 - 5) \end{pmatrix},
\]
we see that the necessary condition $\nabla F_r(x_1, x_2) = (0, 0)^T$ is a linear system of equations in the two unknowns $x_1, x_2$. (This system contains $r$ as a parameter.) Rearranging this system, we get
\begin{align*}
x_1 &= \frac{1}{1+r}\,x_2 + \frac{5 + 3r}{1+r} \longrightarrow 3, \\
x_2 &= \frac{1}{10+r}\,x_1 + \frac{100 + 5r}{10+r} \longrightarrow 5,
\end{align*}
which answers part (c): as $r \to +\infty$, the coefficients of $x_2$ and $x_1$ tend to $0$, while the constant terms tend to $3$ and $5$, respectively. Solving for $x_1, x_2$ leads to
\begin{align*}
x_1 &= \frac{100 + 5r + (5 + 3r)(10 + r)}{(1+r)(10+r) - 1}, \\
x_2 &= \frac{1}{10+r} \cdot \frac{100 + 5r + (5 + 3r)(10 + r)}{(1+r)(10+r) - 1} + \frac{100 + 5r}{10+r}.
\end{align*}
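For numerical experiments, the penalty function from (a) transcribes directly into Matlab. The snippet below is a sketch only: the handle name Fr is mine, not taken from the provided files. The loop then evaluates the closed-form stationary point from (b) for growing $r$; it should exhibit the convergence to $(3, 5)$ claimed in (c).

\begin{verbatim}
% Penalty function from (a), with r as an explicit parameter.
Fr = @(x, r) (x(1) - 5)^2 - 2*x(1)*x(2) + 10*(x(2) - 10)^2 ...
    + r*max(0, -x(1))^2 + r*max(0, x(1) - 3)^2 ...
    + r*max(0, -x(2))^2 + r*max(0, x(2) - 5)^2;

% Closed-form stationary point from (b), evaluated for growing r;
% the points should approach the vertex (3, 5).
for r = [1 10 100 1000 1e6]
    x1 = (100 + 5*r + (5 + 3*r)*(10 + r)) / ((1 + r)*(10 + r) - 1);
    x2 = (x1 + 100 + 5*r) / (10 + r);
    fprintf('r = %8g:  x = (%.4f, %.4f)\n', r, x1, x2);
end
\end{verbatim}

For $r = 100$, for instance, this prints roughly $x = (3.07, 5.48)$, already close to the vertex.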
3. (a)
\begin{align*}
F_r(x_1, x_2) ={} & 2(x_1 - 3)^2 - x_1 x_2 + (x_2 - 5)^2 + r\max\{0, x_1^2 + x_2^2 - 1\}^2 \\
& + r\max\{0, -x_1\}^2 + r\max\{0, x_1 - 2\}^2 + r\max\{0, -x_2\}^2
\end{align*}
(A Matlab transcription is sketched after part (d).)
(b) The function $F_r$ is convex for $r \geq 0$, because it is a sum of five convex functions: the original objective is a quadratic with positive definite Hessian $\begin{pmatrix} 4 & -1 \\ -1 & 2 \end{pmatrix}$, and each penalty term $\max\{0, g(x)\}^2$ is convex, since $g$ is convex and $t \mapsto \max\{0, t\}^2$ is convex and nondecreasing, as some elementary calculus shows.
(c) The unconstrained minimum of the original objective is $(34/7, 52/7)^T \approx (4.86, 7.43)^T$, which is not feasible: it violates both $x_1 \leq 2$ and $x_1^2 + x_2^2 \leq 1$. The penalty terms added to the original objective ``push'' the minimum towards the set of feasible points as $r \to +\infty$, but for every finite $r$ the minimum of $F_r$ stays outside the feasible set.
(d) Easily done with the Matlab files provided.
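A minimal sketch of part (d), assuming fminunc from the Optimization Toolbox is available (fminsearch is a derivative-free substitute); the provided Matlab files may of course differ in detail. The penalty function from (a) is transcribed as an anonymous function, and each minimisation warm-starts from the previous minimiser, as is usual in penalty methods.

\begin{verbatim}
% Penalty function from (a), with r as an explicit parameter.
Fr = @(x, r) 2*(x(1) - 3)^2 - x(1)*x(2) + (x(2) - 5)^2 ...
    + r*max(0, x(1)^2 + x(2)^2 - 1)^2 ...
    + r*max(0, -x(1))^2 + r*max(0, x(1) - 2)^2 ...
    + r*max(0, -x(2))^2;

% Classical penalty loop: increase r and warm-start each solve
% from the previous minimiser; the iterates approach the set of
% feasible points from outside as r grows.
x = [0; 0];
for r = [1 10 100 1000 1e4]
    x = fminunc(@(y) Fr(y, r), x, ...
                optimoptions('fminunc', 'Display', 'off'));
    fprintf('r = %6g:  x = (%.4f, %.4f)\n', r, x(1), x(2));
end
\end{verbatim}

As predicted in (c), the iterates converge towards the feasible set but remain (slightly) infeasible for every finite $r$.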