
Pontryagin's maximum principle

Principle in optimal control theory for finding the best way to steer a dynamical system from one state to another.

Pontryagin's maximum principle is used in optimal control theory to find the best possible control for taking a dynamical system from one state to another, especially in the presence of constraints on the state or the input controls. It states that any optimal control, together with the optimal state trajectory, must solve the so-called Hamiltonian system, which is a two-point boundary value problem, plus a maximum condition of the control Hamiltonian. These necessary conditions become sufficient under certain convexity conditions on the objective and constraint functions.

The maximum principle was formulated in 1956 by the Russian mathematician Lev Pontryagin and his students, and its initial application was to the maximization of the terminal speed of a rocket. The result was derived using ideas from the classical calculus of variations. After a slight perturbation of the optimal control, one considers the first-order term of a Taylor expansion with respect to the perturbation; sending the perturbation to zero leads to a variational inequality from which the maximum principle follows.

The maximum principle is widely regarded as a milestone in optimal control theory. Its significance lies in the fact that maximizing the Hamiltonian is much easier than solving the original infinite-dimensional control problem: rather than maximizing over a function space, the problem is reduced to a pointwise optimization. A similar logic leads to Bellman's principle of optimality, a related approach to optimal control problems which states that the optimal trajectory remains optimal at intermediate points in time. The resulting Hamilton–Jacobi–Bellman equation provides a necessary and sufficient condition for an optimum and admits a straightforward extension to stochastic optimal control problems, whereas the maximum principle does not. However, the Hamilton–Jacobi–Bellman equation needs to hold over the entire state space to be valid, so Pontryagin's maximum principle is potentially more computationally efficient in that the conditions it specifies only need to hold over a particular trajectory.

Notation

For a set $\mathcal{U}$ and functions

$\Psi : \mathbb{R}^n \to \mathbb{R}$,
$H : \mathbb{R}^n \times \mathcal{U} \times \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}$,
$L : \mathbb{R}^n \times \mathcal{U} \to \mathbb{R}$,
$f : \mathbb{R}^n \times \mathcal{U} \to \mathbb{R}^n$,

we use the following notation:

$\Psi_T(x(T)) = \left.\dfrac{\partial \Psi(x)}{\partial T}\right|_{x=x(T)}$,
$\Psi_x(x(T)) = \begin{bmatrix} \left.\dfrac{\partial \Psi(x)}{\partial x_1}\right|_{x=x(T)} & \cdots & \left.\dfrac{\partial \Psi(x)}{\partial x_n}\right|_{x=x(T)} \end{bmatrix}$,
$H_x(x^*, u^*, \lambda^*, t) = \begin{bmatrix} \left.\dfrac{\partial H}{\partial x_1}\right|_{x=x^*, u=u^*, \lambda=\lambda^*} & \cdots & \left.\dfrac{\partial H}{\partial x_n}\right|_{x=x^*, u=u^*, \lambda=\lambda^*} \end{bmatrix}$,
$L_x(x^*, u^*) = \begin{bmatrix} \left.\dfrac{\partial L}{\partial x_1}\right|_{x=x^*, u=u^*} & \cdots & \left.\dfrac{\partial L}{\partial x_n}\right|_{x=x^*, u=u^*} \end{bmatrix}$,
$f_x(x^*, u^*) = \begin{bmatrix} \left.\dfrac{\partial f_1}{\partial x_1}\right|_{x=x^*, u=u^*} & \cdots & \left.\dfrac{\partial f_1}{\partial x_n}\right|_{x=x^*, u=u^*} \\ \vdots & \ddots & \vdots \\ \left.\dfrac{\partial f_n}{\partial x_1}\right|_{x=x^*, u=u^*} & \cdots & \left.\dfrac{\partial f_n}{\partial x_n}\right|_{x=x^*, u=u^*} \end{bmatrix}$.
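
As a concrete instance of this notation (an illustration added here, not part of the original article), take $n = 2$ with state $x = (x_1, x_2)$, scalar control $u$, double-integrator dynamics $f(x, u) = (x_2, u)$, and running cost $L(x, u) = \tfrac{1}{2}(x_1^2 + u^2)$. Then $L_x$ is a row vector and $f_x$ is the $2 \times 2$ Jacobian with respect to the state:

$L_x(x^*, u^*) = \begin{bmatrix} x_1^* & 0 \end{bmatrix}, \qquad f_x(x^*, u^*) = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}.$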

Formal statement of necessary conditions for minimization problems

Here the necessary conditions are shown for minimization of a functional.

Consider an n-dimensional dynamical system with state variable $x \in \mathbb{R}^n$ and control variable $u \in \mathcal{U}$, where $\mathcal{U}$ is the set of admissible controls. The evolution of the system is determined by the state and the control over the time period $t \in [0, T]$, starting from the initial state $x_0$, according to the differential equation

$\dot{x} = f(x, u), \quad x(0) = x_0, \quad u(t) \in \mathcal{U}, \quad t \in [0, T].$

The control trajectory $u : [0, T] \to \mathcal{U}$ is to be chosen according to an objective. The objective is a functional $J$ defined by

$J = \Psi(x(T)) + \int_0^T L\big(x(t), u(t)\big)\, dt$,

where $L(x, u)$ can be interpreted as the rate of cost for exerting control $u$ in state $x$, and $\Psi(x)$ as the cost of ending up at state $x$. The specific choice of $L$ and $\Psi$ depends on the application.
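
As a simple illustration (a choice made here for concreteness, not one prescribed by the article), a minimum-energy problem for the scalar integrator $\dot{x} = u$ with free terminal state could take

$f(x, u) = u, \qquad L(x, u) = \tfrac{1}{2}u^2, \qquad \Psi(x) = \tfrac{1}{2}x^2,$

so that $J = \tfrac{1}{2}x(T)^2 + \int_0^T \tfrac{1}{2}u(t)^2\, dt$ penalizes both the control effort along the way and the distance of the final state from the origin. This example is continued below.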

The constraints on the system dynamics can be adjoined to the Lagrangian $L$ by introducing a time-varying Lagrange multiplier vector $\lambda$, whose elements are called the costates of the system. This motivates the construction of the Hamiltonian $H$, defined for all $t \in [0, T]$ by

$H\big(x(t), u(t), \lambda(t), t\big) = \lambda^{\rm T}(t)\, f\big(x(t), u(t)\big) + L\big(x(t), u(t)\big)$

where $\lambda^{\rm T}$ is the transpose of $\lambda$.
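
For the scalar minimum-energy example introduced above (an illustration, not from the article), the Hamiltonian reduces to

$H(x, u, \lambda, t) = \lambda u + \tfrac{1}{2}u^2.$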

Pontryagin's minimum principle states that the optimal state trajectory $x^*$, optimal control $u^*$, and corresponding Lagrange multiplier vector $\lambda^*$ must minimize the Hamiltonian $H$, so that

$H\big(x^*(t), u^*(t), \lambda^*(t), t\big) \leq H\big(x^*(t), u, \lambda^*(t), t\big)$    (1)

for all time $t \in [0, T]$ and for all permissible control inputs $u \in \mathcal{U}$. Here, the trajectory of the Lagrange multiplier vector $\lambda$ is the solution to the costate equation and its terminal condition:

$-\dot{\lambda}^{\rm T}(t) = H_x\big(x^*(t), u^*(t), \lambda(t), t\big) = \lambda^{\rm T}(t)\, f_x\big(x^*(t), u^*(t)\big) + L_x\big(x^*(t), u^*(t)\big)$    (2)
$\lambda^{\rm T}(T) = \Psi_x(x(T))$    (3)
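
Continuing the scalar minimum-energy illustration from above, condition (1) with unconstrained $\mathcal{U} = \mathbb{R}$ is satisfied by the pointwise minimizer $u^*(t) = -\lambda(t)$, since $\partial H / \partial u = \lambda + u = 0$ and $\partial^2 H / \partial u^2 = 1 > 0$. Because $H$ does not depend on $x$, the costate equation (2) gives $\dot{\lambda} = 0$, so $\lambda$ is constant, and the terminal condition (3) gives $\lambda(T) = \Psi_x(x(T)) = x(T)$. Integrating $\dot{x}^* = u^* = -x^*(T)$ from $x(0) = x_0$ then yields

$x^*(T) = \dfrac{x_0}{1 + T}, \qquad u^*(t) = -\dfrac{x_0}{1 + T}, \qquad \lambda(t) = \dfrac{x_0}{1 + T}.$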

If the final state $x(T)$ is fixed, then conditions (1)–(3) are the necessary conditions for an optimal control.

If the final state $x(T)$ is not fixed (i.e., its differential variation is not zero), there is an additional condition

$\Psi_T(x(T)) + H(T) = 0$    (4)

These four conditions (1)–(4) are the necessary conditions for an optimal control.
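
Conditions (1)–(4) define a two-point boundary value problem that is often solved numerically. The sketch below is a minimal illustration using the scalar example from the previous paragraphs together with SciPy's generic boundary value solver; neither the example nor the solver is prescribed by the article. It substitutes the pointwise minimizer $u^* = -\lambda$ into the state and costate equations and checks the result against the closed-form solution:

    # Minimal numerical check of conditions (1)-(3) for the scalar example
    # x' = u, L = u^2/2, Psi = x^2/2 on [0, T] (illustrative assumptions).
    # Pointwise minimization of H = lam*u + u^2/2 gives u* = -lam, so the
    # two-point boundary value problem is
    #   x'   = -lam,         x(0)   = x0,
    #   lam' = -H_x = 0,     lam(T) = Psi_x(x(T)) = x(T).
    import numpy as np
    from scipy.integrate import solve_bvp

    x0, T = 1.0, 1.0

    def rhs(t, y):
        # y[0] = state x(t), y[1] = costate lam(t)
        x, lam = y
        u = -lam                                   # u*(t) = argmin_u H(x, u, lam, t)
        return np.vstack((u, np.zeros_like(lam)))  # (x', lam')

    def bc(ya, yb):
        # residuals of the boundary conditions x(0) = x0 and lam(T) = x(T)
        return np.array([ya[0] - x0, yb[1] - yb[0]])

    t_mesh = np.linspace(0.0, T, 50)
    y_guess = np.zeros((2, t_mesh.size))
    sol = solve_bvp(rhs, bc, t_mesh, y_guess)

    print("numerical  x(T):", sol.y[0, -1])     # approx. 0.5
    print("analytical x(T):", x0 / (1.0 + T))   # exactly 0.5 for x0 = T = 1

Because the minimization in (1) was carried out analytically before discretization, the control never appears explicitly in the boundary value problem; with control constraints, $u^*$ would instead have to be obtained by a constrained pointwise minimization of $H$ at each mesh point.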

Notes

  1. Whether the extreme value is a maximum or a minimum depends on the sign convention used for defining the Hamiltonian. The historic convention leads to a maximum, hence maximum principle. In recent years it has more commonly been referred to simply as Pontryagin's principle, without the adjective maximum or minimum.

