This introduction to optimal control theory is intended for undergraduate mathematicians and for engineers and scientists with some knowledge of classical analysis. It includes sections on classical optimization and the calculus of variations. All the important theorems are carefully proved. There are many worked examples and exercises for the reader to attempt.
A paperback edition of this successful textbook for final-year undergraduate mathematicians and control engineering students. The book contains exercises and many worked examples, with complete solutions and hints, making it ideal not only as a class textbook but also for individual study. The introduction to optimal control begins by considering the problem of minimizing a function of many variables, before moving on to the main subject: the optimal control of systems governed by ordinary differential equations.
Enid R. Pinch is at the University of Manchester.
Part 1: Introduction
1.1: The maxima and minima of functions
1.2: The calculus of variations
1.3: Optimal control
Part 2: Optimization in R^n
2.1: Functions of one variable
2.2: Critical points, end-points, and points of discontinuity
2.3: Functions of several variables
2.4: Minimization with constraints
2.5: A geometrical interpretation
2.6: Distinguishing maxima from minima
Part 3: The calculus of variations
3.1: Problems in which the end-points are not fixed
3.2: Finding minimizing curves
3.3: Isoperimetric problems
3.4: Sufficiency conditions
3.5: Fields of extremals
3.6: Hilbert's invariant integral
3.7: Semi-fields and the Jacobi condition
Part 4: Optimal Control I: Theory
4.1: Introduction
4.2: Control of a simple first-order system
4.3: Systems governed by ordinary differential equations
4.4: The optimal control problem
4.5: The Pontryagin maximum principle
4.6: Optimal control to target curves
Part 5: Optimal Control II: Applications
5.1: Time-optimal control of linear systems
5.2: Optimal control to target curves
5.3: Singular controls
5.4: Fuel-optimal controls
5.5: Problems where the cost depends on x(t1)
5.6: Linear systems with quadratic cost
5.7: The steady-state Riccati equation
5.8: The calculus of variations revisited
Part 6: Proof of the Maximum Principle of Pontryagin
6.1: Convex sets in R^n
6.2: The linearized state equations
6.3: Behaviour of H on an optimal path
6.4: Sufficiency conditions for optimal control
Appendix: Answers and hints for the exercises
Bibliography
Index
"The author has achieved his aim. Anyone who is curious to know what optimal control theory is all about, or who wishes to begin specializing in this field, would benefit by having this book close at hand. Technical libraries should acquire it, too. . . . highly recommended." --Applied Mechanics Review
Includes many illustrative examples