Optimal control is an important topic in modern control theory. In recent years, driven by the demands of engineering applications and the rise of artificial intelligence, methods for finding approximately optimal control when the system model is unknown or only partially known have gradually emerged. The first volume of this book comprises two parts, the foundations of optimal control and the mathematical theory of optimal control, focusing on classical variational methods, Pontryagin's minimum principle, and dynamic programming; the second volume covers intelligent methods for optimal control, including reinforcement learning and adaptive dynamic programming, numerical methods for optimal control, model predictive control, differential games, and parallel control. To meet the talent needs of the "intelligent era", we established a graduate course covering both the mathematical theory and the intelligent methods of optimal control at the School of Computer Science and Control and the School of Artificial Intelligence of the University of Chinese Academy of Sciences, and compiled this book from the course handouts. The first volume can serve as a textbook for an optimal control course for senior undergraduates or graduate students. Together, the two volumes can serve as a reference for students, researchers, and technical professionals in control theory, artificial intelligence, and management.
Part 1 Introduction to Optimal Control

Chapter 1 Foundations of Optimal Control
1.1 Introduction
1.2 Variational Problems
1.2.1 Brachistochrone Problem
1.2.2 Isoperimetric Problem
1.2.3 The Birth of the Calculus of Variations
1.3 Optimal Control Problems
1.3.1 Early Explorations of Optimal Control Problems
1.3.2 The Foundation of the Mathematical Theory of Optimal Control Problems
1.3.3 Optimal Control Problems without Determined Models: Intelligent Methods
Summary

Chapter 2 Optimal Control Methods
2.1 Variational Methods and Stationary-Point Conditions for Optimal Control
2.1.1 Euler's Geometric Method
2.1.2 Lagrange's Ω Method
2.1.3 Lagrange Multiplier Method
2.1.4 Hestenes' Classical Variational Method for Optimal Control
2.1.5 Examples of Optimal Control Solved by the Variational Method
2.2 Pontryagin Minimum Principle and Necessary Conditions for Optimal Control
2.2.1 Weierstrass-Erdmann Condition
2.2.2 Weierstrass Condition
2.2.3 Pontryagin Minimum Principle
2.2.4 Example of Optimal Control Solved by the Minimum Principle
2.3 Dynamic Programming and Sufficient Conditions for Optimal Control
2.3.1 Hamilton-Jacobi Equation
2.3.2 Bellman's Dynamic Programming Method
2.3.3 Example of Optimal Control Solved by Dynamic Programming
2.4 Differential Games and Equilibrium Conditions for Optimal Control
2.4.1 Games and Equilibria
2.4.2 Isaacs' Differential Games
2.5 Adaptive Dynamic Programming
2.5.1 Neural Networks and the Backpropagation Algorithm
2.5.2 Discrete-Time Adaptive Dynamic Programming
2.5.3 Continuous-Time Adaptive Dynamic Programming
2.5.4 Neural Networks and Control
2.5.5 Example of Optimal Control Solved by Adaptive Dynamic Programming
2.6 Model Predictive Control
2.6.1 Numerical Methods for Optimal Control
2.6.2 Example of Optimal Control Solved by Model Predictive Control
2.7 Parallel Control
2.7.1 Basic Concepts of the ACP Method
2.7.2 Basic Framework and Principles of Parallel Control
Summary

Part 2 Mathematical Theory of Optimal Control

Chapter 3 Variational Methods for Optimal Control
3.1 Function Extremum Problems
3.1.1 Function Extrema and Taylor Expansion
3.1.2 Necessary and Sufficient Conditions for Function Extrema
3.2 Introduction to Variational Methods: From Function Extrema to Functional Extrema
3.2.1 Functionals and Their Norms
3.2.2 From Function Extrema to Functional Extrema
3.2.3 Necessary Conditions for Functional Extrema
3.2.4 Solution of the Euler-Lagrange Equation
3.2.5 Euler-Lagrange Equation and Hamilton Equation System
3.3 Treatment of Equality Constraints
3.3.1 Review of the Lagrange Multiplier Method
3.3.2 Functional Extrema with Differential Constraints
3.3.3 Functional Extrema with Integral Constraints
3.4 Treatment of Target Sets
3.4.1 The Brothers' Bet: Variational Problems with Variable Endpoints
3.4.2 Target Sets with Fixed Terminal Time and Free Terminal State
3.4.3 Target Sets with Free Terminal Time and Fixed Terminal State
3.4.4 Target Sets with Free and Independent Terminal Time and State
3.4.5 Transformation of Performance Indices and Treatment of General Target Sets
3.5 From the Variational Method to Optimal Control
3.5.1 Solving Optimal Control Problems by the Variational Method: A First Look at the Minimum Principle
3.5.2 Optimal Control Problems with General Target Sets
3.5.3 Piecewise Continuously Differentiable Optimal Control
3.5.4 Weierstrass-Erdmann Condition and Weierstrass Condition
3.5.5 Hamiltonian Function of Steady-State Systems
Summary

Chapter 4 Pontryagin Minimum Principle
4.1 Foundations of the Pontryagin Minimum Principle
4.1.1 Statement of the Pontryagin Minimum Principle
4.1.2 Proof of the Steady-State Mayer-Form Minimum Principle
4.1.3 Proof of the Steady-State Bolza-Form Minimum Principle
4.1.4 Proof of the Minimum Principle for Time-Varying Systems
4.1.5 Treatment of General Target Sets
4.2 Examples of Optimal Control Solved by the Minimum Principle
4.2.1 Unconstrained Optimal Control Solved by the Minimum Principle
4.2.2 Constrained Optimal Control Solved by the Minimum Principle
4.3 Minimum-Time Control and Fuel-Optimal Control
4.3.1 Bang-Bang Control Principle for Minimum-Time Control
4.3.2 Examples of Minimum-Time Control for Linear Steady-State Systems
4.3.3 Fuel-Optimal Control and the Bang-off-Bang Control Principle
4.3.4 Examples of Time-Fuel Weighted Optimal Control
4.4 Linear Quadratic Optimal Control
4.4.1 Linear Quadratic Optimal Control and the Riccati Equation
4.4.2 Examples of Linear Quadratic Optimal Control Solved by the Minimum Principle
Summary

Chapter 5 Dynamic Programming
5.1 Principle of Optimality
5.1.1 Principle of Optimality for Multi-Stage Decision Making
5.1.2 Example of the Shortest Path Solved by Dynamic Programming
5.2 Discrete Optimal Control Solved by Dynamic Programming
5.2.1 Discrete-Time Optimal Control Problem
5.2.2 Bellman Equation
5.2.3 Example of Discrete Optimal Control Solved by Dynamic Programming
5.2.4 The "Curse of Dimensionality"
5.3 Continuous Optimal Control Solved by Dynamic Programming
5.3.1 Hamilton-Jacobi-Bellman Equation
5.3.2 Relationship between Dynamic Programming and the Minimum Principle
5.3.3 Example of Continuous Optimal Control Solved by Dynamic Programming
5.4 Linear Quadratic Optimal Control Solved by Dynamic Programming
5.4.1 Discrete-Time Linear Quadratic Optimal Control
5.4.2 Continuous-Time Linear Quadratic Optimal Control
5.4.3 Parameters of Quadratic Performance Indices
Summary

References
Index