The LQR Control Algorithm: Optimal Control

Publisher: 创意航海 · Last updated: 2023-09-28 · Source: elecfans

J = Φ[x(t_f), t_f] + ∫_{t_0}^{t_f} F[x(t), u(t), t] dt

When J reaches its extreme value, the control function u(t) is called the optimal control function, denoted u*(t), and the corresponding x(t) is called the optimal trajectory, denoted x*(t). The performance index J at this point is called the optimal performance index.

It can be seen that optimal control belongs to the category of system integration and design. The task of optimal control is to design a corresponding control system given a controlled system or controlled process (including relevant constraints and boundary conditions) and a performance index, so that its performance index reaches an extreme value (maximum or minimum) while satisfying the constraints and boundary conditions.

2.6 Mathematical Description of Optimal Control Problems for Discrete Systems

Assume the system's difference equation is known:

x(k+1) = f[x(k), u(k), k],   k = 0, 1, …, N−1

The initial condition and terminal state satisfy

x(0) = x_0,   x(N) ∈ S

The control function satisfies the admissibility constraint

u(k) ∈ U,   k = 0, 1, …, N−1

The performance index is

J = Φ[x(N), N] + Σ_{k=0}^{N−1} F[x(k), u(k), k]

The optimal control problem for a discrete system is to find an admissible control u(k) that transfers the system state from the given initial value x(0) = x_0 to a final state x(N) ∈ S while making the performance index J reach an extreme value.


If the above optimal control problem has a solution u*(k), then u*(k) is called the optimal control function, the corresponding trajectory x*(k) is called the optimal trajectory, and the performance index J at this point is called the optimal performance index.
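The discrete problem above becomes the finite-horizon LQR problem when the dynamics are linear, x(k+1) = A x(k) + B u(k), and the cost is quadratic. The following sketch (with made-up matrices for a double-integrator-like plant; A, B, Q, R, Qf, and N are illustrative choices, not taken from the article) computes the optimal gains K(k) by the standard backward Riccati recursion and simulates the closed loop u*(k) = −K(k) x(k):

```python
import numpy as np

# Illustrative double-integrator discretized with a 0.1 s step.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)          # state weight in the stage cost
R = np.array([[0.1]])  # control weight in the stage cost
Qf = np.eye(2)         # terminal-state weight
N = 50                 # horizon length

# Backward Riccati recursion: start from the terminal weight and step back,
# collecting the time-varying optimal feedback gain K(k) at each stage.
P = Qf
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K
    gains.append(K)
gains.reverse()  # gains[k] is the gain to apply at step k

# Simulate the optimal closed loop from x(0) = [1, 0]^T.
x = np.array([[1.0], [0.0]])
for k in range(N):
    u = -gains[k] @ x   # u*(k) = -K(k) x(k)
    x = A @ x + B @ u

print(np.linalg.norm(x))  # the state is driven toward the origin
```

The recursion is exactly the dynamic-programming solution of the discrete problem: P plays the role of the cost-to-go matrix, so the gain at each stage is computed from the cost of the remaining tail of the horizon.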


3 Development of Optimal Control

Optimal control theory is an important part of modern control theory, and its development is inseparable from that of modern control theory as a whole. Control theory has so far passed through two major stages, classical control theory and modern control theory, and has entered a third: large-system theory and intelligent control theory.


The automatic control theory developed after World War II is effective for analyzing and designing single-input, single-output linear time-invariant systems. However, with the development of production, and especially of space technology, control systems became increasingly complex and their accuracy requirements ever higher. The automatic control theory based on transfer functions and frequency characteristics, that is, what is usually called classical control theory, therefore increasingly showed its limitations. The limitation manifests first in the fact that for time-varying systems the transfer function cannot even be defined; and even for linear time-invariant systems, when there are multiple inputs and outputs the transfer function becomes a matrix of functions, so that engineering conclusions drawn from the transfer-function concept become very complicated and hard to apply. Second, the frequency method is essentially an engineering method: the correction characteristics it yields can only be realized by simple networks, and the network parameters must be determined through a tuning process. When the system is very complex and the accuracy requirements very high, this semi-empirical approach is of limited use. People therefore returned to the time domain and established modern control theory on the basis of the state-space concept.


Modern control theory can handle a much wider range of problems. In principle it applies to time-varying systems, nonlinear systems, multi-input multi-output systems, and distributed-parameter systems, and it is also well suited to stochastic-system and discrete-system problems.


As early as the early 1950s, articles studying the minimum-time control problem from an engineering point of view had been published. Although their proofs of optimality relied on geometric intuition, they provided the first practical models for the development of modern control theory. Subsequently, deeper study of the optimal control problem, together with the urgent needs of space technology, attracted the close attention of many mathematicians. It was found that, mathematically, the optimal control problem is a constrained functional extremum problem, in essence a problem in the calculus of variations. Classical variational theory, however, can only solve optimal control problems whose admissible controls belong to an open set, whereas most optimal control problems encountered in engineering practice have admissible controls belonging to a closed set, for which classical variational theory is powerless. This forced people to explore new ways of solving the optimal control problem.


Among the various new methods, two proved most effective: the "minimum principle" of the Soviet scholar L. S. Pontryagin, and the "dynamic programming" of the American scholar R. E. Bellman.

Inspired by Hamilton's principle in mechanics, Pontryagin and his colleagues first proposed the minimum principle as a conjecture and soon provided a rigorous proof, which was first presented at the International Congress of Mathematicians held in Edinburgh in 1958. The minimum principle extended the classical variational principle and became a powerful tool for variational problems with closed-set constraints.


"Dynamic programming" was created step by step by Bellman between 1953 and 1957. Starting from the principle of optimality, he developed the Hamilton–Jacobi theory of the calculus of variations into dynamic programming, a method suited to computer implementation and applicable to a wider class of problems. In the formation and development of modern control theory, the minimum principle, dynamic programming, and Kalman's optimal estimation theory all played important driving roles.

While modern control theory was developing rapidly, digital computers were also advancing quickly and coming into wide use. Higher computing speeds, larger storage capacity, smaller size, and the broad application of software made the digital computer not only a powerful tool for analyzing and designing control systems but gradually also one of the main components of automatic control systems themselves. The "online" participation of computers in control made many complex control laws, ones that cannot be reduced to a simple correction network or a closed-form analytical solution, feasible in practical engineering. The advent of high-speed, large-capacity computers combining hardware and software thus made the engineering realization of modern control theory possible on the one hand, and on the other raised many new theoretical problems, leading to a large body of research results such as direct and indirect computation of optimal controls and further advancing modern control theory.
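The backward recursion at the heart of dynamic programming can be illustrated with a toy stage-wise shortest-path problem (all numbers below are made up for illustration). By Bellman's principle of optimality, the cost-to-go at each stage is built from the already-computed cost-to-go of the next stage:

```python
# stages[k][i][j] = cost of moving from node i at stage k to node j at stage k+1
stages = [
    [[2, 5], [4, 1]],   # stage 0 -> stage 1
    [[3, 6], [2, 2]],   # stage 1 -> stage 2
]
terminal = [0.0, 1.0]   # terminal cost at each final node

# Backward recursion: V_k(i) = min_j ( cost(i, j) + V_{k+1}(j) )
V = terminal
for costs in reversed(stages):
    V = [min(c + v for c, v in zip(row, V)) for row in costs]

print(V)  # → [5.0, 3.0]: optimal cost-to-go from each starting node
```

The same idea, applied to a quadratic cost and linear dynamics, yields the Riccati recursion used in discrete LQR: the "value" carried backward is then a quadratic form rather than a table of numbers.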


Over the past twenty years, the application of modern control theory and modern control engineering has absorbed many achievements of modern mathematics and made great progress, penetrating into fields such as production, daily life, national defense, urban planning, intelligent transportation, and management, where it plays an ever more important role. The main achievements in optimal control include optimal control of distributed-parameter systems, stochastic optimal control, adaptive control, optimal control of large-scale systems, and differential games. Optimal control theory has formed a fairly complete theoretical system and laid ample theoretical groundwork for modern control engineering. It should be noted in particular that, with the application and development of high-performance embedded systems, optimal control theory will remain a very active research field, and its application in practical engineering will become ever broader.

