◆ Control systems can be classified in several ways, depending on the viewpoint adopted; the most common classifications are the following.
1. Linear control system and nonlinear control system
If all the components that make up a control system have linear characteristics, the system is called a linear control system. The input-output relationship of such a system is generally described by linear differential equations, transfer functions, or state-space expressions. The defining properties of a linear system are homogeneity and the applicability of the superposition principle. If the parameters of a linear system do not change with time, it is called a linear time-invariant (constant-coefficient) system; otherwise, it is called a linear time-varying system. This book mainly discusses linear time-invariant systems.
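As a brief illustration (the notation here is ours, not the book's): if a system maps an input u to an output y, written y = T[u], then linearity means that for any two inputs u_1, u_2 and any constants a, b,
$$T[a u_1 + b u_2] = a\,T[u_1] + b\,T[u_2],$$
which combines homogeneity, T[a u] = a T[u], with superposition, T[u_1 + u_2] = T[u_1] + T[u_2]. A typical linear time-invariant system is described by a constant-coefficient differential equation such as
$$a_2\,\ddot{y}(t) + a_1\,\dot{y}(t) + a_0\,y(t) = b_0\,u(t),$$
whose transfer function (under zero initial conditions) is G(s) = b_0/(a_2 s^2 + a_1 s + a_0); if the coefficients a_i, b_0 varied with time, the same equation would describe a linear time-varying system.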
If at least one element of a control system has nonlinear characteristics, the system is called a nonlinear control system. A nonlinear system is in general neither homogeneous nor subject to the superposition principle, and its output response and stability are closely related to its initial state.
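For instance (an illustrative example, not taken from the text), an element with the square-law characteristic y = u^2 is nonlinear: for two inputs u_1 and u_2,
$$(u_1 + u_2)^2 = u_1^2 + 2u_1u_2 + u_2^2 \neq u_1^2 + u_2^2,$$
so the response to the sum of two inputs is not the sum of the individual responses, and superposition fails.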
Strictly speaking, perfectly linear control systems (or components) do not exist, because physical systems and components are nonlinear to varying degrees. To simplify analysis and design, however, a system whose nonlinearity is not severe can, under certain conditions, be linearized about its operating point and then studied with the theory and methods of linear systems.
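As a sketch of how such a linearized treatment proceeds (notation ours): a smooth nonlinear characteristic y = f(u) can be expanded about an operating point (u_0, y_0) with y_0 = f(u_0),
$$y \approx f(u_0) + f'(u_0)\,(u - u_0), \qquad \Delta y \approx K\,\Delta u, \quad K = f'(u_0),$$
so that, for small deviations about the operating point, the element behaves as a linear element with gain K and linear-system methods apply.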
In engineering practice, nonlinear elements are often introduced deliberately to improve the performance of a control system. For example, switching (bang-bang) control is adopted to achieve minimum-time control; in a thyristor-rectifier DC speed-control system, the speed regulator and the current regulator are intentionally designed with saturating nonlinear characteristics in order to improve the system's dynamic behavior and limit the maximum motor current.
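The two nonlinearities just mentioned can be written compactly (the symbols and limits below are illustrative, not the book's). An ideal switching (bang-bang) control law drives the actuator at full amplitude according to the sign of the error e(t),
$$u(t) = U_m\,\operatorname{sgn}\bigl(e(t)\bigr),$$
while a saturating regulator with linear gain K and saturation threshold e_0 has the characteristic
$$u = \begin{cases} K e, & |e| \le e_0, \\ K e_0\,\operatorname{sgn}(e), & |e| > e_0, \end{cases}$$
so the regulator acts linearly for small errors but its output is clamped for large errors, which is what limits the maximum motor current.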
2. Constant value control system and follow-up system
The reference input of a constant value control system is a constant, and the requirement is that the controlled quantity return to (or approach) its original steady-state value as quickly as possible under any disturbance. Because such a system automatically eliminates the influence of various disturbances on the controlled quantity, it is also called a self-stabilizing system. The reference input of a follow-up (servo) system is a varying, generally random quantity, and the requirement is that the controlled quantity track the changes of the reference input signal quickly and accurately.
3. Continuous control system and discrete control system
If the signals in every part of a control system are continuous functions of time t, the system is called a continuous control system. The liquid level control system and the servo system discussed earlier both belong to this type.
If even one of the signals in the various parts of a control system is discrete, that is, defined only at discrete instants of time, the system is called a discrete control system.
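For example (notation ours), a sampled version of a continuous signal x(t) is defined only at the sampling instants t = kT,
$$x^*(k) = x(kT), \qquad k = 0, 1, 2, \dots,$$
where T is the sampling period; a computer-controlled system whose controller operates on such sampled values is a typical discrete (sampled-data) control system.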
It should be pointed out that the classification above reflects only the most common methods. Other classifications also exist, for example: lumped-parameter and distributed-parameter systems; single-input single-output and multi-input multi-output systems; time-varying and time-invariant systems; systems with and without steady-state error; optimal and suboptimal control systems; adaptive and self-stabilizing control systems; and so on.