A review of research on optimal power flow algorithms for power systems


Introduction

The optimal power flow (OPF) problem is that of determining the optimal operating state of a power system while satisfying its operating and security constraints. OPF is a typical nonlinear programming problem, and its usual mathematical description is:

Objective function: min f(x)    (1)

Constraints: g(x) = 0

             h(x) ≤ 0

Here f is the objective function to be optimized, which may be the system generation cost, fuel consumption, active network loss, the economic benefit of reactive power compensation, and so on; g is the set of equality constraints, that is, the nodal injected-power balance equations; h is the set of system security constraints, including node voltage limits, active and reactive power limits at generator nodes, branch power limits, transformer tap ratios, switchable shunt capacitor limits, etc.
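To make the abstract form (1) concrete, the following minimal sketch (not from the original article; the two-generator system and all numbers are invented for illustration) poses a toy economic-dispatch problem in exactly this shape, with a quadratic cost f, a power-balance equality g and generator limits as the inequalities h, and hands it to SciPy's general-purpose SLSQP solver:

```python
# Toy instance of the generic OPF form  min f(x)  s.t.  g(x) = 0, h(x) <= 0.
# Hypothetical two-generator dispatch: all coefficients and limits are invented.
import numpy as np
from scipy.optimize import minimize

demand = 300.0                      # MW to be supplied (network losses ignored)
a = np.array([0.010, 0.015])        # quadratic cost coefficients
b = np.array([20.0, 18.0])          # linear cost coefficients
p_min = np.array([50.0, 50.0])      # generator lower limits (MW)
p_max = np.array([250.0, 200.0])    # generator upper limits (MW)

def f(p):                           # objective: total generation cost
    return float(np.sum(a * p**2 + b * p))

def g(p):                           # equality constraint: power balance
    return np.sum(p) - demand

cons = [{"type": "eq", "fun": g}]   # g(x) = 0
bounds = list(zip(p_min, p_max))    # box limits, a special case of h(x) <= 0

p0 = np.full(2, demand / 2)         # starting point
res = minimize(f, p0, method="SLSQP", bounds=bounds, constraints=cons)
print("dispatch (MW):", np.round(res.x, 2), "cost:", round(res.fun, 2))
```

Each algorithm surveyed below is, in essence, a different strategy for solving problems of this shape at power-system scale.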

In the early 1960s, the French scholar J. Carpentier first formulated the optimal power flow problem for power systems on a rigorous mathematical basis. Since then, OPF has attracted sustained attention from many researchers and a series of results have been achieved. The first relatively successful practical algorithm was the simplified gradient method proposed by Dommel and Tinney in 1968, which is still cited as a successful method. The optimization algorithm based on Newton's method has better convergence characteristics, and quadratic programming algorithms have also been applied to power flow optimization. The interior point method overcomes the difficulty Newton's method has in identifying the active constraint set and has received widespread attention. Intelligent algorithms such as genetic algorithms have attracted increasing attention because of their global convergence and their ability to handle discrete optimization variables; they are optimization methods with great potential.

1 Classical algorithms for optimal power flow

The classical solution methods for optimal power flow mainly refer to methods based on linear programming, nonlinear programming and the decoupling principle, represented by the simplified gradient method, Newton's method, the interior point method and the decoupling method. They are the most widely studied optimal power flow algorithms. Their common characteristic is that first-order or second-order gradient information is used as the main guide in searching for the optimal solution.

1.1 Simplified Gradient Method

The simplified gradient method proposed by Dommel and Tinney in 1968 was the first algorithm that could successfully solve large-scale optimal power flow problems, and it was widely used.

The method proceeds in two steps. First, an unconstrained gradient optimization is carried out; second, the result is corrected by adding penalty terms for any violated voltage limits to the objective function. The method can handle relatively large networks, but the computed results do not always conform to actual engineering conditions. Building on the basic gradient method, the conjugate gradient method has been used to improve the search direction, giving better convergence than the conventional simplified gradient method.

The main disadvantages of the simplified gradient method are poor convergence, which becomes very slow near the optimal point, and the need to recompute the power flow every time the control variables are corrected, which requires a large amount of computation. Choosing the correction step size for the control variables is another difficulty, and this choice directly affects the convergence of the algorithm. These drawbacks are inherent in the mathematics of the method, so it is not well suited to large-scale power systems, and at present there is little further research on this method for optimal power flow.
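As a rough illustration of the two-step scheme described above (gradient corrections of a control variable plus a penalty for a violated limit), the sketch below applies steepest descent with a quadratic penalty to a single invented control variable; it is not the Dommel-Tinney implementation, and the "power flow" is reduced to a made-up linear relation:

```python
# Steepest descent on one invented control variable u, with a quadratic penalty
# added when the dependent "voltage" exceeds its limit. Illustrative only.
import numpy as np

def cost(u):                          # stand-in objective, minimized at u = 180
    return 0.02 * (u - 180.0) ** 2

def voltage(u):                       # stand-in "power flow" result for the state
    return 0.9 + 0.002 * u

V_MAX, RHO = 1.05, 5.0e4              # voltage limit and penalty weight

def penalized(u):                     # objective plus voltage-limit penalty
    return cost(u) + RHO * max(0.0, voltage(u) - V_MAX) ** 2

def num_grad(fun, u, h=1e-6):         # numerical gradient (central difference)
    return (fun(u + h) - fun(u - h)) / (2.0 * h)

u, step = 100.0, 0.5                  # initial control value and fixed step size
for k in range(500):
    gk = num_grad(penalized, u)
    if abs(gk) < 1e-6:
        break
    u -= step * gk                    # correct the control along the negative gradient

print(f"u = {u:.2f}, voltage = {voltage(u):.4f} (limit {V_MAX}), iterations = {k}")
```

The run also shows the weaknesses noted above: the penalty only enforces the voltage limit softly, convergence near the optimum takes many iterations, and the fixed step size has to be tuned for the iteration to converge at all.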

1.2 Newton's method

Compared with the simplified gradient method, the Newton method for optimal power flow has a second-order convergence rate. In addition to the first-order derivatives of the objective function, it uses the second-order derivatives and therefore the trend of the gradient, so the search direction it obtains is better than that of the gradient method and the optimal point is found more quickly. The algorithm does not distinguish between state variables and control variables; it makes full use of the physical characteristics of the power network, exploits sparse matrix techniques, and applies Newton iterations directly to the Kuhn-Tucker conditions of the Lagrangian function. It converges quickly and has greatly advanced the practical application of optimal power flow; research on the Newton method for optimal power flow has now reached the practical stage. Estimating the set of binding inequality constraints is the key to implementing the Newton method; handling the inequality constraints with special linear programming techniques allows the Newton optimal power flow to converge after only a few main iterations. Reference [1] uses an improved soft penalty strategy to deal with the "ill-conditioning" of the basic iteration matrix in the Newton method, proposes a heuristic prediction strategy that uses the grid topology to identify binding voltage inequality constraints, analyzes the effectiveness of the trial iterations, and proposes a finite termination scheme; these measures improve the numerical stability, convergence and computation speed of the Newton OPF algorithm. Reference [2] proposes a new method for handling discrete control variables in optimal power flow based on a positive-curvature quadratic penalty function, which uses the virtual cost produced by the penalty to force each discrete control variable onto one of its discrete levels; the method has a simple mechanism and good convergence and accuracy.
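The core computation of the Newton approach, iterating directly on the first-order (Kuhn-Tucker) conditions of the Lagrangian, can be shown on a toy equality-constrained dispatch problem. The sketch below is illustrative only (invented cost data, a single power-balance constraint, no inequality handling or network model); each pass through the loop is one Newton step on the KKT system:

```python
# Invented equality-constrained dispatch, solved by Newton iterations on the
# Kuhn-Tucker conditions of L(p, lam) = f(p) - lam * (sum(p) - demand).
import numpy as np

a = np.array([0.010, 0.015])          # quadratic cost coefficients
b = np.array([20.0, 18.0])            # linear cost coefficients
c = np.array([2.0e-5, 1.0e-5])        # small cubic terms so f is not purely quadratic
demand = 300.0

def grad_f(p):                        # gradient of f(p) = c p^3 + a p^2 + b p
    return 3 * c * p**2 + 2 * a * p + b

def hess_f(p):                        # Hessian of the cost (diagonal here)
    return np.diag(6 * c * p + 2 * a)

p, lam = np.full(2, demand / 2), 0.0  # initial point and multiplier
e = np.ones((2, 1))
for it in range(20):
    r_stat = grad_f(p) - lam * np.ones(2)            # stationarity residual
    r_bal = np.array([np.sum(p) - demand])           # power-balance residual
    if max(np.abs(r_stat).max(), abs(r_bal[0])) < 1e-10:
        break
    # Newton step: solve  [[H, -e], [e^T, 0]] dz = -[r_stat, r_bal]
    kkt = np.block([[hess_f(p), -e], [e.T, np.zeros((1, 1))]])
    dz = np.linalg.solve(kkt, -np.concatenate([r_stat, r_bal]))
    p, lam = p + dz[:2], lam + dz[2]

print("iterations:", it, " dispatch:", np.round(p, 3), " lambda:", round(float(lam), 3))
```

Because the cost is nearly quadratic, the iteration converges in a handful of steps, which is the second-order behaviour described above; what the sketch deliberately omits is the hard part discussed in the text, namely deciding which inequality constraints are binding.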

1.3 Interior Point Algorithm (IP)

The interior point method was originally proposed as a linear programming algorithm, to overcome the sharp growth in computation of the simplex method as the number of variables increases. Starting from an initial interior point, it finds a subsequent interior point that decreases the objective function along a feasible direction, then repeats the process along another feasible direction. Iterating in this way from inside the feasible region towards the optimal solution produces a sequence of interior points along which the objective function decreases strictly monotonically. A notable characteristic is that the number of iterations is largely insensitive to the size of the system. Although originally used for linear programming, the method has since been extended to quadratic and nonlinear programming models and can be used to solve optimal power flow problems. Compared with the Newton method, because the interior point method iterates towards the optimum from inside the feasible region, it avoids the difficulty of identifying the active constraint set.

Interior point methods fall into three classes: projective scaling, affine scaling, and path-following methods. The projective scaling method performs poorly on OPF problems and is rarely used in practice, whereas the affine scaling method and the primal-dual interior point algorithm are widely used. Because the dual affine scaling method is complicated to initialize with a feasible interior point and converges slowly near the optimum, its application to OPF problems is limited. The primal-dual interior point algorithm, by contrast, is currently the most studied interior point algorithm because of its rapid convergence, strong robustness, and insensitivity to the choice of initial values; it has been extended to quadratic programming and is being developed further for general nonlinear programming problems.
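The following sketch illustrates the basic interior-point idea with a logarithmic barrier rather than the primal-dual algorithm discussed above: the generator limits of the toy dispatch problem are folded into the objective as -mu*log(...) terms, and a sequence of equality-constrained barrier problems is solved by Newton's method while mu is driven towards zero, so every iterate stays strictly inside the generator limits. All data are invented, and the upper limit of unit 1 is chosen so that it binds.

```python
# Log-barrier illustration of the interior-point idea (not a primal-dual code).
import numpy as np

a, b = np.array([0.010, 0.015]), np.array([20.0, 18.0])
p_min, p_max = np.array([50.0, 50.0]), np.array([120.0, 200.0])
demand = 300.0

def barrier_grad(p, mu):              # gradient of cost plus barrier terms
    return 2 * a * p + b - mu / (p - p_min) + mu / (p_max - p)

def barrier_hess(p, mu):              # Hessian of cost plus barrier terms (diagonal)
    return np.diag(2 * a + mu / (p - p_min) ** 2 + mu / (p_max - p) ** 2)

p, lam = np.array([110.0, 190.0]), 0.0       # strictly interior starting point
e = np.ones((2, 1))
for mu in [10.0, 1.0, 0.1, 0.01, 0.001]:     # decreasing barrier parameter
    for _ in range(50):                      # Newton iterations at this mu
        r_stat = barrier_grad(p, mu) - lam * np.ones(2)
        r_bal = np.array([np.sum(p) - demand])
        if max(np.abs(r_stat).max(), abs(r_bal[0])) < 1e-8:
            break
        kkt = np.block([[barrier_hess(p, mu), -e], [e.T, np.zeros((1, 1))]])
        dz = np.linalg.solve(kkt, -np.concatenate([r_stat, r_bal]))
        alpha = 1.0                          # damp the step to stay strictly interior
        while np.any(p + alpha * dz[:2] <= p_min) or np.any(p + alpha * dz[:2] >= p_max):
            alpha *= 0.5
        p, lam = p + alpha * dz[:2], lam + alpha * dz[2]

print("dispatch:", np.round(p, 3), " strictly inside limits:",
      bool(np.all((p > p_min) & (p < p_max))))
```

Because the iterates never touch the limits, no active constraint set has to be identified, which is exactly the advantage over the Newton method noted above.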

1.4 Optimal Power Flow Decoupling Algorithm

The decoupled optimal power flow algorithm exploits the weak coupling between active and reactive power in the steady-state operation of a power system. At the level of the problem itself or of its model, the overall optimal power flow is decomposed into two sub-problems, active-power optimization and reactive-power optimization, which are solved by alternating iteration until a combined active/reactive optimum is reached; the two sub-problems may be solved with different optimization methods. Turning one large problem into two smaller sub-problems solved serially and iteratively saves memory and greatly improves computation speed. However, some constraints (such as branch power flow limits) involve both active and reactive variables; in such cases the optimal power flow should not be decoupled into two sub-problems, and the accuracy of this algorithm is then limited.
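A minimal sketch of the alternating idea, on an invented scalar "active" variable p and "reactive" variable q with a weak coupling term: each sweep freezes one block and optimizes the other, and with weak coupling a few sweeps suffice.

```python
# Alternating "active"/"reactive" optimization on an invented, weakly coupled cost.
import numpy as np
from scipy.optimize import minimize

def total_cost(p, q):
    active = 0.01 * (p - 100.0) ** 2       # stand-in active-power cost
    reactive = 0.05 * (q - 30.0) ** 2      # stand-in reactive/loss term
    coupling = 0.001 * p * q               # weak P-Q coupling
    return float(active + reactive + coupling)

p, q = 150.0, 0.0
for sweep in range(4):
    p = minimize(lambda x: total_cost(x[0], q), [p]).x[0]   # active sub-problem, q frozen
    q = minimize(lambda x: total_cost(p, x[0]), [q]).x[0]   # reactive sub-problem, p frozen
    print(f"sweep {sweep}: p = {p:.3f}, q = {q:.3f}, cost = {total_cost(p, q):.4f}")
```

With a strong coupling term the sweeps would converge slowly, or stop at a point a joint optimization could improve on, which is the accuracy limitation mentioned above.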

2 Intelligent optimization algorithms for optimal power flow

2.1 Genetic Algorithm

The genetic algorithm (GA) is an optimization algorithm that appeared in the 1980s and has developed rapidly in recent years. Its mechanism is inspired by selection and inheritance in natural biological evolution, and through the core operations of selection, crossover and mutation it realizes "survival of the fittest". Its main features are that it can start from multiple initial points and search along multiple paths towards a global or near-global optimum; it can easily handle mixed-integer and discrete problems; and it is an effective adaptive optimization method. When GA is applied to power flow optimization, the general steps are: a set of initial power flow solutions satisfying the various constraints is generated randomly; their quality is evaluated through the objective function; they are encoded and recombined through the genetic operations of selection, crossover and mutation; candidates with low fitness are discarded, and only those with high fitness pass their features on to the next generation of solutions. Eventually the solution corresponding to the best code string tends towards the optimum.
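The general GA steps just described (random initial population, fitness evaluation, selection, crossover, mutation) can be sketched for the same toy dispatch problem used earlier; the encoding, operators and parameters below are illustrative choices, not those of any of the referenced papers, and the power-balance equality is handled with a quadratic penalty in the fitness function:

```python
# Minimal real-coded GA sketch for a toy two-generator dispatch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
a, b = np.array([0.010, 0.015]), np.array([20.0, 18.0])
p_min, p_max = np.array([50.0, 50.0]), np.array([250.0, 200.0])
demand, penalty = 300.0, 1000.0

def fitness(pop):                               # lower is better: cost plus penalty
    cost = np.sum(a * pop**2 + b * pop, axis=1)
    imbalance = np.sum(pop, axis=1) - demand
    return cost + penalty * imbalance**2

pop = rng.uniform(p_min, p_max, size=(40, 2))   # random initial population
for gen in range(200):
    fit = fitness(pop)
    # tournament selection: the better of two random individuals becomes a parent
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
    # arithmetic crossover between consecutive parents
    mates = np.roll(parents, 1, axis=0)
    alpha = rng.uniform(0, 1, (len(pop), 1))
    children = alpha * parents + (1 - alpha) * mates
    # Gaussian mutation with rate 0.1, clipped back to the generator limits
    children += rng.normal(0, 2.0, children.shape) * (rng.uniform(size=children.shape) < 0.1)
    children = np.clip(children, p_min, p_max)
    children[0] = pop[np.argmin(fit)]           # elitism: keep the best individual
    pop = children

best = pop[np.argmin(fitness(pop))]
print("best dispatch:", np.round(best, 1), " imbalance:", round(float(np.sum(best) - demand), 3))
```

The printed imbalance shows the usual price of penalty handling: the equality is only approximately satisfied, and many fitness evaluations are needed, which matches the computational cost noted below.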

The advantage of genetic algorithms is their good global optimization capability, and the results are generally better than those of traditional optimization methods. Their disadvantage is the relatively large amount of computation and long computing time. Current research on genetic algorithms mainly focuses on two aspects: improving computation speed by improving how the objective function is evaluated, and improving overall convergence and optimization performance by improving the genetic operations.

In the research on genetic operations, reference [11] studied the influence of different operator parameters on the number of iterations and on the optimization results for a 103-node system, studied the effect of control variable constraints, and suggested that the solution space should be progressively reduced during the optimization process. Reference [12] studied several methods for improving the efficiency and accuracy of GA, showing that a GA with variable penalty factors and variable weight factors is the most effective for economic dispatch and best guarantees convergence accuracy, although it sacrifices some convergence time. Reference [13] used guided mutation to reduce the population size and improve computation speed.

2.2 Simulated Annealing Algorithm

The simulated annealing (SA) algorithm for optimal power flow is a random search algorithm based on principles from thermodynamics. Reference [14] applied mean field theory to the optimal power flow problem: the problem was first described as a mixed integer programming problem, an SA algorithm tailored to its characteristics was then proposed, and the effectiveness of the method for small power systems was verified on several examples. Reference [15] proposed a surrogate constraint algorithm for optimal power flow based on entropy theory, in which the large number of inequality constraints is handled through a single surrogate constraint inequality; this reduces the scale and dimension of the optimal power flow problem and is well suited to the SA algorithm at low temperatures. However, the surrogate constraint algorithm has two defects: it converges with difficulty when there are many binding inequality constraints, and when the initial point is not an interior point it may also fail to converge or may converge to an infeasible solution.
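For comparison, a minimal simulated-annealing sketch for the same toy dispatch problem (again illustrative only, and unrelated to the mean-field or surrogate-constraint formulations of references [14] and [15]): a random neighbour of the current dispatch is always accepted if it lowers the penalized cost, and otherwise accepted with probability exp(-delta/T), while the temperature T is lowered geometrically.

```python
# Minimal simulated-annealing sketch for a toy two-generator dispatch.
import math
import numpy as np

rng = np.random.default_rng(1)
a, b = np.array([0.010, 0.015]), np.array([20.0, 18.0])
p_min, p_max = np.array([50.0, 50.0]), np.array([250.0, 200.0])
demand, penalty = 300.0, 1000.0

def energy(p):                                   # cost plus power-balance penalty
    return float(np.sum(a * p**2 + b * p) + penalty * (np.sum(p) - demand) ** 2)

p = rng.uniform(p_min, p_max)                    # random starting dispatch
e = energy(p)
T = 1000.0
while T > 1e-3:
    for _ in range(50):                          # candidate moves per temperature level
        cand = np.clip(p + rng.normal(0, 2.0, 2), p_min, p_max)
        delta = energy(cand) - e
        if delta < 0 or rng.random() < math.exp(-delta / T):
            p, e = cand, e + delta               # accept the move
    T *= 0.95                                    # geometric cooling schedule

print("dispatch:", np.round(p, 1), " penalized cost:", round(e, 2))
```

The cooling schedule, move size and penalty weight are exactly the kind of empirically chosen control parameters whose tuning is listed as a drawback of these methods in Section 3.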

3 Comparison of various optimal power flow algorithms

This paper classifies and compares the existing algorithms mainly from the perspective of derivative-based versus derivative-free optimization.

Among the classical methods for optimal power flow calculation, the simplified gradient method, Newton method and interior point method are all derivative-based optimization methods. A common feature of modern methods such as evolutionary algorithms and simulated annealing is that they do not use the gradient as the main information for finding the optimum; they are derivative-free optimization methods.

The main advantages of the former are: ① the search direction can be determined from the derivative information of the objective function, so computation is fast; ② the algorithms are relatively mature and widely used, the analytical process is clear, and the results are highly credible. Their disadvantages are: ① they impose restrictions on the objective function and constraints, such as continuity and differentiability, which sometimes require simplification and approximation; ② the "curse of dimensionality" is difficult to overcome; ③ in many cases they fall into local minima or converge with difficulty near the optimal solution; ④ their handling of discrete control variables is unsatisfactory.

The main advantages of the latter are: ① independence from derivatives: the objective functions of many engineering optimization problems are not differentiable, and whereas the former methods must then resort to assumptions and approximations that affect the fidelity of the solution, derivative-free methods need no derivative information and rely only on repeated evaluation of the objective function; ② flexibility: since no differentiability is required, the complex objective functions needed for special applications can be used without much extra programming or computing time; ③ randomness: they escape local extrema easily and are a kind of global optimization method, especially suitable for large-scale nonlinear problems and for problems whose solution space is irregular; ④ inherent parallelism: they operate on a set of feasible solutions rather than a single one and follow multiple search tracks rather than a single one, and this inherent parallel processing greatly improves the speed of handling complex optimization problems and, when exploited, can partly compensate for their performance deficiencies.

Their disadvantages are: ① unstable performance: the same algorithm may behave differently on different instances of the same problem, which lowers the credibility of the results; ② because they operate probabilistically, they cannot guarantee the true optimum; the solution obtained is usually a near-optimal one, but its accuracy is sufficient for engineering needs; ③ some control parameters in the algorithm must be set empirically, which requires experimentation or expert experience. As for the decoupled optimal power flow algorithm, although it executes quickly, it is difficult to use when decoupling is not appropriate, so its scope of application and generality are limited.

Parallel optimal power flow algorithms use distributed processing and parallel computing to greatly improve execution efficiency and the ability to handle large problems, providing powerful help for solving large-scale optimal power flow problems. Other approaches to optimal power flow calculation are also useful explorations of the problem, but no generally accepted, satisfactory results have yet been achieved.

4 Conclusion

A great deal of research has been devoted to optimal power flow, and various algorithms have been proposed for different conditions. However, with the emergence of interconnected networks, real-time control, FACTS devices and the electricity market, new requirements have been placed on optimal power flow. Optimization algorithms should be designed from an overall perspective according to the characteristics of the OPF problem, with a reasonable optimization strategy. Designing optimal power flow algorithms around derivative-free modern optimization methods, following the idea of "multi-point randomized global search plus problem-oriented local optimization", combining other methods according to the characteristics of the problem, and making full use of modern computing technologies such as distributed processing and parallel computing, is a promising research direction. Therefore, in future research it is necessary to analyze the advantages and disadvantages of the various algorithms according to the actual situation and characteristics of the problem at hand, integrate different algorithms sensibly so as to exploit their strengths, and develop algorithms that compute quickly and converge reliably, so as to meet the needs of power system development under the new situation.

References

[1] Zhao Jinquan, et al. Study on strategies to improve the effectiveness of the optimal power flow Newton algorithm[J]. Proceedings of the CSEE, 1999, 19(12): 70-75.

[2] Zhao Jinquan, et al. A new method for processing discrete control variables in the Newton optimal power flow algorithm[J]. Automation of Electric Power Systems, 1999, 23(23): 37-40.

[3] Ponnambalam K, Quintana V H, Vannelli A. A Fast Algorithm for Power System Optimization Problems Using an Interior Point Method: 393-400.

[4] Granville S. Optimal Reactive Dispatch Through Interior Point Method[J]. IEEE Trans on Power Systems, 1994, 9(1): 136-146.

[5] Wu Y C, Debs A S, Marsten R E. A Direct Nonlinear Predictor-Corrector Primal-Dual Interior Point Algorithm for Optimal Power Flow[J]. IEEE Trans on Power Systems, 1994, 9(2): 876-883.

Hao Yuguo, Liu Guangyi, Yu Erkeng. A New OPF Algorithm Based on Karmarkar's Interior Point Method[J]. Proceedings of the CSEE, 1996, 16(6): 409-412.

[6] Momoh J A, Guo S X, Ogbuobiri E C, et al. The Quadratic Interior Point Method Solving Power System Optimization Problems[J]. IEEE Trans on Power Systems, 1994, 9(3): 1327-1336.

[7] Zhang Xiaoping, Chen Zhaohui. Security-constrained economic dispatch based on interior point method[J]. Automation of Electric Power Systems, 1997, 21(6): 27-29.

[8] Momoh J A, Zhu J Z. Improved Interior Point Method for OPF Problems[J]. IEEE Trans on Power Systems, 1999, 14(3): 1114-1120.

[9] Liu Mingbo, Chen Xuejun. Reactive power optimization algorithm for power system based on primal-dual affine scaling interior point method[J]. Power System Technology, 1998, 22(3): 33-36.

[10] Liu Mingbo, Chen Xuejun. Improved interior point algorithm for reactive power optimization of power system[J]. Automation of Electric Power Systems, 1998, 22(2): 33-36.

[11] Zhou Shuangxi, Yang Bin. Factors affecting the performance of genetic algorithms and improvement measures[J]. Automation of Electric Power Systems, 1996, 20(7): 24-27.

[12] Sheble G B, Brittig K. Refined Genetic Algorithm: Economic Dispatch Example[J]. IEEE Trans on Power Systems, 1995, 10(1).

[13] Meng Xiangpin, Liang Zhishan, Zhao Huanguan. Fast Synthetic Genetic Algorithm and Its Application to Optimal Control of Reactive Power Flow[J]. IEEE Trans on Power Systems, 1998, 13: 1454-1458.

[14] Chen L, Suzuki H, Katou K. Mean field theory for optimal power flow[J]. IEEE Trans on Power Systems, 1997, 12(4): 1481-1486.

[15] Chen L, Matoba S, Inabe H, Okabe T. Surrogate constraint method for optimal power flow[J]. IEEE Trans on Power Systems, 1998, 13(3): 1084-1089.

[16] Momoh J A, Dias L G, Guo S X, et al. Economic Operation and Planning of Multi-area Interconnected
