This paper proposes a new method for finding the global optimal solution of a nonlinear constrained optimization problem: using nonlinear complementarity functions, and repeatedly solving the nonlinear equations of the Kuhn-Tucker conditions while new constraints are successively added. Because the Kuhn-Tucker conditions are only necessary conditions for nonlinear constrained optimization, a solution obtained from them may not be the global optimal solution. For this reason, the paper first gives a method that limits the range of the global optimal solution by using prior knowledge of the optimization problem and continuously adding constraints. Simulation examples show that the proposed method and theory are effective and feasible.

With the over-exploitation of resources in today's world, resources have become increasingly scarce, and the effective use of existing resources has become a topic of worldwide concern. The effective use of resources is, in essence, an optimization problem, and practical optimization problems are almost always constrained.

There are three main approaches to constrained optimization problems. The first constructs an auxiliary (penalty or energy) function to transform the constrained problem into an unconstrained one; this involves both how to construct the function and how to find its optimal solution. Many scholars have worked in this direction and obtained good results [1]-[5], such as penalty function methods and the use of optimization algorithms such as GA to converge, globally or locally, to a point that satisfies the Kuhn-Tucker conditions. Judging from the results, this is simply another way of solving the system of Kuhn-Tucker equations. The second approach uses the constraints and the objective function to construct new feasible-solution exploration conditions, but what it ultimately finds is again a point satisfying the Kuhn-Tucker system, as in the QP methods of the literature [6], [7]. Whether one uses the first (energy function) approach or the second (feasible-region exploration) approach, the final goal is to find a point that satisfies the Kuhn-Tucker system of equations. The third approach therefore uses the Kuhn-Tucker conditions and nonlinear complementarity functions [11]-[13] directly to transform the constrained optimization problem into the problem of solving a system of nonlinear equations, and then applies existing methods for nonlinear systems, such as continuation (embedding) algorithms with large-range convergence [8]-[16].

However, because the Kuhn-Tucker conditions are only necessary conditions for nonlinear constrained optimization, their solution may not be the optimal solution of the constrained problem. This leads to a difficulty: on the one hand, for non-convex constrained optimization problems the global optimal solution is what matters; on the other hand, solving the system of equations yields only one solution, and it is usually not the global one. One could, of course, search for the global optimal solution by repeatedly choosing different initial values, but this takes a great deal of time. Another option is to construct a new objective function that turns the non-convex problem into a convex one, but this is often difficult.
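As a concrete illustration of the third approach (a minimal sketch, not the authors' implementation), the following Python code uses the Fischer-Burmeister complementarity function to turn the Kuhn-Tucker conditions of a small example problem into a square system of nonlinear equations, which is then handed to a standard solver. The example problem, all variable names, and the use of scipy.optimize.fsolve are assumptions made here for illustration.

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical example problem: minimize f(x) = (x0 - 2)^2 + (x1 - 1)^2
# subject to g(x) = x0^2 + x1^2 - 1 <= 0 (the unit disk).
def f(x):
    return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

def grad_f(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])

def g(x):
    return x[0]**2 + x[1]**2 - 1.0

def grad_g(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

def fischer_burmeister(a, b):
    # phi(a, b) = sqrt(a^2 + b^2) - a - b = 0  iff  a >= 0, b >= 0, a*b = 0,
    # so one smooth equation replaces the three complementarity conditions.
    return np.sqrt(a * a + b * b) - a - b

def kkt_system(z):
    x, lam = z[:2], z[2]
    # Stationarity: grad f(x) + lam * grad g(x) = 0
    stationarity = grad_f(x) + lam * grad_g(x)
    # Complementarity: lam >= 0, -g(x) >= 0, lam * g(x) = 0
    complementarity = fischer_burmeister(lam, -g(x))
    return np.append(stationarity, complementarity)

z0 = np.array([0.0, 0.0, 1.0])            # initial guess for (x, lambda)
z_star = fsolve(kkt_system, z0)
print("x* =", z_star[:2], "lambda* =", z_star[2], "f(x*) =", f(z_star[:2]))
```

Any nonlinear complementarity function with the property phi(a, b) = 0 iff a >= 0, b >= 0, ab = 0 could be substituted for the Fischer-Burmeister function here, and any solver for nonlinear systems could replace fsolve.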
For this reason, this paper attacks the problem from a different direction: prior information is added continuously to limit the range in which the global optimal solution can lie, until the global optimal solution itself is obtained. The restriction is carried out in one dimension: the multi-dimensional feasible region is projected onto a scalar-valued function and partitioned into regions according to that function's value, for example by adding new constraints on the value of the objective function, which yields new Kuhn-Tucker conditions and the corresponding systems of nonlinear equations. For large-scale optimization problems this hardly increases the amount of computation, so the time required depends mainly on the number of one-dimensional subdivisions of the chosen function and on the algorithm used to solve the nonlinear equations; a sketch of the procedure is given below. If the approximate range of the function's values is known, the number of times the nonlinear equations must be solved can be greatly reduced. Moreover, the speed of some existing methods for solving nonlinear equations already meets practical needs, and as the theory and technology for solving nonlinear equations develop, these algorithms will only become faster. In this way, the method can obtain the global optimal solution of the constrained problem, and the time required can be very short.
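To make the one-dimensional restriction concrete, the sketch below continues the previous block (reusing f, g, grad_f, grad_g, and fischer_burmeister; the threshold sweep and all names are again illustrative assumptions, not the paper's algorithm). The added constraint f(x) <= c receives its own multiplier mu, the enlarged Kuhn-Tucker system is re-solved for a decreasing sequence of thresholds c, and the feasible Kuhn-Tucker point with the smallest objective value is kept; once the solver no longer converges, no Kuhn-Tucker point lies below the last threshold.

```python
import numpy as np
from scipy.optimize import fsolve

# Continuation of the previous sketch: f, g, grad_f, grad_g and
# fischer_burmeister are assumed to be defined as above.
def kkt_system_restricted(z, c):
    x, lam, mu = z[:2], z[2], z[3]
    # The added constraint f(x) - c <= 0 has gradient grad_f(x), so
    # stationarity becomes (1 + mu) * grad_f(x) + lam * grad_g(x) = 0.
    stationarity = (1.0 + mu) * grad_f(x) + lam * grad_g(x)
    comp_g = fischer_burmeister(lam, -g(x))      # original constraint
    comp_c = fischer_burmeister(mu, c - f(x))    # added level-set constraint
    return np.concatenate([stationarity, [comp_g, comp_c]])

def sweep(c_values, z0):
    """Solve the enlarged KKT system for each threshold c, keeping the
    feasible Kuhn-Tucker point with the smallest objective value."""
    best_x, best_val = None, np.inf
    for c in sorted(c_values, reverse=True):     # from loose to tight thresholds
        z, info, ier, msg = fsolve(kkt_system_restricted, z0,
                                   args=(c,), full_output=True)
        if ier != 1:                             # no KKT point below this threshold
            break
        x = z[:2]
        if g(x) <= 1e-8 and f(x) < best_val:
            best_x, best_val = x, f(x)
        z0 = z                                   # warm-start the next, tighter solve
    return best_x, best_val

x_best, f_best = sweep(np.linspace(5.0, 0.0, 11), np.array([0.0, 0.0, 1.0, 0.0]))
print("global candidate:", x_best, "objective:", f_best)
```

Knowing the approximate range of f in advance shortens the list of thresholds, which is exactly the reduction in the number of nonlinear solves described above.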