What is an automatic parking system? Analysis of automatic parking path planning and tracking technology

Publisher: Tianran2021 | Last updated: 2023-03-06 | Source: elecfans

Reinforcement learning algorithms can explore autonomously to collect samples, but because the policy is largely random early in training and most samples are invalid, the contribution of successful samples to the network weight updates is easily drowned out, resulting in low sample utilization or even failure to converge. Several remedies have been reported. For a fixed target position and starting positions with different heading angles, control sequences recorded from manual parking can be used to pre-train the agent (in this article the terms vehicle, agent, and algorithm model are used interchangeably), so that the agent obtains high-return samples without exploration in the early stage. Failed and successful exploration experiences can be stored separately, with a sampling ratio that changes with the number of training episodes, so that the agent always learns from some successful samples. The Monte Carlo tree search method used in AlphaGo can generate parking data whose quality is scored by the reward function, so that only the best data are selected for training and the agent is shielded from the low-quality data produced by random exploration. The TD error can be used as the priority of each sample, with samples stored in a SumTree data structure; priority-based sampling makes the samples that contribute more to the gradient computation more likely to be drawn. For decision-making and control in high-speed intelligent driving, the exploration strategy can be split into a lane-keeping strategy and an overtaking/obstacle-avoidance strategy, and a correction term based on the improved policy is added to the original action to reduce invalid exploration.
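As a concrete illustration of the priority-sampling idea above, the sketch below shows a minimal SumTree-backed replay structure in Python; the class name, method names, and the example priorities are illustrative assumptions rather than details taken from the works cited.

```python
import random

import numpy as np


class SumTree:
    """Binary tree whose leaves hold sample priorities; internal nodes hold their sums."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.tree = np.zeros(2 * capacity - 1)   # priorities (leaves) and partial sums (internal nodes)
        self.data = [None] * capacity            # stored transitions (s, a, r, s')
        self.write = 0

    def add(self, priority, transition):
        idx = self.write + self.capacity - 1     # leaf index for the next slot
        self.data[self.write] = transition
        self.update(idx, priority)
        self.write = (self.write + 1) % self.capacity

    def update(self, idx, priority):
        change = priority - self.tree[idx]
        self.tree[idx] = priority
        while idx != 0:                          # propagate the change up to the root
            idx = (idx - 1) // 2
            self.tree[idx] += change

    def sample(self, value):
        """Walk down the tree: samples with larger priority (TD error) are hit more often."""
        idx = 0
        while 2 * idx + 1 < len(self.tree):
            left, right = 2 * idx + 1, 2 * idx + 2
            if value <= self.tree[left]:
                idx = left
            else:
                value -= self.tree[left]
                idx = right
        return idx, self.tree[idx], self.data[idx - self.capacity + 1]


# Usage: priority = |TD error| + small epsilon; draw a value uniformly over the total priority
tree = SumTree(capacity=4)
for i in range(4):
    tree.add(priority=abs(np.random.randn()) + 1e-3, transition=("s%d" % i, "a", 0.0, "s'"))
idx, p, transition = tree.sample(random.uniform(0, tree.tree[0]))
```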

In reinforcement learning, the design of the reward function directly determines whether the model can converge. For robot path planning, a reward function that contains only a collision penalty and a bonus for reaching the goal is a sparse-reward problem. When agents are trained with sparse and dense rewards respectively, the results show that the agent trained with dense rewards achieves a higher parking success rate. For the path planning of indoor mobile robots, a penalty of -0.05 has been applied when the specified time is exceeded, to prevent the robot from staying in place out of excessive caution. The reward can also be set to the negative of the distance between the vehicle's current position and the target position, which guides the vehicle toward the target while urging the agent to arrive as quickly as possible. In addition, some researchers have improved the convergence of deep reinforcement learning algorithms from the perspective of the training procedure. Based on curriculum learning, convergence can be accelerated by gradually adding obstacles during training. Curriculum learning, proposed by leading researchers in machine learning, essentially sets a series of lessons from easy to difficult for the model based on prior knowledge in order to accelerate convergence. A fixed-heading-angle discretized training scheme follows the same idea: the condition with a heading angle of 30° is trained first and, after convergence, the range is gradually expanded to initial heading angles of 0°~90°, which coincides with learning the curriculum from easy to difficult.
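The dense-reward design discussed above can be sketched as follows; the weighting coefficients, the use of a 0.05 penalty per step, and the terminal bonus/penalty values are illustrative assumptions rather than the exact values used in the cited studies.

```python
import math


def parking_reward(x, y, theta, target, collided, reached,
                   step_penalty=0.05, heading_weight=0.1):
    """Dense reward: negative distance to the target pose, plus terminal terms.

    `target` is (x_t, y_t, theta_t). All weights here are illustrative assumptions.
    """
    x_t, y_t, theta_t = target
    dist = math.hypot(x - x_t, y - y_t)          # distance term guides the vehicle toward the slot
    heading_err = abs(theta - theta_t)           # heading term encourages the correct final orientation

    reward = -dist - heading_weight * heading_err - step_penalty  # small penalty every step
    if collided:
        reward -= 50.0                           # collision penalty (the sparse component)
    if reached:
        reward += 100.0                          # bonus for completing the parking maneuver
    return reward


# Example: vehicle 1 m away from the slot centre, no collision, not yet parked
r = parking_reward(1.0, 0.0, 0.0, target=(0.0, 0.0, 0.0), collided=False, reached=False)
```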

In summary, automatic parking path planning based on deep reinforcement learning still has shortcomings. During training, the learning efficiency of the agent is low and convergence is slow. Reinforcement learning requires the agent to interact with the environment under its current policy to obtain the samples needed for learning, and the quality of those samples in turn affects the policy update; because the two are interdependent, the algorithm easily falls into local optima. Compared with robots, cars are nonholonomic systems with coupled lateral and longitudinal motion, and the parking space is small, so for given initial conditions the feasible parking paths and control sequences are very sparse. To reduce the learning difficulty, a common approach is to fix the starting pose during training and relax the parking space constraints, but the resulting agent has weaker planning ability than traditional planning methods and cannot meet the practical requirements of automatic parking. Effectively addressing these shortcomings would give a strong boost to automatic parking methods based on deep reinforcement learning. In the following, the automatic parking motion planning method based on deep reinforcement learning is first introduced and improved with convergence and stability in mind; the agent is then trained on a purpose-built simulation platform, and its performance is analyzed and evaluated from multiple angles such as robustness, planning ability, and safety.

1 Establishing vehicle dynamics model

1.1 Deep reinforcement learning algorithm model

Reinforcement learning is formulated as a Markov decision process: based on the current state s, the agent selects action a, and the environment returns reward r and the next state s′; through continuous trials the agent learns the optimal policy. The deep deterministic policy gradient (DDPG) algorithm, built on the Actor-Critic framework, extends the deterministic policy gradient (DPG) with key elements of DQN (deep Q-network), including dual networks and an experience replay pool, and has achieved good results on many problems.

In traditional reinforcement learning, value-based methods use tables to record all action values, but in a continuous state space the number of states is enormous and tabular methods suffer from the curse of dimensionality. Therefore, a neural network is used to approximate the action value Qπ(s, a):

$$Q(s, a; w) \approx Q_{\pi}(s, a) \tag{1}$$

Where w represents the weight of the neural network.

Similarly, parking requires very high control accuracy, which discrete actions cannot provide. Therefore, the policy is also approximated by a neural network (the actor network), as shown in equation (2); Ornstein-Uhlenbeck noise (OU noise) is added to the network output to increase exploration in the early stages of training. This equation describes a mapping from the state space to the action space and outputs the best action for a given state.

$$a_t = \mu(s_t; w^{\mu}) + \mathcal{N}_t \tag{2}$$

where $\mu$ is the actor network with weights $w^{\mu}$ and $\mathcal{N}_t$ is the OU noise.
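For reference, a minimal implementation of the OU noise process that is added to the actor output during early training; the parameters theta, sigma, and dt are common defaults and are assumptions, not values given in the article.

```python
import numpy as np


class OUNoise:
    """Temporally correlated noise: dx = theta*(mu - x)*dt + sigma*sqrt(dt)*N(0, 1)."""

    def __init__(self, action_dim, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2):
        self.mu, self.theta, self.sigma, self.dt = mu, theta, sigma, dt
        self.x = np.ones(action_dim) * mu

    def reset(self):
        self.x = np.ones_like(self.x) * self.mu

    def sample(self):
        dx = self.theta * (self.mu - self.x) * self.dt \
             + self.sigma * np.sqrt(self.dt) * np.random.randn(*self.x.shape)
        self.x = self.x + dx
        return self.x


noise = OUNoise(action_dim=1)
# a = actor(s) + noise.sample()   # exploratory action in the early stage of training
```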

In DQN, the loss function of the critic network is defined as follows:

$$L(w^{Q}) = \mathbb{E}\Big[\big(r + \gamma\, Q\big(s', \mu(s'; w^{\mu}); w^{Q}\big) - Q(s, a; w^{Q})\big)^{2}\Big] \tag{3}$$

From formula (3), we can see that the gradient update of the critic network depends on the action value calculated by the actor network and the target Q value calculated by its own network, while the gradient update of the actor network depends on the Q value calculated by the critic network. The correlation between the two networks and between the target value and the current value is too strong, which leads to the instability of the algorithm. In order to reduce this correlation, a copy is created for the actor network and the critic network, namely the target critic network and the target actor network, to calculate the target action and target value. The improved current critic network and current actor network loss function are shown in formula (4), where the current critic network gradient update is changed to depend on the action value calculated by the target actor network and the target Q value calculated by the target critic network.

$$
\begin{aligned}
y &= r + \gamma\, Q'\big(s', \mu'(s'; w^{\mu'}); w^{Q'}\big) \\
L(w^{Q}) &= \mathbb{E}\big[\big(y - Q(s, a; w^{Q})\big)^{2}\big] \\
\nabla_{w^{\mu}} J &= \mathbb{E}\Big[\nabla_{a} Q(s, a; w^{Q})\big|_{a=\mu(s)}\, \nabla_{w^{\mu}} \mu(s; w^{\mu})\Big]
\end{aligned} \tag{4}
$$

where $Q'$ and $\mu'$ denote the target critic and target actor networks with weights $w^{Q'}$ and $w^{\mu'}$.
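A minimal PyTorch sketch of the update in equation (4), assuming small fully connected networks and a dummy minibatch in place of samples drawn from the experience pool; the layer sizes, learning rates, and discount factor are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM, GAMMA = 5, 1, 0.99     # state (x, y, theta, sw, d), one steering action

def make_net(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

actor,  actor_target  = make_net(STATE_DIM, ACTION_DIM), make_net(STATE_DIM, ACTION_DIM)
critic, critic_target = make_net(STATE_DIM + ACTION_DIM, 1), make_net(STATE_DIM + ACTION_DIM, 1)
actor_target.load_state_dict(actor.state_dict())      # target nets start as copies of the current nets
critic_target.load_state_dict(critic.state_dict())
actor_optim  = torch.optim.Adam(actor.parameters(),  lr=1e-4)
critic_optim = torch.optim.Adam(critic.parameters(), lr=1e-3)

# Dummy minibatch (s, a, r, s') standing in for samples drawn from the experience pool
s, a = torch.randn(32, STATE_DIM), torch.randn(32, ACTION_DIM)
r, s_next = torch.randn(32, 1), torch.randn(32, STATE_DIM)

with torch.no_grad():
    a_next = actor_target(s_next)                                       # target actor: next action
    y = r + GAMMA * critic_target(torch.cat([s_next, a_next], dim=1))   # target Q value, as in (4)

critic_loss = F.mse_loss(critic(torch.cat([s, a], dim=1)), y)           # current critic regresses toward y
critic_optim.zero_grad()
critic_loss.backward()
critic_optim.step()

actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()            # actor ascends the critic's Q estimate
actor_optim.zero_grad()
actor_loss.backward()
actor_optim.step()
```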

The target network is updated by slowly tracking the current network (soft update), as shown in equation (5), where α is the current network weight and α′ is the target network weight. The target value can be regarded as constant in the short term, similar to the sample label in supervised learning, which greatly improves the stability of learning.

$$\alpha' \leftarrow \tau\,\alpha + (1 - \tau)\,\alpha', \qquad \tau \ll 1 \tag{5}$$
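A short sketch of the soft update in equation (5); the value tau = 0.005 is a commonly used default and is an assumption, not a value given in the article.

```python
import torch.nn as nn


def soft_update(target_net, current_net, tau=0.005):
    """Slowly track the current network: w' <- tau * w + (1 - tau) * w', as in equation (5)."""
    for tgt, cur in zip(target_net.parameters(), current_net.parameters()):
        tgt.data.copy_(tau * cur.data + (1.0 - tau) * tgt.data)


# Usage with any pair of identically shaped networks (e.g. critic and target critic)
net, net_target = nn.Linear(5, 1), nn.Linear(5, 1)
soft_update(net_target, net)
```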

1.2 Vehicle kinematic model

Under low-speed parking conditions, tire side slip is neglected, and the nonlinear state-space model of the vehicle is given by Equation (6), where x and y are the coordinates of the center of the vehicle's rear axle, the heading angle θ is the angle between the vehicle's longitudinal axis and the x-axis, and (x, y, θ) is the vehicle's pose; v is the speed of the rear-axle midpoint, L is the wheelbase, and δ is the front-wheel steering angle.

$$\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} v\cos\theta \\ v\sin\theta \\ \dfrac{v\tan\delta}{L} \end{bmatrix} \tag{6}$$
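A minimal Euler-integration sketch of the kinematic model in equation (6); the speed, wheelbase, steering angle, and step length in the usage example are illustrative values only.

```python
import math


def kinematic_step(x, y, theta, v, delta, L, dt):
    """One Euler step of the rear-axle bicycle model in equation (6).

    x, y: rear-axle centre position; theta: heading angle; v: rear-axle speed;
    delta: front-wheel steering angle; L: wheelbase; dt: integration step length.
    """
    x_next = x + v * math.cos(theta) * dt
    y_next = y + v * math.sin(theta) * dt
    theta_next = theta + (v * math.tan(delta) / L) * dt
    return x_next, y_next, theta_next


# Example: constant reversing speed of -0.5 m/s, wheelbase 2.7 m (illustrative values)
pose = (0.0, 0.0, 0.0)
pose = kinematic_step(*pose, v=-0.5, delta=math.radians(20), L=2.7, dt=0.1)
```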

1.3 Definition of reinforcement learning elements

The state of the vehicle at a certain moment needs to be clearly distinguishable and able to characterize the relationship between the vehicle and the environment, preferably related to the control quantity. The vehicle's x, y coordinates, heading angle θ, steering wheel angle sw, and the minimum distance d between the vehicle body and surrounding obstacles are selected as the state:

$$s = \left(x,\; y,\; \theta,\; sw,\; d\right) \tag{7}$$

When parking, lateral control matters more than longitudinal control, so the longitudinal speed is set to a constant value. Action a is defined as the target steering wheel angle sw_target at the next moment, with a range of [-540°, 540°]. At the same time, to ensure ride comfort and avoid excessive changes in the steering wheel angle, the change in steering wheel angle is limited to 20°/Δt when it is input into the vehicle kinematic model:

$$sw_{t+1} = sw_{t} + \operatorname{clip}\big(sw_{\text{target}} - sw_{t},\; -20^{\circ},\; 20^{\circ}\big) \tag{8}$$
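A small sketch of the action definition and the steering-rate limit in equation (8), assuming a simple clipping rule; the function name and the example values are illustrative.

```python
def limit_steering(sw_current, sw_target, max_step=20.0):
    """Clamp the commanded steering-wheel angle change to 20 degrees per time step (equation (8))."""
    sw_target = max(-540.0, min(540.0, sw_target))            # action range [-540 deg, 540 deg]
    delta = max(-max_step, min(max_step, sw_target - sw_current))
    return sw_current + delta


# Example: a large jump in the command is smoothed over several steps (20, 40, 60, ...)
sw = 0.0
for _ in range(3):
    sw = limit_steering(sw, 300.0)
```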

The automatic parking simulation platform was built on the PyTorch framework; its algorithm block diagram is shown in Figure 1. The framework consists of three parts. The first is the interaction between the agent and the environment: the agent (i.e., the vehicle) determines the target steering wheel angle a from the current vehicle state s, OU noise is added, the result is fed into the vehicle kinematic model, and the next state s′ is computed and returned to the agent; this repeats until the vehicle collides with an obstacle or completes parking. The second is the storage of samples in the experience pool: after each interaction, the reward function computes the reward r from the next state s′, and the tuple (s, a, r, s′) is stored in the experience pool. The third is the training of the agent: a batch of data is randomly sampled from the experience pool, the loss functions of the current critic and current actor networks are computed, stochastic gradient descent is performed, and the parameters of the target critic and target actor networks are updated by soft update.
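The three-part loop described above can be sketched as follows; env_step, reward_fn, policy, and the episode and step counts are placeholder assumptions standing in for the kinematic model, reward function, and actor network of the article.

```python
import random
from collections import deque

import numpy as np


# Placeholders standing in for the kinematic model, reward function, and actor network
def policy(s):
    return np.random.uniform(-540.0, 540.0)          # actor output: target steering wheel angle


def env_step(s, a):
    return s + np.random.randn(*s.shape) * 0.01      # stand-in for the kinematic model update


def reward_fn(s_next):
    return -float(np.linalg.norm(s_next[:2]))        # stand-in for the reward computed from s'


replay = deque(maxlen=100_000)                       # experience pool of (s, a, r, s') tuples
BATCH = 32

for episode in range(10):
    s = np.zeros(5)                                  # initial state (x, y, theta, sw, d)
    for t in range(200):
        a = policy(s)                                # actor output (plus OU noise in practice)
        s_next = env_step(s, a)                      # kinematic model returns the next state
        r = reward_fn(s_next)                        # reward computed from the next state s'
        replay.append((s, a, r, s_next))             # store the transition in the experience pool
        if len(replay) >= BATCH:
            batch = random.sample(list(replay), BATCH)   # random minibatch for the DDPG update
            # update the critic/actor on `batch`, then soft-update the target networks (Section 1.1)
        s = s_next
```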
