The overall design concept and function packages of the ROS Navigation Stack

Publisher: DelightWish123 | Last updated: 2023-02-01 | Source: 深蓝AI | Author: Lemontree

Preface

The Navigation Stack is a set of two-dimensional navigation packages provided by ROS. It takes odometry data, sensor information, and a goal pose as input, and outputs safe velocity commands that drive the robot to the goal state.



The ROS Navigation Stack is a good reference for the navigation planning of mobile robots. By implementing the interfaces provided by the package set, you can also easily apply it to your own mobile robot. This article explains the design ideas behind the ROS Navigation Stack and walks through each function package.

01 Overall design

The architecture diagram on the ROS wiki shows the overall design of the ROS Navigation Stack clearly: the entire package set is centered on move_base. Odometry information, sensor data, localization, the map, and the goal point are fed into move_base, which outputs velocity commands after planning. move_base includes three key parts: global_planner, local_planner, and recovery_behaviors.

These three parts are implemented as plug-ins, and the plug-in mechanism makes it easy to switch between planners implemented with different algorithms. Recovery behaviors are triggered when the robot enters an abnormal state during movement, with the goal of helping it escape that state. In addition, move_base maintains a global_costmap (global cost map) and a local_costmap (local cost map); the planners perform navigation planning on these cost maps. Let's take a closer look at each of the parts mentioned above.

02 odometry

In simple terms, the role of odometry is to estimate the distance and speed of the robot's movement. Reading the source code shows that odometry information serves two purposes in the ROS Navigation Stack. First, it is provided to the local planner, which uses the odometry velocity when selecting the optimal path and when deciding whether the robot should stop. Second, the estimated pose is used for localization.

Odometry information is generally obtained from the wheels of the robot chassis. Depending on the robot, you can also use visual odometry, or fuse wheel odometry and IMU data with an extended Kalman filter to obtain a more accurate pose estimate. The message type nav_msgs/Odometry contains the robot's pose and velocity together with their covariances.
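As a rough sketch of how wheel odometry works (a simplified dead-reckoning model with made-up parameters, not the actual Navigation Stack code), the pose can be integrated from wheel displacements:

```python
import math

def update_odometry(x, y, theta, d_left, d_right, wheel_base):
    """Dead-reckon a differential-drive pose from wheel displacements.

    d_left / d_right are the distances each wheel travelled since the
    last update; wheel_base is the distance between the two wheels.
    """
    d_center = (d_left + d_right) / 2.0        # forward motion of the chassis
    d_theta = (d_right - d_left) / wheel_base  # change in heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta + math.pi) % (2.0 * math.pi) - math.pi
    return x, y, theta

# Drive straight for 1 m along the x axis.
pose = update_odometry(0.0, 0.0, 0.0, d_left=1.0, d_right=1.0, wheel_base=0.5)
print(pose)  # → (1.0, 0.0, 0.0)
```

Because each step only accumulates wheel motion, small errors (slip, quantization) build up over time, which is exactly the drift that localization later has to correct.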

nav_msgs/Odometry.msg
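The message is defined as:

```
# Estimated pose and velocity of the robot, with covariances
std_msgs/Header header                   # stamp and frame (usually odom)
string child_frame_id                    # frame of the twist (usually base_link)
geometry_msgs/PoseWithCovariance pose    # pose in header.frame_id
geometry_msgs/TwistWithCovariance twist  # velocity in child_frame_id
```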

03 sensors

Sensor data generally comes from lidar, IMUs, and depth cameras, and is used for localization and obstacle avoidance. Using a sensor requires setting the coordinate transformation between the sensor frame and the robot frame, commonly called the TF transform; this expresses the relationship between the environment perceived by the sensor and the robot. If the amcl algorithm is used, lidar data is matched against the static map to correct the robot's pose and obtain a more accurate localization.

Lidar can also sense the positions of obstacles in the environment, which are added to the cost map so they can be avoided. The specific sensors used depend on the robot platform; in general, fusing more sensor modalities yields better localization and obstacle avoidance.

04 tf

tf is a package that lets users track multiple reference frames over time. It uses a tree data structure to cache and maintain the time-stamped coordinate transformations between frames, and it can transform data such as points and vectors between any two frames at any time.

Robot systems usually have many three-dimensional reference frames that change over time, such as the world frame and the robot frame, and TF tracks them all. To run autonomous navigation on the ROS Navigation Stack, a complete TF tree must be maintained: map->odom->base_link->sensor_link. What we call localization is in fact the process of maintaining the map->base_link relationship: once the TF tree records the transform between the robot frame and the map frame, the robot's position in the map is known, and once it records the transform between the sensor frame and the robot frame, the relationship between perceived data and the robot is known.

[Figure: TF tree linking the lidar to the robot chassis]

The figure gives an intuitive sense of what a tf transform is. The tf tree manages the coordinate transformation between the lidar and the robot chassis: when the lidar senses an obstacle at some position, the tf transform yields the obstacle's position relative to the chassis.
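To make this concrete, here is a minimal 2D sketch in plain Python (not the tf API; all frame values are made-up examples): composing the chain map->odom->base_link yields the robot's pose in the map, and a point sensed in the lidar frame can be mapped into the chassis frame the same way.

```python
import math

def compose(a, b):
    """Compose two 2D transforms (x, y, theta): apply a, then b."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def apply_transform(t, point):
    """Map a point from the child frame of t into its parent frame."""
    tx, ty, tt = t
    px, py = point
    return (tx + px * math.cos(tt) - py * math.sin(tt),
            ty + px * math.sin(tt) + py * math.cos(tt))

map_to_odom   = (0.2, 0.0, 0.0)          # drift correction from localization
odom_to_base  = (1.0, 2.0, math.pi / 2)  # odometry estimate
base_to_lidar = (0.1, 0.0, 0.0)          # lidar mounted 10 cm ahead of center

# Robot pose in the map frame: walk down the tf tree.
map_to_base = compose(map_to_odom, odom_to_base)

# An obstacle seen 3 m in front of the lidar, expressed in the chassis frame.
obstacle_in_base = apply_transform(base_to_lidar, (3.0, 0.0))
print(map_to_base)       # → (1.2, 2.0, pi/2)
print(obstacle_in_base)  # → (3.1, 0.0)
```

The real tf library does the same composition in 3D with quaternions, and additionally interpolates transforms in time.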

05 map_server

map_server is optional in the ROS Navigation Stack; its main function is to provide maps for robot navigation. Maps can be provided in two ways: in real time through SLAM, or as maps built and saved in advance by SLAM (or created by other means). Commonly used SLAM algorithms include gmapping and hector_slam.

Generally, in relatively regular scenes, a high-precision map can be built and given to the robot, which improves localization and planning. To provide a real-time map through SLAM, the map needs to be published as a topic. The map provided by map_server is in pgm format; by loading a yaml configuration file, the map is published into the system as a topic. In the yaml file you can configure the map's resolution, origin, and occupied/free thresholds. The yaml configuration file has the following content:
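A typical map yaml file looks like this (the values here are illustrative):

```yaml
image: map.pgm           # the map image file
resolution: 0.05         # meters per pixel
origin: [0.0, 0.0, 0.0]  # (x, y, yaw) of the lower-left pixel in the map frame
occupied_thresh: 0.65    # occupancy probability above this counts as occupied
free_thresh: 0.196       # occupancy probability below this counts as free
negate: 0                # whether to invert the black/occupied, white/free semantics
```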

The default reference frame of the map is map. The occupancy probability is occ = (255 - color_avg) / 255.0, where color_avg is the average RGB value of the pixel.

06 amcl (Adaptive Monte Carlo Localization)

amcl is the only localization algorithm specified in the ROS Navigation Stack. Its full name is Adaptive Monte Carlo Localization, a probabilistic localization system for robots moving in a two-dimensional environment. Simply put, it scatters particles in the global map, each of which can be understood as a possible position of the robot.

Particles are scored against an evaluation criterion, such as how well the lidar data matches the map; the higher the score, the more likely the robot is at that location. After resampling, the particles that survive are those with high scores.

After several rounds of resampling, the particles concentrate where the robot is most likely to be; this is called particle convergence. The "adaptive" part can be understood simply as increasing or decreasing the number of particles based on their average score or on whether they have converged, which effectively addresses both the kidnapped-robot problem and the drawbacks of a fixed particle count.
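The scatter–score–resample loop can be sketched as follows (a toy 1D particle filter, not the actual amcl implementation; the Gaussian measurement model here is a stand-in for lidar-to-map matching):

```python
import math
import random

def resample_step(particles, measurement, sigma=1.0, rng=random):
    """Weight particles by how well they explain the measurement, then resample."""
    weights = [math.exp(-((p - measurement) ** 2) / (2 * sigma ** 2))
               for p in particles]
    return rng.choices(particles, weights=weights, k=len(particles))

rng = random.Random(0)
particles = [rng.uniform(0.0, 10.0) for _ in range(1000)]  # scatter particles
true_position = 5.0

for _ in range(3):  # several rounds of scoring and resampling
    particles = resample_step(particles, true_position, rng=rng)

estimate = sum(particles) / len(particles)
print(round(estimate, 1))  # the particle mean converges near 5.0
```

The real amcl additionally applies a motion model between updates and adapts the particle count (KLD sampling) instead of keeping it fixed.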

The red particle clusters in the figure are the particles amcl scattered in the global map. The clusters are concentrated around the given starting pose at first, while still fairly spread out; as the robot moves, they gradually converge, and the localization is quite good.

[Figure: amcl particle cloud converging as the robot moves]

Within the ROS Navigation Stack, amcl's role is to output the map->odom tf transform, compensating for the drift error of the odometry. It requires that the robot's localization system already provide an odometry pose estimate, i.e. the odom->base_link transform, along with an initial pose and sensor input.
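The published correction is simply map->odom = map->base_link composed with the inverse of odom->base_link. A minimal 2D sketch with illustrative numbers:

```python
import math

def compose(a, b):
    """Compose two 2D transforms (x, y, theta): apply a, then b."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(t):
    """Invert a 2D transform (x, y, theta)."""
    tx, ty, tt = t
    return (-tx * math.cos(tt) - ty * math.sin(tt),
             tx * math.sin(tt) - ty * math.cos(tt),
            -tt)

odom_to_base = (4.0, 0.0, 0.0)  # drifting odometry estimate
map_to_base  = (4.5, 0.3, 0.0)  # amcl's corrected pose from particle filtering

# amcl publishes the transform that reconciles the two:
map_to_odom = compose(map_to_base, invert(odom_to_base))
print(map_to_odom)  # → (0.5, 0.3, 0.0)
```

Composing map->odom with odom->base_link then recovers amcl's corrected pose, which is why publishing this single transform is enough to keep the TF tree consistent.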

07 costmap_2d (cost map)

The costmap_2d package provides a two-dimensional cost map implementation: it takes sensor data from the environment, builds a 2D or 3D occupancy grid, and derives a 2D cost map from the occupancy grid and a user-defined inflation radius. The package also supports initializing cost maps from map_server, cost maps based on rolling windows, and subscribing to and configuring sensor topics.

In the ROS Navigation Stack, the cost map is split into a global cost map and a local cost map. The global cost map is initialized from map_server (the Static Map Layer), while the local cost map is based on a rolling window. The cost map also includes an Obstacle Map Layer and an Inflation Layer, and user-defined layers can be added as plug-ins when the application requires. The obstacle layer adds obstacles sensed by the sensors to the cost map. During planning, the robot is treated as a point rather than its actual shape, so the inflation layer is needed to help ensure the planned path does not bring the robot into collision with obstacles.
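A simplified inflation pass over an occupancy grid might look like this (illustrative only; the real costmap_2d assigns decaying costs with a priority-queue expansion rather than a single binary band):

```python
def inflate(grid, radius):
    """Mark every free cell within `radius` (Euclidean, in cells) of an
    obstacle as inflated. Cell values: 0 = free, 1 = inflated, 2 = obstacle."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    obstacles = [(r, c) for r in range(rows) for c in range(cols)
                 if grid[r][c] == 2]
    for r in range(rows):
        for c in range(cols):
            if out[r][c] != 0:
                continue  # already an obstacle
            for (orow, ocol) in obstacles:
                if (r - orow) ** 2 + (c - ocol) ** 2 <= radius ** 2:
                    out[r][c] = 1
                    break
    return out

grid = [[0, 0, 0, 0],
        [0, 0, 2, 0],
        [0, 0, 0, 0]]
for row in inflate(grid, 1):
    print(row)
# → [0, 0, 1, 0]
#   [0, 1, 2, 1]
#   [0, 0, 1, 0]
```

A planner that forbids the robot's center from entering inflated cells keeps the (point-approximated) robot at least one radius away from every obstacle.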

[Figure: layered costmap example from the ROS wiki]

The picture above is an example from the ROS wiki: the gray part is the static map, the red part is an obstacle perceived by the sensors, the blue part is the inflation layer, and the red polygon represents the robot's footprint. To avoid collision, the footprint should never intersect a red cell, and the robot's center should never enter a blue cell.

08 move_base
