Three positioning technology solutions for autonomous driving

Publisher: 温柔的心情 · Last updated: 2020-04-10 · Source: 智车科技


Since the emergence of companies such as Tesla and Waymo, automakers have paid increasing attention to autonomous driving, and the pace picked up markedly in 2018, bringing driverless cars closer to real-world deployment. For example, General Motors' Cruise has carried employees in San Francisco in vehicles without steering wheels or pedals; Ford, Volkswagen, Toyota, and Mercedes-Benz have all joined the race; and last month, at its chip launch event, Tesla announced plans to build fully autonomous vehicles by 2020.


The future of driverless driving is bright, but the road is tortuous. Autonomous driving still faces many challenges, such as the need to analyze data streams quickly and continuously, and to handle tasks that are trivial for humans but hard for machines. Specifically, object detection, distance and speed estimation, positioning, and traffic regulations all factor into every driving decision. To reach Level 5 in the SAE standard, the driving computer must perform all of these basic tasks and find a technical solution for each problem.


Positioning (locating oneself and knowing one's position relative to the surrounding environment) may be the most difficult capability for self-driving cars to master, because cities are dynamic: road construction, road closures, new signs, and missing signs are all examples of this uncertainty. Factors that can change at any time and in any place confuse even human drivers, let alone machines. No approach yet solves the positioning problem perfectly, but the following are the most effective and promising candidates. Different companies have different preferences, and this article introduces three positioning methods through the strategies of specific companies.


Companies such as Tesla favor vision-based Visual SLAM (VSLAM) for positioning, fitting their cars with as many visual sensors as practical. Instead of relying on pre-recorded maps, they combine image processing and machine learning so that vehicles build a real-time understanding of their surroundings. Tesla vehicles learn continuously and share that knowledge across the fleet; because they rely on live environmental data rather than historical data, there is no risk of errors caused by outdated maps.

[Figure: Several related ideas for solving the positioning problem in autonomous driving systems]

Tesla's goal is clear: build vehicles that can drive in any conditions, regardless of the surrounding environment. At Tesla's chip launch event some time ago, Musk's dismissal of LiDAR caused an uproar: he called it "ugly, expensive and unnecessary." Dropping LiDAR spares Tesla the mapping hardware, at the price of far heavier reliance on cameras and software when handling uncertainty. Andrej Karpathy, Tesla's senior director of artificial intelligence, stressed that real physical data is irreplaceable: compared with building virtual high-precision maps from LiDAR, Tesla trusts camera imagery more, holding that pictures reflect reality better than radar returns.


At present, autonomous vehicles that use VSLAM for positioning mainly carry three types of visual sensor: monocular, binocular (or multi-camera), and RGB-D. Special cameras such as fisheye and panoramic lenses also exist, but they are rare in research and products and are not covered here. In order of implementation difficulty, from hardest to easiest, the three are: monocular, binocular, RGB-D. During positioning, a VSLAM vehicle starts from an unknown location in an unknown environment; as it moves, it estimates its own position, pose, and trajectory from these sensors and incrementally builds a map around that estimate, achieving simultaneous localization and mapping. The two processes are complementary: a map enables better localization, and better localization in turn extends the map.
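The "positioning and mapping are complementary" loop can be illustrated with a deliberately tiny sketch. This is not Tesla's pipeline or real VSLAM (which works on camera features with bundle adjustment); it is a hypothetical one-dimensional toy showing how a pose prediction from odometry is corrected by already-mapped landmarks, while newly seen landmarks are added to the map relative to the corrected pose:

```python
# Toy 1D illustration of the SLAM idea: localization and mapping
# reinforce each other. Illustrative sketch only; all names are made up.

def slam_step(pose, landmarks, odometry, observations):
    """Advance one step: predict the pose from odometry, correct it
    against already-mapped landmarks, then map newly seen landmarks.

    pose: current estimated position along a line
    landmarks: dict {landmark_id: estimated position}
    odometry: reported displacement since last step (may be noisy)
    observations: dict {landmark_id: measured offset from the vehicle}
    """
    pose = pose + odometry  # dead-reckoning prediction

    # Localization: each already-mapped landmark votes for a pose
    # estimate (landmark position minus its measured offset).
    votes = [landmarks[lid] - rel for lid, rel in observations.items()
             if lid in landmarks]
    if votes:
        pose = sum(votes) / len(votes)

    # Mapping: place newly observed landmarks relative to the pose.
    for lid, rel in observations.items():
        if lid not in landmarks:
            landmarks[lid] = pose + rel
    return pose, landmarks
```

Note how a noisy odometry reading is pulled back toward the truth once a mapped landmark is re-observed, which is exactly the complementarity the paragraph above describes.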


General Motors and Mercedes-Benz both favor positioning against high-precision maps pre-built with lidar and GPS. GM acquired its own lidar supplier in 2017; Ford partnered with Baidu to invest $150 million in lidar supplier Velodyne; and Mercedes-Benz also signed a lidar supply contract with Velodyne.


LiDAR is a well-established positioning sensor that measures the distance between the vehicle and obstacles in the surrounding environment; common units include those from SICK, Velodyne, and RPLIDAR. Building a high-precision map with LiDAR means scanning the scene and fusing the returned point clouds. There are two fusion approaches. The first is algorithmic point-cloud registration, which applies to a wide range of scenarios and does not depend on GPS. The second relies on accurate differential GPS and precision inertial navigation; it is strongly scene-dependent, must be used in relatively open areas, and performs poorly where GPS signals are weak, such as under viaducts. Image-plus-GPS solutions have comparatively poor accuracy and are used mainly for L2 and L3 ADAS maps, while laser point clouds can meet L4 and L5 requirements.
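The core of algorithmic point-cloud registration is estimating the rigid transform that aligns a new scan with the cloud built so far. As a hedged sketch (real systems work in 3D with iterative nearest-neighbor matching, i.e. ICP), here is the closed-form 2D case with known point correspondences, equivalent to a single ICP iteration:

```python
import math

def align_scan(src, dst):
    """Closed-form 2D rigid alignment: find rotation angle theta and
    translation t such that rotating src by theta and shifting by t
    best matches dst in the least-squares sense.
    src, dst: index-matched lists of (x, y) points.
    """
    n = len(src)
    # Centroids of both point sets.
    cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n; cy_d = sum(p[1] for p in dst) / n
    # Accumulate cross- and dot-products of the centred points;
    # their ratio gives the optimal rotation angle.
    s_cross = s_dot = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - cx_s, sy - cy_s
        bx, by = dx - cx_d, dy - cy_d
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = math.atan2(s_cross, s_dot)
    # Translation maps the rotated source centroid onto the target's.
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, (tx, ty)
```

In a full pipeline this step would run repeatedly, re-estimating correspondences between iterations, and the aligned scans would be accumulated into the high-precision map.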


Such vehicles rely on 3D high-resolution maps captured in advance by lidar-equipped survey vehicles. An autonomous vehicle then uses its own lidar to sense the surroundings, compares the result with the pre-built high-precision map, determines whether the environment has changed, and drives autonomously within the area the map covers. This supports a comparatively wide range of autonomous driving strategies. To keep the maps accurate and the vehicles usable, broader cooperation between municipalities and automakers is needed to create and maintain up-to-date high-precision maps for vehicles to use.
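The "determine whether the environment has changed" step can be sketched very simply: rasterize both the prior map and the live scan into an occupancy grid and diff the occupied cells. This is an illustrative toy, not any vendor's algorithm; the function and cell size are assumptions:

```python
def detect_changes(prior_map, live_scan, cell=1.0):
    """Compare a live lidar scan against a prebuilt map by rasterising
    both point sets into grid cells and diffing the occupied cells.

    prior_map, live_scan: iterables of (x, y) points.
    Returns (new_cells, missing_cells): cells occupied only in the live
    scan (e.g. a new obstacle) and only in the map (e.g. a removed sign).
    """
    def cells(points):
        return {(int(x // cell), int(y // cell)) for x, y in points}
    prior, live = cells(prior_map), cells(live_scan)
    return live - prior, prior - live
```

A real system would add thresholds and filtering to tolerate sensor noise and moving objects before flagging a cell as genuinely changed.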


Take Cadillac's Super Cruise system as an example: only when the information from the vehicle's radar matches the HD map and passes a safety check can the vehicle navigate hands-free on highways covered by pre-stored HD maps. This approach offers high reliability and predictability, but the data volume behind HD maps is enormous, requiring data-center compute clusters and heavy parallel processing, which is a serious test of data-handling capability. Real-time map updates are equally important: the road environment may change constantly, so fast, effective updates are needed. All of the effort required to record the maps and to build lidar-equipped vehicles makes this route to driverless driving comparatively expensive.
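The gating logic described above, enabling hands-free operation only when live sensing agrees with the stored map, can be caricatured in a few lines. This is not GM's actual check; the quantities and tolerances are invented purely to show the shape of such a consistency gate:

```python
def hd_map_gate(map_lane_width, sensed_lane_width,
                map_curvature, sensed_curvature,
                width_tol=0.3, curv_tol=0.002):
    """Hypothetical consistency gate: allow hands-free navigation only
    when onboard measurements agree with the stored HD map within
    tolerance. All parameter names and tolerances are illustrative."""
    return (abs(map_lane_width - sensed_lane_width) <= width_tol and
            abs(map_curvature - sensed_curvature) <= curv_tol)
```

If the gate fails, a production system would hand control back to the driver rather than trust either data source alone.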


A third approach to autonomous driving positioning focuses not on making the car more adaptable to the environment, but on making the environment serve the autonomous car: creating a smarter environment. This offloads work from the vehicle, which would otherwise have to discover every uncertainty on its own. Changing environmental factors announce themselves, letting the vehicle understand its surroundings more accurately; infrastructure can directly "tell" an approaching car the exact location of a construction zone and its temporary lanes.


Volkswagen has worked hard to establish itself as a pioneer in V2X technology. In 2017, Volkswagen announced that all 2019 models would be equipped with a full suite of V2X capabilities. These connections share information about traffic conditions, incidents, and other road events with the local environment within approximately 500 meters, or even further.

[Figure: Several related ideas for solving the positioning problem in autonomous driving systems]

Vehicles positioned through the Internet of Vehicles collect their own environment and status information (position, speed, route) via GPS, RFID, other sensors, and camera image processing, within a huge interactive network. All vehicles transmit this information to a central platform, enabling interactive sharing of location data. In China, regulators already require all commercial vehicles to carry onboard positioning terminals connected to enterprise service platforms, which in turn feed provincial and ministerial platforms.
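The central position-sharing idea reduces, at its simplest, to a registry that vehicles report into and query for neighbors. The class below is a hypothetical sketch (no real platform works this simply), using the roughly 500-meter sharing range mentioned earlier as the default query radius:

```python
import math

class VehicleRegistry:
    """Toy central platform for position sharing: vehicles report
    (id, x, y) fixes; any vehicle can query for others within a radius.
    Purely illustrative; names and the 500 m default are assumptions."""

    def __init__(self):
        self.positions = {}

    def report(self, vid, x, y):
        """Record or update a vehicle's latest position."""
        self.positions[vid] = (x, y)

    def nearby(self, vid, radius=500.0):
        """Return the IDs of other vehicles within `radius` meters."""
        x0, y0 = self.positions[vid]
        return sorted(
            other for other, (x, y) in self.positions.items()
            if other != vid and math.hypot(x - x0, y - y0) <= radius)
```

A production platform would of course add authentication, message timestamps, and stale-fix expiry on top of this skeleton.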


Specifically, positioning and location-perception technology for vehicle nodes is the technical core of the Internet of Vehicles. Positioning emphasizes the uniqueness of location information: it yields true geographic coordinates. Location perception, by contrast, focuses on the relative positions between nodes, reflecting the trajectories of mobile nodes across the dimensions of time and space.
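The distinction between absolute positioning and relative location perception can be made concrete with a small helper, illustrative only, that turns a series of absolute fixes into the step-by-step displacements a trajectory is made of:

```python
def to_relative_trajectory(fixes):
    """Positioning yields absolute coordinates; location perception
    cares about relative motion over time. Convert a list of absolute
    (x, y) fixes into per-step displacement vectors."""
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(fixes, fixes[1:])]
```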
