As we enter 2020, autonomous driving technology has reached the point where it needs to be commercialized on a large scale to prove its value.
Whether in closed or semi-closed scenarios such as mining areas, ports, and industrial parks, or in RoboTaxi and RoboTruck operations on public roads, technology is the foundation for commercializing autonomous driving in each scenario.
This report covers the technical areas required for autonomous vehicles: perception, mapping and positioning, sensor fusion, machine learning methods, data collection and processing, path planning, autonomous driving architecture, passenger experience, interaction between autonomous vehicles and the outside world, the challenges autonomous driving poses for automotive components (such as power consumption, size, and weight), and communication and connectivity (vehicle-road collaboration and cloud management platforms). It also provides corresponding implementation cases from various autonomous driving companies.
This report is a comprehensive explanation of the hardware and software technologies of autonomous driving, written by experts from different countries and regions around the world, including the United States, China, Israel, Canada, and the United Kingdom. It helps readers understand the latest technological developments from a technical perspective and gain a comprehensive picture of autonomous vehicles.
Most of the cases in this report come from the automotive field, currently the hottest application scenario in the autonomous driving industry. However, cars serving personal travel are not the only industry on which autonomous driving technology has a profound impact; other fields, such as public transportation, freight, agriculture, and mining, also make wide use of it.
01 Various sensors
Self-driving cars use a variety of sensors to perceive the environment, much as humans use their eyes, and these sensors are the basic components of a self-driving car. There are five main types: 1. long-range radar; 2. camera; 3. lidar; 4. short/medium-range radar; 5. ultrasonic sensors.
These different sensors are mainly used to perceive objects at different distances and of different types, providing the most important source of information for a self-driving car to judge its surroundings. A second source of environmental perception information is vehicle-road collaboration, which is also explained in the report.
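To make the distance coverage concrete, below is a minimal sketch in Python; the range figures are rough, order-of-magnitude assumptions of ours, not values from the report.

```python
# Rough, order-of-magnitude detection ranges per sensor class (in metres).
# These figures are illustrative assumptions, not values from the report.
SENSOR_RANGES_M = {
    "long_range_radar":   (10.0, 250.0),  # adaptive-cruise distances
    "camera":             (0.0, 150.0),   # strongly dependent on optics and light
    "lidar":              (1.0, 200.0),   # 3D point-cloud sensing
    "short_medium_radar": (0.5, 80.0),    # cross-traffic, blind spots
    "ultrasound":         (0.1, 5.0),     # parking-distance sensing
}

def sensors_covering(distance_m):
    """Return the sensor classes whose nominal range covers a given distance."""
    return [name for name, (lo, hi) in SENSOR_RANGES_M.items()
            if lo <= distance_m <= hi]

print(sensors_covering(2.0))    # near field: camera, lidar, short/medium radar, ultrasound
print(sensors_covering(180.0))  # far field: long-range radar, lidar
```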
The selection of sensors is mainly based on the following technical factors (a small weighted-scoring sketch follows the list):
1. Scanning range, which determines the time available to react to a sensed object;
2. Resolution, which determines the level of detail the sensor can provide to the autonomous vehicle;
3. Field of view or angular resolution, which determines the number of sensors required to cover and sense the area;
4. Refresh rate, which determines how often the information from the sensor is updated;
5. Number of objects perceived, i.e., the ability to distinguish static and dynamic objects in 3D and to determine how many objects need to be tracked;
6. Reliability and accuracy, the overall reliability and accuracy of the sensor across different environments;
7. Cost, size, and software compatibility, which are among the technical conditions for mass production;
8. The amount of data generated, which determines the amount of computation required of the on-board computing unit. Today sensors are trending toward smart sensors, which not only sense but also filter the information, transmitting only the data most relevant to vehicle driving to the on-board computing unit and thereby reducing its computing load.
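As a rough illustration of how these eight factors can be traded off, here is a minimal weighted-scoring sketch; all weights and per-sensor scores (0-10) are hypothetical placeholders, not measurements from the report.

```python
# Hypothetical importance weights for the eight selection factors above.
WEIGHTS = {
    "range": 0.20, "resolution": 0.15, "field_of_view": 0.10,
    "refresh_rate": 0.10, "object_count": 0.10, "reliability": 0.20,
    "cost_size_compat": 0.10, "data_volume": 0.05,  # lower data volume scores higher
}

# Hypothetical 0-10 scores per candidate sensor, for illustration only.
CANDIDATES = {
    "lidar":  {"range": 8, "resolution": 9, "field_of_view": 9, "refresh_rate": 6,
               "object_count": 9, "reliability": 6, "cost_size_compat": 3, "data_volume": 3},
    "radar":  {"range": 9, "resolution": 4, "field_of_view": 5, "refresh_rate": 8,
               "object_count": 5, "reliability": 9, "cost_size_compat": 8, "data_volume": 8},
    "camera": {"range": 6, "resolution": 8, "field_of_view": 6, "refresh_rate": 8,
               "object_count": 7, "reliability": 5, "cost_size_compat": 9, "data_volume": 4},
}

def weighted_score(scores):
    """Combine per-factor scores into one number using the weights above."""
    return sum(WEIGHTS[factor] * value for factor, value in scores.items())

for name, scores in sorted(CANDIDATES.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```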
[Figure: schematic diagram of the sensor solutions of Waymo, Volvo-Uber, and Tesla]
Because sensors are constantly exposed to the environment, they are easily contaminated, which degrades their performance; they therefore need to be cleaned.
1. Tesla's sensors have heating functions to resist frost and fog;
2. Volvo's sensors are equipped with a water-spray cleaning system to wash away dust;
3. The sensors in the Chrysler Pacifica used by Waymo include a water spray system and wipers.
02 SLAM and Sensor Fusion
SLAM (simultaneous localization and mapping) is a complex process, because localization requires a map while mapping requires a good position estimate. Although this was long considered a fundamental "chicken-or-egg" problem standing in the way of robot autonomy, breakthrough research in the 1980s and mid-1990s solved SLAM conceptually and theoretically. Since then, a variety of SLAM methods have been developed, most of which use probabilistic concepts.
To perform SLAM more accurately, sensor fusion comes into play. Sensor fusion is the process of combining data from multiple sensors and databases to obtain improved information. It is a multi-level process that deals with the association, correlation, and combination of data, which can achieve cheaper, higher quality, or more relevant information than using only a single data source.
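The probabilistic core of such fusion can be shown in a few lines. Below is a minimal sketch, assuming Gaussian noise and purely illustrative numbers, of fusing two range estimates by inverse-variance weighting, which is the scalar form of a Kalman-filter measurement update.

```python
# Minimal probabilistic sensor fusion: combine two noisy estimates of the
# same quantity by inverse-variance weighting. Numbers are illustrative,
# not real sensor specifications.

def fuse(mean_a, var_a, mean_b, var_b):
    """Fuse two Gaussian estimates; the fused variance is always smaller
    than either input variance, i.e. fusion never loses certainty."""
    w_a = var_b / (var_a + var_b)          # trust a more when b is noisier
    mean = w_a * mean_a + (1 - w_a) * mean_b
    var = (var_a * var_b) / (var_a + var_b)
    return mean, var

# Radar: long range but coarse; lidar: short range but precise.
radar = (52.0, 4.0)    # (mean distance to lead car in m, variance)
lidar = (50.5, 0.25)
mean, var = fuse(*radar, *lidar)
print(f"fused distance: {mean:.2f} m, variance {var:.3f}")
# The fused estimate is dominated by the lower-variance lidar reading.
```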
For all the processing and decision-making required to get from sensor data to motion, two different AI approaches are typically used:
1. Sequentially, decomposing the actuation process into components of a hierarchical pipeline, where each step (sensing, localization, path planning, motion control) is handled by a specific software element and each component of the pipeline feeds data to the next, as sketched after this list;
2. An end-to-end solution based on deep learning that is responsible for all these functions.
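A schematic sketch of the first, pipeline-style approach follows; every stage name, signature, and return value here is a hypothetical placeholder rather than any company's actual stack.

```python
# Toy modular pipeline: each stage consumes the previous stage's output,
# mirroring the sensing -> localization -> planning -> control hierarchy.

def perceive(sensor_frame):
    """Sensing: detect objects in the fused sensor frame."""
    return {"obstacles": sensor_frame.get("detections", [])}

def localize(perception, hd_map):
    """Localization: estimate pose (x, y, heading) against the map."""
    return {"pose": (12.0, 3.5, 0.05), "map": hd_map}

def plan(perception, localization):
    """Path planning: produce a short trajectory past the obstacles."""
    x, y, heading = localization["pose"]
    return [(x + step, y, heading) for step in range(1, 4)]

def control(trajectory):
    """Motion control: turn the trajectory into actuator commands."""
    return {"steer": trajectory[0][2], "throttle": 0.2}

def pipeline(sensor_frame, hd_map):
    perception = perceive(sensor_frame)
    localization = localize(perception, hd_map)
    trajectory = plan(perception, localization)
    return control(trajectory)

print(pipeline({"detections": ["car_ahead"]}, hd_map="toy_map"))
```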
The question of which approach is best for AVs is an area of ongoing debate. The traditional and most common approach breaks the autonomous driving problem into multiple sub-problems and solves each in turn using specialized techniques, including computer vision, sensor fusion, localization, control theory, and path planning.
End-to-end (e2e) learning, which applies iterative learning to an entire complex system and has been popularized in the context of deep learning, has gained increasing attention as a solution to the challenges of complex AI systems for self-driving cars.
03 Three Deep Learning Methods
Currently, different types of machine learning algorithms are used for different applications in self-driving cars. Essentially, machine learning maps a set of inputs to a set of outputs based on the training data provided. The most common deep learning methods applied to self-driving cars are: 1. convolutional neural networks (CNNs); 2. recurrent neural networks (RNNs); 3. deep reinforcement learning (DRL).
CNNs – primarily used to process images and spatial information, extracting features of interest and recognizing objects in the environment. These neural networks consist of convolutional layers: collections of convolutional filters that try to pick out distinguishing elements of the image or input data so they can be labeled. The outputs of the convolutional layers are fed into an algorithm that combines them to predict the best description of the image. The final software component is often called an object classifier, because it can classify objects in an image, such as street signs or other cars.
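As a concrete illustration, here is a minimal sketch of such an object classifier, assuming PyTorch is available; the layer sizes and the three example classes are arbitrary choices of ours, not from the report.

```python
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Toy CNN: convolutional filters extract features, a linear head classifies."""
    def __init__(self, num_classes=3):  # e.g. car / street sign / pedestrian
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # After two 2x pools a 64x64 input becomes 32 channels of 16x16.
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):               # x: (batch, 3, 64, 64) RGB crops
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = TinyClassifier()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 3]): one score per object class
```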
RNNs – powerful tools when processing temporal information such as video. In these networks, outputs from previous steps are fed back into the network as inputs, allowing information and knowledge to persist in the network and provide context.
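Below is a minimal sketch of this recurrence, again assuming PyTorch, with a GRU consuming a sequence of per-frame feature vectors (for example, CNN embeddings of video frames); all sizes and the two-value output head are illustrative assumptions.

```python
import torch
import torch.nn as nn

rnn = nn.GRU(input_size=128, hidden_size=64, batch_first=True)
head = nn.Linear(64, 2)              # e.g. predict (speed, steering) per clip

frames = torch.randn(1, 10, 128)     # 1 clip, 10 frames, 128-d features each
outputs, last_hidden = rnn(frames)   # hidden state carries context across frames
prediction = head(outputs[:, -1])    # use the final, fully contextualized step
print(prediction.shape)              # torch.Size([1, 2])
```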
DRL – combines deep learning (DL) and reinforcement learning. The DRL approach enables software-defined “agents” to learn optimal actions in a virtual environment using a reward function to achieve their goals. These goal-oriented algorithms learn how to achieve a goal, or how to maximize along a specific dimension over multiple steps. Despite its promise, the challenge with DRL is designing the right reward function for driving a vehicle. In self-driving cars, deep reinforcement learning is still considered to be in its early stages.
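Since reward design is singled out above as the hard part, here is a hedged sketch of one plausible driving reward function; the terms and weights are entirely made up for illustration, not a recommended design.

```python
# Toy reward shaping for lane keeping: penalize lane deviation and speed
# error, reward progress, and punish collisions. All weights are made up.
def driving_reward(state):
    """state: dict with lane_offset_m, speed_mps, target_speed_mps, collided."""
    if state["collided"]:
        return -100.0                               # terminal penalty dominates
    lane_penalty = -1.0 * abs(state["lane_offset_m"])
    speed_penalty = -0.1 * abs(state["speed_mps"] - state["target_speed_mps"])
    progress_bonus = 0.05 * state["speed_mps"]      # reward forward progress
    return lane_penalty + speed_penalty + progress_bonus

print(driving_reward({"lane_offset_m": 0.2, "speed_mps": 14.0,
                      "target_speed_mps": 15.0, "collided": False}))
```

Small changes to such weights can produce very different learned behavior, which is exactly why the paragraph above calls reward design the central challenge.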
These methods don’t necessarily exist in isolation. Companies like Tesla, for example, rely on hybrid forms that try to use multiple methods together to improve accuracy and reduce computational requirements.
Training a network on multiple tasks at once is a common practice in deep learning, often called multi-task training or auxiliary-task training. One motivation is avoiding overfitting, a common problem with neural networks: when a machine learning algorithm is trained on a single narrow task, it can become so focused on mimicking its training data that its output becomes unrealistic when it has to interpolate or extrapolate.
By training a machine learning algorithm on multiple tasks, the core of the network will focus on discovering general features that are useful for all purposes, rather than just focusing on one task. This can make the output more realistic and useful for the application.
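A minimal sketch of this idea, assuming PyTorch: one shared trunk feeding two hypothetical task heads (semantic classes and depth), with the two losses summed so that gradients from both tasks shape the shared features. Architecture and loss weighting are arbitrary illustrative choices.

```python
import torch
import torch.nn as nn

trunk = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # shared general features
seg_head = nn.Linear(64, 10)     # hypothetical head: 10 semantic classes
depth_head = nn.Linear(64, 1)    # hypothetical head: scalar depth estimate

x = torch.randn(8, 128)                    # a batch of 8 feature vectors
seg_target = torch.randint(0, 10, (8,))
depth_target = torch.randn(8, 1)

features = trunk(x)                        # both tasks pull on the same trunk
loss = (nn.functional.cross_entropy(seg_head(features), seg_target)
        + 0.5 * nn.functional.mse_loss(depth_head(features), depth_target))
loss.backward()                            # gradients from both tasks combine
print(float(loss))
```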