In my last article, "2021 Global Autonomous Driving Report Card - See which company is more arrogant, which is more diligent, which is less involved, and the main problems of autonomous driving", I noticed a very interesting phenomenon: Cruise ranked second in the number of vehicles under public-road testing, in total mileage, and in miles per intervention, and its figures were relatively stable and consistent. From these indicators it can be inferred that Cruise is at the forefront of autonomous driving. I then received many inquiries from readers wanting to know who Cruise is, what its technical advantages are, and a series of related questions.
So I found relevant videos and materials to discuss:
Who is Cruise? What is its autonomous driving solution?
Cruise Autonomous Driving Technology
- Cruise's perception algorithms
- Cruise's decision-making algorithms
- Cruise's autonomous driving "metaverse" for verification and development
- Cruise's autonomous driving tool chain and processes

Cruise's future direction
I hope this gives everyone a comprehensive understanding of Cruise, and also offers people in the autonomous driving industry some food for thought. Cruise is quite interesting; there is a survey at the end of the article to see how many people are optimistic about it.
Who is Cruise? Cruise was co-founded by Kyle Vogt and Dan Kan in San Francisco in 2013. Their initial goal was to develop the RP-1, a retrofit kit for highway autonomous driving, and promote it to more mass-produced vehicles (a scenario mentioned in my earlier article "The Pioneer of Autonomous Driving - Logistics and Transportation Industry"). Early on they successfully gave the Audi A4 and S4 highway autonomous driving functions, and then moved into urban autonomous driving. In 2016 they caught the eye of General Motors.
Looking back, Cruise's rise owes much to GM's foresight and GM's capital. Later, Honda, SoftBank, and Microsoft invested as well, pushing Cruise's valuation to around 30 billion US dollars. In 2022, GM bought out SoftBank's stake for 2.1 billion US dollars, so Cruise is now essentially majority-owned by GM and Honda, with GM holding absolute control over personnel and funding.
That is why the test cars Cruise currently runs on the road are all based on GM's electric Bolt. The technical solution is redundant multi-sensor perception (5 lidars, 14 cameras, 3 wide-angle radars, 8 long-range radars, and 10 ultrasonic sensors) + high-definition maps + AI processors.
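For reference, the sensor suite above can be collected into a small configuration summary. The Python snippet below is only an illustrative sketch; the structure, field names, and role notes are mine (drawn from the descriptions later in this article and from common practice), not Cruise's actual configuration format.

```python
# Hypothetical summary of the Bolt test vehicle's sensor suite as described above.
# Structure and field names are illustrative only, not Cruise's real configuration.
BOLT_SENSOR_SUITE = {
    "lidar":            {"count": 5,  "role": "3D shape and edges, e.g. detecting open doors"},
    "camera":           {"count": 14, "role": "texture and visual features of unusual vehicles"},
    "wide_angle_radar": {"count": 3,  "role": "near-field sensing, robust in rain and fog"},
    "long_range_radar": {"count": 8,  "role": "far-field sensing, robust in rain and fog"},
    "ultrasonic":       {"count": 10, "role": "very-near-field obstacles"},
}

total_sensors = sum(s["count"] for s in BOLT_SENSOR_SUITE.values())
print(f"Total sensors on the Bolt test vehicle: {total_sensors}")  # 40
```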
Obviously, in sensor type and quantity this rivals China's new EV makers (for the sensor counts and types used by China's new players, see "Vision is King - Xiaopeng and Tesla's Autonomous Driving Solutions"). Five lidars tops even the Great Wall Mecha Dragon covered in "Guangzhou Auto Show - A Look at Autonomous Driving Lidar", which carries at most 4, and on top of that there are 11 millimeter-wave radars. This level of hardware stacking and spec competition is unmatched even by China's new players, so Cruise says the next focus of its autonomous driving work is to:
- Reduce and consolidate the various sensors to cut costs.
- Reduce reliance on high-definition maps, perhaps eventually dropping them altogether, as in Tesla's approach.
- Design common tools and processes for algorithm development and virtual verification, so that autonomous driving development is fast, efficient, and easy to scale.
Cruise therefore positions its core technological advantages as software and algorithms, plus tools and processes that scale easily. In 2023 it plans to roll out its self-developed processing chips on its purpose-built autonomous vehicle, the Origin, and through cooperation with GM and Honda build an autonomous driving platform with a cost advantage; for example, moving to its fourth-generation chip is expected to cut costs by a factor of 10.
In the future, Cruise will rely on the Origin platform, jointly developed with General Motors and Honda, to provide autonomous mobility services (carrying both people and goods).
Cruise's current plan is to use its software algorithms and development tool chain to easily scale features, reduce the cost of autonomous vehicles, and expand on a large scale.
So Cruise currently conducts its commercial testing mainly with the Bolt, and the robotaxi will be its future product direction. Of course, true to founder Kyle Vogt's original intent, Cruise's software, algorithms, and tool chain are also designed to be easy to port to other vehicles and companies.
Now that we have a general picture of Cruise, let's take a deeper look at its autonomous driving technology. To evaluate an autonomous driving technology company, we need to consider the following four indicators:
Perception algorithm capabilities
Decision-making algorithm capabilities
Virtual Validation Capabilities
Development tools and processes
These four capabilities tell us how strong the current algorithms are, how quickly they can scale in the future, and how easily they can be continuously updated, so let's analyze and discuss each of them in turn.

Perception

In fact, as autonomous driving has developed, most everyday perception is no longer a big problem. The real difficulties of environmental perception hide in the long tail of real-world scenarios:
- Unknown objects on the road, such as a cat lying in the roadway or an oversized truck carrying cargo;
- Abnormal driving by other road users, such as special vehicles like police cars and ambulances;
- Scenarios where simply following traffic rules is not enough, such as parking lots, or other vehicles merging in while you are driving along.
Although these are long-tail problems, Cruise has calculated that in a busy city like San Francisco they actually occur quite frequently: a bicycle forcing its way through traffic shows up roughly once an hour, and a car doing the same roughly once every 20 minutes. So these long-tail problems must get special attention in autonomous driving, otherwise the safety risks pile up. Cruise's current senior manager of perception, Yun Zhang, is a Chinese engineer who previously worked on autonomous driving at Didi. She explains that Cruise's perception is divided into four parts:
Camera
LiDAR
Millimeter wave radar
Sound, which is quite special; probably no one in China has mentioned this mode of perception.
These four types of environmental sensors feed their inputs to an AI backbone algorithm for processing. Beyond the backbone, the algorithms Cruise currently uses include segmentation, texture classification, attribute understanding, target tracking, prediction, occlusion reasoning, and more.
First, the four sensor types complement each other to cover different scenarios and features. For example, fire trucks drive in very unusual, "unreasonable" ways, so the features of such vehicles are recognized by the cameras, while the opening of car doors is detected by the lidar (lidar is better at picking out features such as edges; for details see "Guangzhou Auto Show - A Look at Autonomous Driving Lidar", which is also why everyone adds lidar for urban autonomous driving).
Then, through hearing, the system can recognize whether something (an emergency siren, say) is approaching or moving away (this matches Mercedes-Benz's view of using sound recognition to help identify targets; see "Mercedes-Benz's L3 Autonomous Driving - Functions and Hardware"). In rain or fog, radar assists with recognition. The fused sensor input processed by the AI backbone is then dispatched to different algorithm tasks, such as:
- Segmentation, used to identify objects such as trash cans;
- Texture classification, used to identify more distinct objects such as vehicles;
- Attribute understanding, used to recognize taillights and whether doors are open or closed;
- Target tracking and behavior prediction;
- Occlusion reasoning, used to reason about areas that cannot be seen so that the vehicle drives cautiously.
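Putting the pieces together, the flow described above (four sensor modalities feeding a shared AI backbone, whose features are consumed by these task heads) could be wired up roughly as in the sketch below. This is purely an illustrative Python skeleton; the class names, interfaces, and stub heads are my assumptions, not Cruise's actual software.

```python
# A rough, hypothetical sketch of the perception pipeline described in the text:
# four sensor streams -> shared AI backbone -> task-specific heads.
# All names and interfaces are illustrative, not Cruise's real implementation.
from dataclasses import dataclass
from typing import Any, Dict


@dataclass
class SensorFrame:
    camera: Any = None   # images: texture, visual features of unusual vehicles
    lidar: Any = None    # point clouds: edges, open doors
    radar: Any = None    # returns: robust sensing in rain and fog
    audio: Any = None    # microphones: sounds approaching or receding


class PerceptionStack:
    """Shared backbone followed by the task heads mentioned in the article."""

    def __init__(self, backbone, heads: Dict[str, Any]):
        self.backbone = backbone   # fuses the four sensor modalities
        self.heads = heads         # segmentation, texture classification, ...

    def run(self, frame: SensorFrame) -> Dict[str, Any]:
        features = self.backbone(frame)   # fused scene representation
        # Each head consumes the shared features and solves one sub-task.
        return {name: head(features) for name, head in self.heads.items()}


# Example wiring (lambdas stand in for real models):
stack = PerceptionStack(
    backbone=lambda frame: {"fused": frame},
    heads={
        "segmentation": lambda f: "objects such as trash cans",
        "texture_classification": lambda f: "vehicles and other distinct objects",
        "attribute_understanding": lambda f: "taillights, door open/closed",
        "tracking_and_prediction": lambda f: "tracked targets + predicted motion",
        "occlusion_reasoning": lambda f: "which areas are not visible",
    },
)
outputs = stack.run(SensorFrame())
```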
Here are some of the highlights and features that Cruise believes set it apart:
For target tracking and behavior prediction, Cruise uses a deep learning model instead of traditional algorithms. The deep learning approach detects objects about 0.7 seconds earlier than a Kalman filter and predicts motion better. Cruise's example: a vehicle parked at the roadside suddenly pulls out of its space. An autonomous vehicle using a Kalman filter based tracker will generally choose to stop and yield, while one using the deep learning tracker can keep driving and steer around it.
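For context, the "traditional algorithm" in this comparison is typically a Kalman filter with a constant-velocity motion model. The minimal sketch below (my own illustration, not Cruise's baseline) shows why such a tracker reacts slowly when a parked car suddenly pulls out: its velocity estimate only catches up after several measurements already show motion.

```python
import numpy as np

# Minimal 1D constant-velocity Kalman filter tracker (illustrative baseline only).
# State: [position, velocity]; measurement: position only; dt = 0.1 s.
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])      # state transition
H = np.array([[1.0, 0.0]])      # we only measure position
Q = np.eye(2) * 1e-3            # process noise: assumes motion changes slowly
R = np.array([[0.05]])          # measurement noise

x = np.array([[0.0], [0.0]])    # parked car: position 0, velocity 0
P = np.eye(2)

# The car suddenly starts pulling out at 2 m/s; feed the filter its positions.
for t in range(10):
    z = np.array([[2.0 * 0.1 * t]])            # measured position at time t
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    print(f"t={t*0.1:.1f}s  estimated velocity = {x[1,0]:.2f} m/s (true: 2.0)")
# The velocity estimate lags the true motion for several frames, which is the
# kind of delay a learned tracker is claimed to reduce (~0.7 s earlier detection).
```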
In addition, deep learning lets the system predict driving behavior more accurately. For example, at an intersection many drivers making a U-turn first steer slightly to the right to enlarge the turning radius and then turn left to complete the U-turn; at that moment a conventional algorithm would predict that the driver is turning right.
These capabilities are mainly built on Cruise's deep learning prediction network, which is divided into the following three parts:
- Encoding: scene encoding, target history encoding, a target-to-target graph, and Cruise's "Mixture of Experts" gating (what a Mixture of Experts is will be introduced later).
- Decoding: the main outputs are an initial trajectory and a refined long-horizon trajectory, accompanied by auxiliary tasks such as intersection trajectory prediction, occupancy prediction, multi-modal uncertainty, and interaction recognition with its uncertainty.
- Self-supervision: if the whole encode/decode pass succeeds it is self-confirmed, with behavior self-labeling and interaction self-labeling.
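From that three-part description, a skeleton of such an encoder/decoder prediction network might look roughly like the following PyTorch-style sketch. The module choices, dimensions, and wiring are my own guesses based on the description above, not Cruise's actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical skeleton of the prediction network described above: an encoder
# (scene + target history + target-to-target attention + mixture-of-experts gating),
# a decoder (initial trajectory, long-horizon refinement, auxiliary heads), and
# self-supervised labeling heads. All dimensions and wiring are assumed.
D = 128  # feature width (assumed)


class TrajectoryPredictor(nn.Module):
    def __init__(self, n_experts: int = 4, horizon: int = 30):
        super().__init__()
        # --- Encoder ---
        self.scene_enc = nn.Linear(64, D)                    # scene / map context
        self.history_enc = nn.GRU(4, D, batch_first=True)    # target history (x, y, v, yaw)
        self.interaction_enc = nn.MultiheadAttention(D, 4, batch_first=True)  # target-to-target graph
        self.experts = nn.ModuleList([nn.Linear(D, D) for _ in range(n_experts)])
        self.gate = nn.Linear(D, n_experts)                  # "Mixture of Experts" gating
        # --- Decoder ---
        self.initial_traj = nn.Linear(D, horizon * 2)                 # coarse (x, y) trajectory
        self.refine_traj = nn.Linear(D + horizon * 2, horizon * 2)    # long-horizon refinement
        self.occupancy_head = nn.Linear(D, 10 * 10)                   # auxiliary: occupancy prediction
        self.uncertainty_head = nn.Linear(D, horizon)                 # auxiliary: multi-modal uncertainty
        # --- Self-supervision ---
        self.behavior_label_head = nn.Linear(D, 8)                    # behavior self-labeling
        self.interaction_label_head = nn.Linear(D, 4)                 # interaction self-labeling

    def forward(self, scene, history):
        # Encode scene and per-target history, then let targets attend to each other.
        s = self.scene_enc(scene)                  # (B, D)
        _, h = self.history_enc(history)           # h: (1, B, D)
        fused = s + h.squeeze(0)
        attn, _ = self.interaction_enc(
            fused.unsqueeze(1), fused.unsqueeze(1), fused.unsqueeze(1)
        )
        fused = fused + attn.squeeze(1)
        # Mixture-of-experts gating: weight expert outputs by a learned gate.
        weights = torch.softmax(self.gate(fused), dim=-1)                   # (B, n_experts)
        expert_out = torch.stack([e(fused) for e in self.experts], dim=1)   # (B, n_experts, D)
        z = (weights.unsqueeze(-1) * expert_out).sum(dim=1)
        # Decode: coarse trajectory first, then refine it over the long horizon.
        coarse = self.initial_traj(z)
        refined = self.refine_traj(torch.cat([z, coarse], dim=-1))
        return {
            "trajectory": refined,
            "occupancy": self.occupancy_head(z),
            "uncertainty": self.uncertainty_head(z),
            "behavior_label": self.behavior_label_head(z),
            "interaction_label": self.interaction_label_head(z),
        }


# Example: predict for a batch of 2 targets with 10 history steps each.
model = TrajectoryPredictor()
out = model(torch.randn(2, 64), torch.randn(2, 10, 4))
```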