Perception strategies and solutions for automotive, machine vision, and edge AI
Source: Electronic Engineering Times
Author: Liu Yuwei
The automobile industry is a very traditional one, but in recent years electrification and intelligence have given it new impetus. In particular, the application of a wide range of sensing products has made cars safer and more comfortable.
Machine vision is likewise an industry with a long history; the Industry 4.0 era, together with automation and artificial intelligence, has given it new impetus and vitality.
Edge AI is an emerging market that builds new applications on top of newly introduced technologies such as artificial intelligence, 5G, and the IoT.
From the mechanization of the first industrial revolution, through the electrification of the second and the computerization of the third, to the artificial intelligence and informatization of the fourth, AI's impact on every aspect of human society will far exceed that of the computer-driven digital revolution.
Andrew Ng, the Stanford University artificial intelligence expert, has likened AI to the electricity of the Second Industrial Revolution: it will bring profound change to the whole of humanity. Data is the power and engine of artificial intelligence, and perception is the fuel of data.
"Automotive, machine vision and edge AI are also our three main market directions." Yi Jihui, vice president of global marketing and application engineering of the Intelligent Sensing Division of ON Semiconductor, said, "We have made great investments in deep sensing fields including image sensing, multispectral/hyperspectral sensing, lidar sensing, millimeter-wave radar sensing, sensor fusion, etc. to promote the progress of AI and the Fourth Industrial Revolution."
Sammy Yi, Vice President of Global Marketing and Application Engineering, Intelligent Sensing Division, ON Semiconductor
Challenges and opportunities for automotive perception
The car of the future is a computer on four wheels with extremely powerful perception capabilities. In recent years, the best automotive perception systems have come to far surpass human perception: they monitor the surrounding environment in real time and tirelessly, something no human driver can match.
With automotive perception improving, autonomous driving technology maturing, and new models such as car sharing emerging, the reliability requirement is no longer a car that is driven two or three hours a day and parked in the garage the rest of the time, but one that is on the road most of the day, with only an hour or two set aside for charging and maintenance. On top of that come requirements for the functional safety of the autonomous driving features and for cybersecurity.
These new requirements also bring many challenges to automotive imaging.
"In the past, solutions were all solved using software, but now we have chip-level hardware solutions," said Yi Jihui.
The picture above is a schematic diagram of an image sensor. It has a total of six layers, all made of semiconductors.
Because "GPUs and CPUs mainly deal with electrons, but in image sensors, they have to deal with both electrons and photons. The combination of photons and electrons makes image sensors very complex semiconductors. In the future, even artificial intelligence can be directly placed in image sensors."
The picture above explains why wide dynamic range is so important. On the left, with poor dynamic range, the dark areas are captured but the bright areas are blown out and invisible.
On the right, with good wide dynamic range, both dark and bright objects are clearly visible, giving the artificial intelligence algorithm the accurate, complete information it needs to make safe judgments.
We often meet the familiar and annoying situation of an oncoming car running its high beams. If the dynamic range of that scene is 102 dB but the sensor covers only 70 dB, it may fail to see the pedestrians or objects next to the oncoming car, and an accident can result.
A 110 dB sensor avoids this tragedy. In addition, in night scenes with almost no light, near-infrared (NIR+) technology can make the difference between seeing and being blind when driving at night.
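To make the dB figures concrete, here is a minimal Python sketch using the standard 20·log10 convention for image-sensor dynamic range; the 102, 70, and 110 dB values are the ones from the scene above.

```python
import math

def db_to_ratio(db: float) -> float:
    """Dynamic range in dB -> linear contrast ratio.
    Image-sensor convention: DR_dB = 20 * log10(brightest / darkest signal)."""
    return 10 ** (db / 20)

scene = db_to_ratio(102)   # the headlight-glare scene
legacy = db_to_ratio(70)   # a 70 dB sensor
hdr = db_to_ratio(110)     # a 110 dB sensor

print(f"scene contrast ~ {scene:,.0f}:1")
print(f"70 dB sensor covers ~ {legacy:,.0f}:1, "
      f"clipping ~{20 * math.log10(scene / legacy):.0f} dB of the scene")
print(f"110 dB sensor covers ~ {hdr:,.0f}:1, enough for the whole scene")
```

A 32 dB shortfall is a factor of about 40 in brightness: anything in that range next to the glare is simply lost.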
The smart cockpit is also an application that is receiving increasing attention in China.
According to reports, ON Semiconductor offers two megapixel-class IVEC modules for smart cockpit solutions, which are very popular in the industry; among them, the AR0144AT is the most widely used.
A common design challenge facing smart cockpits is that current cameras are too large: typical units measure 18×18 cm, and the smallest about 3×3 cm. "We have developed a small 0.5×0.5 cm camera with our partners, which is basically invisible to passengers and the driver," said Yi Jihui.
Established only six years ago, but with 40 years of "internal strength"?
Yi Jihui said that ON Semiconductor provides not just individual products but complete hardware solutions, including diodes, LDOs, and power management ICs.
So when designing a next-generation image processor, the division can work with the company's other product groups to develop supporting electronics and devices that optimize camera performance.
It is reported that ON Semiconductor's other components also meet automotive qualification and functional safety requirements up to ASIL-B. All components follow the company's unified quality system and deliver uniform reliability.
ON Semiconductor has so far developed more than 50 partners, covering everything from optical lenses to signal processors, I/O and interfaces, SoC processors, and software systems.
It is reported that although ON Semiconductor's Intelligent Sensing Division was established only six years ago, the company has a history of more than 40 years in the image sensor industry.
It has grown mainly through several important strategic acquisitions, for example: TRUESENSE, formerly Kodak Imaging, whose core technology comes from Bell Labs' CCD imaging; and Aptina, whose roots trace back to NASA's Jet Propulsion Laboratory, which developed the world's first CMOS image sensor in 1993.
Starting three years ago, ON Semiconductor successively acquired IBM's millimeter-wave radar R&D center in Israel and SensL, an Irish company focused on the development of time-of-flight (ToF) lidar sensors.
According to the research results of TSR (Techno Systems Research), a Japanese third-party market research company, image sensors in the automotive field are mainly divided into two markets: automotive imaging and automotive perception.
Automotive imaging targets human eyes: driver and passenger monitoring, rear view, surround view, and electronic rearview mirrors. ON Semiconductor holds a global market share of more than 60% here.
Automotive perception refers to sensing systems that feed artificial intelligence and machine vision; in this market ON Semiconductor's share exceeds 80% and is expanding year by year.
Yi Jihui revealed that ON Semiconductor sold nearly 100 million sensors into the automotive market in 2019, against global car sales of about 65 million vehicles, an average of roughly 1.5 image sensors per new car.
Autonomous driving in China
China's autonomous driving classification is basically the same as the international one: six levels, from L0 up to L5, with the degree of automation increasing step by step.
The biggest dividing line is between L3 and L4. The key difference is that L3 still requires the driver to take control in some extreme scenarios, while L4 in theory requires no driver intervention at all; the machine handles everything.
Simply put:
1. L0 means no automation at all, while L1 is "feet off": you no longer need to work the accelerator or brake.
2. L2 is "hands off": you can let go of the steering wheel.
3. L3 is "eyes off": the driver does not need to watch the road, but must take over when the car issues a warning.
4. L4 is "mind off": you do not have to think at all, because there is no steering wheel.
5. L5 is "limit off": there are no restrictions on the scenario; autonomous driving works anywhere.
From L1 to L5, both the number and the types of sensors keep expanding. The biggest gap is between L3 and L4: L4 must have LiDAR, while many companies leave LiDAR out of L3, treating it only as an optional reference because of its high cost. That split is the outcome of long industry discussion. So will autonomous driving evolve gradually from L3 to L4, or jump directly to L4?
The main reason is that in L3 the driver still has to take control, and people are still relied on in extreme scenarios. Cost therefore constrains not only the degree of automation in the design but also the number and types of sensing applications, and the AI algorithms and computing platform are designed for limited autonomous driving rather than for all scenarios.
L4 does not need to consider the driver at all, so it must use LiDAR and HD maps. These are two completely different platforms. L3 is like addition and subtraction, while L4 is calculus: addition and subtraction can never compute a limit, only calculus can. In other words, under L4 all extreme situations must be covered.
Judging from domestic trends, some high-end models launched by Chinese automakers already offer L2 and L3 functions. One difference from abroad is that overseas, especially in Europe, the large-scale adoption and growth of L2 and L3, or advanced driver assistance systems (ADAS), is driven by regulations and safety standards rather than by consumers or automakers.
China currently has no corresponding laws or safety standards; automakers mainly use the technology to build their own brands, and consumer acceptance remains uneven. The United States is different again: owners sometimes trust the autopilot too much and drive L2 cars as if they were L4, which has caused many accidents.
Regarding L4 and L5 autonomous driving, Yi Jihui believes the business model for deployment is now very clear. "It started with B2C for private consumer cars, but many foreign automakers have since suspended or cancelled some of their private-car L4 projects. Now it is basically B2B, and it will land first in commercial vehicles."
The reason is that intelligent sensing systems are still very expensive, and only commercial vehicles can absorb such costs. A private car driven two hours a day and parked in the garage the rest of the time cannot afford them at all.
01
First, robotaxis. The development of self-driving taxis in China is very fast, in no way inferior to the United States; many domestic autonomous-driving companies hold licenses in the United States as well, with R&D centers in both countries.
According to earlier reports by Electronic Engineering Times, 18 cities in China have issued autonomous driving licenses, and nearly 500 fully autonomous taxis are being road-tested, nearly 200 of them Baidu's.
02
Second, logistics robots. Since the coronavirus outbreak, logistics robots have become more and more popular, especially in residential communities, campuses, industrial parks, hospitals, and airports.
03
Third, large commercial vehicles. Many serious highway accidents are caused by large vehicles, especially through driver fatigue. With sensors installed, 360-degree real-time monitoring becomes possible, with perception no weaker than the driver's, so commercial-vehicle autonomous driving has gradually begun.
Yi Jihui said that, according to market research, ON Semiconductor's share in this field is 80% globally and 90% in the Chinese market.
Industrial machine vision and edge artificial intelligence
With the deepening of Industry 4.0 and automation, artificial intelligence has driven rapid growth in the machine vision market, and edge AI keeps finding new fields of application, such as new retail and smart agriculture; even animal husbandry and farming are becoming intelligent.
Several major trends have emerged from the COVID-19 pandemic. One is remote operation: remote teaching and telemedicine will become more and more common. The other is unmanned, contactless operation, such as unmanned delivery vehicles and unmanned stores that reduce person-to-person contact.
In industrial machine vision, flat-panel inspection is an important application for image sensors, especially as panel resolutions have climbed from 1K and 2K to 4K and 8K in recent years.
The inspection process has two steps. The first is dark inspection, which checks for fingerprints, scratches, and other physical defects before power-on. The second is pixel inspection after power-on, which matters especially for OLED and the latest AMOLED (active-matrix OLED) panels.
As is well known, an LED-backlit panel has a single light source behind it, while every pixel of an OLED panel is its own light source.
The light intensity and the color uniformity between pixels must be measured very accurately. In the past, 9 camera pixels (3×3) were enough to inspect one pixel of an LED panel; OLED requires 16 (4×4) or even 25 (5×5).
As a result, the resolution demanded of inspection image sensors keeps rising, from 45 megapixels to 150 megapixels, and even beyond 200 megapixels.
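Where those numbers come from is simple arithmetic. The sketch below assumes a single-shot capture at k×k camera pixels per display pixel, as described above; in practice a panel is usually imaged in several tiles, and the panel resolutions used here are standard figures rather than anything from the article.

```python
def required_sensor_mp(panel_w: int, panel_h: int, k: int) -> float:
    """Camera megapixels to image a panel_w x panel_h display in one shot
    at k x k camera pixels per display pixel."""
    return panel_w * panel_h * k * k / 1e6

print(f"FHD LCD at 3x3:   {required_sensor_mp(1920, 1080, 3):.0f} MP")  # ~19 MP
print(f"4K AMOLED at 5x5: {required_sensor_mp(3840, 2160, 5):.0f} MP")  # ~207 MP
```

A 4K AMOLED panel at 5×5 sampling lands right around the 200-megapixel mark, the top of the range quoted above.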
Industrial image sensors are also used in monitoring and broadcast, PCB inspection, and other applications.
Have image sensors kept pace in recent years? The figure below takes 1.3-inch fixed-format image sensors as an example. Resolution is rising year by year; noise performance, which effectively determines image quality, keeps improving even as pixel counts grow; and bandwidth is also increasing year by year.
A good example: a standard 29×29 mm industrial camera had perhaps 2 megapixels ten years ago, then moved gradually to 3, 5, and 12 megapixels, and this year 16-megapixel versions are already available.
Some key technological advances are also very meaningful:
Global shutter, which prevents image smear at high speeds (illustrated in the sketch below);
In-pixel correction: what used to be done in system software can now be done directly in hardware, with image correction performed inside the pixel;
Process nodes have moved from 110 nm to 65 nm, then to 45 nm and below, riding Moore's Law so that cost, size, and power consumption fall year by year;
Backside illumination (BSI): at a fixed sensor size, rising resolution shrinks the pixels, which can hurt sensitivity in low light; backside illumination recovers that light sensitivity.
Stacked architectures leave two dimensions behind and move into three-dimensional space: two-layer and even three-layer stacks are all possible.
"In the future, we can not only put analog and digital signals on the second layer, but also put artificial intelligence algorithms on the third layer, making image sensors into highly intelligent image sensors." Yi Jihui thought.
3D imaging, hyperspectral and multispectral imaging
Yi Jihui said: " 3D imaging, hyperspectral and multispectral imaging are all the future directions of ON Semiconductor. Existing solutions are all solved at the system level, and our idea is to solve these difficulties and problems at the semiconductor level using Moore's Law. Once the problem can be solved using Moore's Law, it will naturally bring the benefits of reducing costs, size and power consumption."
For depth (3D) imaging, ON Semiconductor developed a phase-detection solution several years ago. With some changes to the pixels, it achieves 1.1% to 2% accuracy within 1-5 meters while still delivering depth alongside ordinary color image information. The principle is to fabricate a diffraction grating on the image sensor so that the phase difference can be separated out to form depth information.
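Taking the quoted accuracy at face value, here is a quick sketch of what 1.1% to 2% means in absolute terms across the 1-5 m working range (the percentages come from the article; the rest is arithmetic):

```python
# Depth-error bounds implied by a relative accuracy of 1.1% - 2% of distance.
for distance_m in (1.0, 3.0, 5.0):
    best = 0.011 * distance_m * 100    # best case, in cm
    worst = 0.020 * distance_m * 100   # worst case, in cm
    print(f"at {distance_m:.0f} m: roughly ±{best:.1f} to ±{worst:.1f} cm")
```

Even at the far end of the range, the depth estimate stays within about 10 cm.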
There are two types of ToF: iToF (indirect ToF) and dToF (direct ToF).
ON Semiconductor's solution is long-distance dToF, which uses single-photon avalanche diodes (SPADs) and silicon photomultipliers (SiPMs) to emit photons and receive them back, determining the distance of faraway objects very accurately, out to 250 or even 300 meters. iToF, by contrast, calculates distance indirectly.
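The physics of dToF is a single formula: for a measured round-trip time t, distance = c·t/2. The sketch below just sanity-checks the 250-300 m figure; the timing-error line at the end is a general property of dToF ranging, not a device specification.

```python
C = 299_792_458.0  # speed of light, m/s

def round_trip_us(distance_m: float) -> float:
    """Photon round-trip time to a target at distance_m, in microseconds."""
    return 2 * distance_m / C * 1e6

for d in (250, 300):
    print(f"{d} m target -> {round_trip_us(d):.2f} us round trip")

# Timing resolution maps directly to depth resolution:
print(f"1 ns of timing error ~ {C * 1e-9 / 2 * 100:.1f} cm of depth")  # ~15 cm
```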
In terms of hyperspectral and multispectral imaging, ON Semiconductor is also solving the problem of spectral separation at the semiconductor level.
The spectrum separated by plasmonic waveguide filter technology is not especially complete or accurate, but it suits some application scenarios; the Fabry-Perot (FP) filter adds reflectors to the semiconductor and separates the spectrum very accurately, but still has some cost and reliability issues.
LiDAR
The traditional technology used in mechanical lidar is the avalanche photodiode (APD), which suffers from large size, high power consumption, limited detection range, and poor consistency.
The service life of a typical mechanical lidar is only about a year: it must rotate continuously and therefore suffers heavy wear. In addition, output volumes are small, and it cannot be mass-produced.
The technology ON Semiconductor uses for LiDAR is the silicon photomultiplier (SiPM), which is built from SPADs fabricated in the same process and connected together.
Dr. Yolanda XI, Greater China Marketing Director of ON Semiconductor's Intelligent Sensing Division, said, "Its advantages are that its gain is 10,000 times that of APD, its sensitivity is 2,000 times that of APD, and its operating voltage requirement is only 30V, while APD requires 250V. In addition, its consistency is very good, which is conducive to mass production."
Dr. Yolanda XI, Marketing Director of Greater China, Intelligent Sensing Division, ON Semiconductor
Xi Yunxia said that in many industries, especially automotive, ON Semiconductor was the first to offer customers SiPM and SPAD process technology. The technology is already widely used, and in mass production, in the medical field. Compared with competitors, its key strengths are:
01
Automotive qualification. Many lidars are unreliable because they are not built from automotive-qualified components, so the system as a whole cannot meet automotive standards. This problem has to be solved at the semiconductor level.
02
ON Semiconductor has launched the RDM series, where the "M" stands for micro lens. Micro lenses were originally used in image sensors and are now applied to LiDAR.
The benefit concerns a particularly important metric of a lidar detector: PDE, or photon detection efficiency, the equivalent of an image sensor's QE. The higher it is, the more efficiently photons are converted into electrons. With micro-lens technology added, the RDM series transmits more light and its PDE improves substantially (see the sketch after this list).
03
Mature CMOS technology. Once shipments reach the tens of thousands, the old APD approach, which depends on manual calibration, no longer scales. A CMOS process delivers genuinely low cost, low power consumption, and optimized size, allowing LiDAR to be deployed for real and to meet automotive qualification.
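A minimal sketch of why the PDE gain in item 02 matters: the detected signal scales linearly with PDE, so a micro lens that raises PDE raises the usable return from every laser pulse. The photon count and PDE values below are illustrative assumptions, not ON Semiconductor specifications.

```python
def detections(incident_photons: float, pde: float) -> float:
    """Mean number of avalanche events for a given photon count and PDE."""
    return incident_photons * pde

RETURN_PHOTONS = 1000  # photons per echo reaching the detector (assumed)
for pde in (0.05, 0.10, 0.20):
    print(f"PDE {pde:.0%}: ~{detections(RETURN_PHOTONS, pde):.0f} detections per pulse")
```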
ON Semiconductor also provides a comprehensive and systematic solution for LiDAR.
Millimeter wave radar
Millimeter-wave radar spans every level of autonomous driving from L1 to L5, with different applications at each level.
Yi Jihui said that ON Semiconductor's next-generation millimeter-wave radar focuses on L3, using a technology called "MIMO+" that provides four-dimensional (4D) information: R for range, V for velocity, A for angle, and E for elevation.
Yi Jihui said that, compared with competing technologies, MIMO+ offers twice as many channels. For a millimeter-wave radar of the same performance, ON Semiconductor's solution therefore saves 50% of the mmIC devices, and the controllers and circuit boards can also be reduced and optimized, lowering overall cost.
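The channel arithmetic behind that claim, as a sketch: a MIMO radar with N_tx transmit and N_rx receive channels synthesizes N_tx × N_rx virtual channels, so a chip integrating twice the channels covers the same antenna array with half as many devices. The per-chip channel counts below are hypothetical, chosen only to make the 50% saving visible.

```python
import math

def virtual_channels(n_tx: int, n_rx: int) -> int:
    """Size of the virtual array formed by n_tx transmitters and n_rx receivers."""
    return n_tx * n_rx

def chips_needed(n_tx: int, n_rx: int, tx_per_chip: int, rx_per_chip: int) -> int:
    """mmIC devices required to supply the target channel counts."""
    return max(math.ceil(n_tx / tx_per_chip), math.ceil(n_rx / rx_per_chip))

N_TX, N_RX = 12, 16                    # target antenna counts (assumed)
print(virtual_channels(N_TX, N_RX))    # 192 virtual channels
print(chips_needed(N_TX, N_RX, 3, 4))  # baseline chip: 4 devices
print(chips_needed(N_TX, N_RX, 6, 8))  # doubled channels: 2 devices
```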
At the same time, ON Semiconductor will develop the radar signal processing and customize external interfaces to industry standards, whether existing or still emerging.
Xi Yunxia said that ON Semiconductor has started millimeter-wave radar projects with key customers in Greater China, supplying them with chips; the customers' products are already in development.
One chart summarizing ON Semiconductor’s product lineup for automotive sensing.