The demand for cameras per vehicle will continue to rise as autonomous driving levels increase, and we estimate that global automotive CIS may grow into a market worth tens of billions of dollars. ADAS will drive the rapid development of vehicle-mounted lenses: according to Roland Berger's forecast, only 14% of vehicles worldwide will lack ADAS in 2025, and we expect the number of cameras per vehicle to reach 7-9 by 2025.
LiDAR: A must-have sensor for high-level autonomous driving. In low-light environments such as tunnels and garages, there are certain technical flaws in achieving L3 or even higher levels of autonomous driving through camera algorithms, but LiDAR can effectively solve these problems. According to TrendForce data, automotive LiDAR will be the main application scenario for LiDAR in the future, with a market share of 60.0% and 83.0% in 2020 and 2025 respectively. Its market size will increase from US$409 million in 2020 to US$2.434 billion in 2025, with a compound annual growth rate of 42.9%.
Source: Guosheng Securities Research Institute
To get the full PDF report, click the blue text "Smart Car Expert" above and reply "Car Camera 9" to receive it.
In-vehicle CIS: A 100 billion yuan track under intelligent driving
Environmental perception is one of the most critical links in achieving autonomous driving, and its core is the sensor. Currently there are two main sensor types: cameras and radars. The difference is that cameras perceive passively, receiving light from the environment, while radars perceive actively by emitting and receiving their own waves. By detection range and resolution, radars are divided into ultrasonic radar, millimeter-wave radar, and laser radar (LiDAR).
LiDAR offers long range and high resolution but is expensive. Millimeter-wave radar is compact, highly robust to weather, and far cheaper than LiDAR; it is mainly divided into 24GHz and 77GHz/79GHz bands, the latter offering longer range at the cost of a more difficult manufacturing process, though millimeter-wave radar's resolution of stationary objects remains a limitation. Cameras have the lowest cost but are easily affected by weather and require complex algorithm support. According to Yole, the ADAS camera module market is expected to reach US$8.1 billion in 2025.
The iterative upgrade of smart cars is unstoppable, and automobiles will be a high-growth market for CMOS image sensors. Car cameras were originally used mainly in reversing systems; with the commercialization of 5G and the rapid popularization of ADAS (Advanced Driver Assistance Systems), cars are becoming intelligent at an accelerating pace. As the core of autonomous driving development, perception technology has catalyzed increases in both the volume and price of automotive image sensors.
According to Omdia, automotive cameras and industrial vision are expected to become the two fastest growing downstream areas of image sensors from 2020 to 2030, among which the average annual compound growth rate of automobiles is expected to reach nearly 20% in the next ten years.
New car manufacturers are more aggressive in camera configuration, which is expected to accelerate CIS adoption. New car makers have consistently been more active in driving technological change. Unlike traditional automakers that improve automation gradually, new entrants such as NIO mostly take a leapfrog development route, skipping L1 and L2 and accelerating mass production of L3 and L4 models. Naturally, they are also a step ahead at the autonomous driving sensor layer and are the first to put more cameras on board.
According to statistics, the Audi A8 and Mercedes-Benz S, both of which are L3 levels, are equipped with 5 and 6 cameras respectively, while the L2+ level autonomous driving cars of the "new forces in car manufacturing" Tesla, NIO, Ideal, and Xpeng are mostly equipped with more than 8 cameras. NIO's latest L4 level luxury model ET7 is equipped with 11 8-megapixel cameras, and Sony's concept electric car Vision-S is equipped with 18 cameras.
The in-vehicle CIS is showing a trend towards high resolution, and its value is expected to continue to increase.
Low-level L1-L2 smart cars do not require high resolution for CIS, but as the level of autonomous driving increases, the driving tasks undertaken by the car become more complex. Whether from a functional or safety perspective, it is required to achieve higher object recognition accuracy, which means that the car must use a higher-resolution CIS.
According to TSR, VGA and 2-megapixel parts still dominate automotive CIS shipments, but the share of 2-megapixel-and-above CIS will grow rapidly. By 2023, shipments of 2-megapixel and 5-megapixel-and-above CIS are estimated to reach 1.042 billion and 154 million units, respectively.
In the long run, autonomous driving is a major development trend in the automotive industry and its application promotion is accelerating. In-vehicle CIS is a potential market worth tens of billions of dollars. At present, the average price of automotive image sensors is about 4-5 US dollars. Analogous to the development trend of the mobile phone market, we believe that the high-end development of in-vehicle cameras in the future will also drive the value of CIS to gradually increase.
According to our estimates, the global automotive CIS market size will be US$1.22 billion in 2020 and is expected to reach US$5.4 billion by 2025, with a CAGR of 34.7%. In the long run, we assume that the annual global automobile production will be between 80 million and 100 million vehicles. In the future, if the average car is equipped with 13 cameras, the value of CIS per vehicle is expected to exceed US$100. By calculation, the global automotive image sensor market space will reach nearly US$10 billion!
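The long-run estimate above can be reproduced with a hypothetical back-of-envelope model. The inputs below are assumptions from the text (80-100 million vehicles per year, 13 cameras per vehicle) plus an assumed future per-sensor ASP of about US$8, consistent with the claim that per-vehicle CIS value would exceed US$100:

```python
# Hypothetical back-of-envelope sizing of the automotive CIS market.
# All inputs are assumptions from the surrounding text, not measured data.
def cis_market_usd(vehicles: int, cameras_per_vehicle: int, asp_usd: float) -> float:
    """Total market = annual vehicle production * cameras per vehicle * sensor ASP."""
    return vehicles * cameras_per_vehicle * asp_usd

ASP_FUTURE = 8.0  # assumed future average selling price per CIS, USD

per_vehicle_value = 13 * ASP_FUTURE                 # exceeds US$100 per vehicle
low  = cis_market_usd(80_000_000, 13, ASP_FUTURE)   # conservative production
high = cis_market_usd(100_000_000, 13, ASP_FUTURE)  # optimistic production

print(f"per-vehicle CIS value: ${per_vehicle_value:.0f}")
print(f"market size: ${low/1e9:.1f}B - ${high/1e9:.1f}B")
```

Under these assumptions the total lands between roughly US$8 billion and US$10 billion, matching the "nearly US$10 billion" conclusion.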
OmniVision has worked on automotive CIS chips for more than 15 years and is now one of the world's top two suppliers, with a 29% global market share in 2021. OmniVision began mass production of its first automotive image sensor in 2008, a decade before Sony entered the automotive field. In 2009 it achieved mass production of its first-generation high-dynamic-range split-pixel technology (Split Pixel), and completed two further Split Pixel iterations in 2012 and 2016. In 2018, OmniVision reached mass production of its first-generation Deep Well pixel architecture CIS, with the second generation released in 2019.
Currently, many of its solutions have been widely used in vehicle systems such as rear-view camera (RVC), surround view system (SVS), camera monitoring system (CMS), ADAS (driver assistance system), e-Mirror (electronic rearview mirror) and DMS. Its downstream customers include mainstream car manufacturers such as Mercedes-Benz, BMW, Audi and GM.
OmniVision has currently launched a number of automotive CIS products based on the leading Nyxel® near-infrared (NIR) technology, LFM and PureCel®Plus-S stacked pixel architecture technology, which have excellent performance in dynamic range, LFM performance, power consumption, etc. It is expected to increase its market share in line with the high growth trend of the automotive track in the future.
➢ On January 15, 2022, OmniVision demonstrated the OX08B40, its first 8-megapixel image sensor system for automotive front-view cameras. The concept demonstration previews sensors for future L4 and even L5 vehicles. The sensor delivers higher image quality at up to 4K resolution and, applied to automotive front view, will significantly improve vehicle safety.
➢ On May 19, 2021, OmniVision Technologies released the high-performance OAX4000 ASIC image signal processor, a companion ISP for the company's HDR sensors designed to provide a complete multi-camera viewing solution with fully processed YUV output. It can process up to four 140dB HDR camera modules, with industry-leading LED flicker mitigation (LFM) and 8-megapixel resolution. The product supports multiple color filter array (CFA) patterns, including Bayer, RCCB, RGB-IR and RYYCy. In addition, the OAX4000 consumes over 30% less power than the previous generation. It suits a variety of automotive applications, including surround-view systems, electronic rearview mirrors, in-cabin cameras, and autonomous driving cameras.
➢ On June 2, 2020, OmniVision Technologies released the OX03C10, the world's first automotive image sensor that integrates 3.0um large pixels, 140dB high dynamic range (HDR) and industry-leading LFM functions, ensuring high image quality for automotive observation applications such as rear view, surround view, camera monitoring system (CMS) and electronic rearview mirrors.
➢ On May 19, 2020, OmniVision Technologies released the 2.5-megapixel ASIL-B grade sensor OX03A2S, the automotive industry's first image sensor equipped with Nyxel® near-infrared (NIR) technology. It is designed for external imaging applications and can be used in low-light or even no-light environments within 2 meters around the vehicle body. It can also improve RGB image capture performance in bright environments by increasing sensitivity.
The CameraCubeChip module powers the world's smallest automotive camera. In June 2020, OmniVision launched the world's first automotive wafer-level camera, the OVM9284 CameraCubeChip™ module, the smallest automotive camera available, providing a one-stop hardware solution for driver monitoring with high-quality imaging in dark environments.
This 1-megapixel module measures a compact 6.5 x 6.5 mm, making it possible to mount it where the driver will hardly notice it. The module also has the lowest power consumption of any automotive camera module, more than 50% lower than competing products of similar performance, so it can run continuously in a confined space while staying cool enough to maintain high image quality.
ADAS is accelerating its penetration, and the in-vehicle camera industry is benefiting greatly
New energy vehicles are accelerating their penetration, and automotive lenses are widely used
In-vehicle cameras are widely used in the automotive field. From early uses in driving recorders, reversing images, and parking surround view, they have gradually expanded to driver monitoring, parking assistance, night vision, cabin monitoring, and ADAS assisted driving. By mounting position, cameras can be divided into front-view, rear-view, surround-view, side-view, and in-cabin cameras.
The operating environment of automotive lenses is demanding. "Automotive-grade" lenses must meet requirements such as optical focal-plane stability, thermal compensation of the optical focal plane and camera, and product reliability under harsh conditions, so their durability requirements exceed those of cameras in smart mobile devices. At the same time, the automotive supply chain is closed and has formed a relatively stable supply system; product certification cycles are long, and industry barriers are correspondingly high.
The momentum of automotive lenses is strong, and it is expected to benefit related companies in the industry chain in the future. Let's take Sunny Optical's automotive lens shipments as an example:
Sunny Optical shipped a total of 67.98 million automotive lenses in 2021, a year-on-year increase of 21.02%. In January 2022, it shipped 7.527 million automotive lenses, up 2.5% year-on-year and 55.3% month-on-month, the highest monthly shipment since it entered the automotive lens business.
Although shipments in some months of 2021 declined year-on-year due to recurring COVID-19 outbreaks and shortages of certain components, overall shipments still grew significantly, driven by the multi-scenario application of automotive lenses and the steadily growing base of smart cars downstream. Maintaining growth in an uncertain external environment also indirectly confirms the high certainty of automotive lens demand going forward.
In the future, with the steady increase in sales of smart cars and the increase in the number of on-board lenses installed on each vehicle, on-board lenses will maintain a rapid growth trend and will be deeply beneficial to related companies in the industrial chain.
The penetration rate of ADAS is accelerating, driving the rapid development of vehicle cameras
The growth in demand for automotive cameras is mainly driven by the development and popularization of ADAS. ADAS is the mainstream technology path toward autonomous driving, and its key component is the vision system, which extends driver visibility by sensing the road environment and responds to dangerous situations when the driver is inattentive, thereby improving driving safety. Over the next five years, shipments of autonomous vehicles will maintain rapid growth, driving up the volume of in-vehicle cameras.
According to IDC, the global total shipments of autonomous vehicles are expected to increase from 27.735 million in 2020 to 54.247 million in 2024, with a penetration rate expected to exceed 50%. The CAGR from 2020 to 2024 will reach 18.3%, of which L3 shipments in 2024 may reach about 690,000 units.
Compared with traditional fuel vehicles, electric vehicles are more suitable for the application of autonomous driving technology. The advantages are:
1) The motor has faster response speed and higher safety;
2) Autonomous driving requires additional electrical equipment such as cameras and radars. Electric vehicles can power these devices directly without converting fuel into electricity, so energy losses are low;
3) The LIN and CAN bus networks of traditional fuel vehicles can no longer cope with autonomous driving and need to be upgraded to faster MOST and automotive Ethernet buses. Because fuel-vehicle designs rely heavily on platformization and modular reuse, such an upgrade touches many subsystems and is difficult to retrofit.
Leading companies in the electric vehicle field at home and abroad have established their brand image through the spirit of the Internet, paying more attention to the sense of technology in product creation. Electric vehicles have a high degree of electrification and are more daring to apply advanced intelligent driving technology. On-board lenses have benefited from this great wave of electric vehicle development.
China divides the autonomous driving of smart cars into five stages: driver assistance (DA), partial autonomous driving (PA), conditional autonomous driving (CA), highly autonomous driving (HA) and fully autonomous driving (FA). The "Technical Roadmap 2.0 for Intelligent Connected Vehicles" released in 2020 states:
➢ In 2025, the market share of PA and CA-level intelligent connected vehicles in China should exceed 50% (L2+L3 > 50%);
➢ By 2030, the share of PA and CA-class connected cars will exceed 70%, and the share of HA-class connected cars will reach 20%. (L2+L3>70%, L4>20%);
➢ By 2035, the Chinese intelligent connected vehicle industry system will be more complete, and various types of connected highly automated driving vehicles will be widely used in large areas of China. (L3 and above connected vehicles will be widely used)
According to Statista, the global ADAS market is expected to grow from US$764 million in 2015 to US$3.195 billion in 2023, a compound annual growth rate of 19.58%. According to IHS Markit, the penetration rate of L2-level connected cars in China in 2021 was 20%, and that of L3 was 0. For the targets above to be achieved, with the combined share of L2 and L3 exceeding 50% in 2025 and 70% in 2030, there is still large market headroom.
The global ADAS penetration rate is accelerating, and only 14% of vehicles in the world will not have ADAS in 2025. According to Roland Berger's research forecast, by 2025, 40% of vehicles in all regions of the world are expected to have L1 functions, and the proportion of vehicles with L2 and higher functions will reach 45%. Only 14% of vehicles worldwide will not have ADAS functions.
In terms of specific ADAS functions, Roland Berger data indicate that penetration of L1-L2 functions in 2025 will be significantly higher than in 2020, and L3-and-above ADAS functions will begin to reach the market, with highway pilot (HWP) and remote parking reaching 9% penetration and fully automated driving reaching 1%. Accelerating global ADAS penetration will inevitably boost upstream and downstream industries such as vehicle cameras and LiDAR, and companies in the industry chain stand to benefit greatly.
Models of leading electric vehicle companies at home and abroad all use a large number of on-board cameras.
Tesla's entire model lineup carries 8 cameras + 12 radars, including a triple-lens camera at the front of the car, a reversing camera at the rear, and two side-view cameras on each side of the body. The NIO ES6 is equipped with the NIO Pilot assisted-driving system and 23 items of perception hardware, including 8 cameras + 12 radars.
Ideal ONE is slightly inferior in terms of camera configuration. Of the two cameras on the windshield, only one monocular camera is involved in assisted driving perception, the other is a road information collection camera, and there are 4 cameras that form a 360-degree surround view.
Xiaopeng Motors defines P7 as L2.5 driving, with the largest number of 14 cameras among its peers.
At present, major car manufacturers have formulated their autonomous driving vehicle development plans. After global mainstream car manufacturers such as Mercedes-Benz, BMW, and Volkswagen launch high-level ADAS cars, it is expected to drive the pace of autonomous driving development of other fuel car manufacturers. The ADAS penetration rate will further increase, which will deeply benefit the market for automotive lenses and lidar.
The number of cameras per vehicle is increasing
It is expected that the number of cameras equipped on a single L4+ vehicle will reach 11-16 in the future.
According to the installation position, vehicle-mounted cameras can be divided into front view, surround view, rear view, side view and interior view. The upgrade of autonomous driving technology requires higher and more comprehensive perception. The demand for cameras in vehicles will continue to increase as the functional areas of the autonomous driving system are enriched and the level is increased.
We judge that at the L4/L5 autonomous driving level, forward vision requires 1-3 cameras depending on the level, side vision requires 2-4, rear view and reversing require 1, surround view and automatic parking assistance require 4, in-cabin driver monitoring requires 1-2, and passenger monitoring will add demand for 1 more. In addition, dash cams or event data recorders generate rigid demand for another 1. Based on this analysis, we predict that future demand for cameras on L4-and-above autonomous vehicles may reach 11-16 per vehicle.
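The per-position tally above can be sketched as a simple range sum; the (min, max) counts per position are the report's own judgments:

```python
# Camera demand per mounting position for L4+ vehicles, per the report's judgment.
positions = {
    "front view":           (1, 3),
    "side view":            (2, 4),
    "rear view/reversing":  (1, 1),
    "surround view/APA":    (4, 4),
    "driver monitoring":    (1, 2),
    "passenger monitoring": (1, 1),
    "event recorder":       (1, 1),
}

low = sum(lo for lo, hi in positions.values())
high = sum(hi for lo, hi in positions.values())
print(f"L4+ per-vehicle camera demand: {low}-{high}")  # 11-16
```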
It is expected that the number of cameras installed on a single vehicle will reach 7-9 by 2025. Currently, the autonomous driving level of Tesla Model 3 is L2, and the number of on-board cameras is 8, which is at a medium level in terms of autonomous driving level and the number of cameras installed.
According to the research data of automobile OEMs and Roland Berger, we found that by 2025, L2+ autonomous driving vehicles will account for a large proportion of new energy vehicles worldwide. We expect that vehicles with L4 and above autonomous driving levels will be equipped with 11-16 cameras, and L2 and above vehicles will be equipped with at least 6 cameras. Therefore, the number of cameras installed on new energy vehicles worldwide in 2025 is expected to be 7-9.
In addition, the multi-camera iteration of smartphones can also provide a certain reference for the development of automotive lenses:
Within two years of Huawei and Apple releasing their first dual-camera phones in 2016, dual cameras became standard on smartphones. Over the following 4-5 years, the number of rear cameras grew from two to nearly four. According to Counterpoint, the average number of rear cameras on smartphones worldwide in 2020 was 3.7, and smartphones with 4 or more cameras took a 29% market share.
Tesla is the leader in the global smart car industry. Since the Model 3's release in 2016, new energy vehicles have fully entered the mass market. After its hardware upgrade, the Model 3 currently carries 8 on-board cameras (3 front, 2 side-front, 2 side-rear, and 1 rear). Given Tesla's position and leadership in the new energy vehicle industry, we believe the industry will keep converging toward this configuration in the coming years, with perhaps more than 30% of manufacturers exceeding it, i.e., carrying even more on-board cameras (sensors) to achieve high-level autonomous driving.
LiDAR - an essential sensor for high-level autonomous driving
Facing complex environments, LiDAR has advantages
For autonomous driving, there are currently two solutions on the market:
➢ Vision-based solution: relies mainly on cameras, which perceive a rich external environment and can identify the overall shape and structure of objects relatively completely, but are easily affected by ambient light. Tesla is the main proponent among automakers.
➢ LiDAR solution: takes LiDAR as the core sensor, using lasers to scan the surroundings and build a high-resolution three-dimensional image, then works with millimeter-wave radar, cameras and other devices to enable autonomous driving. Its advantages are longer detection range and higher accuracy than vision solutions, and immunity to ambient light; however, extreme rain, snow, fog and haze scatter the emitted beams and degrade the three-dimensional reconstruction, and later maintenance costs are high.
It is undeniable that when facing relatively complex scenes, LiDAR has absolute advantages and is difficult to replace. In low-light environments such as tunnels and garages, there are certain technical defects in the use of camera algorithms to achieve L3 or even higher levels of autonomous driving, which LiDAR can effectively solve. At the same time, the combination of camera + millimeter wave also has certain recognition barriers for non-standard static objects when dealing with high-speed car scenes, which is why Tesla occasionally has some accidents caused by autonomous driving around the world.
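The ranging principle behind the LiDAR solution is time-of-flight: distance is half the round-trip time of a laser pulse multiplied by the speed of light. A minimal sketch with illustrative values:

```python
# Time-of-flight ranging: d = c * t / 2 (the pulse travels out and back).
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to target given the measured round-trip pulse time."""
    return C * round_trip_s / 2.0

# A return pulse 1 microsecond after emission puts the target ~150 m away.
print(f"{tof_distance_m(1e-6):.1f} m")
```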
According to the structure, laser radar can be divided into mechanical laser radar, hybrid solid-state laser radar (MEMS), and solid-state laser radar (OPA & FLASH):
Mechanical LiDAR is the most mature technology at present. By rotating the transmitter head, its transmitting and receiving system sweeps line-arranged lasers into a surface, forming multi-layer laser arrangements across multiple vertical directions to achieve dynamic scanning and reception. However, its high cost, complex assembly, processes such as optical path alignment, and constant rotation leave it short on reliability in driving environments, making it difficult to meet automotive-grade requirements in the early stages of development.
Solid-state LiDARs include optical phased array (OPA) and FLASH. Compared with hybrid solid-state LiDARs, all-solid-state LiDARs remove rotating parts from their structure, achieving a smaller volume while ensuring high-speed data acquisition and high-definition resolution. Among them:
➢ The optical phased array (OPA) uses the principle of coherence to form a matrix through multiple light sources. After different light beams are superimposed on each other, some directions will cancel each other out while others will be enhanced, thereby realizing the main beam in a specific direction and controlling the main beam to scan in different directions. Since it completely removes the mechanical mechanism and does not need to rotate itself, OPA has the characteristics of fast scanning speed, high precision, good controllability, and small size.
➢ Flash solid-state laser radar, unlike MEMS and OPA, can quickly emit a large area of laser in a short time, and receive it through a highly sensitive receiver to complete the mapping of the surrounding environment. Its advantages are fast and efficient, but at the same time, the short detection distance caused by its principle is difficult to avoid in practical applications.
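The OPA steering principle described above can be illustrated numerically with a generic textbook phased-array sketch (not any vendor's design; the wavelength, element spacing, and element count below are assumptions): a linear phase gradient across the emitters moves the direction of constructive interference.

```python
# Generic phased-array sketch: a linear phase gradient across N coherent
# emitters steers the main beam to a chosen angle. Parameters are assumptions.
import numpy as np

WAVELENGTH = 905e-9   # common LiDAR wavelength (m), assumed
D = WAVELENGTH / 2    # emitter spacing: half wavelength avoids grating lobes
N = 64                # number of emitters

def array_factor(theta_rad: float, steer_deg: float) -> float:
    """Normalized far-field amplitude with the phase gradient set for steer_deg."""
    k = 2 * np.pi / WAVELENGTH
    phase_step = -k * D * np.sin(np.radians(steer_deg))  # per-element phase shift
    n = np.arange(N)
    field = np.exp(1j * n * (k * D * np.sin(theta_rad) + phase_step)).sum()
    return abs(field) / N

# Scan the far field and locate the main beam when steering to +10 degrees.
angles = np.radians(np.linspace(-30.0, 30.0, 601))
response = np.array([array_factor(t, steer_deg=10.0) for t in angles])
peak_deg = float(np.degrees(angles[int(np.argmax(response))]))
print(f"main beam at {peak_deg:.1f} deg")  # lands at the commanded 10.0 deg
```

Shifting `steer_deg` moves the peak without any moving parts, which is exactly why OPA scans quickly and controllably.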
Sensors essential for high-level autonomous driving
As a sensor essential for new energy vehicles to reach L4 or even L5, LiDAR will play a vital role in the new energy vehicle industry chain as products pass automotive-grade certification and design wins are gradually put into production.
At present, the global LiDAR market can be divided into three major application scenarios: automotive applications (ADAS + autonomous driving), industry and transportation, and smart cities. According to TrendForce data, the total market size of the three major application scenarios in the world in 2020 was US$682 million, and it is expected to grow to US$2.932 billion in 2025, with a compound annual growth rate of approximately 33.9%;
Among them, automotive is the main application scenario of lidar in the world, with a market share of 60.0% and 83.0% in 2020 and 2025 respectively. Its market size will increase from US$409 million in 2020 to US$2.434 billion in 2025, with a compound annual growth rate of 42.9%.
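The quoted growth rates can be sanity-checked with the standard CAGR formula, CAGR = (end/start)^(1/years) − 1:

```python
# Verify the quoted compound annual growth rates (figures in US$ millions).
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

total_lidar = cagr(682, 2932, 5)  # all LiDAR applications, 2020 -> 2025
auto_lidar = cagr(409, 2434, 5)   # automotive LiDAR only, 2020 -> 2025

print(f"total LiDAR CAGR: {total_lidar:.1%}")      # ~33.9%
print(f"automotive LiDAR CAGR: {auto_lidar:.1%}")  # ~42.9%
```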
Currently in the field of autonomous driving, L2 and below levels can be achieved without relying on lidar (such as Tesla Model 3), so we believe that lidar is not a necessary sensor in L2 and below levels. LiDAR solutions began to be used in L3 and became popular in L4 and above levels.
As the penetration rate of L3-and-above autonomous driving is still low worldwide, and only a few automakers have launched models equipped with LiDAR, the LiDAR industry has not yet seen explosive growth. We expect that over the next three years, as autonomous driving levels rise and the view that "high-level autonomous driving cannot do without LiDAR" becomes the global consensus, LiDAR will develop rapidly.
At present, the global LiDAR field is still in the early stages of competition, and the industry is flourishing.
According to Yole's statistics, there are at least 80 companies in the world that specialize in LiDAR, of which more than 60 focus on the automotive LiDAR market. As of Q3 2021, 14 companies have received orders for related automotive LiDARs.
The current global situation is still unclear. According to Yole's statistics, in 2021, the largest market share in the global automotive and industrial lidar market is held by France's Valeo, with a market share of 28%. RoboSense, DJI, Huawei, and Hesai Technology have market shares of 10%, 7%, 3%, and 3%, respectively.
Among them, Valeo's Scala is the only ADAS vehicle LiDAR to have achieved mass production, designed into models such as the Audi A8, Mercedes-Benz S-Class, and Honda Legend.
Velodyne, the world's leading LiDAR company, has not made much progress in the original equipment market due to factors such as the life of mechanical LiDAR and difficulty in meeting vehicle regulations. However, with the company's recently proposed MEMS semi-solid solution, it is expected to seize a certain share in the automotive market in the future. Hesai Technology, a domestic company, has deployed both mechanical and MEMS semi-solid LiDARs. At present, the company's products are the main LiDARs in driverless cars and are favored by Baidu, Bosch, and Daimler.
Good news for the LiDAR optics industry chain.
The optical system is crucial for LiDAR: while accounting for photoelectric conversion efficiency, it must be tuned to each LiDAR type so that parameters such as spot size, divergence angle, and aperture meet automotive requirements, which demands a strong optics foundation. At present, the main domestic LiDAR component suppliers include Yongxin Optics and Sunny Optical.
➢ Among them, Sunny Optical draws on its deep optics expertise and can manufacture LiDAR lenses, rotating mirrors, windows and other components. It has also joined the Leddar ecosystem, providing first-class LiDAR solutions together with LeddarTech.
➢ In 2018, Yongxin Optics partnered with Quanergy on an order for 25,000 laser-ranging lenses, entering the LiDAR market. It currently works with Innoviz on prisms, rotating mirrors, windows and other components for MEMS semi-solid-state LiDAR.
To be continued...
2022 Cockpit Monitoring System (IMS: DMS, OMS) Conference Preview:
Smart Car Expert and Yimao Information Technology will hold the 2022 (Second) Cockpit Monitoring System (IMS) Forward-looking Technology Exhibition and Exchange Conference in Shanghai on March 25-26, 2022. More than 300 industry experts from well-known automakers, Tier 1 suppliers, system integrators, module companies, core component suppliers, packaging and testing companies, and research institutes will discuss industry trends, innovative applications, and technology development of cockpit monitoring systems, aiming to create a comprehensive platform for information sharing, experience exchange, technical support, and product display.