
The EAC2024 (5th) Automotive Vision Camera Forward-Looking Technology Exhibition and Exchange Conference came to a successful conclusion! A review of the speeches!

Latest update: 2024-08-01

Conference Review

The EAC2024 (5th) Automotive Vision Camera Forward-Looking Technology Exhibition and Exchange Conference, organized by Yimao Auto together with companies from across the automotive vision camera industry chain, was held on June 22 at the Suzhou International Expo Center. With strong support from industry colleagues, it brought together more than 300 experts, including OEMs, Tier 1 suppliers, vision solution providers, camera module makers, lens makers, image sensor makers, optical component makers, materials companies, testing and verification companies, and third-party institutions, to explore the development and future of automotive camera sensor technology in autonomous-driving perception.


Core topics



◆ What are the future development trends of automotive vision camera technology? What do revenue forecasts for automotive vision cameras, image sensors, and lens sets look like for 2023-2029? How should the camera product life cycle be managed?

◆ How can cameras capture video in extremely low light and HDR environments? How are multispectral sensors used in nighttime autonomous driving?

◆ What requirements does AI development place on camera design? Why does intrinsic calibration matter for automotive vision, and how is it done?

◆ How are in-vehicle cameras assembled and tested with high precision? What qualifies as an excellent high-compute driving-parking integrated product? What KPIs should be used to evaluate automotive computer-vision performance?

Around these topics, technology leaders and product experts from Yole, Cognizant, Volvo, Mobileye, Aiwei Vision, Magic Vision, General Motors, Kem Vision, SenseTime, Conrad, Visionary.ai, DXOMARK, Image Algorithmics, and other companies shared their in-depth explorations of the industry.


Review of the speeches


June 22, all day

Automotive Vision Camera Session





On the morning of the 22nd, the first speaker of the automotive vision camera session was Mr. Raphaël da Silva, an analyst at the leading global market-analysis firm Yole Group, who presented on the theme "Imaging for automotive market overview". He pointed out that over the next few years, rising resolution will remain a trend in support of more precise autonomous-driving functions. Current front ADAS camera systems sit between 5 and 8 MP, and ADAS cameras are expected to exceed 8 MP, since more objects must be identified at longer distances and scenes must be captured in finer detail. Obtaining accurate depth information requires 3D imaging, which increases system complexity, and the use of 2D NIR or RGB-IR image sensors likewise adds complexity at the software level.


He noted that the transition from all-glass to hybrid glass-plastic lens sets in automotive camera modules is already under way, and cost pressure will ensure it continues. The Chinese automotive camera ecosystem is becoming more vertically integrated, with component suppliers providing entire camera modules. He also presented revenue forecasts by automotive vision camera type for 2023-2029: the camera market and the image sensor market are expected to grow at compound annual growth rates of 6.9% and 5.4%, reaching US$8.4 billion and US$3.2 billion respectively by 2029; the lens set accounts for roughly one third of the camera module price, and its value is expected to grow from US$1.7 billion to US$2.4 billion by 2029. He added that DMS will be the fastest-penetrating application from 2023 to 2029.
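
To make the growth figures concrete, here is a quick back-of-the-envelope check of the forecast arithmetic (my own calculation under a simple-compounding assumption, not from the talk):

```python
# Back-of-the-envelope check of the forecast arithmetic (illustrative only).
cagr_camera = 0.069                                 # camera-market CAGR, 2023-2029
market_2029 = 8.4                                   # USD billion in 2029
market_2023 = market_2029 / (1 + cagr_camera) ** 6  # six compounding years
print(f"Implied 2023 camera market: ~US${market_2023:.1f}B")  # ~US$5.6B

# Lens sets growing from US$1.7B (2023) to US$2.4B (2029):
cagr_lens = (2.4 / 1.7) ** (1 / 6) - 1
print(f"Implied lens-set CAGR: ~{cagr_lens:.1%}")   # ~5.9%
```

The implied lens-set growth rate of roughly 5.9% sits plausibly between the camera and image-sensor CAGRs quoted above.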





The next special guest was Dr. Kathrin Kind, Chief Data Scientist / AI Director Nordics at Cognizant Technology Solutions, who spoke on "Application and Performance Optimization of Multi-spectral Sensors in Nighttime Autonomous Driving". She discussed in depth the application of multispectral sensors in nighttime autonomous driving: enhancing sensor performance, implementing predictive analytics, and ensuring model scalability and adaptability are all key to keeping an autonomous driving system operating stably in complex nighttime environments.


Advanced data analysis and machine learning techniques are used to build a predictive model that estimates sensor measurements from historical data, enabling the autonomous driving system to make informed decisions when a sensor is temporarily unavailable. On scalability and adaptability, she described a virtual-sensor design that can be readily extended and adapted to different autonomous driving scenarios and night environments, ensuring reliable performance across driving conditions and sensor configurations. Finally, she elaborated on the effectiveness of multispectral sensors for identifying objects at night and explored ways to optimize performance for the varied lighting conditions of rural areas.
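
As an illustration of that virtual-sensor idea, here is a minimal sketch: a regression model trained on the recent history of healthy sensors stands in for a dropped sensor. All data, model choice, and names here are synthetic assumptions for illustration; the talk did not present an implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical "virtual sensor": predict a temporarily unavailable sensor's
# reading from the recent history of the remaining healthy sensors.
rng = np.random.default_rng(0)
n, history = 5000, 4
others = rng.normal(size=(n, 3))                     # three healthy sensor channels
target = others @ np.array([0.5, -1.2, 0.8]) + 0.1 * rng.normal(size=n)

# Features: a short sliding window (current + 3 lagged samples) of the others.
X = np.hstack([np.roll(others, k, axis=0) for k in range(history)])[history:]
y = target[history:]

model = Ridge(alpha=1.0).fit(X[:4000], y[:4000])
print("held-out R^2:", round(model.score(X[4000:], y[4000:]), 3))
# At runtime, model.predict(...) would stand in for the dropped sensor's output.
```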





The success of AD/ADAS depends heavily on the perception sensors used to observe drivable road conditions, enabling accurate decision-making and optimal motion control. A major component of those conditions is the road surface itself, which varies greatly with inclement weather, ground-texture irregularities, and road debris.

The third speaker of the vision camera session was Ms. Derong Yang, Technical Expert in Vehicle Motion Control at Volvo Car Group, who systematically expounded on the theme "AI-based road condition estimation using on-board perception sensors". She said that climate and accident data collected in Sweden show the risk of accidents is 3-30 times higher on snow or ice than on dry roads. In the United States, about 25% of car accidents each year are weather-related, and thousands of people are killed or injured in crashes on snowy, slushy, or icy roads. To address the safety problem of low road friction, Volvo Cars has spent more than 15 years on friction estimation.


Today, Volvo Cars has put into production a slippery-road warning function that uses friction estimation based on a physical model, with the tires acting as sensors. To further improve estimation, Volvo Cars has studied non-contact methods using onboard perception sensors such as cameras and lidar. AI-based approaches lean more on data-driven models to reduce the calibration burden and achieve more accurate modeling, thereby increasing the availability of friction data. Active safety systems need this preview friction information to make better-informed decisions in time and to improve the operational availability of autonomous vehicles. Finally, she systematically explained the potential and challenges of applying AI methods to predict road conditions.





When driving a conventional car, the driver must keep their eyes on the road and steer the vehicle. A fully autonomous vehicle will take over both of these tasks from the human driver. With Mobileye SuperVision, the driver still needs to keep their eyes on the road, but can take their hands off the steering wheel and let the system do most of the driving.


The next speaker was Mr. Yang Mingyang, head of strategy and business development at Mobileye, who introduced the topic "Mobileye SuperVision™: A Bridge to Consumer-grade Autonomous Vehicles". He first pointed out that Mobileye SuperVision is essentially a mass-produced, vision-based autonomous driving solution. For many years it has been tested and refined on open roads in some of the world's most challenging driving environments (from Jerusalem to New York, from Paris to Tokyo), and the underlying technology has shipped in more than 125 million vehicles. Building on this experience, Mobileye SuperVision provides hands-off, eyes-on driving functions at speeds of up to 80 mph (130 km/h) on all regular road types. In other words, a vehicle equipped with Mobileye SuperVision can largely operate like an autonomous vehicle, but it still requires driver supervision.


To achieve such functions, Mobileye has developed a portfolio of technologies spanning sensors, mapping, driving policy, and processors. Mobileye Chauffeur™, positioned as standard equipment for premium brands, will enable eyes-off driving on highways and is slated for mass production in 2025-2026. Mobileye SuperVision integrates all of the above: 11 cameras, Mobileye Roadbook™ maps built with Road Experience Management™ (REM™) technology, a driving policy based on the Responsibility-Sensitive Safety (RSS™) model, and two of the latest EyeQ™ system-on-chips integrated in the ECU. Through OTA updates, users can upgrade the system as development progresses. On the Mobileye DXP driving-experience platform, customers can define and develop their own driving styles and strategies without having to build the highly complex, high-risk common elements from scratch.

He concluded that Mobileye SuperVision™ is the ultimate evolution of ADAS, delivering hands-off, eyes-on driving, trusted by automakers and well received by end users for its leading performance. Using Mobileye SuperVision as a baseline and bridge, a safe and practical eyes-off system, Mobileye Chauffeur™, can be introduced step by step. Mobileye looks forward to deeper co-creation and win-win cooperation with more OEMs and partners in ADAS and autonomous driving.





Looking at the development history of automotive cameras, one direction is more application scenarios; the other is perception, which demands higher resolution and smaller pixels. Requirements on lenses keep rising: automotive camera lenses now cover more focal lengths, more fields of view, and longer focusing distances. Testing is also more stringent, currently including high/low-temperature tests and flare tests, and cameras used for autonomous driving additionally require intrinsic calibration. These tests are usually performed only on the module side; lens-side testing has changed little.

Mr. Kangwei Jing, senior product manager at Suzhou Aiwei Vision Technology Co., Ltd., a specially invited speaker, detailed vehicle camera development trends under the theme "High-precision assembly and test solutions to help visual perception sensors reach mass production quickly". He said the Chinese market holds an important share of the Asia-Pacific automotive camera market, with a compound annual growth rate expected to exceed 17% over the forecast period (2020-2025). Vehicle camera technology is trending toward greater safety, multifunctionality, high definition, intelligence, and integration. There are currently four main application scenarios: in-cabin, side/rear view, surround view, and front view; the cameras in each scenario serve different functions and place different requirements on lenses. For example, an in-cabin camera must image under both visible and infrared light, while a front-view camera must image through the windshield, a rather special case in current lens testing.


The main cost components of a vehicle camera are the image sensor, the optical lens, and module packaging. The optical lens accounts for about 20% of module cost; although less expensive than the image sensor, it matters most because the lens is the first step in imaging, and roughly 70% of imaging quality is determined by the lens. As vehicle cameras continue to develop over the next few years, he argued, targeted testing of vehicle cameras is essential given the functionality, specificity, and cost of automotive lenses.


He said Aiweishi is committed to developing high-performance automated assembly and test systems for cameras, millimeter-wave radars, lidars, and other image-sensing components, providing testing and R&D services for the automotive, smartphone, security monitoring, 3C, and other sectors. With the intelligent driving industry booming, it focuses on intelligent manufacturing and R&D for vehicle perception sensor modules, offering precision design and assembly solutions for safety perception devices such as automotive cameras and radars. Through long-term cooperation with major automotive camera manufacturers at home and abroad, the company's autofocus, active alignment (AA), testing, and calibration equipment and production lines have made it a technology leader in automotive camera assembly and testing, and it continues to develop image sensor assembly and test solutions for emerging functional requirements and new application fields.





Driven by both demand and supply, China is expected to produce a world-class leader in mass-produced autonomous driving. The future automotive electrical/electronic architecture will evolve through the gradual consolidation and deep integration of ECUs; domain integration is already the general trend. Given this evolution and its associated advantages, driving-parking integration is bound to become the mainstream of the future market.


The next guest of the vision camera session was Mr. Zhang Zheng, Deputy General Manager of Passenger Vehicle Products at Magic Vision Intelligent Technology (Shanghai) Co., Ltd., who analyzed the theme "Magic Vision's Road to Mass-produced Driving-Parking Integration". He outlined the evolving core elements of mass-produced intelligent driving: a full-stack platform with four major capabilities, namely original full-stack algorithm capability, full-stack system capability, full-scenario deployment capability, and a complete data closed loop. An excellent mid-compute driving-parking integrated product can not only deeply reuse sensors and realize driving and parking on a single SoC, but also fully exploit the compute resources of a multi-core heterogeneous SoC, eliminate a separate MCU, and minimize cost.


He said Magic Vision has full-stack autonomous driving technology with independent intellectual property, covering all core algorithms, including environmental perception, multi-sensor fusion, high-precision vehicle positioning, path planning, vehicle control, and driving decision-making, and supporting L1-L4 autonomous driving. Its proprietary deep learning framework supports the six major international and domestic embedded chip platforms, realizing a highly optimized and accurate AI engine. Excellent high-compute driving-parking integrated products can make full use of compute resources to handle the multi-sensor, multi-modal-input, multi-task, multi-scale nature of autonomous driving workloads. A Transformer-based BEV network and an Occupancy Network occupancy grid are merged into a unified multi-task neural network design oriented to integrated driving-parking scenarios. Specifically, for driving perception, Magic Vision's deep learning algorithms support 2 MP / 8 MP camera input; for parking surround-view perception, they support 1 MP / 2 MP camera input and accurately output the results a fully automatic parking system needs, such as parking-space locks, moving obstacles, vehicles, pedestrians, traffic signs, wheel stops, and drivable areas.
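
To illustrate the unified multi-task idea described above, here is a toy PyTorch sketch in which one shared BEV feature map feeds both a driving-perception head and an occupancy-grid head. Layer sizes, grid resolution, and head designs are my own illustrative assumptions, not Magic Vision's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskBEVHead(nn.Module):
    """Toy multi-task design: one shared BEV feature map feeds both a
    driving-perception head and an occupancy-grid head."""
    def __init__(self, bev_channels=128, num_classes=8):
        super().__init__()
        self.shared = nn.Sequential(                 # shared BEV trunk
            nn.Conv2d(bev_channels, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
        )
        self.detection = nn.Conv2d(128, num_classes, 1)  # per-cell class logits
        self.occupancy = nn.Conv2d(128, 1, 1)            # per-cell occupancy logit

    def forward(self, bev):                          # bev: (B, C, H, W) grid
        feats = self.shared(bev)
        return self.detection(feats), self.occupancy(feats)

bev = torch.randn(1, 128, 200, 200)  # placeholder for camera-fused BEV features
det, occ = MultiTaskBEVHead()(bev)
print(det.shape, occ.shape)          # (1, 8, 200, 200) and (1, 1, 200, 200)
```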





On the morning of the 22nd, the last guest of the vision camera session was Mr. Dan Levi, Research Group Manager at General Motors, who interpreted the theme "Object-centric Open-vocabulary Image Retrieval with Aggregated Features". Given how costly and time-consuming data annotation is in autonomous driving, his group's research aims to reduce reliance on large-scale annotated data in pursuit of efficient and robust perception systems.


Levi's team introduced a new object-centric open-vocabulary image retrieval method. It combines the scalability of an image-retrieval pipeline with the effective object recognition of dense detection methods by aggregating dense CLIP embeddings into a compact form. The method significantly outperforms global-feature methods on three datasets, improving accuracy by up to 15 mAP points. The work also incorporates the method into a large-scale retrieval framework, demonstrating advantages in scalability and interpretability.
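
A rough sketch of the general idea: aggregate dense patch embeddings into one compact descriptor per image, then rank images against a text query by cosine similarity. The shapes, the mean-pooling choice, and the placeholder tensors are assumptions for illustration; the paper's actual aggregation is more sophisticated.

```python
import torch
import torch.nn.functional as F

def aggregate_dense(dense):        # dense: (num_patches, D) CLIP patch embeddings
    """Aggregate dense embeddings into one compact, L2-normalized descriptor."""
    return F.normalize(dense.mean(dim=0), dim=-1)

def retrieve(text_emb, image_dense_list, top_k=5):
    """Rank images against a text query by cosine similarity of descriptors."""
    gallery = torch.stack([aggregate_dense(d) for d in image_dense_list])
    scores = gallery @ F.normalize(text_emb, dim=-1)
    return scores.topk(top_k).indices

# Random placeholders standing in for real CLIP features (14x14 patches, D=512):
images = [torch.randn(196, 512) for _ in range(100)]
query = torch.randn(512)           # would be the CLIP text embedding of a query
print(retrieve(query, images))     # indices of the best-matching images
```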





The first speaker of the afternoon vision camera session was Mr. Fu Bingkai, General Manager of Shanghai Kem Vision Technology Co., Ltd., who took questions on the theme "Coordinated Development of Vehicle-mounted Cameras and Vehicle Intelligence". Mr. Fu first introduced the requirements on intelligent AI cameras: ASIL B functional safety, high image quality, resilience to failure risks (vibration, waterproofing, dustproofing, high temperature and pressure, etc.), and software reusability. He then identified perspective deformation and magnification as the culprits limiting AI performance, noting that Section 3.3.3 of ISO 16505 explicitly requires the camera's optical axis to be installed parallel to the vehicle's longitudinal axis. Some domestic products are installed the opposite way, splayed outward, which increases the driver's learning cost; and software cannot correct magnification, aspect ratio, grayscale, or depth of field.


He went on to say that optical design is becoming the technical core of vehicle camera development; the heart of reshaping and reconstruction lies in product requirements, definitions, and intellectual property. Advanced emergency braking, intelligent speed assistance, emergency lane keeping, driver fatigue and attention warning, advanced driver distraction warning, reversing detection, and other safety systems all have the potential to significantly reduce casualties. Some of these systems form the technological basis for deploying autonomous vehicles, and they must remain safe to use throughout the vehicle's life cycle. He noted that AI algorithm component and system suppliers have begun forming alliances, and the ADAS functions required by the upcoming EU General Safety Regulation (GSR) are mandatory to protect vulnerable road users outside the vehicle.


Finally, Mr. Fu introduced Shanghai Kem Vision as a technology R&D enterprise integrating optical design with automotive camera module assembly, production, and sales. Its technology derives from the Scheimpflug principle commonly used in large-format technical cameras: combining the vehicle's 3D data, the geometry between the glass reflector and the human eye's viewing angle, and the regulated field-of-view observation area, it calculates the optical design parameters of the CMS lens. Through big-data modeling it can accurately compute the angle between the lens's focal plane and the CMOS sensor and machine the housing with CNC. Perspective is corrected and depth of field enhanced through structural parts, reducing the main control chip's compute cost and operating power consumption. The company also works to improve the reusability of AI-algorithm cameras across vehicle models, solving the magnification, aspect-ratio, point-light-source, and grayscale problems caused by off-axis perspective deformation, enhancing depth of field while shortening the development cycle of image-processing software, reducing software bugs, and ensuring safety, reliability, and long product life over the whole life cycle. Kem holds fully independent intellectual property for this technology, with 7 granted patents, 2 PCT international applications entering national phases, and two utility-model and invention patents under publication in China. Its products are used in CMS, DMS, BSD, AVM, and ADAS applications for in-vehicle AI algorithms.





The pain points of vehicle camera intrinsic calibration: how can calibration reach the precision that intelligent driving demands? How can calibration devices meet the production line's need for compact, efficient equipment? And how can calibration evaluation accurately assess the precision of the calibrated parameters?

The conference specially invited Dr. Shao Shuangyun, Deputy General Manager of Sichuan Shenruishi Technology Co., Ltd., to give a systematic exposition on "High-precision Calibration Equipment and Test Solutions for Vehicle-mounted Cameras". Dr. Shao explained the principle of intrinsic calibration in detail, noting that it is a key step in vision: accurately estimating the camera's intrinsic parameters, such as focal length, principal point coordinates, and distortion coefficients, establishes the mapping between pixel coordinates in the image and real-world coordinates for positioning and size measurement.
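
For readers unfamiliar with the procedure, the classic chessboard workflow in OpenCV estimates exactly the parameters Dr. Shao listed. It is a desktop-grade illustration of the principle, not Shenruishi's production-line method, and the image folder name is hypothetical.

```python
import glob
import cv2
import numpy as np

# Classic chessboard intrinsic calibration with OpenCV: estimates focal lengths,
# principal point, and distortion coefficients from multiple target views.
pattern = (9, 6)  # inner-corner grid of the chessboard target
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):  # hypothetical folder of target images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

assert obj_points, "no chessboard detections"
rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS (px):", rms)     # calibration-quality metric
print("camera matrix K:\n", K)           # fx, fy, cx, cy
print("distortion:", dist.ravel())       # k1, k2, p1, p2, k3
```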


He then said that, in response to these pain points, Shenruishi has made breakthroughs in key technologies such as special structured-target design, stereo-target design, teleconverter design, and AI-assisted vehicle camera calibration, building a family of vehicle camera calibration equipment. It is the first calibration system supplier in Asia-Pacific, and the third worldwide, to meet the Mobileye autonomous-driving standard, as recognized by Mobileye, the leading global smart-driving solution provider. To meet more customer needs, expand capacity, and improve product reliability and consistency, Shenruishi's automated vehicle camera production line runs through the entire production process, including AA machines, baking ovens, airtightness testers, final-inspection machines, and automated loading/unloading machines.





As autonomous vehicle development progresses, camera ECUs (electronic control units) play an increasingly important role, for example in multifunctional front cameras and surround cameras. Developing and manufacturing these complex cameras requires rigorous testing to satisfy requirements such as functional safety and OEM-specific cybersecurity standards.

On the afternoon of the 22nd, the third speaker of the vision camera session was Mr. Lee Jinjong, Product Development Director of the Automotive Division at Conrad Technology's German headquarters, who spoke on "Camera Product Lifecycle Management: From R&D to Automated Mass-Production Testing". He first introduced the comprehensive functional verification methods used in camera testing, from software development through production (end-of-line) testing. As a global company with 30 years of experience in testing, Conrad Technology can cover the entire camera product life cycle to ensure products reach the best safety and performance standards.

Conrad's testing solutions span the full product life cycle for users across industries, including electronics manufacturing, radio-frequency technology, and optics, providing safe and efficient development, testing, and production processes for consumer electronics, automakers, automotive parts suppliers, automotive electronics research institutions, and university laboratories. Since its establishment in 2014, the company has obtained national "high-tech enterprise" and "specialized and innovative" enterprise certifications and is certified to the GB/T 19001-2016 / ISO 9001:2015 quality management system. Its new product solutions have earned more than 30 national patents.





The next speaker was Mr. Benny, VP of Business Development at Visionary.ai, who gave a systematic presentation on "AI-based low light and HDR solutions". He first pointed out that systems such as ADAS, surround view, electronic rearview mirrors, reversing cameras, and cabin monitoring all rely on stable video streams under all lighting conditions, including low light and high dynamic range. Many of these systems must run at high frame rates and short exposure times, which makes it harder to keep image quality robust enough for both human and machine vision.


He showcased Visionary.ai's new AI-based solution, which gives ordinary automotive cameras near-night-vision capability, enabling them to capture video in extremely low light and HDR environments. Using edge AI, the company's image-processing software greatly improves real-time video quality. Visionary.ai was listed by CB Insights among the 100 most promising AI startups.





The next speaker was Mr. Pierre-Yves Maitre, Product Owner, Image Quality at DXOMARK, who gave a systematic exposition on "Computer Vision: What KPIs for Camera Performance Evaluation?" He opened by asking what KPIs should be used to evaluate automotive computer vision, since camera quality does not necessarily reflect computer-vision quality. Imagine shooting a scene with a camera: some images are good, some bad, and the algorithm behind them delivers DRI (detection, recognition, identification) performance for the computer-vision process as a whole system. Are there KPIs that can infer system performance from test charts and a limited set of captured images? Computer-vision performance is crucial: under what circumstances does the camera become the limiting factor of the entire system, and what should be checked before integrating a camera into an ADAS computer-vision system?


He said the entire system must be considered during testing: camera plus algorithm requires huge numbers of images and millions of kilometers of road tests covering all light and weather conditions to ensure a sufficiently high detection rate, and if a new camera differs in noise, color, or sharpness, testing must be redone. He then proposed a new metric, originally based on frequency analysis of dead-leaves patterns, and suggested indicators for screening and qualifying cameras before integration into an ADAS computer-vision system, including flicker, MTF, FCR, SNR, CPI, flare, and DR. At the component level, MTF, signal-to-noise ratio, and flare are the most important of all. Once the camera, ISP, and detection algorithm are integrated into the computer-vision chain, the correctly resolved spatial frequencies must be pushed further still; the contrast performance indicators and dynamic range he mentioned are both critical KPIs.
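
As a flavor of what one such component-level KPI looks like in practice, here is a minimal patch-SNR measurement on a synthetic uniform gray-chart region (my own toy example; DXOMARK's actual protocols, such as dead-leaves texture analysis, are far more elaborate):

```python
import numpy as np

def patch_snr_db(patch):
    """SNR of a nominally uniform gray-chart patch, in dB."""
    return 20 * np.log10(patch.mean() / patch.std(ddof=1))

rng = np.random.default_rng(1)
flat = rng.normal(loc=140.0, scale=2.5, size=(64, 64))  # synthetic gray patch
print(f"patch SNR: {patch_snr_db(flat):.1f} dB")        # ~35 dB at this noise level
```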


Finally, he introduced DXOMARK, a French technology company and an international leader in multimedia quality assessment for professional and consumer electronics (automotive cameras, machine vision, smartphones, laptops, and more). As a private, independent company, DXOMARK's mission is to help original equipment manufacturers develop high-quality products, enhance the end-user experience, and guide consumers in choosing electronic products.





Historically, image sensors sensed red directly because the computing power needed to mimic the eye's processing was unavailable. The result was reduced SNR from red insensitivity and poor color accuracy, since part of the ideal red spectral response is negative and cannot be realized by a physical filter. As Moore's Law delivered the necessary computing power, CFA designs needed to change.

The last heavyweight speaker of the automotive vision camera session on the 22nd was Mr. Tripurari Singh, Founder and CEO of Image Algorithmics, who interpreted the theme "RGBC - The Best of RCCB and Bayer". He proposed an RGBC color filter array that combines the sensitivity of RCCB with the color accuracy of Bayer, and additionally extends the usable dynamic range of dual-exposure sensors by 12 dB (a factor of four, roughly two extra stops). RGBC processing is complex and has matured in consumer applications over several generations of effort. He said Image Algorithmics focuses on color filter arrays (CFAs) and processing algorithms for capturing extremely high-quality images under real-world conditions; its algorithms support a wide range of CFA patterns, including all patterns currently in production, and even under challenging lighting its new CFA patterns deliver higher signal-to-noise ratio, wider dynamic range, and accurate color. This speech brought the conference to a successful close.

/ Booth Highlights /






The success of this EAC exhibition owes to the strong support and cooperation of all parties. We thank all exhibitors, buyers, and friends from all walks of life for their support of and attention to this event. The EAC2025 (6th) Automotive Vision Camera Forward-Looking Technology Exhibition and Exchange Conference and the EAC2025 Yimao Automotive Industry Exhibition will be held on June 4-6, 2025. We look forward to seeing you again!



THE END








About SmartCar (Zhichehangjia)

Zhichehangjia (智车行家) is a knowledge-sharing and communication platform focused on the intelligent connected vehicle industry. It has established more than 10 industry exchange groups and regularly hosts online live-streamed specials.

You are warmly invited to join the Zhichehangjia automotive vision camera WeChat industry exchange group, which gathers more than 2,000 members from domestic OEMs, system integrators, and beyond, including general managers, R&D directors, chief engineers, and university professors and experts from across the autonomous driving industry chain. Everyone is welcome to join and exchange ideas.

To apply to join, please add Zhige@智车行家 on WeChat: 18512119620.

