Avnet: Say goodbye to the blind men touching the elephant; sensor fusion is standard equipment for the intelligent society

Publisher: EE小广播 | Last updated: 2021-10-21 | Source: EEWORLD | Author: Avnet (安富利公司) | Keywords: Avnet

Today, our lives depend heavily on sensors. As extensions of the human "five senses", sensors perceive the world and can even observe details that the human body cannot. This capability is also essential for the intelligent society of the future.


However, no matter how capable a single sensor is, in many scenarios it still cannot meet people's requirements on its own. For example, an expensive automotive lidar can determine from its point cloud that there is an obstacle ahead, but to know exactly what the obstacle is, the on-board camera must "see" it; and to sense how the object is moving, millimeter-wave radar may be needed as well.


This process resembles the familiar parable of the blind men and the elephant: each sensor, limited by its own characteristics and expertise, sees only one aspect of the object being measured. Only by combining all of the characteristic information can a more complete and accurate picture be formed. This method of integrating multiple sensors is called "sensor fusion".



A more rigorous definition of sensor fusion is: the information-processing process of using computer technology to automatically analyze and synthesize information and data from multiple sensors or multiple sources, under certain criteria, in order to complete the required decision-making and estimation. The sensors serving as data sources can be of the same type (homogeneous) or different types (heterogeneous), but they are not simply piled together; their data are fused at a deep level.
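As a minimal illustration of fusing homogeneous sources (a sketch with invented numbers, not from the original article), two independent noisy estimates of the same quantity can be combined by inverse-variance weighting, the optimal linear combination when the noise is uncorrelated and Gaussian:

```python
def fuse(measurements, variances):
    """Fuse independent noisy estimates of one quantity by
    inverse-variance weighting."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * m for w, m in zip(weights, measurements)) / total
    fused_var = 1.0 / total  # fused estimate is tighter than any single input
    return fused, fused_var

# Hypothetical example: a precise sensor (variance 0.04) and a noisier
# one (variance 0.25) measure the same distance in metres.
value, var = fuse([10.2, 9.6], [0.04, 0.25])
```

Note how the fused variance is smaller than either input variance: the fused estimate is better than the best individual sensor.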


In fact, examples of sensor fusion are already common in our lives. Broadly speaking, sensor fusion technology serves three main purposes:


Gain a global understanding. A single sensor has a single function or insufficient performance; only when several are combined can a higher-level task be completed. For example, the familiar 9-axis MEMS motion-sensing unit is actually a combination of a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis electronic compass (geomagnetic sensor). Only through such fusion can accurate motion data be obtained, providing users with a convincing immersive experience in high-end VR and similar applications.
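To make the 9-axis idea concrete, here is a hedged sketch (simplified to a single axis, with illustrative parameters) of the classic complementary filter: the gyroscope is trusted over short timescales, while the accelerometer's gravity-derived angle is blended in slowly to cancel gyro drift:

```python
def complementary_step(pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    """One update of a one-axis complementary filter: integrate the
    gyro rate for short-term accuracy, and pull slowly toward the
    accelerometer's gravity-derived angle to cancel gyro drift."""
    return alpha * (pitch + gyro_rate * dt) + (1.0 - alpha) * accel_pitch

# Hypothetical scenario: the device is held still at 10 degrees of pitch;
# the gyro reports zero rate while the accelerometer reads the true angle.
pitch = 0.0
for _ in range(100):
    pitch = complementary_step(pitch, gyro_rate=0.0, accel_pitch=10.0, dt=0.01)
# pitch converges toward 10 degrees as the accelerometer term accumulates
```

Real 9-axis fusion also folds in the magnetometer for heading and typically uses quaternion-based filters, but the blend-fast-and-slow-sources principle is the same.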


Refine the detection granularity. In geographic positioning, for example, satellite technologies such as GPS offer an accuracy of roughly ten meters and cannot be used indoors. Combining local positioning technologies such as Wi-Fi, Bluetooth, and UWB, or adding a MEMS inertial unit, can improve indoor positioning and motion-tracking accuracy by orders of magnitude.


Achieve safety redundancy. Autonomous driving is the most typical example here: the information obtained by each on-board sensor must be cross-checked and backed up by the others to ensure true safety. When the autonomy level reaches L3 or above, millimeter-wave radar is introduced alongside the on-board camera; at L4 and L5, lidar is essentially standard equipment, and even data collected over V2X vehicle-to-everything links may be brought into the fusion.
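A toy sketch of the redundancy idea (hypothetical readings, not an actual automotive algorithm): before fusing, each sensor's reading can be cross-checked against the group's median, so an inconsistent sensor is flagged rather than blindly trusted:

```python
def cross_check(readings, tolerance):
    """Redundancy check: flag any sensor whose reading deviates from
    the group median by more than `tolerance` before fusing."""
    ordered = sorted(readings.values())
    median = ordered[len(ordered) // 2]
    return {name: abs(r - median) <= tolerance for name, r in readings.items()}

# Hypothetical obstacle distances in metres; the lidar reading is an
# outlier (perhaps a fault) and gets flagged instead of being fused.
status = cross_check({"camera": 25.1, "radar": 24.8, "lidar": 3.0},
                     tolerance=2.0)
```

Production systems use far more sophisticated plausibility and fault-detection logic, but the principle of mutual verification before fusion is the same.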


In short, sensor fusion technology acts like a "coach" that combines sensors with different strengths into a team and gets them working together to win the game.


After selecting the sensors to be fused, the next step is deciding how to fuse them. Sensor fusion architectures fall into three types according to the fusion method:


Centralized: In centralized sensor fusion, the raw data from each sensor is sent directly to a central processing unit for fusion. The advantages are high accuracy and algorithmic flexibility. However, because a large volume of raw data must be processed, the central processor needs substantial computing power, and the latency of transmitting that data must also be considered, which makes this approach harder to implement.


Distributed: In a distributed architecture, the raw data from each sensor is first processed close to the sensor itself, and only the results are sent to the central processor to be fused into a final result. This approach needs less communication bandwidth, computes faster, and is more reliable; but because the raw data has been filtered and pre-processed, some information is lost, so in principle the final accuracy is not as high as with the centralized method.


Hybrid: As the name implies, this combines the two approaches above: some sensors are fused centrally while others are fused in a distributed manner. Because it draws on the advantages of both, a hybrid fusion framework is highly adaptable and stable, but the overall system structure is more complex, incurring additional cost in data communication and computation.
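The centralized/distributed trade-off can be sketched at toy scale (invented sample values, a deliberately simplified reduction): in the distributed scheme, each node compresses its raw stream into a compact summary, and the centre fuses the summaries rather than the raw data:

```python
def local_node(samples):
    """Edge-side step of a distributed scheme: reduce a raw sample
    stream to a compact (mean, count) summary before transmission."""
    return sum(samples) / len(samples), len(samples)

def central_fusion(summaries):
    """Centre-side step: fuse the summaries, weighted by sample count,
    instead of the raw streams, trading some information for bandwidth."""
    total = sum(n for _, n in summaries)
    return sum(mean * n for mean, n in summaries) / total

# Hypothetical raw streams from three sensor nodes measuring one quantity.
streams = [[10.1, 9.9, 10.0], [10.4, 10.2], [9.8, 10.0, 10.1, 9.9]]
estimate = central_fusion([local_node(s) for s in streams])
```

Here the mean happens to preserve all the information needed for this particular fusion; with richer statistics (outliers, correlations), the summarization step is exactly where the distributed architecture loses accuracy relative to shipping the raw data to the centre.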


Sensor fusion solutions can also be classified by the stage of data processing at which fusion takes place. Generally, data processing passes through three levels: data acquisition, feature extraction, and recognition and decision-making. Fusing information at different levels involves different strategies and application scenarios, and produces different results.


According to this idea, sensor fusion can be divided into data-level fusion, feature-level fusion and decision-level fusion.


Data-level fusion: The data collected by multiple sensors is fused directly. However, data-level fusion can only be performed on data collected by sensors of the same type; it cannot handle heterogeneous data from different kinds of sensors.


Feature-level fusion: Feature vectors reflecting the attributes of the monitored object are first extracted from each sensor's data, and fusion is performed on those features. This works because a set of key features can stand in for the full raw data.


Decision-level fusion: On top of feature extraction, each pipeline performs discrimination, classification, and simple logical operations to reach its own identification judgment; the fusion step then combines these judgments, according to the application's requirements, into a higher-level decision. Decision-level fusion is generally application-oriented.
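As a minimal, hedged illustration of decision-level fusion (hypothetical labels, not a production scheme), each sensor pipeline emits its own classification and the fused decision is a simple majority vote:

```python
from collections import Counter

def decision_fusion(decisions):
    """Decision-level fusion: each sensor pipeline outputs its own
    label, and the fused decision is the majority vote."""
    return Counter(decisions).most_common(1)[0][0]

# Hypothetical labels from three independent sensor pipelines.
fused_label = decision_fusion(["pedestrian", "pedestrian", "cyclist"])
```

Real systems usually weight the votes by each pipeline's confidence rather than counting them equally, but the structure, independent decisions combined at the top, is characteristic of this level.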


There is no fixed rule for choosing a sensor fusion strategy and architecture; the choice depends on the specific application, and factors such as computing power, communication, safety, and cost must be weighed together to reach the right decision.


Whichever architecture is used, you may find that sensor fusion is largely a software effort, with the main focus and difficulty lying in the algorithms. Developing efficient algorithms tailored to the actual application has therefore become the top priority of sensor fusion development.


On the algorithm side, the introduction of artificial intelligence is a clear trend in sensor fusion. Artificial neural networks can imitate the judgment and decision-making processes of the human brain and can keep learning and evolving at scale, which undoubtedly accelerates the development of sensor fusion.


Although software is critical, hardware also has a role to play in sensor fusion. For example, if all fusion algorithms run on the main processor, its load becomes very heavy. A popular approach in recent years is therefore to introduce a sensor hub, which processes sensor data independently, without involving the main processor. This reduces the main processor's load on the one hand, and on the other cuts system power consumption by shortening the main processor's active time, which is essential in power-sensitive applications such as wearables and the Internet of Things.


Market research data shows that demand for sensor fusion systems will grow from $2.62 billion in 2017 to $7.58 billion in 2023, a compound annual growth rate of approximately 19.4%. Two clear trends can be predicted for the future development of sensor fusion technology and its applications:


First, driven by autonomous driving, the automotive market will be the most important arena for sensor fusion technology and will spawn more new technologies and solutions.


Second, application diversification will accelerate. Beyond the existing high-performance, safety-critical applications, sensor fusion technology will find enormous room to grow in consumer electronics.


In short, sensor fusion gives us a more effective way to gain insight into the world, sparing us the embarrassment of the blind men touching the elephant and, on the basis of that insight, helping us shape a smarter future.

