July 15, 2021
Today, our lives are highly dependent on sensors. Sensors are an extension of our five senses, allowing us to perceive the world, and they can observe details that our human bodies cannot. This capability will be crucial for humans in future smart societies.
However, no matter how well a single sensor performs, it still falls short of users’ requirements in many situations. For example, based on the point cloud it generates, the costly Lidar sensor in a car can determine whether there is an obstacle ahead. But if you want to know the exact nature of the obstacle, you also need an on-board camera to help you “take a look.” And if you want to sense the motion state of that object, you also need a millimeter-wave (mmWave) radar.
This process calls to mind the famous story of the Blind Men and the Elephant, in which a group of blind men who had never come across an elephant each pictured the animal by touching a different part of its body. Each formed a different image in his mind’s eye of what the elephant looked like, depending on which part he had touched. Just like in this story, a sensor can only “see” limited features of an object, depending on its individual capabilities. When multiple pieces of feature information are integrated, a more complete and accurate picture can be formed. This method of integrating multiple sensors is called “sensor fusion.”
A more exact definition of sensor fusion is: using computer technology to automatically analyze and synthesize information and data from multiple sensors or sources, under certain criteria, to carry out the information processing required for decision-making and estimation. The sensors that serve as data sources can be the same (homogeneous) or different (heterogeneous), but they are not simply stacked together; they must be deeply integrated at the data level.
Already, there are many examples of sensor fusion in our lives. The three main purposes for using sensor fusion technology are:
Get a global picture. Where the performance of a single sensor is insufficient, multiple sensors working together can accomplish a higher level of work. For example, the familiar 9-axis MEMS motion sensor unit is actually a combination of a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis digital compass (geomagnetic sensor). Only through such sensor fusion can we obtain accurate motion sensing data (a minimal filtering sketch follows Figure 1 below), which allows us to provide users with a realistic and immersive experience in high-end VR and other applications.
Refine detection granularity. For example, in geolocation, GPS and other satellite positioning technologies have an accuracy of about ten meters and cannot be used indoors. If we integrate local positioning technologies such as Wi-Fi, Bluetooth, or UWB, or add MEMS inertial measurement units, the accuracy of indoor positioning and motion monitoring improves vastly.
Achieve safety through redundancy. In this regard, autonomous driving is the best example. The information obtained by the various on-board sensors must back up and verify each other in order to be truly safe. For example, once the level of autonomous driving rises above L3, a millimeter-wave (mmWave) radar is added alongside the on-board camera. At L4 and L5, the Lidar sensor is basically standard equipment, and integrating data collected through the V2X Internet of Vehicles (IoV) is also considered.
Figure 1: Examples of various in-vehicle sensors used in autonomous driving (Image source: online resources)
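To make the 9-axis example above concrete, here is a minimal sketch of one classic fusion technique: a complementary filter that blends gyroscope and accelerometer readings into a stable attitude estimate. The axis conventions, sample rate, and blend factor here are illustrative assumptions rather than a definitive implementation; a real 9-axis design would also fold in the magnetometer to correct yaw drift.

```python
import math

def complementary_filter(pitch, roll, gyro, accel, dt, alpha=0.98):
    """One update step of a complementary filter fusing a gyroscope
    (smooth short-term, drifts long-term) with an accelerometer
    (noisy short-term, anchored to gravity long-term)."""
    # Integrate angular rate (rad/s) to propagate the attitude estimate.
    # Axis mapping (x rate -> roll, y rate -> pitch) is an assumed convention.
    roll_gyro = roll + gyro[0] * dt
    pitch_gyro = pitch + gyro[1] * dt

    # Derive absolute pitch/roll from the gravity vector (m/s^2).
    ax, ay, az = accel
    roll_acc = math.atan2(ay, az)
    pitch_acc = math.atan2(-ax, math.sqrt(ay**2 + az**2))

    # Blend: trust the gyro at high frequency, the accelerometer at low.
    pitch = alpha * pitch_gyro + (1 - alpha) * pitch_acc
    roll = alpha * roll_gyro + (1 - alpha) * roll_acc
    return pitch, roll

# Example: a short stream of (gyro, accel) samples at an assumed 100 Hz.
pitch = roll = 0.0
samples = [((0.01, 0.02, 0.0), (0.0, 0.1, 9.81))] * 5
for gyro, accel in samples:
    pitch, roll = complementary_filter(pitch, roll, gyro, accel, dt=0.01)
print(f"pitch={pitch:.4f} rad, roll={roll:.4f} rad")
```

The blend factor captures the essence of this kind of fusion: neither sensor is accurate on its own, but each compensates for the other’s weakness.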
In short, sensor fusion technology acts like a "coach": it melds sensors with different abilities into a united team whose players work together and complement each other to win the game.
Once you have selected the sensors that need to be fused, the next step is to decide how to integrate them. Sensor fusion architectures are divided into three types, according to the method of fusion:
Centralized fusion: Centralized sensor fusion sends the raw data obtained by each sensor directly to the central processing unit for fusion processing. The advantages of this method are high accuracy and flexible algorithms. However, because a large amount of raw data must be processed, it places heavy demands on the computing power of the central processing unit, and the latency of transmitting all that data must also be taken into account. Hence it is difficult to implement.
Distributed fusion: In the distributed method, the raw data obtained by each sensor is first processed close to the sensor itself, and then the results are sent to the central processing unit for fusion calculation to produce the final result (a minimal sketch of this pattern follows this list). This method has low communication-bandwidth requirements, computes quickly, and is reliable. However, because the raw data is filtered and pre-processed, some information is lost, so in principle the accuracy of the final results will not be as high as with the centralized method.
Hybrid fusion: As its name suggests, this method combines the two methods above: some sensors use centralized fusion, while others use distributed fusion. Because it combines the advantages of both, the hybrid framework offers strong adaptability and stability, but the overall system structure tends to be more complicated, and it incurs additional data communication and computing costs.
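As a concrete illustration of the distributed pattern, here is a minimal sketch, assuming each sensor node has already filtered its raw data down to a scalar estimate with a variance. Inverse-variance weighting is one standard way for the central unit to combine such compact summaries; the sensor values and variances below are hypothetical.

```python
def fuse_estimates(estimates):
    """Fuse independent local estimates (value, variance) from several
    sensor nodes into one global estimate via inverse-variance weighting.
    In a distributed architecture, each node filters its own raw data
    and ships only this compact summary to the central unit."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total  # fused variance is smaller than any input

# Hypothetical local results: (distance_m, variance) from three sensors.
lidar  = (10.2, 0.05)   # precise ranging
camera = (10.8, 0.50)   # coarser depth estimate
radar  = (10.4, 0.20)
fused_value, fused_var = fuse_estimates([lidar, camera, radar])
print(f"fused distance = {fused_value:.2f} m (variance {fused_var:.3f})")
```

Note how little data crosses the network: one value and one variance per node, rather than full point clouds or image frames, which is exactly the bandwidth advantage described above.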
Another criterion for classifying sensor fusion methods is the stage of data processing at which fusion occurs. Generally speaking, data processing passes through three stages: data acquisition, feature extraction, and recognition and decision-making. Information fusion can be carried out at any of these levels, and different strategies and application scenarios produce different results.
According to this concept, sensor fusion can be divided into data-level fusion, feature-level fusion and decision-level fusion.
Data-level fusion: The raw data collected by multiple sensors is fused directly. However, data-level fusion can only handle data collected by the same type of sensor; it cannot process heterogeneous data collected by different sensors.
Feature-level fusion: Feature vectors that reflect the attributes of the monitored object are extracted from the data collected by each sensor, and fusion is performed on these features. This method is feasible because a subset of key features can stand in for the full raw data.
Decision-level fusion: On the basis of feature extraction, recognition, classification, and simple logical operations are performed to make judgments; information fusion is then completed according to application requirements to produce higher-level decisions (a minimal voting sketch follows this list). Decision-level fusion is generally application-oriented.
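To illustrate decision-level fusion, here is a minimal sketch of weighted voting over per-sensor classification results. The labels, confidences, and sensor assignments are hypothetical; real systems often use more principled combination schemes, such as Bayesian or Dempster-Shafer methods.

```python
from collections import defaultdict

def decision_level_fusion(decisions):
    """Combine per-sensor classification decisions by weighted voting.

    Each entry is (label, confidence). Because fusion happens on final
    judgments rather than raw data, heterogeneous sensors can participate
    as long as they output labels from a shared vocabulary."""
    scores = defaultdict(float)
    for label, confidence in decisions:
        scores[label] += confidence
    return max(scores, key=scores.get)

# Hypothetical per-sensor verdicts about an obstacle ahead.
votes = [
    ("pedestrian", 0.7),  # camera classifier
    ("pedestrian", 0.6),  # lidar point-cloud classifier
    ("cyclist",    0.5),  # radar motion-signature classifier
]
print(decision_level_fusion(votes))  # -> "pedestrian"
```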
There are no set guidelines for choosing the best strategy and architecture for sensor fusion; the choice should always be determined by the specific practical application. Of course, factors such as computing power, communications, security, and cost must also be considered when choosing the best sensor fusion architecture for an application.
Regardless of which architecture is used, sensor fusion is largely dependent on software, and the main challenges arise from the algorithms. Hence, developing efficient algorithms for actual applications has become the top priority of sensor fusion development.
In terms of algorithm optimization, the introduction of artificial intelligence is having an ongoing impact on sensor fusion development. By using artificial neural networks (ANNs), software can imitate the judgment and decision-making processes of the human brain. ANNs can continuously learn and evolve, accelerating the development of sensor fusion.
Although software is critical to the sensor fusion process, hardware also plays a crucial role. For example, if all sensor fusion algorithm processing is done by the main processor, the load on that processor will be extremely heavy. A popular approach in recent years has therefore been to introduce a sensor hub, which processes sensor data independently of the main processor. This reduces the main processor's load and also lowers system power consumption by shortening the main processor's active time, which is essential in power-sensitive applications such as wearables and the Internet of Things (IoT).
Figure 2: Example of a sensor hub: in this wearable health sensor system, the MAX32664 serves as the sensor hub, performing fusion processing on data from the optical and motion sensors. (Image source: Maxim Integrated)
According to market research data, the market for sensor fusion systems will grow from US$2.62 billion in 2017 to US$7.58 billion in 2023, a compound annual growth rate of approximately 19.4%. All indications suggest that the development of sensor fusion technology and its applications will be driven by two growth trends.
One surge of development will be driven by autonomous driving. The automotive market will be the most important arena for sensor fusion technology, boosting the creation of new technologies and solutions.
Another surge of development will be driven by the accelerating diversification of applications. In addition to the applications with high performance and safety requirements seen in the past, sensor fusion technology in consumer electronics will also attract developers’ attention.
In short, sensor fusion provides a more accurate and effective way to gain insights about the world, saving us from the embarrassment of “blindly touching an elephant.” With sensor fusion and the insights it provides, we can build a smarter future.