Derived senses make our lives colorful

Publisher: EEWorld News | Last updated: 2020-07-13 | Source: EEWORLD

Translated from EE Times


New developments in electronic devices are constantly changing our world, but can they change the way we perceive it? For years, researchers have been developing devices and sensors that replicate the functions of human sense organs, striving to make them interact with the world the way our eyes, ears, noses, taste buds, and skin do. Beyond these five senses, work is also under way to enrich how we interact with electronics, providing derived senses for wearable interaction and gesture control. Here are some of the latest advances in emulating the natural senses, and a look at how we may not only reproduce the ordinary senses but also create entirely new experiences in the future.

Vision systems were the first sensor technology to be developed and are widely used today. Although machine vision works very differently from human vision, operating mechanical parts through digital inputs and outputs, it can still mechanically replicate tasks that would normally require a human operator. It is this kind of vision system that allows some modern cars to detect their surroundings and even respond to an anticipated collision by applying the brakes. Paired with pattern-recognition algorithms, these vision sensors also power facial recognition and other detection and image-analysis applications.


Simply put, machine vision uses machines in place of human eyes to make measurements and judgments. A machine vision system consists of three main parts: image acquisition, image processing and analysis, and output or display. Software simulates human visual functions: it extracts information from images of real-world objects, processes it, and finally applies the results to actual detection, measurement, and control.

Composition of a machine vision system


    1. Image processing unit – the brain

    The image processing unit is implemented in image processing software, which contains a large library of image processing algorithms. Once an image has been acquired, these algorithms process the digital image, perform analysis and computation, and output the results. Competition among machine vision companies is ultimately competition in algorithm accuracy, so every company invests heavily in its core software. Only excellent image processing software can deliver fast, accurate inspections while reducing dependence on the hardware. Software is the brain of machine vision: only after it has digitized the captured image can the machine perform functions such as recognition and detection.
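    As a rough illustration of what this software layer does, below is a minimal inspection sketch in Python using the open-source OpenCV library. It is only a sketch under assumed conditions: the image file name, the Otsu thresholding choice, and the blob-area defect criterion are hypothetical, not a prescribed pipeline.

        # Minimal single-image inspection sketch (Python, OpenCV 4.x).
        # "part.png" and MIN_DEFECT_AREA are hypothetical placeholders.
        import cv2

        MIN_DEFECT_AREA = 50  # pixels; tuned per part and optics in practice

        image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

        # Separate foreground features from the background. Good lighting
        # (see the light source section) is what makes this step reliable.
        _, binary = cv2.threshold(image, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # Group foreground pixels into regions and flag oversized blobs.
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        defects = [c for c in contours if cv2.contourArea(c) > MIN_DEFECT_AREA]

        print("PASS" if not defects else f"FAIL: {len(defects)} region(s) flagged")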


    2. Light source

    The light source is one of the most important components of a machine vision system, and suitable lighting is a prerequisite for its normal operation. The purpose of the light source is to separate the measured object from the background as clearly as possible and to produce high-quality, high-contrast images.
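    One simple way to check whether the lighting achieves this separation is to measure the contrast of the captured image. The sketch below uses Michelson contrast on robust percentiles; this metric choice is an illustrative assumption, not a standard the article prescribes.

        import numpy as np

        def michelson_contrast(gray: np.ndarray) -> float:
            """Contrast between darkest and brightest regions, 0..1.
            Near 0: object and background barely separated; near 1:
            strong separation, i.e., the lighting is doing its job."""
            lo, hi = np.percentile(gray, [1, 99])  # robust min/max
            return float((hi - lo) / (hi + lo + 1e-9))

        # Synthetic example: a well-lit scene vs. a washed-out one.
        good = np.concatenate([np.full(100, 20.0), np.full(100, 230.0)])
        flat = np.concatenate([np.full(100, 110.0), np.full(100, 140.0)])
        print(michelson_contrast(good))  # ~0.84, high contrast
        print(michelson_contrast(flat))  # ~0.12, lighting needs work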


    3. Lens – the retina

    The function of the lens is to form an optical image. Although the camera, analysis software, and lighting all matter, the industrial camera lens is the most critical component: for the system to work at full capability, the lens must meet its requirements. The parameters that determine lens performance are mainly focal length, working distance, depth of field, and resolution. Depth of field is the range of distances in front of and behind the best focus over which the lens still produces an acceptable image. Field of view is the maximum area the camera can observe, usually expressed as an angle; generally, the larger the field of view, the larger the observable area. Working distance is the distance from the lens to the object; the longer the working distance, the higher the cost.
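    The relationship between these parameters is simple geometry. The worked example below derives the angular field of view from sensor width and focal length, then the scene width covered at a given working distance; the sensor and lens values are hypothetical.

        import math

        def fov_angle_deg(sensor_mm: float, focal_mm: float) -> float:
            # Thin-lens approximation: FOV = 2 * atan(sensor / (2 * focal))
            return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

        def scene_width_mm(working_distance_mm: float, fov_deg: float) -> float:
            # Width of the observable area at a given working distance
            return 2 * working_distance_mm * math.tan(math.radians(fov_deg) / 2)

        # Hypothetical 2/3-inch sensor (8.8 mm wide) behind a 16 mm lens:
        fov = fov_angle_deg(8.8, 16)  # ~30.8 degrees
        print(f"FOV: {fov:.1f} deg")
        print(f"Scene width at 500 mm: {scene_width_mm(500, fov):.0f} mm")  # ~275 mm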


    4. Camera – the eyeball

    The purpose of a machine vision camera (the "eyeball") is to transmit the image projected through the lens onto the sensor to a device that can store, analyze, and/or display it. By chip type, cameras divide into CCD and CMOS cameras. CCD and CMOS are the two image sensing technologies in common use; the main difference between them lies in how the signal is read out and transmitted.


    5. Image acquisition unit – the visual nerve

    The most important component of the image acquisition unit is the image acquisition card (frame grabber), the interface between image acquisition and image processing. It generally contains the following functional modules:

    - An image-signal reception and A/D conversion module, responsible for amplifying and digitizing the image signal. Acquisition cards exist for color or black-and-white images, and color input may be a composite signal or RGB component signals.
    - A camera control input/output interface, mainly responsible for synchronizing the camera, performing asynchronous resets, and timing exposures.
    - A bus interface, responsible for streaming digital data at high speed over the PC's internal bus. This is typically a PCI interface with a transfer rate of up to roughly 130 MB/s, fully capable of transmitting high-resolution images in real time while occupying little CPU time.
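    Whether a bus can keep up is simple arithmetic: required bandwidth is width x height x bytes per pixel x frame rate. A sketch with hypothetical camera parameters, compared against the PCI figure above:

        def required_mb_per_s(width: int, height: int,
                              bytes_per_pixel: int, fps: int) -> float:
            """Raw video bandwidth in megabytes per second."""
            return width * height * bytes_per_pixel * fps / 1e6

        PCI_MB_PER_S = 130  # approximate PCI bus budget from the text

        # Hypothetical 1024x768 8-bit monochrome camera at 30 frames/s:
        need = required_mb_per_s(1024, 768, 1, 30)  # ~23.6 MB/s
        print(f"{need:.1f} MB/s ->",
              "fits in real time" if need < PCI_MB_PER_S else "exceeds the bus")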


    6. Output unit

    After image acquisition and processing are complete, the output unit must deliver the processing results and trigger actions that match them, such as rejecting defective parts, lighting alarm indicators, or displaying production information on the human-machine interface.
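    In software terms this stage is a dispatch from inspection verdict to plant-floor action. A deliberately simplified sketch; the action functions are hypothetical stand-ins for real actuator and HMI I/O:

        def reject_part() -> None:
            print("actuator: divert part to reject bin")

        def raise_alarm() -> None:
            print("alarm light: ON")

        def update_hmi(result: str) -> None:
            print(f"HMI: last inspection result = {result}")

        def handle_result(result: str) -> None:
            """Map an inspection verdict to its matching actions."""
            if result == "FAIL":
                reject_part()
                raise_alarm()
            update_hmi(result)

        handle_result("PASS")
        handle_result("FAIL")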


Case Studies


The electronic skin developed in Tokyo could be used as a wearable medical monitor.


Replicating the natural senses has many applications, from improving everyday life to increasing industrial efficiency. Electronic skin is an ultra-thin, flexible, wearable device that combines microelectronics and sensors; it can even simulate a sense of touch in prosthetic limbs by detecting light pressure and transmitting it to the wearer. Advances in the technology are also opening the door to wearables that enhance interaction with computers, perhaps generating tactile stimulation to deliver directions or notifications. While these wearable sensors are constrained by the need for thin, flexible materials, artificial tactile sensors can also be used in robots and even smartphones. Such sensors are coming ever closer to mimicking the human sense of touch and are even beginning to sense qualities such as surface texture. As with most of these technologies, there are already quite a few use cases, but far more potential lies in future development.


Electronic noses and tongues have been used in industry for more than a decade; these electronic versions of human senses serve the food, beverage, pharmaceutical, plastics, packaging, and environmental industries. The sensors can detect off tastes or odors in food, provide quality control, and detect contaminants, taking over tasks that can be unpleasant or even dangerous for people. Gas-sensor arrays in electronic noses and liquid sensors in electronic tongues, combined with pattern-recognition tools, may detect subtle differences that are beyond the human palate.


Electronic nose (e-nose)


The e-nose works by using pattern recognition to detect traces, or "fingerprints," of chemical compounds across an array of sensors. Scientists have also tested its ability to detect disease, perhaps blood sugar levels or even cancer, through compounds in the breath. In 2018, researchers at Brown University developed a smaller sensor that captures more contextual information. Their TruffleBot measures tiny changes in pressure and temperature and uses these physical properties to help identify odors. The device gains these extra capabilities by pairing chemical sensors with mechanical ones, in this case a digital barometer. On top of that, it can actively sniff, drawing in air for analysis like a true sniffer.
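The "fingerprint" idea can be made concrete: each odor produces a characteristic vector of responses across the sensor array, and pattern recognition reduces to comparing vectors. A toy nearest-match sketch with made-up readings for a hypothetical four-sensor array:

    import numpy as np

    # Hypothetical 4-sensor array; each entry is a known odor's
    # "fingerprint" (mean response per sensor, arbitrary units).
    fingerprints = {
        "coffee":  np.array([0.9, 0.2, 0.4, 0.1]),
        "ethanol": np.array([0.1, 0.8, 0.7, 0.3]),
        "ammonia": np.array([0.2, 0.1, 0.3, 0.9]),
    }

    def identify(reading: np.ndarray) -> str:
        """Nearest-fingerprint match: individually unspecific sensors
        become discriminative once the whole pattern is compared."""
        return min(fingerprints,
                   key=lambda k: np.linalg.norm(fingerprints[k] - reading))

    print(identify(np.array([0.8, 0.25, 0.35, 0.15])))  # -> coffee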


Electronic tongue


The e-tongue works on a similar principle: an array of liquid sensors collects information, which computer algorithms then analyze. As with the e-nose, although each sensor unit in the array has low specificity, combining many units across several specificity classes yields far more information. After years of use in industrial settings, the e-tongue has seen considerable refinement. Most models are based on electrochemical measurement techniques such as potentiometry or voltammetry. First built in 1997, the voltammetric e-tongue can be miniaturized or produced with self-polishing properties that protect the electrode surface from contamination. Like many of these sensory replicas, the e-tongue rests on a well-established methodology, yet considerable room for development remains, such as measurement systems for microbial activity or water quality, and the use of microelectrodes.
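The same low-specificity principle can be seen in a toy computation: no single channel separates two liquids well, but projecting the whole array response onto its first principal component does. The synthetic data and channel count below are assumptions for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic 8-channel e-tongue responses to two liquids, A and B;
    # each individual channel overlaps heavily between the classes.
    base_a = rng.normal(0.5, 0.05, 8)
    base_b = base_a + 0.06  # small shift on every channel
    samples = np.vstack([base_a + rng.normal(0, 0.04, (20, 8)),
                         base_b + rng.normal(0, 0.04, (20, 8))])

    # First principal component of the pooled, centered data.
    centered = samples - samples.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[0]

    # The combined projection separates A from B far better than any
    # single channel does on its own.
    print("liquid A mean score:", round(scores[:20].mean(), 3))
    print("liquid B mean score:", round(scores[20:].mean(), 3))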


Digital Lollipop


On the other side of taste technology are devices that can trick us into thinking we are tasting something we are not. Taste simulation is another fascinating fusion of electronics and food. Nimesha Ranasinghe, a world-leading researcher in the field, has invented a variety of ways to manipulate taste. Ranasinghe's chopsticks with built-in electrodes can simulate saltiness, his tumblers can simulate the sourness of lemonade, and his lollipops can produce different tastes depending on the body's biochemical makeup. This is another step toward bringing the human senses into virtual reality, using electrodes on the surface of tableware to create a sensation the moment our own senses touch it.


Transmitting information imperceptibly, either bypassing the human senses or extending them, opens up many possibilities: delivering visual information to people with impaired vision, enhancing taste and touch, and even recreating these sensations in virtual environments. It may even be possible to give humans sensations we have never experienced before. After studying the navigation abilities of birds and bats, a team of researchers in Germany created what they call a "sensory space navigation belt." The wearable device converts readings of the Earth's magnetic field into vibrations, allowing the wearer to sense direction and orient themselves through tactile stimulation.
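The mapping behind such a belt is straightforward to sketch: read the compass heading and drive whichever vibration motor currently points toward magnetic north. The motor count and layout below are assumptions, not the German team's actual design:

    N_MOTORS = 16  # hypothetical motors spaced evenly around the belt

    def motor_for_heading(heading_deg: float) -> int:
        """Pick the motor closest to magnetic north. With the wearer
        facing heading_deg (0 = north, clockwise), north lies at
        -heading_deg relative to the body, so vibrate there."""
        north_relative = (-heading_deg) % 360
        return round(north_relative / (360 / N_MOTORS)) % N_MOTORS

    print(motor_for_heading(0))   # facing north -> motor 0 (front)
    print(motor_for_heading(90))  # facing east  -> motor 12 (left side)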


If our senses already let us perceive so much of nature, what else can we learn to sense, and what new sensations can we create? With the groundwork laid for all six senses, it is up to us to expand the possibilities.

