Wearable sonar tracks facial expressions using sound, not cameras

Publisher: fuehrd努力的 | Latest update: 2022-07-20 | Source: cnbeta

Cornell engineers have developed a new wearable device that can monitor a person's facial expressions through sonar and recreate them in a digital avatar. Removing the camera from the equation could ease privacy concerns.



The device, which the team calls EarIO, consists of an earpiece with a speaker and microphone on each side, and can be attached to any regular headset. Each speaker emits pulses of sound beyond the range of human hearing toward the wearer's face, and the echoes are picked up by the microphone.


As the user speaks or makes different facial expressions, the echo profile changes subtly with the way the skin moves, stretches and wrinkles. A specially trained algorithm recognizes these echo profiles, quickly reconstructs the expression on the wearer's face, and displays it on a digital avatar.
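The article does not spell out the signal processing, but a common way to turn an inaudible pulse and its echoes into an "echo profile" is matched filtering: cross-correlating the recorded microphone signal with the emitted pulse, so that reflections from surfaces at different distances appear as peaks at different delays. The sketch below (Python; the sample rate, pulse length and chirp band are assumptions for illustration, not EarIO's actual parameters) shows the idea.

```python
# Minimal sketch of turning an emitted ultrasonic chirp and the recorded
# microphone signal into an "echo profile" via cross-correlation.
# This illustrates the general technique, not EarIO's exact pipeline;
# all parameters below are assumptions.
import numpy as np
from scipy.signal import chirp, correlate

FS = 48_000              # assumed sample rate (Hz)
PULSE_MS = 10            # assumed pulse length (ms)
F0, F1 = 17_000, 20_000  # assumed inaudible sweep band (Hz)

t = np.arange(int(FS * PULSE_MS / 1000)) / FS
tx_pulse = chirp(t, f0=F0, t1=t[-1], f1=F1)   # transmitted chirp

def echo_profile(mic_frame: np.ndarray) -> np.ndarray:
    """Cross-correlate one frame of microphone audio with the emitted pulse.

    Peaks in the result correspond to reflections arriving at different
    delays (i.e. from skin surfaces at different distances); as the face
    moves, this pattern of peaks shifts, which is what a model can learn from.
    """
    corr = correlate(mic_frame, tx_pulse, mode="valid")
    return np.abs(corr)

# Toy usage: a synthetic frame containing the pulse delayed by ~2 ms.
frame = np.zeros(FS // 10)
delay = int(0.002 * FS)
frame[delay:delay + tx_pulse.size] += 0.3 * tx_pulse
profile = echo_profile(frame)
print("strongest echo at sample", int(np.argmax(profile)), "expected near", delay)
```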


"Through the power of artificial intelligence, the algorithm discovered complex connections between muscle movements and facial expressions that are invisible to the human eye," said study co-author Li Ke. "We can use this to infer complex information that is even harder to capture -- the entire frontal face."


The team tested the EarIO system on 16 participants, running the algorithm on an ordinary smartphone. Sure enough, the device reconstructed facial expressions about as well as a regular camera, and background noise such as wind, conversation or street sounds did not interfere with its ability to track them.


The team says sonar has some advantages over using cameras. Acoustic data requires much less energy and processing power, which also means the equipment can be smaller and lighter. Cameras can also capture a lot of other personal information that users may not intend to share, so sonar could be more private.


As for what this technology might be used for, it could be a handy way to replicate your physical facial expressions on a digital avatar in games, VR, or virtual worlds.


The team says further work is needed to tune out other disturbances, such as the user turning their head, and to streamline the process of training the AI algorithms.


The research was published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.

