Source: This content is compiled by Semiconductor Industry Observer (icbank) from the Google Blog; thank you.
Pixel 4 and Pixel 4 XL are optimized for ease of use, and a key feature that makes this possible is Motion Sense, which enables users to interact with their Pixel in a variety of ways without touching the device. For example, with Motion Sense, you can use specific gestures to change music tracks or instantly silence an incoming call. Motion Sense can also detect when you are close to the phone and when you reach toward it, allowing Pixel to be more helpful by anticipating your actions, such as waking the camera to provide a seamless face unlock experience, quieting the ringtone when you reach for a ringing phone, or turning off the display to save power when you are no longer near the device.
The technology behind Motion Sense is Soli, the first integrated short-range radar sensor in a consumer smartphone, which facilitates close-range interaction with the phone without touching it. Below, we discuss Soli’s core radar sensing principles, the design of the signal processing and machine learning (ML) algorithms that identify human activities in radar data, and how we solved some integration challenges to prepare Soli for use in consumer devices.
Soli radar system designed for motion sensing
The basic function of radar is to detect and measure the properties of remote objects based on their interaction with radio waves. A classic radar system consists of a transmitter that emits radio waves, which are then scattered or redirected by objects in its path, with a portion of the energy reflected back and intercepted by a radar receiver. Based on the received waveforms, the radar system can detect the presence of objects and estimate certain properties of those objects, such as distance and size.
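As a purely illustrative sketch of the ranging principle above: the distance to a reflector follows directly from the round-trip delay of the echo, R = c·τ/2 (the example delay below is invented, not a Soli parameter):

```python
# Hypothetical illustration of radar ranging: R = c * tau / 2,
# where tau is the round-trip delay of the echo.
C = 299_792_458.0  # speed of light, m/s

def range_from_delay(tau_s: float) -> float:
    """Distance to a reflector given the round-trip delay of its echo."""
    return C * tau_s / 2.0

# An echo arriving ~3.3 ns after transmission corresponds to a target ~0.5 m away.
print(range_from_delay(3.336e-9))  # ≈ 0.5 m
```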
Radar has been under active development as a detection and ranging technology for nearly a century. Traditional radar approaches are designed to detect large, rigid, distant objects such as aircraft and cars. As a result, they lack the sensitivity and resolution to sense complex motion within the requirements of consumer handheld devices. Therefore, to enable motion sensing, the Soli team developed a new small radar system from scratch, integrating novel sensing paradigms and algorithms that can be specialized for fine-grained perception of human-computer interactions.
Classical radar designs rely on fine spatial resolution relative to the size of the target in order to distinguish different objects and differentiate their spatial structures. Such spatial resolution usually requires wide transmission bandwidth, narrow antenna beamwidth, and large antenna arrays.
Soli, on the other hand, uses a completely different sensing paradigm, based on motion rather than spatial structure. Because of this novel paradigm, we were able to fit Soli’s entire antenna array in the 5 mm x 6.5 mm x 0.873 mm chip package in Pixel 4, allowing the radar to be integrated into the top of the phone. Notably, in contrast to optical imaging sensors, for example, we developed algorithms that do not require a clear image of the target’s spatial structure. Therefore, no distinguishable image of a person’s body or face is generated or used for Motion Sense presence or gesture detection.
Soli relies on processing time variations in the received signal to detect and resolve subtle motion. The Soli radar transmits a 60 GHz frequency modulated signal and receives a superposition of reflections from nearby objects or people. Sub-millimeter displacements in the target's position from one transmission to another cause noticeable timing shifts in the received signal. Over a window of multiple transmissions, these shifts manifest as Doppler frequencies that are proportional to the object's velocity. By resolving different Doppler frequencies, the Soli signal processing pipeline can distinguish between objects moving in different motion patterns.
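The numbers behind this paradigm can be sketched directly. The 60 GHz carrier comes from the article; the example velocity and displacement below are invented for illustration:

```python
import math

# Sketch of the Doppler relationships behind Soli's motion sensing. The 60 GHz
# carrier is stated in the article; the example inputs are invented.
C = 299_792_458.0
F_CARRIER = 60e9
WAVELENGTH = C / F_CARRIER  # ~5 mm at 60 GHz

def doppler_shift_hz(radial_velocity_mps: float) -> float:
    """Doppler frequency f_d = 2*v/lambda for a target closing at v."""
    return 2.0 * radial_velocity_mps / WAVELENGTH

def phase_shift_rad(displacement_m: float) -> float:
    """Round-trip phase change for a displacement d: 4*pi*d/lambda."""
    return 4.0 * math.pi * displacement_m / WAVELENGTH

print(doppler_shift_hz(0.5))    # a hand closing at 0.5 m/s -> ~200 Hz
print(phase_shift_rad(0.5e-3))  # a 0.5 mm shift -> ~1.26 rad, easily measurable
```

This is why sub-millimeter displacements are visible to the radar: at a 5 mm wavelength, even half a millimeter of motion rotates the received phase by over a radian.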
The animation below demonstrates how different actions exhibit unique motion signatures in the processed Soli signal. The vertical axis of each image represents range, or radial distance, from the sensor, increasing from top to bottom. The horizontal axis represents velocity toward or away from the sensor, with zero at the center, negative velocities on the left corresponding to approaching targets, and positive velocities on the right corresponding to receding targets. The energy received by the radar is mapped into these range-velocity dimensions and represented by the intensity of each pixel. Therefore, strongly reflective targets tend to be brighter relative to the surrounding noise floor than weakly reflective targets. In these range-velocity mappings, the distribution and trajectory of the energy show clear differences when a person is walking, reaching, or swiping over the device.
In the left image, we see reflections from multiple body parts appear on the negative side of the velocity axis as the person approaches the device, then converge at the top of the image at zero velocity as the person stops approaching the device. In the middle image depicting the reach, the hand starts at a resting position 20 cm from the sensor, then accelerates toward the device at a negative velocity, and finally decelerates to a stop when it reaches the device. The reflections corresponding to the hand move from the middle to the top of the image, corresponding to the decreasing range from the hand to the sensor during the gesture. Finally, the third image shows a hand sliding across the device, moving at a negative velocity toward the sensor on the left half of the velocity axis.
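A toy slow-time simulation shows how an approaching target lands on the negative side of the velocity axis, as in the images described above. All parameters (pulse rate, window length, geometry) are invented for the sketch; this is not Soli's actual pipeline:

```python
import math, cmath

# Toy slow-time simulation (invented parameters): a point target approaching
# the radar produces a spectral peak at a negative velocity, matching the
# left-of-center convention in the range-velocity images described above.
WAVELENGTH = 0.005   # ~5 mm at 60 GHz
PRF = 2000.0         # transmissions per second (assumed)
N = 64               # transmissions per Doppler window
velocity = -0.625    # m/s; negative = approaching

# Echo phase across transmissions is proportional to range: phi_n = 4*pi*R_n/lambda
samples = [cmath.exp(1j * 4 * math.pi * (1.0 + velocity * n / PRF) / WAVELENGTH)
           for n in range(N)]

# Slow-time DFT; the peak bin maps back to the target's radial velocity
spectrum = [abs(sum(s * cmath.exp(-2j * math.pi * k * n / N)
                    for n, s in enumerate(samples))) for k in range(N)]
peak = max(range(N), key=lambda k: spectrum[k])
f_d = (peak if peak < N // 2 else peak - N) * PRF / N   # signed Doppler, Hz
recovered_velocity = f_d * WAVELENGTH / 2               # v = f_d * lambda / 2
print(recovered_velocity)  # ≈ -0.625 m/s
```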
The 3D position of each resolvable reflection can also be estimated by processing the signals received at each of Soli's three receivers. This position information can be used in addition to range and speed for target identification.
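The idea behind multi-receiver position estimation can be sketched with the standard two-antenna phase-difference relation. Soli's actual array geometry and algorithms are not public; the half-wavelength spacing below is a conventional textbook choice:

```python
import math

# Hypothetical angle-of-arrival sketch: with two receivers spaced d apart, the
# same echo arrives with a phase difference dphi = 2*pi*d*sin(theta)/lambda.
# Soli's real array geometry is not public; d = lambda/2 is a common choice.
WAVELENGTH = 0.005          # ~5 mm at 60 GHz
D = WAVELENGTH / 2          # assumed receiver spacing

def angle_from_phase(delta_phi_rad: float) -> float:
    """Invert the inter-receiver phase difference to the arrival angle (radians)."""
    return math.asin(delta_phi_rad * WAVELENGTH / (2 * math.pi * D))

# An echo arriving from 30 degrees off boresight:
dphi = 2 * math.pi * D * math.sin(math.radians(30.0)) / WAVELENGTH
recovered_deg = math.degrees(angle_from_phase(dphi))
print(recovered_deg)  # ≈ 30.0
```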
The signal processing pipeline we designed for Soli includes a combination of custom filters and coherent integration steps that improve the signal-to-noise ratio, attenuate unwanted interference, and distinguish reflections from people from noise and clutter. These signal processing features allow Soli to operate at low power within the constraints of consumer smartphones.
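One classic instance of the kind of clutter rejection described above is a two-pulse (first-difference) canceller across successive transmissions: static reflections cancel exactly, while moving targets survive. This is a textbook moving-target-indication filter offered for illustration, not Soli's proprietary design:

```python
import cmath

# Textbook two-pulse canceller (illustrative, not Soli's actual filters):
# y[n] = x[n] - x[n-1] across transmissions removes static clutter while
# preserving echoes whose phase rotates due to motion.
static_clutter = [1.0 + 0.5j] * 8                            # fixed wall: constant echo
moving_target = [cmath.exp(1j * 0.8 * n) for n in range(8)]  # phase rotates with motion

def two_pulse_canceller(x):
    return [x[n] - x[n - 1] for n in range(1, len(x))]

print([abs(v) for v in two_pulse_canceller(static_clutter)])            # all 0.0: clutter removed
print([round(abs(v), 2) for v in two_pulse_canceller(moving_target)])   # all ~0.78: motion kept
```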
Designing machine learning algorithms for radar
After using Soli’s signal processing pipeline to filter and enhance the raw radar signal, the resulting signal transformation is fed into Soli’s ML models for gesture classification. These models have been trained to accurately detect and recognize motion sensing gestures with low latency.
There are two main research challenges in classifying mid-air gestures, which are common to any motion sensing technology. The first is that each user is unique and performs even simple actions, such as swiping, in a variety of ways. Second, there may be many unrelated actions within the sensor's range that look similar to the target gesture. In addition, when the phone moves, from the perspective of the motion sensor in the phone, the whole world appears to move.
Addressing these challenges required designing custom ML algorithms optimized for low-latency detection of mid-air gestures from radar signals. Soli’s ML model consists of neural networks trained using millions of gestures recorded by thousands of Google volunteers. These radar recordings were mixed with hundreds of hours of background radar recordings from other Google volunteers containing general motions performed near the device. Soli’s ML model was trained using TensorFlow and optimized to run directly on the Pixel’s low-power digital signal processor (DSP). This allows us to run the model at low power even when the main application processor is powered down.
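Soli's actual model architecture, training data, and weights are not public, but the overall shape of a low-latency classifier that consumes a short window of processed radar frames can be sketched. The layer sizes, gesture labels, frame dimensions, and weights below are all invented for illustration:

```python
import math, random

# Purely illustrative classifier shape: a short window of flattened
# range-velocity frames scored by a linear layer plus softmax. Gesture set,
# dimensions, and weights are invented; Soli's real model is not public.
GESTURES = ["swipe", "reach", "background"]
FRAME_BINS = 16 * 8   # e.g. 16 range bins x 8 velocity bins per frame (assumed)
WINDOW = 4            # frames per decision, kept small for low latency

random.seed(0)        # random stand-in weights; a real model would be trained
weights = [[random.uniform(-0.1, 0.1) for _ in range(FRAME_BINS * WINDOW)]
           for _ in GESTURES]

def classify(window_frames):
    """Flatten the frame window and return softmax scores per gesture."""
    x = [v for frame in window_frames for v in frame]
    scores = [sum(w * xi for w, xi in zip(ws, x)) for ws in weights]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return {g: e / total for g, e in zip(GESTURES, exps)}

frames = [[0.0] * FRAME_BINS for _ in range(WINDOW)]  # an empty (no-motion) window
probs = classify(frames)
print(probs)
```

In a real deployment such a model would run continuously on streaming frames, which is why the article emphasizes executing it on the low-power DSP rather than the application processor.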
Soli from concept to product
Soli can be integrated into the Pixel smartphone primarily because its end-to-end radar system (including hardware, software, and algorithms) has been carefully designed to enable contactless interactions within the size and power constraints of consumer devices. Soli's tiny hardware allows the entire radar system to fit within the limited space of the Pixel's top bezel, a major achievement for the team.
In fact, the first Soli prototype in 2014 was the size of a desktop computer. To shrink the entire radar system down to a single 5.0 mm x 6.5 mm RFIC, we combined the novel temporal sensing paradigm introduced above with hardware innovations, including antennas integrated on the package. The Soli team also introduced several innovative hardware power management schemes and optimized Soli’s computation cycles to enable Motion Sense to fit within the power budget of a smartphone.
Hardware innovations included iteratively shrinking the radar system from desktop prototypes to a single 5.0 mm x 6.5 mm RFIC, including the antenna on package.
To integrate Soli into Pixel, the radar systems team worked closely with product design engineers to maintain Soli signal quality. The chip placement in the phone and the Z-stack of materials above the chip were optimized to maximize signal transmission through the glass and minimize reflections and occlusions from surrounding components. The team also invented custom signal processing techniques to allow it to coexist with surrounding phone components. For example, a novel filter was developed to reduce the impact of audio vibrations on the radar signal, allowing gestures to be detected while music is playing. Such algorithmic innovations enable Motion Sense features in a variety of common user scenarios.
"Soli's successful integration into Pixel 4 and Pixel 4 XL devices demonstrates for the first time the feasibility of radar-based machine perception in an everyday mobile consumer device. Motion Sense in Pixel devices shows the potential of Soli to bring seamless contextual awareness and gesture recognition to both explicit and implicit interactions. We are excited to continue researching and developing Soli to enable new radar-based sensing and perception capabilities."
*Disclaimer: This article was originally written by its author, and its content reflects the author's personal opinion. Semiconductor Industry Observer reprints it only to convey a different point of view and does not thereby agree with or endorse that view. If you have any objections, please contact Semiconductor Industry Observer.
Today is the 2250th issue of content shared by Semiconductor Industry Observer; welcome to follow.