

CES 2016 Highlights: ADI's Key Technologies to Help You Go Beyond What's Possible

Last updated: 2016-01-20

CES 2016 has come to an end. Which technology products left a deep impression on you? Follow along to ADI's exclusive technology exhibition area for a look at the key ADI technologies that may help your future designs go beyond what's possible.


Electrodermal activity monitoring for stress measurement


ADI demonstrated how bioimpedance measurements correlate with changes in stress, using a data acquisition platform that measures electrodermal activity (EDA), i.e., skin impedance. Our expert team showed the many challenges EDA measurement poses, including coupling to human tissue, measuring the tissue parameters that actually track emotional changes, and obtaining accurate results.
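As a rough illustration only (this is not ADI's actual signal chain), EDA is conventionally reported as skin conductance, the reciprocal of the measured skin impedance; rising stress tends to lower impedance and raise conductance:

```python
def skin_conductance_uS(impedance_ohms):
    """Convert a measured skin impedance (ohms) to conductance in
    microsiemens, the unit EDA is usually reported in."""
    return 1e6 / impedance_ohms

# Hypothetical readings: stress lowers impedance, raising conductance.
relaxed = skin_conductance_uS(500_000)   # 2.0 uS
stressed = skin_conductance_uS(200_000)  # 5.0 uS
```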


Wearable Health Monitoring and the Internet of Things


Accuracy and battery life are two fundamental challenges facing wearable technology, and as we move into the Internet of Things, secure transmission of data over wireless networks has become a third important factor. ADI demonstrated a wearable technology platform that addresses all three design challenges. The demonstration combined low-power optical, MEMS, and processor technologies to achieve 24/7 monitoring within a limited power budget, highly accurate heart rate measurement with motion-artifact suppression, and secure data handling using encrypted heart rate data.


Entering the human body sensor market through computational biology


ADI's third-party partner LifeQ demonstrated how to use biomathematical models to continuously monitor human physiological functions with the help of wearable technology.


Comprehensive home health monitoring system powered by Thread


ADI demonstrated a dual-node system. One node is the Blackfin Low Power Imaging Platform (BLIP), which captures local behavior-monitoring data and passes image and occupancy telemetry to a local gateway over Wi-Fi for upload to the cloud. The other is a human vital signs monitoring (VSM) sensor node, worn by an ADI demonstrator, which aggregates data streams from multiple sensors (SpO2, activity, heart rate, etc.) and uses ADI's Thread Group-compliant protocol stack solution to pass VSM packets to the local gateway for upload to the cloud.
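A minimal sketch of what the VSM node's aggregation step might look like; the node ID, field names, and packet format below are hypothetical, and the actual Thread stack and gateway transport are not shown:

```python
import json
import time

def build_vsm_packet(node_id, spo2, heart_rate, activity):
    """Aggregate readings from multiple sensors into one telemetry
    packet destined for the local gateway (hypothetical format)."""
    return json.dumps({
        "node": node_id,
        "ts": int(time.time()),       # timestamp for cloud-side alignment
        "spo2_pct": spo2,             # blood oxygen saturation, percent
        "hr_bpm": heart_rate,         # heart rate, beats per minute
        "activity": activity,         # activity classification label
    })

packet = build_vsm_packet("vsm-01", spo2=98, heart_rate=72, activity="walking")
```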

The combined stream of image-analysis and VSM data is then integrated into a unified graphical user interface in the cloud. The results can be downloaded from the cloud and viewed on Wi-Fi-connected displays at the ADI IoT Demo Zone (S110) and another ADI-exclusive zone (MP25877) at the Las Vegas Convention Center.

The ADI home health monitoring system demonstration hardware was also on static display at the Thread Group CES exhibit in the Sands Expo and Convention Center. Invitations and a Thread Group member map marking the location of ADI's fully functional demonstration (S110) were distributed to visitors interested in seeing the complete system in operation in our dedicated exhibition hall.


New low-power human-machine interface technology


ADI's human-machine interface (HMI) strategy organically combines industry-leading sensor technology with innovative algorithms to enable new functions and take the user experience to a new level. HMI technology solutions use natural, intuitive user interfaces to improve the interactive experience of wearables, portable devices and a wide range of other user interface platforms, making them more engaging, efficient and valuable.

The human-machine interface showcase featured unique algorithms and system-level IP built on ADI's core capabilities in capacitive, optical, and inertial sensing. This series of demonstrations used portable platforms to highlight a new generation of low-power user experiences, pushing the limits of applications such as advanced user detection, contextual awareness, and gesture-based user interfaces. In the form-factor demonstrations, guests could experience ADI's emerging sensing technologies and algorithmic achievements first-hand.


Internet of Things-based web conferencing


ADI demonstrated its sensor-to-cloud capabilities through a web conferencing use case that leverages ADI's sensing, signal processing and connectivity hardware, as well as its IoT cloud and software capabilities.


Network Audio Module Based on ADSP-BF707 Blackfin®+ DSP

The ADSP-BF707 Blackfin+ embedded processor is ideal for wireless speaker designs: a single processor can act as both host controller and DSP, and can perform audio decoding with 32-bit precision.

With 32-bit precision, the BF707 can decode all common two-channel codecs, such as MP3, FLAC, AAC, ALAC, OGG, WMA, WAV, etc. In addition, the BF707 offers ultra-low power consumption, large on-chip memory and a USB interface, making it an ideal choice for Wi-Fi audio-streaming modules in portable and desktop wireless speakers. The demonstration module was developed by an independent design company and is available for OEM use.
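The value of 32-bit internal precision can be seen from the rule of thumb that each bit of PCM resolution adds roughly 6.02 dB of dynamic range; this is a back-of-the-envelope sketch, not a BF707 specification:

```python
def dynamic_range_db(bits):
    """Approximate dynamic range of N-bit PCM: ~6.02 dB per bit."""
    return 6.02 * bits

cd_quality = dynamic_range_db(16)  # ~96 dB, 16-bit source material
internal = dynamic_range_db(32)    # ~193 dB of internal processing headroom
```

The extra headroom means volume scaling, mixing and filtering inside the decoder do not erode the 16- or 24-bit resolution of the source material.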


Dolby® Atmos®-Enabled Soundbar Using the ADSP-2158x SHARC DSP

The demonstration shows how ADI can implement Dolby Atmos in a soundbar on a single chip. The ADSP-2158x SHARC DSP is well suited to home audio applications such as AVRs, soundbars, headphones and lifestyle products that use new multi-channel decoders such as Dolby Atmos. Dolby Atmos brings a three-dimensional, object-based audio experience to the home theater. This dual-core SHARC+ DSP has the headroom to implement multi-channel Atmos decoding, object and channel audio rendering, Atmos surround sound and Dolby audio processing entirely on a single chip. Lip-sync delay and other necessary pre- and post-processing routines run on the same device.

3D Stereo Headphones Powered by Dolby® Atmos® and Smyth Virtual Surround Sound

The SHARC family of digital signal processors can decode new object-based formats such as Atmos, DTS:X and Auro-3D, and is certified for 12-channel decoded output. This proof-of-concept demonstration shows an object-based audio system that uses Smyth virtual surround sound technology to create a three-dimensional sound field on off-the-shelf headphones. The proof-of-concept uses two ADSP-21489 SHARC DSPs for the object-based processing tasks and an additional ADSP-21489 to implement Smyth virtual surround sound, although the design is best realized on a single ADSP-2158x DSP.

High-fidelity audio for mobile devices integrating ADI high-performance audio amplifiers and low-noise LDOs

With the rapid spread of Wi-Fi and 4G networks, high-speed data connections are now ubiquitous, and consumers increasingly demand high-quality streaming video and audio on mobile devices such as tablets and smartphones, at home or on the go. This high-fidelity audio demo is a complete audio signal chain solution for mobile devices that brings listeners an audiophile-grade lossless music experience, with the headphones being the only limiting factor. The demo pairs an audio DAC with ADI's industry-leading low-power, high-performance amplifier (ADA4807) and low-noise LDOs (ADP151, ADP7118, and ADP7182). The solution features ultra-high dynamic range, ultra-low distortion, and excellent power efficiency. This ready-to-integrate, ready-to-productize design overcomes the main challenges facing system designers, namely size, power consumption, and time to market, while delivering second-to-none high-fidelity audio performance.

Distributed audio layout and tuning via the automotive audio bus

The Automotive Audio Bus is a high-speed (50Mbps) bus technology that transmits audio (I2S) and control (I2C) data, as well as clock and power signals, over a single unshielded twisted pair (UTP) cable. The Automotive Audio Bus is particularly suitable for applications such as active noise cancellation and distributed audio, which require connecting multiple remote nodes in a cost-effective manner to achieve optimal system performance.
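As a back-of-the-envelope check on why 50 Mbps comfortably covers distributed audio (raw arithmetic only; real A2B superframes also carry sync, control and clock information, so the practical channel count is lower):

```python
def raw_channel_capacity(bus_mbps, sample_rate_hz, bits_per_sample):
    """How many uncompressed audio channels fit in the raw bit rate,
    ignoring framing, control, and clock overhead."""
    per_channel_bps = sample_rate_hz * bits_per_sample  # e.g. 1.152 Mbps
    return int(bus_mbps * 1e6 // per_channel_bps)

# 24-bit / 48 kHz audio over a 50 Mbps A2B link
channels = raw_channel_capacity(50, 48_000, 24)  # 43 channels, raw
```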

The demonstration highlighted the I2C control capabilities of A2B technology by configuring and tuning multiple remote, daisy-chained amplifiers, a system architecture widely considered for next-generation distributed audio systems. It also showcased the power of the A2B development environment (SigmaStudio), which can greatly reduce system design complexity and shorten overall time to market.

Automotive Audio Bus Enables Efficient In-Vehicle Communication Systems

Future cars will have superior cabin acoustics, better voice recognition, and improved cellular call quality, which will greatly enhance the driving experience. A2B technology can transmit multiple discrete channels of digital audio over low-cost unshielded twisted pair cables, making it ideal for creating the most cost-effective system solution.

The demonstration showcases an in-vehicle communication system connected via A2B technology that uses a multi-microphone array to distinguish multiple speakers, even in the presence of background noise. Alongside the microphone beamforming algorithm, advanced echo cancellation and speech recognition routines run on a high-performance SHARC processor, further improving the overall in-cabin experience.
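The core idea behind microphone-array beamforming is delay-and-sum: delay each microphone signal so that the wavefront from the desired talker lines up across all channels, then average, which reinforces that talker while averaging down off-axis noise. A toy sketch with known integer sample delays (not ADI's implementation, which runs on the SHARC with far more sophistication):

```python
import math

def delay_and_sum(channels, delays):
    """Align each mic signal by its known integer sample delay, then average."""
    n = min(len(ch) - d for ch, d in zip(channels, delays))
    return [sum(ch[d + i] for ch, d in zip(channels, delays)) / len(channels)
            for i in range(n)]

# Toy example: the same waveform reaches mic 1 three samples after mic 0.
sig = [math.sin(2 * math.pi * 0.05 * k) for k in range(64)]
mic0 = sig
mic1 = [0.0, 0.0, 0.0] + sig[:-3]
out = delay_and_sum([mic0, mic1], delays=[0, 3])  # recovers sig (truncated)
```

In practice the delays are steered toward each detected talker, so the array forms a movable "listening beam" inside the cabin.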

Internet Connectivity via the Automotive Audio Bus

For advanced in-car infotainment systems, access to the cloud and to the wealth of applications and multimedia resources it hosts is becoming increasingly important. With a bandwidth of 50 Mbps, the automotive audio bus has proven to be a cost-effective way to transmit Internet (cloud) data and local audio content in parallel.

The demonstration highlighted the comprehensive capabilities of A2B technology using an open Linux hardware/software platform, delivering up to 18 Mbps of Internet data (live Internet radio streaming) while simultaneously handling audio I/O from a local audio node. As audio traversed the A2B bus, bus activity was monitored and the results displayed packet by packet.
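A quick budget check of the parallel-transport claim (raw numbers only; framing and control overhead are ignored, so real capacity is somewhat lower):

```python
def remaining_audio_channels(bus_mbps, data_mbps, sample_rate_hz, bits):
    """Uncompressed audio channels that still fit on the bus after
    reserving bandwidth for general-purpose data traffic."""
    leftover_bps = (bus_mbps - data_mbps) * 1e6
    return int(leftover_bps // (sample_rate_hz * bits))

# 50 Mbps bus carrying 18 Mbps of Internet data plus 24-bit/48 kHz audio:
channels = remaining_audio_channels(50, 18, 48_000, 24)  # 27 channels left
```

Even with 18 Mbps reserved for cloud data, the leftover 32 Mbps is far more than a typical multi-channel cabin audio system needs.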

Testing and configuring SHARC ADSP-SC5xx systems using a bus analyzer

Next-generation audio systems will benefit greatly from the superior performance and feature set of ADI’s fifth-generation ADSP-SC5xx SHARC processors. As system-level performance requirements continue to increase, tasks are often distributed across multiple physically independent processing nodes connected by an efficient audio bus. A2B technology has proven to be an efficient and economical way to implement this type of distributed architecture. This demonstration connects multiple SHARC-based audio processing nodes via the A2B bus, highlighting the strength of two of ADI’s most innovative technologies. In addition, the entire network was configured and tested using Mentor Graphics’ recently released A2B Bus Analyzer tool.


In-car active noise cancellation


The Automotive Audio Bus is a high-speed (50Mbps) bus technology that transmits both audio (I2S) and control (I2C) data, as well as clock and power signals, over a single unshielded twisted pair (UTP) cable. The Automotive Audio Bus is particularly well suited for applications such as active noise cancellation and distributed audio, which require cost-effective connectivity to multiple remote nodes to achieve optimal system performance.


A train carrying A2B advertisements passed over the exhibition hall: A2B takes you into the future of car audio systems.