How to accelerate artificial intelligence in smartphones? Micron has a "big trick"!
When you use facial recognition to complete a payment in a supermarket
When you open an app to find the nearest "five-star" restaurant
When you use your mobile voice assistant to help you book a travel ticket
...
How much intelligence do you hold in your hands?
The answer is a lot!
2017 was the year that artificial intelligence was finally integrated into smartphones themselves, not just into the cloud they connect to.
These phones are equipped with on-device AI engines designed to efficiently acquire data from sensors, interpret it, and then store and process it locally. To perform tasks such as facial recognition, activity prediction, and enhanced data encryption, they must balance the need for additional storage and computing power against size constraints, cost, and battery life. Because their AI chips must deliver fast, accurate decisions based on local data, they depend on faster and more innovative system memory and storage.
Real-world use cases for AI smartphones
New user experiences are already emerging from the enhanced imaging, hearing, and voice processing capabilities of the latest smartphones. The next wave will be applications supporting new on-device AI use cases, including language processing, human activity prediction, and enhanced data encryption. As facial recognition for user authentication becomes more prevalent, innovators will use on-device AI to make authentication not only more sophisticated but also more secure and convenient. For example, facial recognition could once be defeated with a photo; now, authentication using multiple 3D depth sensors and infrared cameras is both more secure and faster.
Natural language translation using on-device AI can enhance the speech recognition already built into most smartphones. Going a step further, local analysis and processing of phone and chat conversations can make smartphones more responsive through intent prediction (anticipating a person's behavior), with intelligent assistants suggesting an action or purchase. Future smartphone apps are bound to move some buyer-assistance functions from cloud-based bots to faster, more secure on-device processing.
Integrating cloud-based AI with on-device AI can further expand the range of use cases. For example, the University of California, Berkeley has an earthquake warning app called MyShake that uses the accelerometer in your phone (which adjusts the screen when you turn the phone sideways) and GPS to measure the amount of shaking happening locally. Combined with collecting reports from other MyShake users near you and performing comprehensive analysis in the cloud, the app could become a personal seismometer and early warning system.
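The core of the local measurement MyShake describes is simple: compare the phone's net acceleration against gravity and flag unusual motion. Below is a minimal, hypothetical sketch of that idea; the sample values, threshold, and function names are illustrative assumptions, not MyShake's actual algorithm.

```python
import math

# Hypothetical accelerometer samples (x, y, z) in m/s^2; values are illustrative.
samples = [
    (0.1, 0.2, 9.8),   # phone at rest: magnitude is close to gravity
    (0.0, 0.1, 9.7),
    (3.5, 2.8, 12.1),  # sudden jolt
    (4.1, 3.0, 13.5),
    (0.2, 0.1, 9.8),
]

GRAVITY = 9.81
SHAKE_THRESHOLD = 3.0  # m/s^2 deviation from gravity; an assumed, illustrative cutoff


def is_shaking(sample):
    """Flag a sample whose net acceleration deviates strongly from gravity."""
    x, y, z = sample
    magnitude = math.sqrt(x * x + y * y + z * z)
    return abs(magnitude - GRAVITY) > SHAKE_THRESHOLD


shake_events = [s for s in samples if is_shaking(s)]
print(f"{len(shake_events)} of {len(samples)} samples exceed the shake threshold")
```

A real app would also timestamp and geotag each event (via GPS) before uploading it, so the cloud can correlate reports from many nearby phones.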
Smartphones become learning machines
Powering the shift to local AI on devices are new specialized AI processing chips, which, strictly speaking, perform machine learning rather than general AI. Machine learning is a subset of AI: a technique that lets machines learn automatically over time, without being manually programmed, by responding to different types of data and eventually forming repeatable patterns. Neural network systems help these machine learning applications sort through data so that computers can classify it more quickly. In 2017, engineers learned how to add new AI components to systems-on-chips (SoCs), improving performance and efficiency for "intelligent" or AI-assisted tasks while reducing cost, power consumption, and size.
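"Learning repeatable patterns from data rather than being manually programmed" can be illustrated with a tiny classifier. The sketch below, with made-up sensor-like data and labels, learns one average "pattern" (centroid) per activity and classifies new readings by the nearest one; it stands in for the far larger neural networks these AI chips accelerate.

```python
# A minimal sketch of learning from data rather than hand-coded rules:
# a nearest-centroid classifier that learns one "pattern" per class.
# The data and labels are illustrative, not from any real smartphone workload.

def train(examples):
    """Compute the mean feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}


def predict(centroids, features):
    """Classify by the closest learned centroid (squared Euclidean distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: dist(centroids[label]))


# Toy training set: (sensor-like feature vector, activity label)
training_data = [
    ([0.1, 0.2], "idle"), ([0.2, 0.1], "idle"),
    ([5.0, 4.8], "walking"), ([4.7, 5.2], "walking"),
]
centroids = train(training_data)
print(predict(centroids, [4.9, 5.0]))  # closest to the "walking" centroid
```

The point is that no rule for "walking" was ever written by hand; the boundary emerges entirely from the examples, which is what lets these systems improve as more data arrives.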
AI intensifies challenges to phone size and battery life
Among a smartphone's components, the CPU/GPU, display, and memory consume the most power. Now, on top of that, add the power requirements of new AI engines. As consumers demand higher-resolution displays and more memory to support heavier workloads, battery life remains a major concern for manufacturers.
5G networks are already in operation in countries such as China. This ubiquitous ultra-high-speed wireless connectivity will create endless possibilities for multimedia and video experiences, with throughput up to 50 times that of existing 4G networks and latency at least five times lower. However, mobile devices will require more complex memory subsystems to meet these speed and storage requirements without increasing power consumption or footprint.
Dedicated AI engines place new demands on memory and storage
Local AI processing will increase memory size and storage requirements. More importantly, as more AI-specific applications emerge, the need for faster storage performance will grow exponentially.
3D NAND is becoming the preferred storage solution for mobile devices that require high density, high capacity, and a small footprint. For example, 64-layer 3D NAND stacks data storage cells vertically, building devices with six times the capacity of traditional 2D planar NAND technology. In addition, the latest 3D NAND devices use the high-performance UFS storage interface, which can process read and write commands concurrently and offers faster random reads than the previous-generation e.MMC 5.1 interface. This combination of 3D NAND chips and a fast UFS interface packs more storage into a smaller die area, bringing significant cost savings, low power consumption, and high performance to AI-equipped mobile devices.
A bright future
Smart assistant features on smartphones must make fast, accurate decisions based on streams of data. Slow storage and memory mean slow AI performance, long wait times, and rapid battery drain. Fortunately, memory and storage innovations provide faster I/O operations and near-real-time AI calculations to meet the growing data demands of these AI engines, creating a powerful user experience.