Recently, Apple released the A11 Bionic with its neural engine and Huawei released the Kirin 970 with an integrated NPU, and edge-side artificial intelligence has become a hot topic in the industry. A technology long regarded as having high barriers to entry is about to reach ordinary users. So what should we realistically expect from smartphone artificial intelligence?
For all the public hype, artificial intelligence is actually just getting started
As the Internet+ dividend recedes, the market, the industry, and investors all need a new hot spot, and artificial intelligence is widely tipped as the hot spot of the next decade. Thanks to extraordinary progress in computing power, large data sets, and deep neural networks, AI and related emerging technologies are climbing rapidly up Gartner's 2017 Hype Cycle for Emerging Technologies.
A worrying signal is that the market has nearly reached the point where not talking about AI means being out of date, and AI investment and public opinion are turning frothy. A moderate bubble helps an emerging technology spread and commercialize quickly, but when a concept is over-hyped, the biggest risk is that users cannot actually perceive the promised value.
Every manufacturer is talking up its own brand of mobile phone artificial intelligence. The picture is confusing, and more often than not it is all talk and no delivery. For end consumers, what is on offer may be a single narrow capability (after all, the scope of artificial intelligence is very broad), or simply the same substance repackaged under a new name.
After all, the era of artificial intelligence has just begun.
Edge-side AI faces challenges but also has unique advantages; dedicated AI hardware has become the preferred choice
At present, AI computing mainly means deep learning, which splits into two workloads: training and inference. Training today happens almost entirely in the cloud; it is the process of finding model parameters from known data, relies on massive cloud data sets and complex neural network structures, and involves an enormous amount of computation. Because the terminal side lacks large-scale data, on-device training is essentially absent for now. Inference can run on either the cloud or the terminal side: an existing model is applied to a specific input (an image, speech, text to translate, and so on), which amounts to a large number of matrix operations, and the result is returned.
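To make the training/inference split concrete, here is a minimal NumPy sketch of the inference side. The weight values are placeholders standing in for parameters that cloud-side training would have produced; the point is that on-device prediction reduces to a fixed sequence of matrix operations, which is exactly the workload a dedicated accelerator can speed up.

```python
import numpy as np

# Placeholder parameters for a tiny two-layer network. In practice these
# would come from cloud-side training on a large dataset and be shipped
# to the device as fixed constants.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.1, (128, 64)), np.zeros(64)
W2, b2 = rng.normal(0, 0.1, (64, 10)), np.zeros(10)

def infer(x):
    """On-device inference: matrix multiplies against fixed weights."""
    h = np.maximum(x @ W1 + b1, 0.0)        # dense layer + ReLU
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())       # softmax over class scores
    return e / e.sum()

# One input vector, e.g. features extracted from an image or audio frame.
x = rng.normal(size=128)
print("predicted class:", int(infer(x).argmax()))
```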
The cloud AI chip roadmap is basically settled, with GPUs used for training and FPGAs for inference, while Google has taken a different path by developing an ASIC (the TPU) that handles both training and inference. Edge-side AI chips take different forms depending on the type of terminal, with GPUs, FPGAs, ASICs, and NPUs (AI ASICs integrated into the SoC) all finding applications.
Compared with the booming development of cloud-based training and inference, edge intelligence clearly lags behind, constrained mainly by the computing performance available on the handset. Compared with cloud servers, smartphones face severe limits on size, power supply, heat dissipation, and energy consumption when supporting AI.
However, compared with artificial intelligence in the cloud, deploying it on the smart terminal side has clear advantages in privacy protection, bandwidth requirements, real-time response and low latency, power consumption, and overall experience.
Mobile phone SoCs must constantly pursue the best performance, and every capability added must deliver the highest performance density and the best energy efficiency, which places extremely high demands on chip design.
Considering the factors above of power consumption, bandwidth, performance, reliability, security, and latency, implementing machine learning and deep learning in hardware beats a software-plus-cloud approach. It is inevitable that a neural network processor will become a key processing unit of the AI-era mobile phone SoC, alongside the CPU, GPU, and audio/video codecs; deploying artificial intelligence on the smart terminal side is already the general trend.
Mobile AI chips have become a new focus of competition. Huawei and Apple are currently half a step ahead, and the field may blossom in 2018.
Artificial intelligence chips are another instance of specialization in the history of chip development; GPUs went down the same path, and the main goals remain shortening computing time and reducing computing energy consumption.
ARM
At the beginning of the year, ARM released DynamIQ technology, optimized for artificial intelligence and machine learning. It allows big and little cores to be configured within a single compute cluster, with independent frequency control and per-processor on, off, and sleep state control, so that tasks can be switched efficiently and seamlessly to the most suitable processor. Instruction-set extensions and optimized libraries for artificial intelligence are also on the way: the next-generation ARMv8.2 instruction set will support neural network convolution operations, improving the AI and machine learning efficiency of general-purpose SoCs.
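Why convolution support matters: a convolution layer ultimately decomposes into a very large number of multiply-accumulate (dot-product) operations, which is exactly what such instructions accelerate. The NumPy sketch below, with toy shapes chosen purely for illustration, shows a single 2D convolution written out as repeated dot products.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2D convolution: each output pixel is one dot product between
    the kernel and an image patch, i.e. a chain of multiply-accumulates."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + kh, x:x + kw]
            out[y, x] = np.dot(patch.ravel(), kernel.ravel())
    return out

image = np.random.rand(8, 8)        # toy single-channel image
kernel = np.random.rand(3, 3)       # toy 3x3 filter
print(conv2d(image, kernel).shape)  # -> (6, 6)
```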
Recently, Britain's Imagination released its latest neural network accelerator, the PowerVR 2NX NNA. It is widely believed that ARM's own dedicated AI chip IP will most likely be released in 2018.
Qualcomm
When Qualcomm released the Zeroth platform in 2016, it also released the Neural Processing Engine SDK, which supports mainstream deep learning frameworks such as Caffe and TensorFlow. Qualcomm has since acquired the Dutch machine learning startup Scyfer and invested in the neuroscience startup Brain Corp, steadily strengthening its position in artificial intelligence. With Huawei and Apple having launched dedicated AI chip units one after the other, it seems inevitable that Qualcomm's flagship chips will harden AI capability into silicon as well. Qualcomm is rumored to have begun designing dedicated chips for executing neural networks; the open question is whether it will develop its own IP or use another company's.
MTK
MediaTek is rumored to have completed the design of a built-in AI computing unit for its mobile phone chips. The Helio P70, expected to launch in 2018, is said to integrate a built-in Neural and Visual Processing Unit (NVPU).
Apple
The A11 Bionic application processor in Apple's iPhone 8 series and iPhone X introduces a Neural Engine targeted at specific machine learning algorithms; it handles the processing behind features such as the 3D sensor, face recognition unlocking, and Animoji on the new iPhones.
Huawei
The Kirin 970 is built on Huawei's HiAI mobile computing architecture and, for the first time, integrates a dedicated NPU (Neural Network Processing Unit) hardware block whose AI performance density far exceeds that of the CPU and GPU. On top of it, Huawei implements AI-based intelligent scene recognition and object recognition, with targeted optimizations that improve users' photos.
It is safe to assume that in 2018-19 flagship smartphones are highly likely to ship with dedicated AI chips. That does not mean smartphones will have truly reached the AI stage people envision. The industry is hunting for a killer app built on deep learning, but that may be a false premise; optimizing existing experiences such as photography and facial recognition is better placed to become the first wave of applications that benefit from AI.
Integrating dedicated AI units into the SoC greatly improves computing power, but the mobile AI experience still has a long way to go
Traditional CPUs, GPUs, and DSPs are not built around hardware neurons and synapses as their basic processing units, so compared with NPUs they are at an inherent disadvantage in deep learning; at comparable levels of integration and manufacturing process, their performance will in theory fall short of an NPU's.
According to Huawei's official figures, the Kirin 970's new heterogeneous computing architecture delivers roughly 25 times the performance and 50 times the energy efficiency of four Cortex-A73 cores when processing the same AI tasks. Taking image recognition speed as an example, the Kirin 970 can process about 2,005 images per minute, roughly 33 images per second.
Strictly speaking, at this stage we should not expect AI to spawn brand-new applications; rather, we should expect it to make existing applications faster, less power-hungry, and better to use. The two most mature AI application areas today are speech recognition and image recognition, and Apple's and Huawei's dedicated AI chips have likewise chosen these two areas to improve the experience of the most commonly used applications.
Image recognition: Huawei's AI smart eye, Apple's Face ID unlocking
Taking photos is the experience users care about most, so improving it with AI is a highly perceptible choice. Huawei's Kirin 970 uses AI for scene recognition and object recognition during shooting, then optimizes intelligently. Scene recognition covers, for example, sports scenes and night environments, improving motion-freeze sharpness in action shots and low-light image quality. Object recognition covers, for example, faces: it detects a wide range of difficult cases such as different skin tones, hats, glasses, masks, occlusions, and side profiles, and specifically improves facial color and fill light as well as face tracking. In effect, an existing professional-grade photography model (a knowledge base) is applied to the user's shooting process through the AI chip, with no need to learn professional photography skills.
Face-recognition unlocking on the iPhone X is the showiest image recognition experience of the moment. The terminal builds 3D data of the user's face from a structured-light system, and the unlock comparison is processed by the Neural Engine in the A11 chip.
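Apple has not disclosed the internals of this pipeline, but conceptually an on-device unlock comparison of this kind can be thought of as mapping the captured 3D face data to a compact embedding and comparing it with the enrolled template under a similarity threshold. The NumPy sketch below is purely illustrative: the embedding function, the data, and the threshold are stand-ins, not Apple's actual method.

```python
import numpy as np

def face_embedding(depth_map):
    """Stand-in for a neural network that maps 3D face data to a
    fixed-length embedding. A real system would run a trained model on
    the neural engine; here we just project and normalize."""
    projection = np.random.RandomState(0).randn(depth_map.size, 128)
    v = depth_map.ravel() @ projection
    return v / np.linalg.norm(v)

def matches(enrolled, candidate, threshold=0.8):
    """Unlock decision: cosine similarity against the enrolled template."""
    return float(enrolled @ candidate) >= threshold

enrolled_face = np.random.rand(32, 32)                     # enrolled 3D capture (toy data)
attempt = enrolled_face + 0.01 * np.random.rand(32, 32)    # later unlock attempt

print("unlocked:", matches(face_embedding(enrolled_face),
                           face_embedding(attempt)))
```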
Speech recognition: AI noise reduction improves the recognition rate, plus an upgraded Siri
The Kirin 970's AI noise reduction replaces the traditional anti-phase (inversion) noise reduction model with an artificial intelligence noise model that suppresses non-stationary noise and enhances the voice signal, raising the speech recognition rate in high-speed and noisy environments from 80% to 92% (official data from Huawei's laboratory).
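To illustrate the "learned noise model" idea, as opposed to classic anti-phase cancellation, the sketch below frames a noisy signal, moves to the frequency domain, and applies a mask that a trained model would normally predict. The mask here is a trivial energy-threshold stand-in for such a model, and all signals are synthetic; it is a conceptual sketch, not Huawei's algorithm.

```python
import numpy as np

def stft(signal, frame=256, hop=128):
    """Split the signal into overlapping windowed frames and take the FFT."""
    frames = [signal[i:i + frame] * np.hanning(frame)
              for i in range(0, len(signal) - frame, hop)]
    return np.array([np.fft.rfft(f) for f in frames])

def predicted_mask(spectrogram):
    """Stand-in for a learned noise-suppression model: keep time-frequency
    bins whose magnitude exceeds the per-frequency noise floor estimate."""
    mag = np.abs(spectrogram)
    noise_floor = mag.mean(axis=0, keepdims=True)
    return (mag > 1.5 * noise_floor).astype(float)

# Toy "noisy speech": a tone buried in broadband noise.
t = np.arange(16000) / 16000.0
noisy = np.sin(2 * np.pi * 440 * t) + 0.5 * np.random.randn(len(t))

spec = stft(noisy)
clean_spec = spec * predicted_mask(spec)   # suppress noise-dominated bins
print("energy kept:", np.abs(clean_spec).sum() / np.abs(spec).sum())
```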
As the application that introduced many users to artificial intelligence, Siri deserves real credit. At the new iPhone launch, Siri was shown to be significantly improved over previous generations, presumably drawing on the new AI techniques described on Apple's machine learning blog, and its service capabilities have been further expanded.
Other AI application experiences include image recognition and album classification in photo apps; song recommendations tuned by learning from the user's listening history; smart replies and suggestions that predict the user's message responses and emotional expression with the help of cloud knowledge bases; and battery life optimization that adjusts power management by learning the user's usage behavior. These are all very practical AI experiences.
Obviously, people's expectations for AI phones go beyond this. Overall, AI is a foundational capability that sits at the enabling layer. Integrating dedicated AI chips removes the performance bottleneck, but the future AI experience will still be driven by application scenarios. Whether it is models, knowledge bases, or AI-based applications, everything depends on ecosystem cooperation and on third-party developers building on the chips' open capabilities, so that the computing power of AI silicon is actually put to work.
Looking into the future, the winning formula for mobile chips will be artificial intelligence and 5G.
1. The key to the success of AI chips is the construction of an AI application ecosystem
Huawei and Apple launched dedicated AI processing units almost simultaneously; given the roughly 18-month chip design cycle, Huawei's Kirin team deserves credit. An SoC with an integrated dedicated AI unit is a milestone for edge AI and should spread quickly, but it is only the first step, far from victory. What I see ahead is mostly challenges.
Future competition in the AI chip market will depend not only on each chip maker's own R&D but also on its ability to run an ecosystem, including knowledge-base and model partnerships in vertical domains and third-party application developers: the winner will be whoever's ecosystem offers richer applications and a better experience. For Huawei, this is a bigger challenge than its earlier path in baseband, from a single-point breakthrough to sustained leadership.
Huawei is clearly aware of this and has announced that it will open the Kirin 970 as an artificial intelligence mobile computing platform to more developers and partners, providing complete support for multiple application modes and machine learning frameworks so that developers can tap the Kirin 970's AI computing power in the way they are most accustomed to.
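The article does not spell out Huawei's SDK details, so as a generic illustration of what "machine learning framework support" looks like from a developer's point of view, here is a self-contained TensorFlow Lite sketch: build a tiny placeholder model in-process, convert it, and run inference through the interpreter. On a real device, a vendor runtime or delegate would dispatch the invoke step to the NPU; nothing here is specific to the Kirin 970.

```python
import numpy as np
import tensorflow as tf

# Build and convert a tiny placeholder model so the example is
# self-contained; a real app would ship a pre-trained model instead.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.random.rand(1, 128).astype(np.float32)   # placeholder input features
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()                            # on-device runtimes accelerate this step
print("scores:", interpreter.get_tensor(out["index"]))
```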
As for a combined cloud-plus-terminal AI strategy, a hybrid of Google's and Apple's approaches may be the example Huawei should learn from.
2. The improvement of SoC chip communication connectivity should not be overshadowed by AI
With AI in vogue, communications are no longer in the spotlight, but improvements in connectivity should not be underestimated or forgotten. Take the Kirin 970: a 10nm process; the industry's first SoC to support LTE Cat.18, with an instrument-tested 1.2Gbps download rate on FDD LTE; 5CC carrier aggregation (not yet needed in China, but valuable where spectrum is more fragmented, as with AT&T in the United States); and 4x4 MIMO with 256QAM, which the media should find familiar. Any one of these would have been big news in years past, yet this year almost nobody cares.
Looking beyond mobile AI chips, in the broader battle over artificial intelligence the cellular modem is the biggest obstacle keeping Nvidia and others out of cellular smart terminals, and it is likewise the competitive moat of the mobile SoC makers. From this perspective, the contest for leadership in smart terminal SoCs will remain among Apple (rumored to be developing its own modem), Huawei, and Qualcomm.