Artificial intelligence (AI) is a branch of computer science that seeks to understand the essence of intelligence and to produce a new type of intelligent machine that can respond in ways similar to human intelligence. Research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems. Since the birth of AI, its theory and technology have grown increasingly mature, and its fields of application have continued to expand. One can imagine that the technological products AI brings in the future will be "containers" of human wisdom. AI can simulate the information processes of human consciousness and thinking. AI is not human intelligence, but it can think like a human and may eventually exceed human intelligence.
However, as AI moves from the cloud to the edge, this view is changing rapidly. AI computing engines enable MCUs to break through the presumed limits of embedded applications, improving real-time responsiveness and hardening devices against network attacks.
MCUs that support AI
Cloud computing has driven the demand for MCUs with AI functions; running AI at the edge reduces the bandwidth required for data transmission and saves the processing power of cloud servers, as shown in the following figure.
MCUs equipped with AI algorithms are being used in applications that include object recognition, voice services, and natural language processing. They also help improve the accuracy and data privacy of battery-powered devices in the Internet of Things (IoT), wearables, and medical applications.
So, how do MCUs implement AI functions in edge and node designs? Here is a brief introduction to three basic methods that enable MCUs to perform AI acceleration at the edge of IoT networks.
Three MCU + AI approaches
The first method, probably the most common, uses model conversion from neural network (NN) frameworks such as Caffe 2, TensorFlow Lite, and Arm NN to deploy cloud-trained models and inference engines on MCUs. Software tools can take a neural network pre-trained in the cloud and optimize it for an MCU by converting it to C code. The optimized code running on the MCU can perform AI functions in voice, vision, and anomaly-detection applications. Engineers can download these toolsets to their MCU configurations and run inference with the optimized neural network; the toolsets also provide code examples for neural network-based AI applications. Such model conversion tools allow inference with an optimized neural network to run on low-cost, low-power MCUs, as shown in the figure below.
The second approach bypasses the need for pre-trained neural network models from the cloud: designers can integrate AI libraries into the microcontroller and incorporate local AI training and analysis functions into their code. Developers can then create data models based on signals acquired at the edge from sensors, microphones, and other embedded devices, and run applications such as predictive maintenance and pattern recognition.
The third approach relies on the availability of AI-specific coprocessors, which enables MCU vendors to accelerate the deployment of machine learning functions. Cores such as the Arm Cortex-M33 take advantage of popular APIs such as CMSIS-DSP to simplify code portability, allowing the MCU to be tightly coupled with a coprocessor that accelerates AI primitives such as correlation and matrix operations. The software and hardware platforms above demonstrate how AI functions can be implemented in low-cost MCUs through inference engines developed to meet embedded design requirements. This matters because AI-enabled MCUs are likely to change the design of embedded devices in IoT, industrial, smart-building, and medical applications.