To understand AI processing in the Industrial Internet of Things, the first thing you need to consider is where the bulk of the data processing takes place.
Artificial intelligence (AI) has the power to make embedded systems in the Industrial Internet of Things (IIoT) more responsive and reliable. In addition to scheduling routine maintenance work in a more cost-effective manner, the technology is already being used to monitor machinery conditions and identify if failure is imminent.
When it comes to deploying AI technology in embedded systems, it is critical to consider where the data processing takes place. AI algorithms vary widely in their computational demands, and both how much processing an algorithm requires and where that processing happens have a significant impact on the overall design.
For system designers, there are three clear approaches to developing AI-based embedded systems: using cloud-based AI services, deploying devices with built-in AI, or creating their own algorithms, often based on open-source software.
Deep neural network (DNN) architectures are an example of computationally intensive algorithms, especially during the training phase, which requires billions of floating-point operations each time the model is updated. Because of these intense demands, the typical approach is to send data to the cloud for remote processing. Industrial-control devices can offload AI workloads in this way by leveraging tools and frameworks built around cloud services, many of which are available as open source.
A popular example is Google’s TensorFlow, which provides multiple levels of abstraction both for engineers experienced in creating artificial intelligence algorithms and for those just getting started. The Keras API, part of the TensorFlow framework, makes it easy to explore machine learning techniques and get applications running quickly.
However, one disadvantage of cloud-based processing is the bandwidth it requires. A reliable internet connection is essential for maintenance services: while consumer cloud-AI applications can count on broadband connections, machine tools on a factory floor may lose access to remote, real-time-updated AI models whenever the network fails.
Doing more processing locally can therefore drastically reduce bandwidth requirements. For many industrial applications, the amount of data sent upstream can be cut by focusing on significant changes rather than raw readings. In applications that monitor environmental variables, many values do not change for long periods; what matters to the model is a change above or below a certain threshold. Even though the device may need to analyze sensor input every few milliseconds, the cloud server may only need a few updates per second, or even fewer.
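The threshold-based reporting described above can be sketched in a few lines. This is a minimal illustration, not production code, and the class name `ThresholdReporter` is hypothetical rather than taken from any library:

```python
# Minimal sketch: forward a sensor reading to the cloud only when it moves
# outside a threshold band around the last value that was reported.
# ThresholdReporter is an illustrative name, not a real library class.

class ThresholdReporter:
    def __init__(self, threshold):
        self.threshold = threshold   # minimum change worth reporting
        self.last_reported = None    # last value actually sent upstream

    def filter(self, reading):
        """Return the reading if it should be sent to the cloud, else None."""
        if self.last_reported is None or \
                abs(reading - self.last_reported) >= self.threshold:
            self.last_reported = reading
            return reading
        return None

# A fast local sensor stream collapses to just the significant transitions:
reporter = ThresholdReporter(threshold=0.5)
stream = [20.0, 20.1, 20.05, 21.0, 21.1, 19.9, 20.0]
sent = [r for r in stream if reporter.filter(r) is not None]
```

Here seven raw samples reduce to three reported values, which is the bandwidth saving the article describes: the device still samples at full rate, but the cloud sees only threshold crossings.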
Building AI software
For more complex forms of data, such as audio or video, a greater degree of pre-processing is required. Pre-processing images before they reach the model not only saves overall bandwidth but can also improve overall system performance. For example, denoising before compression will often increase the efficiency of the compression algorithm; this is particularly relevant to lossy compression techniques that are sensitive to high-frequency content. Edge detection can be used in conjunction with image segmentation so that the model focuses only on objects of interest, reducing the amount of extraneous data fed into the model during training and inference.
Although image processing is a complex field, in many cases developers can run these algorithms locally, leveraging off-the-shelf libraries and eliminating the need for high-bandwidth internet connections. A popular example is the open-source computer vision library OpenCV, which is frequently used to pre-process data for AI models. Written in C++, it provides interfaces for C++, Java, Python, and MATLAB, supporting simple prototyping before algorithms are ported to embedded targets.
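The denoise-then-detect-edges pipeline described above can be illustrated with plain NumPy so the example stays self-contained; in practice the same steps would be a call to OpenCV routines such as `cv2.GaussianBlur` and `cv2.Canny` rather than these hand-rolled stand-ins:

```python
import numpy as np

# Sketch of the pre-processing pipeline: denoise first, then keep only the
# edge pixels the model needs to see. Hand-rolled here for self-containment;
# OpenCV provides optimized equivalents.

def box_denoise(img):
    """3x3 box blur: a simple denoising step before compression/inference."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

def edge_mask(img, threshold=10.0):
    """Crude gradient-magnitude edge detector: True where intensity changes fast."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > threshold

# A flat image containing one bright square: edges appear only at its border,
# so only a small fraction of pixels needs to reach the model.
img = np.zeros((32, 32))
img[8:24, 8:24] = 255.0
edges = edge_mask(box_denoise(img))
```

The resulting mask is mostly empty, which is exactly the point: downstream training and inference only have to consider the object boundaries, not the whole frame.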
By using OpenCV and processing data locally, integrators also avoid the security risks associated with transferring and storing data in the cloud; the privacy and security of data sent to the cloud is the biggest concern for end users. Condition monitoring and industrial inspection are critical processes that demand the best possible data analysis, and although cloud operators have measures in place to prevent data breaches, keeping data on each device limits the exposure to hacking.
In addition to image processing, the latest versions of OpenCV include direct support for machine learning models built with many popular frameworks, including Caffe, PyTorch, and TensorFlow. Using the cloud for initial or prototype development before porting models to embedded platforms is a proven approach.
Performance is a primary concern for any machine learning model ported to an embedded device. Since training is far more compute-intensive than inference, one option is to perform training locally or on a cloud server (depending on privacy concerns) and run inference, where the trained model processes live data, on the device itself.
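This train-remotely, infer-locally split can be sketched with a toy model. A least-squares linear fit stands in for a real network here purely for illustration: the expensive fit happens once off-device, and only the resulting weights ship to the endpoint:

```python
import numpy as np

# Sketch of the train-in-cloud, infer-on-device split. A tiny linear model
# stands in for a real DNN: training is the compute-heavy step done once
# off-device; inference is a cheap dot product against the shipped weights.

def train(X, y):
    """'Cloud' step: fit weights by least squares (the expensive part)."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def infer(w, x):
    """'Device' step: cheap evaluation of the trained model on live data."""
    return float(x @ w)

# Synthetic sensor data generated by y = 2*a + 3*b:
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = np.array([2.0, 3.0, 5.0, 7.0])
weights = train(X, y)                               # runs once, off-device
prediction = infer(weights, np.array([3.0, 2.0]))   # runs on the endpoint
```

Only `weights` crosses the network boundary; the live sensor vector never leaves the device, which is also the privacy argument made above.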
If higher performance is required on a local device, a possible solution is the Avnet Ultra96-V2, which integrates the Xilinx Zynq UltraScale+ ZU3EG MPSoC. The combination of Arm processor cores, an embedded signal processing engine, and a fully programmable logic array provides efficient support for DNN models and image-processing routines. Its reconfigurability makes it possible to run training locally as well as inference in applications with high throughput requirements.
Avnet Ultra96-V2
Inference incurs much lower overhead than training, so for sensor data, rather than image streams, a microcontroller running a DNN kernel in software may be satisfactory, although such low-power devices do require lower data-rate streams. Some teams reduce the number of calculations required for inference through optimization, even if this increases development complexity. AI models often contain a high degree of redundancy, so significant processing power can be saved by pruning connections between neurons and by reducing the precision of calculations to 8-bit integers or even lower.
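The 8-bit quantization mentioned above can be sketched as follows. This is a simplified, single-scale scheme for illustration; production toolchains such as TensorFlow Lite use per-channel scales and zero-points:

```python
import numpy as np

# Sketch of post-training weight quantization: map float32 weights to int8
# with one shared scale factor. Simplified relative to real toolchains,
# which use per-channel scales and zero-points.

def quantize_int8(weights):
    """Map float weights into [-127, 127] int8, plus a scale for dequantizing."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights on the device at inference time."""
    return q.astype(np.float32) * scale

w = np.array([0.52, -1.27, 0.003, 0.9], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Storage drops 4x (int8 vs float32) at the cost of a bounded rounding error:
max_err = float(np.abs(w - w_hat).max())
```

The rounding error is bounded by half the scale step, which is why many pruned and quantized models lose little accuracy while gaining a large reduction in memory and compute.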
Edge devices with built-in AI
Another option is to run inference on a local gateway device. If the throughput of each node is relatively low, a gateway can handle inference for multiple sensor nodes. The need to distribute workloads and to port and optimize models from cloud-oriented frameworks increases development complexity, so another option is to adopt a framework already optimized for embedded applications. The Brainium platform developed by Octonion provides a complete development framework for embedded systems: its software environment directly supports prototyping on cloud systems and deployment on IoT devices and gateways using Avnet's SmartEdge Agile hardware.
The Brainium software environment coordinates the activities of devices, gateways, and cloud layers to form a complete AI environment. To enable applications to scale to deeply embedded nodes, the environment supports some AI techniques that are less computationally intensive than those used in DNNs. The gateway software can be deployed on off-the-shelf hardware such as Raspberry Pi or any platform capable of running Android or iOS. Where higher performance is required, Brainium's cloud layer can be deployed on AWS, Azure, or custom server solutions.
Schneider Electric and Festo have already incorporated local AI support into their application-specific control products. The former offers predictive-analytics applications to identify subtle changes in system behavior that affect performance. In 2018, Festo acquired data-science specialist Resolto, whose SCRAITEC software learns the healthy state of a system in order to detect anomalies.
Which approach an OEM or integrator takes when deploying AI depends on the specific situation. Beyond available processing power, other factors drive companies toward adopting cloud computing, building their own software, and/or integrating edge devices to manage AI. For example, users who want to take advantage of big-data analytics may want to pool information from many systems into a larger database, and so tend to use cloud services. Others must guarantee a high degree of data privacy, or find that processing load is the deciding factor; these cases can be addressed with anything from local gateway-based engines to extensive use of cloud computing. The important point is that there are many environments in which designs can be easily prototyped and then deployed to whichever architecture is chosen.