Edge AI has become a major trend, and there are many ways to implement it. For engineers, it is not always obvious which processor and software best fit a given application. The Embedded Vision Summit, an informative conference on the feasibility and practicality of embedded computer vision and edge AI, offers some answers.
The 2020 Embedded Vision Summit will be held virtually, but as in previous years its topics will cover the many systems for extracting meaning from video and images. This field has lately been dominated by artificial intelligence and machine learning, so many AI chip makers and other AI industry experts will be in attendance.
Let's preview the highlights:
Keynote: as compelling as ever
This year's keynote speaker is David Patterson of UC Berkeley. Patterson, a pioneer of RISC computing and vice chair of the RISC-V Foundation, will speak about how the rise of AI is placing unprecedented demands on processors, and how those demands create opportunities for vastly more efficient domain-specific processors: chips customized for the nature of AI inference work, which is dominated by large-scale matrix multiplication of low-precision numbers.
Patterson has collaborated with Google on its Tensor Processing Unit (TPU) designs for hyperscale and edge AI computing, and will use them to illustrate the performance and efficiency gains of domain-specific processing. Overall, this shift lets engineers exploit deep neural networks as never before in cost- and power-constrained systems.
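To make that concrete, here is a minimal sketch (in NumPy, with illustrative shapes and a simple per-tensor scale, not any particular chip's quantization scheme) of the int8 matrix multiplication at the heart of such designs:

```python
# Sketch: quantizing a float32 matrix multiply to int8.
# Shapes and scale factors are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 64)).astype(np.float32)   # activations
w = rng.standard_normal((64, 10)).astype(np.float32)  # weights

def quantize(t):
    """Map a float tensor to int8 with a single per-tensor scale."""
    scale = np.abs(t).max() / 127.0
    return np.round(t / scale).astype(np.int8), scale

xq, xs = quantize(x)
wq, ws = quantize(w)

# Integer multiplies accumulate into int32; one float multiply at the
# end recovers the real-valued result. This is the core of int8 inference.
acc = xq.astype(np.int32) @ wq.astype(np.int32)
y_int8 = acc.astype(np.float32) * (xs * ws)

y_fp32 = x @ w
print("max abs error:", np.abs(y_fp32 - y_int8).max())
```

The integer multiply-accumulate units this requires are far smaller and cheaper than floating-point ones, which is where domain-specific chips recover their efficiency.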
AI silicon
Several sessions will be hosted by companies developing ASICs for different segments of the edge AI market.
At last year's Embedded Vision Summit, Hailo announced its first deep learning processor, the Hailo-8, which delivers up to 26 tera-operations per second (TOPS). The company is now testing it with select customers, mainly in the automotive industry, and this year will share lessons learned from real-world applications, including video analytics, industrial inspection, and smart cities.
Perception, another presenter, has designed its chip to process audio and video on the same device while drawing no more than 1 W. Its talk will cover running modern neural networks at high speed on battery-powered hardware.
In addition to ASICs, many other types of compute suit edge AI in computer vision. DSPs, whose highly parallel architecture is well suited to matrix multiplication, are commonly used for audio AI such as speech recognition (an obvious synergy). Although their throughput is modest, DSPs also have interesting applications in low-resolution visual AI where power budgets are very tight.
At this year’s Embedded Vision Summit, Cadence will showcase a range of edge AI processing IP acquired from Tensilica, including its popular HiFi DSP IP, which supports Google’s TensorFlow Lite. Cadence will also host an “expert bar” session where the company’s experts will answer attendees’ questions about its various DSP IP products as well as its DNA processor IP designed specifically for AI processing.
In addition, some FPGAs are well matched to visual AI at the edge. FPGAs are particularly well suited to 1-bit math (a lookup table is essentially a 1-bit multiply-accumulate), and some cutting-edge work on binarized neural networks reduces precision all the way to 1 bit to cut memory usage and power consumption.
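As an illustration of why binarization maps so well onto FPGA fabric, here is a hedged sketch of the XNOR-popcount trick that replaces a ±1 dot product; the bit encoding below is a common convention, not any specific vendor's implementation:

```python
# Sketch: in a binarized network, weights and activations are +1/-1,
# so a dot product reduces to XNOR followed by a popcount.
import numpy as np

def binarize(v):
    """Map a float vector to {+1, -1} by sign."""
    return np.where(v >= 0, 1, -1).astype(np.int8)

def bin_dot(a, b):
    """Dot product of two {+1,-1} vectors via XNOR + popcount.
    Encode +1 as bit 1 and -1 as bit 0; matching bits contribute +1,
    mismatches contribute -1, so dot = 2*matches - n."""
    bits_a = a > 0
    bits_b = b > 0
    matches = np.count_nonzero(~(bits_a ^ bits_b))  # XNOR, then popcount
    return 2 * matches - len(a)

rng = np.random.default_rng(1)
a = binarize(rng.standard_normal(256))
b = binarize(rng.standard_normal(256))
assert bin_dot(a, b) == int(a.astype(np.int32) @ b.astype(np.int32))
```

On an FPGA the XNOR and popcount stages compile directly into lookup tables, which is why 1-bit networks can be so dense and power-efficient there.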
At this summit, Lattice will showcase its small, low-power, low-cost FPGAs, which can be added to a system as co-processors for visual AI. The company will demonstrate applications such as gesture classification, human detection and counting, and facial recognition on its devices, and has developed a complete software stack that abstracts away the tricky parts of implementing AI on them.
Lattice will also host an "expert bar" on low-power AI, answering questions about how much AI work can be done on its FPGAs and within what power budget.
TinyML
TinyML is a field dedicated to running artificial intelligence on microcontrollers and other tiny computing devices. It offers huge opportunities and is growing rapidly.
Don't miss the panel moderated by Pete Warden of Google, an authority in this space, with speakers from Microsoft, M12, Perception, and OctoML; it will discuss the key technology gaps that still need to be filled.
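For a sense of the typical TinyML workflow the field revolves around, the sketch below uses TensorFlow Lite's post-training quantization to shrink a model to int8 so it can fit on a microcontroller; the tiny Keras model and random calibration data are stand-ins, not anything from the summit:

```python
# Sketch: post-training int8 quantization, the usual first step in
# deploying a model to a microcontroller. Model and data are placeholders.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2),
])

def representative_data():
    # Calibration samples let the converter pick int8 ranges.
    for _ in range(100):
        yield [np.random.rand(1, 32).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()

# The resulting flatbuffer is typically compiled into MCU firmware as a
# C array and executed with the TensorFlow Lite Micro interpreter.
open("model.tflite", "wb").write(tflite_model)
print(f"{len(tflite_model)} bytes")
```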
Among the 50-plus exhibitors in the virtual showroom will be Eta Compute, which has developed an ultra-low-power SoC for artificial intelligence in IoT devices such as smart sensors.
Eta Compute will demonstrate person detection and people counting at power levels as low as a few milliwatts; the company has developed several vision AI algorithms that, combined with its silicon, keep consumption very low. Its demo will show power-efficient CIFAR-10 classification, person detection at 3 mW, and people counting at under 5 mW.
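To put those milliwatt figures in perspective, here is a back-of-the-envelope estimate of runtime on a common coin cell; the nominal capacity and the assumption that the chip is the only load are idealizations, not Eta Compute's numbers:

```python
# Sketch: how long a coin cell lasts at a milliwatt-level power budget.
CELL_MAH = 225.0  # assumed nominal CR2032 capacity
CELL_V = 3.0      # nominal cell voltage
energy_j = CELL_MAH / 1000 * 3600 * CELL_V  # ~2430 J

for name, mw in [("person detection", 3.0), ("people counting", 5.0)]:
    hours = energy_j / (mw / 1000) / 3600
    print(f"{name} at {mw} mW: ~{hours:.0f} h ({hours / 24:.0f} days)")
```

Roughly nine days of continuous person detection on a single coin cell, under these idealized assumptions, shows why milliwatt-level vision AI opens up battery-powered applications.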
Sensors
In addition to artificial intelligence and processing technologies, sensor technologies that enable cutting-edge vision applications will also be demonstrated at the conference.
As CMOS image sensors evolve into SoCs, their power consumption and cost steadily fall, bringing vision to a wider range of systems. Action camera maker GoPro will present a practical guide to building vision systems with modern CMOS image sensors.
There will also be a panel discussion on the future of image sensors, moderated by industry expert Shung Chieh of Solidspace3, with panelists from Aurora, OmniVision, Applied Materials, and the University of Pittsburgh. The panel will share its vision for where image sensors are headed and explore key trends in the field, such as processors integrated into the sensor chip, neuromorphic sensing, and hyperspectral imaging.
Arrow Electronics will demonstrate an AI-based proof of concept for people monitoring built on Analog Devices' 3D time-of-flight (ToF) sensor development kit: privacy-preserving ToF sensing for social-distancing and occupancy-management applications. The session includes a tutorial on the development kit the system is based on.
Applications
The summit will also showcase today's cutting-edge applications of computer vision.
The Ocean Institute will present a method for removing the visual effects of water from underwater images, making them easier to process.
John Deere will discuss use cases for image processing and artificial intelligence in agriculture: how computer vision improves efficiency and quality at scale, and how the company commercializes its image processing systems, which demand a high degree of consistency across components. The presentation will cover the distinct requirements of agricultural vision systems and how Deere addresses those challenges.