If you follow the latest trends in tech, you probably know there is plenty of debate about what the "next big thing" will be. Many people think it will be augmented reality (AR) glasses, some bet on fully self-driving cars, and others point to the potential of 5G. Whatever the trend turns out to be, though, it will be powered by artificial intelligence (AI) in some way. In fact, AI and machine learning (ML) are where our future is headed.
Image source: Amazon
Personal robotic devices and digital companions joined the arena a few weeks ago with the surprise debut of Amazon's Astro.
To be clear, AI has driven incredible progress in many areas. Advanced analytics, neural network training, and other fields where large amounts of data are used to find patterns, learn rules, and then apply them have been huge beneficiaries of existing AI approaches.
At the same time, for applications like autonomous driving, simply pushing more and more data into algorithms that produce ever-improving but still flawed machine learning models doesn't really work. We are still a long way from true Level 5 autonomous driving, and given the number of accidents and even fatalities involving Tesla's Autopilot, it may be time to consider another approach.
Likewise, we're still in the early days of personal robotics, but it's easy to see the conceptual similarities between self-driving cars and robots. Ultimately, the problem is that it's simply impossible to feed every potential scenario into an AI training model and create a predetermined answer for any given situation. There is simply too much randomness, and there are too many unanticipated influences.
What is needed is a type of computing that can truly think and learn independently, and then adapt what it has learned to those unexpected scenarios. It may sound crazy and controversial, but this is exactly what researchers in the field of neuromorphic computing are trying to do. The basic idea is to replicate, in digital form, the structure and function of the most adaptable computing and thinking organ we know of: the human brain. Following basic biological principles, neuromorphic chips try to recreate networks of connected neurons, using digital synapses to pass electrical impulses between them, much as a biological brain does.
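To make that analogy concrete, here is a minimal sketch of a single leaky integrate-and-fire neuron in plain Python. It is not Loihi's architecture or any vendor API, just a toy illustration of the basic mechanism: weighted synapses deliver impulses that accumulate on a membrane potential, and the neuron emits a spike of its own once a threshold is crossed.

```python
# Toy illustration only -- not Loihi's architecture or the Lava API.
class LIFNeuron:
    """A minimal leaky integrate-and-fire neuron."""

    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0      # membrane potential
        self.threshold = threshold
        self.leak = leak          # fraction of potential retained each step

    def receive(self, weight):
        """A presynaptic spike arrives over a synapse with this weight."""
        self.potential += weight

    def step(self):
        """Advance one time step; return True if the neuron fires."""
        self.potential *= self.leak
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True
        return False


# Two input synapses (hypothetical "sensors") driving one neuron.
neuron = LIFNeuron(threshold=1.0)
weights = {"sensor_a": 0.6, "sensor_b": 0.5}

for spikes in (["sensor_a"], ["sensor_b"], [], ["sensor_a", "sensor_b"]):
    for source in spikes:
        neuron.receive(weights[source])
    print("fired" if neuron.step() else "silent")
```

A real neuromorphic chip implements many thousands of such neurons and synapses directly in silicon rather than in a software loop, but the integrate-leak-fire behavior is the core building block.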
Neuromorphic computing has been an area of academic research for decades, but it has only recently started to make real progress and attract wider attention. Most recently, Intel released its second-generation neuromorphic chip, Loihi 2, along with a new open-source software framework called Lava.
Image source: Intel
Loihi 2 won't be commercially available anytime soon. It's billed as a research chip, and the latest version offers 1 million neurons, a far cry from the roughly 100 billion neurons in the human brain. Still, it's an impressive, ambitious project that offers 10 times the performance of the 2018 original, 15 times the density (thanks to the new Intel 4 manufacturing process), and improved energy efficiency. It also offers better, easier ways to interconnect its unique architecture with other, more traditional chips.
Intel clearly learned a lot from the first version of Loihi, and one of the biggest lessons is that it is very difficult to develop software for this brand-new architecture. Hence the other important part of the announcement: the debut of Lava, an open-source software framework and set of tools for writing applications for Loihi. The company also provides tools that simulate the chip's operation on traditional CPUs and GPUs, so developers can write code without access to the hardware.
What's particularly fascinating about the way neuromorphic chips operate is that, although their functionality is very different from traditional CPU computing and GPU-style parallel computing, they can be used to achieve some of the same goals. In other words, neuromorphic chips like Loihi 2 can deliver the kinds of results traditional AI pursues, but faster, more energy-efficiently, and with far less data. They work through a series of asynchronous, event-based spikes that trigger digital neurons to respond in various ways, much like the human brain (and in contrast to the synchronous, structured processing in CPUs and GPUs), which lets them essentially "learn" new things on the fly. As a result, neuromorphic chips are well suited to devices that must react to new stimuli in real time.
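As a rough illustration of that contrast, the sketch below (again plain Python, not Lava or any real Loihi interface) processes spikes as asynchronous events pulled from a queue rather than on a fixed clock, and nudges synaptic weights whenever an input spike closely precedes an output spike, a crude stand-in for the kind of on-the-fly learning described above.

```python
import heapq

# Toy event-driven spiking network -- a stand-in for asynchronous,
# spike-based processing, not real neuromorphic hardware.
weights = {("in0", "out"): 0.4, ("in1", "out"): 0.7}  # synapse weights
potential = {"out": 0.0}
threshold = 1.0
last_input_spike = {}  # most recent presynaptic spike time per input

# (time, source) spike events; nothing is computed between events.
events = [(1.0, "in0"), (1.2, "in1"), (3.0, "in1"), (3.1, "in0")]
heapq.heapify(events)

while events:
    t, src = heapq.heappop(events)             # handle the next spike, whenever it occurs
    last_input_spike[src] = t
    potential["out"] += weights[(src, "out")]  # synaptic impulse accumulates
    if potential["out"] >= threshold:
        print(f"t={t:.1f}: output neuron fired")
        potential["out"] = 0.0
        # Crude learning on the fly: strengthen synapses whose input
        # spiked shortly before the output fired (an STDP-like idea).
        for (pre, post), w in weights.items():
            if t - last_input_spike.get(pre, -1e9) < 0.5:
                weights[(pre, post)] = min(w + 0.05, 1.0)
```

Real neuromorphic hardware implements this event-driven behavior in silicon rather than in a software loop, which is where the speed and energy advantages come from.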
Those real-time, low-power learning capabilities are exactly what make these chips so attractive for devices like self-driving cars, which are essentially robots themselves. Ultimately, commercial neuromorphic chips may be what it takes to power the self-driving cars of our dreams.
Of course, neuromorphic computing isn't the only new approach to advancing the state of the art. There's also a great deal of work going on in quantum computing. Like quantum computing, neuromorphic computing is incredibly complex under the hood and is currently found mostly in corporate R&D labs and academic research projects. Unlike quantum computing, however, it doesn't face the same extreme physical demands (temperatures near absolute zero) or power requirements. In fact, one of the many appeals of neuromorphic architectures is that they are designed to consume very little power, making them suitable for mobile and other battery-powered applications (such as self-driving cars and robots).
Despite recent advances, commercial applications of neuromorphic chips are still several years away. However, it’s hard not to be fascinated by this technology that unlocks the potential of AI-driven devices to become truly intelligent. The difference may seem subtle, but ultimately we may need this new intelligence in order for some of the “next big things” to truly happen in a way that we can appreciate and imagine.