Translated from: The Next Platform
Intel's latest neuromorphic system, Pohoiki Springs, was released in March 2020. It integrates 768 Loihi neuromorphic research chips in a chassis the size of five standard servers. This experimental research system is an advanced attempt to mimic how the human brain works, completing certain computations faster while consuming only minimal energy.
Neuromorphic computing still has a long way to go before it becomes part of mainstream systems. Although some use cases show significant promise, mapping problems onto the architecture remains a challenge. Even so, as with quantum computing, most major chip and system makers are deeply interested in exploring the technology's possibilities, and Intel is no exception.
Since the range of problems neuromorphic chips can solve is still limited, Intel has focused on a very specific use case to highlight the progress of its Loihi-based systems, which began with a 64-chip machine. The new Pohoiki Springs system scales that architecture to 768 Loihi chips (100 million neurons) packed into a 5U rack chassis, and Intel demonstrated how a neuromorphic system can accurately identify odors from a small training sample while consuming only about 300 watts of power.
Will neuromorphic chips become the main weapon of the future?
Technology research firm Gartner predicts that by 2025 neuromorphic chips will become the dominant computing architecture for new, advanced artificial intelligence deployments, displacing graphics processors as one of the main chips used for AI systems, especially the neural networks behind speech recognition and understanding as well as computer vision.
Neuromorphic computing enables machine learning models to be trained using only a fraction of the data required to train them on traditional computing hardware. Mike Davies, director of Intel’s Neuromorphic Computing Lab, said: “This means that the model learns in a similar way to how human babies learn, just by looking at an image or a toy once and remembering it forever.”
“This [makes it possible] to do some of the computations that are difficult to do today” because they require too much energy or take too long. Davies also said that if there were a widespread power outage, neuromorphic computing could automatically help identify some of the areas that most urgently need power. Neuromorphic computing could also help consumers more accurately find items that are similar or match a specific product image.
Davies said the Pohoiki Springs system differs from traditional machines in that its memory and computing elements are intertwined rather than separate, minimizing the distance data has to travel; in traditional computing architectures, data must shuttle back and forth between memory and processor.
Intel researchers recently used a single neuromorphic research chip to train an AI system to identify noxious odors using just one training sample per odor, compared with the roughly 3,000 samples required by state-of-the-art deep learning methods, which is part of why neuromorphic computing requires so little energy.
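The one-shot idea described above can be illustrated with a toy sketch. This is not Intel's actual method, and the sensor channels and values below are entirely hypothetical: the system stores a single labeled sensor reading per odor and classifies new readings by nearest neighbor.

```python
import math

# Hypothetical chemosensor readings: one stored "training" vector per odor.
# Each vector is a made-up 4-channel sensor response, not real data.
prototypes = {
    "ammonia": [0.9, 0.1, 0.2, 0.0],
    "acetone": [0.1, 0.8, 0.1, 0.3],
    "methane": [0.0, 0.2, 0.9, 0.1],
}

def classify(reading):
    """Return the odor whose single stored example is closest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(prototypes, key=lambda odor: dist(prototypes[odor], reading))

# A noisy reading that is still nearest to the methane prototype.
print(classify([0.05, 0.25, 0.8, 0.2]))  # methane
```

Real deep learning methods instead fit thousands of weights, which is why they need far more examples; storing one prototype per class is what makes single-sample learning possible here.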
In experiments, the machine learning models were able to detect distinct odors from chemical sensors, such as ammonia, acetone, and methane, even when masked by other scents; such odors could indicate the presence of explosives or drugs.
The Pohoiki Springs system packs 768 of these neuromorphic research chips into a chassis the size of five standard servers. Davies says the system has the computing power of about 100 million neurons, roughly equivalent to the brain of a mole rat.
Edy Liongosari, chief research scientist at Accenture Labs, said one of the main advantages of neuromorphic computing is that it can perform AI calculations using far less energy.
Energy consumption is a barrier to large-scale AI deployment: Researchers at the University of Massachusetts Amherst say the carbon footprint of developing a single AI model is equivalent to the lifetime emissions of five average American cars.
Accenture Labs has been working with Intel’s neuromorphic computing researchers since 2018 to study how the technology could help AI algorithms used in connected devices, such as security cameras that continuously detect motion. Neuromorphic chips could eventually be embedded in cameras. “There are some use cases where power is at a premium,” said Mr. Liongosari.
Such cameras continuously analyze large amounts of data, which requires energy to identify anomalies such as intrusions. Neuromorphic computing can help machine learning algorithms identify intrusions using far less training data than traditional algorithms.
Can Pohoiki Beach really think like the human brain?
The new Pohoiki Springs is built on Intel's "Nahuku" baseboards, each holding eight to 32 Loihi processors; the earlier 64-processor system was composed of two to eight such boards (Intel did not provide details on the exact configuration, including how the chips and baseboards are networked). That architecture has now been scaled up to the aforementioned 768 chips.
Loihi is a self-learning neuromorphic chip introduced by Intel Labs in September 2017. Built on a 14 nm process, it has a die size of 60 mm² and contains more than 2 billion transistors, 130,000 artificial neurons, and 130 million synapses. Implemented as a many-core mesh, the architecture was expected to scale well past 1 million neurons by 2020. Each core contains a "learning engine" that can support different types of artificial intelligence models, including supervised, unsupervised, and reinforcement learning. According to Intel, Loihi is 1,000 times faster and 10,000 times more efficient than CPUs on applications such as sparse coding, graph search, and constraint satisfaction problems. The chip has been made available to researchers in the Intel Neuromorphic Research Community (INRC) through cloud services and the Kapoho Bay platform, a USB form-factor device based on Loihi.
Like the brain, Loihi processes certain demanding workloads 1,000 times faster and 10,000 times more efficiently than conventional processors. Pohoiki Springs is the next step in scaling this architecture to evaluate its potential for solving artificial intelligence problems as well as addressing a wide range of computational challenges. Intel researchers believe the extreme parallelism and asynchronous signaling of neuromorphic systems could provide significant performance gains at dramatically reduced power levels compared to today’s most advanced conventional computers.
According to Mike Davies, who leads Intel's neuromorphic computing project, such systems could one day be used by doctors to sniff out disease, for example, or deployed at airports to detect weapons, drugs, explosives, or hazardous chemicals.
While using specialized neuromorphic architectures and cumbersome programming kits for all of this may sound far-fetched, neural networks can likewise pick out such patterns (as Google and others have shown), and neuromorphic systems have properties that traditional deep learning models and machines cannot match. Energy efficiency and speed are the two most prominent advantages.
The efficiency gains come from fully integrating compute and memory on a neuromorphic system: streaming instructions and data do not have to traverse a separate memory, because everything lives in one distributed compute-and-memory fabric. It also comes down to asynchronous, event-driven properties, Davies explains. "Getting the system to communicate is what costs energy. If you're not sending anything, meaning every value is a binary zero, you're not using energy. Encoding information in the timing of spikes lets you compute with those codes while the zero state costs nothing." The catch is that reaching that state requires rethinking the algorithms, which is what spiking is all about in neuromorphic systems.
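Davies's point about zeros can be sketched abstractly. In the hypothetical comparison below, a conventional transfer moves every value in an activation vector, while an event-driven (spike) transfer moves only the nonzero entries, so silence costs nothing:

```python
# A sparse activation vector: most values are zero, as in neural spike codes.
activations = [0.0, 0.0, 1.2, 0.0, 0.0, 0.0, 0.7, 0.0]

# Conventional transfer: every value is moved, zeros included.
dense_messages = len(activations)

# Event-driven (spike) transfer: only nonzero events are moved,
# each encoded as (neuron index, value); zeros send nothing.
spikes = [(i, v) for i, v in enumerate(activations) if v != 0.0]
event_messages = len(spikes)

print(dense_messages, event_messages)  # 8 2
```

The sparser the activity, the larger the gap between the two message counts, which is why spiking hardware rewards algorithms redesigned around sparse, timed events.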
"Pohoiki Springs scales up our Loihi neuromorphic research chip by more than 750 times while operating at power levels below 500 watts," Davies said. "The system enables our research partners to explore ways to accelerate workloads that run slowly on traditional architectures, including high-performance computing (HPC) systems."
According to Chris Eliasmith, a professor at the University of Waterloo and co-CEO of Applied Brain Research, the Loihi chip consumed 109 times less power than conventional hardware when running a real-time deep learning benchmark, and five times less than dedicated IoT inference hardware. When the network was scaled up 50 times, Loihi maintained real-time performance with only a 30 percent increase in power consumption, while the IoT hardware's power consumption increased by 500 percent.
Intel's earlier Pohoiki Beach system was itself a notable achievement. It let researchers efficiently scale up novel neural-inspired algorithms, such as sparse coding, simultaneous localization and mapping (SLAM), and path planning, that learn and adapt based on incoming data. Pohoiki Beach was an important milestone for Intel's neuromorphic research, laying the foundation for Intel Labs' plan to scale the architecture to 100 million neurons.
Pohoiki Beach, for example, can break through the performance constraints of traditional general-purpose computing and significantly improve efficiency in areas such as autonomous driving, smart homes, and network security. Because it needs no global clock signal and uses asynchronous spiking neural networks (SNNs), Pohoiki Beach can more efficiently handle the workloads generated by large numbers of IoT devices.
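The clockless operation described above can be approximated in software with an event queue: a neuron does work only when a spike actually arrives, instead of every neuron updating on each global tick. A simplified integrate-and-fire sketch, with made-up connectivity and weights rather than anything from Loihi:

```python
import heapq

# Hypothetical feed-forward wiring: source -> [(target, weight, delay)]
synapses = {
    0: [(2, 0.6, 1.0), (3, 0.5, 2.0)],
    1: [(2, 0.6, 1.5)],
    2: [(3, 0.8, 1.0)],
    3: [],
}
THRESHOLD = 1.0
potential = {n: 0.0 for n in synapses}

# Event queue of (arrival time, target neuron, weight); no global clock.
events = [(0.0, 0, 1.0), (0.5, 1, 1.0)]  # external input spikes
heapq.heapify(events)

fired = []
while events:
    t, n, w = heapq.heappop(events)      # process the earliest event only
    potential[n] += w                    # integrate the incoming spike
    if potential[n] >= THRESHOLD:        # neuron fires...
        potential[n] = 0.0               # ...and resets
        fired.append((t, n))
        for target, weight, delay in synapses[n]:
            heapq.heappush(events, (t + delay, target, weight))

print(fired)  # each entry is (time, neuron) for a spike
```

Between events nothing is computed at all, which mirrors the energy argument: cost scales with spike traffic, not with the number of neurons or with wall-clock time.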
In theory, Intel's Loihi chips can be scaled up to 16,384 interconnected chips, equivalent to 2 billion neurons, or roughly 2 percent of the 86 billion neurons in the human brain. That figure should not be dismissed as insignificant: Moore's Law still operates at the cutting edge, and if needed, Loihi's parameters can grow exponentially. Hardware that can mimic the way the human brain thinks may not be far away.