Intel smells the opportunity in neuromorphic chips

Publisher: EEWorld | Last updated: 2020-03-26

Translated from The Next Platform

 

 

Intel's latest neuromorphic system, Pohoiki Springs, was released in March 2020. It integrates 768 Loihi neuromorphic research chips in a chassis the size of five standard servers. The experimental research system is built to explore neuromorphic computing, an approach that mimics the way the human brain works in order to complete certain computations faster while using very little energy.

 

Neuromorphic computing still has a long way to go before it becomes part of mainstream computing systems. Although some use cases show significant promise, mapping problems onto the architecture remains a challenge. Even so, as with quantum computing, most major chip and system manufacturers are keen to explore the technology's possibilities, and Intel is no exception.

 

Since the range of problems that can be solved with neuromorphic chips is still limited, Intel has focused on a very specific use case to highlight the progress of its 64-chip system based on the "Loihi" architecture. Using "Pohoiki Springs", a version of that system scaled up to 768 Loihi chips (about 100 million neurons) housed in a 5U rackmount chassis, Intel demonstrated how a neuromorphic system can accurately identify odors from a small number of training samples while drawing only about 300 watts of power.

 

Figure: A Nahuku baseboard attached to an Intel Arria 10 FPGA development board; each baseboard carries 8 to 32 Intel Loihi neuromorphic chips. Intel's Pohoiki Beach neuromorphic system is built from multiple Nahuku baseboards and contains 64 Loihi chips.

 

Will neuromorphic chips become the mainstay of future AI?

 

Technology research firm Gartner predicts that by 2025 neuromorphic chips will become the dominant computing architecture for new, advanced artificial-intelligence deployments, replacing graphics processors as one of the main chips used for AI systems, particularly the neural networks behind speech recognition and understanding as well as computer vision.

 


 

Neuromorphic computing enables machine learning models to be trained with only a fraction of the data required on traditional computing hardware. Mike Davies, director of Intel's Neuromorphic Computing Lab, said: "This means that the model learns in a similar way to how human babies learn, just by looking at an image or a toy once and remembering it forever."

 

“This [makes it possible] to do some of the computations that are difficult to do today” because they require too much energy or take too long. Davies also said that if there were a widespread power outage, neuromorphic computing could automatically help identify some of the areas that most urgently need power. Neuromorphic computing could also help consumers more accurately find items that are similar or match a specific product image.

 

Davies said the Pohoiki Springs system differs from traditional machines in that its memory and computing elements are intertwined rather than separate, minimizing the distance data has to travel; in traditional computing architectures, data must constantly shuttle back and forth between memory and processor.

 

Intel researchers recently used a single neuromorphic research chip to train an AI system to identify noxious odors from just one training sample per odor, compared with the roughly 3,000 samples required by advanced deep learning methods, which also means the energy needed for training is far lower.

In the experiments, the machine learning models were able to detect distinct odors from chemical sensors, such as ammonia, acetone and methane, which can indicate the presence of explosives or drugs, even when those odors were masked by other scents.
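The article does not spell out the learning rule Intel used, so the following is only a stand-in for the idea of learning an odor from a single example: it stores one chemosensor reading per odor as a template and classifies new readings by cosine similarity. The sensor dimensionality, odor names and similarity metric are assumptions for illustration, not Intel's spiking algorithm.

```python
# Minimal one-shot sketch (not Intel's algorithm): store a single normalized
# chemosensor reading per odor and classify new readings by cosine similarity.
# Sensor size, odor names and noise level are made-up illustration values.
import numpy as np

def learn_templates(samples):
    """Keep one normalized reading per odor as its template."""
    return {name: x / np.linalg.norm(x) for name, x in samples.items()}

def classify(templates, reading):
    """Return the odor whose template best matches the reading."""
    reading = reading / np.linalg.norm(reading)
    return max(templates, key=lambda name: float(templates[name] @ reading))

rng = np.random.default_rng(0)
# One 72-channel sensor reading per odor: the "single training sample".
train = {odor: rng.random(72) for odor in ("ammonia", "acetone", "methane")}
templates = learn_templates(train)

# A noisy, partially masked ammonia reading at test time.
test = train["ammonia"] + 0.2 * rng.random(72)
print(classify(templates, test))  # expected: ammonia
```

Intel's actual demonstration ran a spiking network on the Loihi chip itself; the snippet only mirrors the one-sample-per-odor training regime that makes the result noteworthy.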

 

The Pohoiki Springs system packs 768 of these neuromorphic research chips into a chassis the size of five standard servers. Davies says the system has the computing capacity of about 100 million neurons, roughly equivalent to the brain of a mole rat.

 

Edy Liongosari, chief research scientist at Accenture Labs, said one of the main advantages of neuromorphic computing is that it can perform AI-based calculations with lower energy.

 

Energy consumption is a barrier to large-scale AI deployment: Researchers at the University of Massachusetts Amherst say the carbon footprint of developing a single AI model is equivalent to the lifetime emissions of five average American cars.

 

Accenture Labs has been working with Intel’s neuromorphic computing researchers since 2018 to study how the technology could help AI algorithms used in connected devices, such as security cameras that continuously detect motion. Neuromorphic chips could eventually be embedded in cameras. “There are some use cases where power is at a premium,” said Mr. Liongosari.

 

Such cameras continuously analyze large amounts of data to identify anomalies such as intrusions, which consumes energy. Neuromorphic computing can help machine learning algorithms identify intrusions using far less training data than traditional algorithms.

 

Can Pohoiki Beach really think like the human brain?

 

The new Pohoiki Springs is built from Intel's "Nahuku" baseboards, each containing eight to 32 Loihi processors; the earlier 64-processor system was made up of two to eight such boards (Intel didn't provide details on the exact configuration, including how the chips and baseboards are networked). That same architecture has now been scaled up to the aforementioned 768 chips.

 

 

Loihi is a self-learning neuromorphic chip launched by Intel Labs in September 2017. It is built on a 14nm process, has a die size of about 60 mm², and contains more than 2 billion transistors, 130,000 artificial neurons, and 130 million synapses. Implemented as a many-core mesh, the design was expected to scale significantly by 2020, with the neuron count exceeding one million. Each core contains a "learning engine" that can support different types of artificial intelligence models, including supervised, unsupervised, and reinforcement learning. According to Intel, Loihi is 1,000 times faster and 10,000 times more efficient than CPUs on applications such as sparse coding, graph search, and constraint satisfaction problems. The chip has been made available to researchers in the Intel Neuromorphic Research Community (INRC) through cloud services and the Kapoho Bay platform (a USB form-factor device based on Loihi).
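For readers who want to sanity-check these figures, a quick back-of-the-envelope calculation is below. The 128-core count comes from Intel's public Loihi materials and is an assumption here, since the article only quotes per-chip totals.

```python
# Back-of-the-envelope check of the Loihi and Pohoiki Springs figures above.
neurons_per_chip = 130_000
synapses_per_chip = 130_000_000
cores_per_chip = 128                # assumption: not stated in this article

print(neurons_per_chip // cores_per_chip)      # ~1,015 neurons per core
print(synapses_per_chip // neurons_per_chip)   # 1,000 synapses per neuron
print(768 * neurons_per_chip)                  # 99,840,000: the "100 million
                                               # neurons" of Pohoiki Springs
```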

 

By processing information more like the brain does, Loihi runs certain demanding workloads 1,000 times faster and 10,000 times more efficiently than conventional processors. Pohoiki Springs is the next step in scaling this architecture, built to evaluate its potential for solving artificial intelligence problems as well as a wide range of other computational challenges. Intel researchers believe the extreme parallelism and asynchronous signaling of neuromorphic systems could deliver significant performance gains at dramatically reduced power compared with today's most advanced conventional computers.

 

According to Mike Davies, who leads Intel's neuromorphic computing project, these specialized systems could, for example, help doctors sniff out disease, or detect weapons, drugs, bombs and hazardous chemicals at airports and manufacturing sites.

 

While it may sound far-fetched to use specialized neuromorphic architectures and cumbersome programming kits to do all of this, deep neural networks can also pick out similar patterns (as Google and others have shown), yet neuromorphic systems have properties that traditional deep learning models and the machines they run on cannot match. Energy efficiency and speed are two of the most prominent advantages.

 

The efficiency gains come from fully integrating compute and memory in a neuromorphic system. Streaming instructions and data do not have to pass through separate memories; everything lives in a distributed fabric of compute and memory. Beyond that, it comes down to the system's asynchronous, spike-based signaling, Davies explains: communication is what costs energy, and a neuron that sends nothing, a binary zero, spends no energy at all. Information is instead encoded in the timing of spikes, and the algorithms compute directly on these codes so that the silent "zero" state dominates. The catch is that reaching that state requires rethinking the algorithms, which is what spiking in neuromorphic systems is all about.
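A toy software analogy of that "zeros cost nothing" argument: a dense matrix-vector product touches every synaptic weight regardless of activity, while an event-driven version does work only for inputs that actually spiked. The layer sizes and 2% activity rate are invented for illustration, and plain NumPy on a CPU is only an analogy for what Loihi does in hardware.

```python
# "Zeros are free": only the inputs that spike trigger any work, so sparse
# activity means little communication. A software analogy, not Loihi hardware.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 1_000, 100
weights = rng.standard_normal((n_in, n_out))

# Binary spike vector: only ~2% of the inputs fire in this timestep.
spikes = rng.random(n_in) < 0.02

# Dense approach: touches every weight, whether the input spiked or not.
dense_current = spikes.astype(float) @ weights

# Event-driven approach: accumulate weights only for the neurons that spiked.
event_current = np.zeros(n_out)
for pre in np.flatnonzero(spikes):       # ~20 events instead of 1,000 rows
    event_current += weights[pre]

assert np.allclose(dense_current, event_current)
print(f"{int(spikes.sum())} spike events out of {n_in} inputs")
```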

 

“Pohoiki Springs scales up our Loihi neuromorphic research chip by more than 750 times while operating at power levels below 500 watts. The system enables our research partners to explore ways to accelerate workloads that run slowly on traditional architectures, including high-performance computing (HPC) systems.”

 

According to Chris Eliasmith, professor at the University of Waterloo and co-CEO of Applied Brain Research, the Loihi chip consumes 109 times less power than conventional hardware when running a real-time deep learning benchmark, and 5 times less than dedicated IoT inference hardware. When the network size is increased 50-fold, Loihi maintains real-time performance with only a 30% increase in power consumption, whereas the power draw of traditional IoT hardware grows by 500%.

 

What Intel's Pohoiki Beach system accomplished was therefore notable. It let researchers efficiently scale up new neural-inspired algorithms, such as sparse coding, simultaneous localization and mapping (SLAM), and path planning, that learn and adapt based on incoming data. Pohoiki Beach was an important milestone for Intel's neuromorphic research, laying the foundation for Intel Labs' plan to scale the architecture to 100 million neurons, a target now reached with Pohoiki Springs. A small sketch of one such workload, sparse coding, appears below.
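As a concrete, non-spiking illustration of one of those workloads, the sketch below solves a tiny sparse-coding problem with greedy orthogonal matching pursuit: find the few dictionary atoms that explain a signal. The dictionary, sizes and coefficients are invented for the example; Loihi's own solvers are spiking and event-driven and work quite differently.

```python
# Tiny sparse-coding example via orthogonal matching pursuit (OMP).
# A conventional illustration of the problem, not Intel's spiking solver.
import numpy as np

def omp(D, x, k):
    """Greedily pick k dictionary atoms that best explain x."""
    support, residual = [], x.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))        # most correlated atom
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    a = np.zeros(D.shape[1])
    a[support] = coeffs
    return a

rng = np.random.default_rng(2)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                  # unit-norm dictionary atoms
true_a = np.zeros(256)
true_a[[3, 77, 190]] = [2.0, -3.0, 1.5]         # a 3-sparse ground truth
x = D @ true_a

a = omp(D, x, k=3)
print(np.flatnonzero(a))                        # typically recovers [3, 77, 190]
```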

 

For example, Intel's Pohoiki Beach system can break through the performance constraints of traditional general-purpose computing and significantly improve efficiency in areas such as autonomous driving, smart homes, and network security. Because it needs no global clock signal and runs an asynchronous spiking neural network (SNN), Pohoiki Beach can more efficiently handle the problems posed by huge numbers of IoT devices.
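To make the "no global clock" point concrete, here is a minimal event-driven toy: spike events sit in a priority queue keyed by timestamp and are processed strictly in time order, so quiet neurons consume nothing. The connectivity, weights, threshold and delay are arbitrary illustration values; this is a software caricature of an asynchronous SNN, not a model of Loihi's silicon.

```python
# Event-driven ("clockless") toy SNN: process spikes in timestamp order from a
# priority queue; neurons that stay quiet never consume any cycles.
import heapq
import random

random.seed(3)
N = 20
fanout = {n: random.sample(range(N), 3) for n in range(N)}  # toy connectivity
potential = {n: 0.0 for n in range(N)}
THRESHOLD, WEIGHT, DELAY, T_MAX = 1.0, 0.6, 1.0, 5.0

events = [(0.0, 0), (0.2, 5)]   # (time, neuron) of two externally injected spikes
heapq.heapify(events)

while events:
    t, n = heapq.heappop(events)
    if t > T_MAX:
        break
    print(f"t={t:.1f}: neuron {n} spikes")
    for m in fanout[n]:
        potential[m] += WEIGHT
        if potential[m] >= THRESHOLD:        # target crosses threshold:
            potential[m] = 0.0               # reset and schedule its own spike
            heapq.heappush(events, (t + DELAY, m))
```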

 

In theory, Intel's Loihi architecture can be expanded to a maximum of 16,384 interconnected chips, equivalent to more than 2 billion neurons, close to 2% of the 86 billion neurons in the human brain. That fraction may sound insignificant, but Moore's Law still operates at the cutting edge, and Loihi's key parameters can keep growing rapidly if needed. A machine that can simulate the way the human brain thinks may not be too far away.
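The scaling claim can be checked directly from the numbers already quoted in this article:

```python
# Scaling check using only figures quoted in this article.
neurons_per_chip = 130_000
max_chips = 16_384
human_brain_neurons = 86_000_000_000

total = neurons_per_chip * max_chips                    # 2,129,920,000
print(f"{total:,} neurons")
print(f"{total / human_brain_neurons:.1%} of the human brain")  # ~2.5% when
# using the full 2.13 billion figure, in line with the rounded "close to 2%"
```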
