RISC-V AI chips will be everywhere

Publisher: 楼高峰 | Last updated: 2022-03-09 | Source: compiled from IEEE Spectrum
Adoption of RISC-V, a free and open-source computer instruction set architecture first introduced in 2010, is taking off like a rocket, fueled in large part by demand for artificial intelligence and machine learning. The number of chips incorporating at least some RISC-V technology will grow 73.6% annually through 2027, with about 25 billion AI chips produced by then, generating $291 billion in revenue, according to research firm Semico.

 
That trajectory, from emerging idea to mainstream in just a few years, is impressive, but it also represents a sea change for AI, said Dave Ditzel, whose company, Esperanto Technologies, has built the first high-performance RISC-V AI processor, one designed to compete with powerful GPUs in AI recommendation systems. In the early enthusiasm for machine learning and AI, Ditzel said, people assumed that general-purpose computer architectures, x86 and Arm, would never keep up with GPUs and more specialized accelerator architectures.
 
“We’re starting to prove those people wrong,” he said. “RISC-V seems like an ideal foundation for solving the kinds of computations people want to do for AI.”
 
With the company’s first silicon chip, a 1,092-core AI processor, and a major development deal with Intel, he may soon be proven right.
 
Ditzel's entire career has been defined by the idea behind RISC-V. RISC stands for Reduced Instruction Set Computer: the theory that you can make a smaller, lower-power, yet better-performing processor by slimming down the core set of instructions it can execute. IEEE Fellow David Patterson coined the term in a seminal 1980 paper, on which his student Ditzel was a co-author. Ditzel went on to work on RISC processors at Bell Labs and Sun Microsystems before co-founding Transmeta, which produced a low-power processor designed to compete with Intel by translating x86 code to run on its own architecture.
 
For Esperanto, Ditzel sees RISC-V as a way to accelerate artificial intelligence with relatively low power consumption. At a basic level, a more complex instruction set architecture means more transistors are needed to make up a processor, and each transistor leaks a little current when it is off and consumes power when switching states. "That's what makes RISC-V attractive," he said. "It has a simple instruction set."
 

The cores


At its core, RISC-V is a set of just 47 instructions. The exact number of x86 instructions is hard to pin down, but it is probably closer to 1,000. Arm's instruction set is thought to be considerably smaller than x86's, though still much larger than RISC-V's. But Ditzel said a reduced instruction set alone wasn't enough to achieve the computing power Esperanto was after. "Most RISC-V cores aren't that small or that energy efficient. So it wasn't just a matter of us taking a RISC-V core and putting 1,000 of them on a chip. We had to completely redesign the CPU to fit it into those very tight constraints."
 
When Ditzel and his colleagues began their work, the "vector" instructions needed to efficiently do machine learning math, such as matrix multiplication, were noticeably missing from the RISC-V instruction set. So the Esperanto engineers came up with their own approach, reflected in the architecture of their processor core, the ET-Minion. The core includes units that execute 8-bit integer vector operations as well as 32-bit and 16-bit floating-point vector operations. There are also units that perform more complex "tensor" instructions, systems for moving data around efficiently, and instructions tied to how the ET-Minion cores are arranged on the chip.
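The work those vector units do reduces, at bottom, to multiply-accumulate loops over low-precision numbers. As a rough illustration (not Esperanto's code, just the semantics an 8-bit integer vector unit implements many lanes at a time), here is a scalar sketch of an int8 dot product with a wider accumulator:

```python
# What an 8-bit integer vector unit accelerates: multiply int8 operands
# and accumulate into a wider register so the products can't overflow.
# Pure-Python illustration of the semantics, one "lane" at a time;
# hardware does many lanes per clock cycle.

def int8_dot(a: list[int], b: list[int]) -> int:
    assert len(a) == len(b)
    acc = 0  # a 32-bit accumulator in hardware; an unbounded int here
    for x, y in zip(a, b):
        assert -128 <= x <= 127 and -128 <= y <= 127, "operands must fit in int8"
        acc += x * y
    return acc

# A tiny quantized "neuron": weights and activations stored as int8
weights = [127, -64, 3, 12]
activations = [1, 2, 3, 4]
print(int8_dot(weights, activations))  # 127 - 128 + 9 + 48 = 56
```

A neural-network layer is essentially many such dot products; keeping operands at 8 bits while accumulating wide is what lets inference hardware trade precision for throughput and energy.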
 
The resulting system-on-chip, the ET-SoC-1, consists of 1,088 ET-Minion cores and four cores called ET-Maxions, which help manage the Minions’ work. The chip’s 24 billion transistors occupy 570 square millimeters. That makes it about half the size of the popular AI accelerator Nvidia A100. The two chips follow very different philosophies.
 
The ET-SoC-1 is designed to accelerate AI in power-constrained data centers, mounted on an accelerator board that fits into a server's Peripheral Component Interconnect Express (PCIe) slot. Such a board has only 120 watts of power available, yet it must deliver at least 100 trillion operations per second to be worthwhile. Esperanto managed more than 800 trillion operations per second within that power envelope.
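A quick sanity check on those figures: the numbers cited work out to roughly 6.7 trillion operations per second per watt, eight times the stated minimum. An illustrative bit of arithmetic, using only the values from the article:

```python
# Back-of-the-envelope check of the board-level figures cited above.
# Illustrative arithmetic only; all three constants come from the article.

POWER_BUDGET_W = 120   # power available to a PCIe accelerator board
REQUIRED_TOPS = 100    # minimum useful throughput, trillions of ops/s
ACHIEVED_TOPS = 800    # what Esperanto reports within that budget

efficiency_tops_per_watt = ACHIEVED_TOPS / POWER_BUDGET_W
headroom = ACHIEVED_TOPS / REQUIRED_TOPS

print(f"Efficiency: {efficiency_tops_per_watt:.2f} TOPS/W")  # ~6.67 TOPS/W
print(f"Headroom over requirement: {headroom:.0f}x")         # 8x
```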
 
"Most AI accelerators are built around a single chip that takes up most of the power budget of the board," Jayesh Iyer, chief architect at Esperanto.ai, told technologists at the RISC-V Summit in December. "Esperanto's approach is to use multiple low-power chips, which still fits within the power budget."
 
Running a recommendation-system benchmark neural network with six chips on the board, each chip consumed 20 watts, less than one-tenth the power of an A100. That combination of power and performance comes from lowering the chips' operating voltage, without the performance sacrifice that usually entails. (Generally speaking, a higher operating voltage lets you run the chip's clock faster and complete more calculations.) At 0.75 volts, the nominal voltage of the process used to manufacture the ET-SoC-1, a single chip would far exceed the board's power budget. Drop to around 0.4 volts, though, and six chips fit within 120 watts, delivering roughly a 4x improvement in recommendation-system performance over a single high-voltage chip. At that voltage, each ET-Minion core consumes only about 10 milliwatts.
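The trade described here can be sketched with a textbook first-order model: dynamic power proportional to voltage squared times frequency, with clock frequency scaling roughly linearly with voltage. Those scaling assumptions are simplifications, not Esperanto's characterization data; only the 0.75 V, 0.4 V, 120 W, and 20 W figures come from the article:

```python
# First-order sketch of the voltage-scaling trade described above.
# Assumes dynamic power ~ V^2 * f, and that clock frequency f scales
# roughly linearly with supply voltage V in this range. Both are
# textbook simplifications, not measured silicon behavior.

V_NOMINAL = 0.75          # nominal process voltage (volts), per the article
V_LOW = 0.40              # reduced operating voltage (volts)
BOARD_BUDGET_W = 120      # PCIe board power budget
CHIP_POWER_LOW_V = 20.0   # per-chip power at low voltage, per the article

# Per-chip frequency and power relative to nominal voltage:
rel_freq = V_LOW / V_NOMINAL                 # ~0.53x clock speed
rel_power = (V_LOW / V_NOMINAL) ** 2 * rel_freq  # ~0.15x power (V^2 * f)

# Implied power of one chip run at full voltage:
nominal_chip_power = CHIP_POWER_LOW_V / rel_power  # ~132 W, over budget alone

chips_on_board = int(BOARD_BUDGET_W // CHIP_POWER_LOW_V)  # 6 chips fit
board_throughput = chips_on_board * rel_freq  # vs. one full-voltage chip
print(f"{chips_on_board} chips, ~{board_throughput:.1f}x a full-voltage chip")
```

Even this crude model lands in the right neighborhood: one full-voltage chip would exceed the 120-watt budget by itself, while six low-voltage chips deliver about 3x the throughput of one, in the same ballpark as the roughly 4x Esperanto reports from real silicon.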
 
“Low-voltage operation is key to the design of the Esperanto ET-Minion core,” Iyer said. “It informs decisions at both the architectural and circuit levels. For example, the core pipeline for RISC-V integer instructions uses a minimum number of logic gates per clock cycle, allowing higher clock rates at reduced voltages. When the core is performing long tensor calculations, this pipeline is shut down to save energy.”

Other AI processors


Other recently developed AI processors have also turned to a combination of RISC-V and custom machine learning acceleration. Ceremorphic, for example, recently unveiled its hierarchical learning processor, which pairs RISC-V and Arm cores with its own custom machine learning and floating-point units. Intel's upcoming Mobileye EyeQ Ultra will have 12 RISC-V cores and neural network accelerators on the chip, designed to provide the intelligence for Level 4 autonomous driving.
 
For embedded AI processor company Kneron, adopting RISC-V is both a business and a technology move. The company has been selling chips and intellectual property built around Arm CPU cores and its own custom accelerator infrastructure. But last November, Kneron released its first RISC-V-based technology in the KL530, designed to support autonomous driving with a relatively new type of neural network called a vision transformer. According to Kneron CEO Albert Liu, the RISC-V architecture makes it easier to pre-process neural network models so they run more efficiently. There was a business motive too: "Given the potential acquisition of Arm by Nvidia last year, this move reduces our risk from any business decision that may affect us," he said. That deal fell apart in February of this year, but it would have put Kneron's previous supplier of CPU core architecture in the hands of a competitor.
 
Future RISC-V processors will be able to use a community-agreed set of open source instructions to handle operations related to machine learning. RISC-V International, the body responsible for the core instruction set architecture and new extensions, approved a set of more than 100 vector instructions in December 2021.
 
“With the new vector instructions, people doing their own things in AI don’t have to start from scratch,” said Mark Himelstein, the group’s chief technology officer. “They can use the instructions that other companies are using. They can use the tools that other companies are using. And then they can innovate in terms of implementation, power consumption, performance or anything else.”
 
Even with the vector extensions approved, advancing machine learning remains a top priority for the RISC-V community, Himelstein said. Most development of machine learning-related RISC-V extensions is happening in the organization's Graphics Special Interest Group, which merged with the machine learning group "because they wanted the same things," he said. But other groups, such as those focused on high-performance and data center computing, are also eyeing machine learning-related extensions. Himelstein's job is to make sure all these efforts converge to the extent possible.
 
Despite RISC-V's success, Arm, which has been adding AI features of its own, remains the leader in many markets and will likely still be five years from now, when RISC-V is expected to hold about 15 percent of the CPU core design market. "It's not 50%, but it's not 5% either," said Rich Wawrzyniak, principal analyst at Semico Research. "If you think about how long RISC-V has been around, that's pretty rapid growth."

