
Cerebras: The company's chip is 10,000 times faster than a GPU

Latest update time: 2020-11-18

Source: This article was compiled from VentureBeat by Semiconductor Industry Observer (ID: icbank).


Cerebras Systems and the U.S. Department of Energy's National Energy Technology Laboratory (NETL) announced today that the company's CS-1 system is 10,000 times faster than a graphics processing unit (GPU).

In other words, this means that AI neural networks that previously took months to train can now be trained in just minutes on the Cerebras system.

Cerebras produces the world's largest computer chip, the Wafer Scale Engine (WSE). Chipmakers typically slice wafers from 12-inch-diameter silicon ingots and process them in chip factories, then cut each finished wafer into hundreds of individual chips for use in electronic hardware.

But Cerebras, founded by SeaMicro founder Andrew Feldman, builds one giant chip out of an entire wafer. Each part of the chip, called a core, is interconnected with the other cores, and the interconnects are designed to keep all the cores running at high speed so they can work together.

Cerebras' CS-1 system uses the wafer-sized WSE chip, which has 1.2 trillion transistors, the basic on-off electronic switches that are the building blocks of silicon chips. Intel's first processor, the 4004 from 1971, had 2,300 transistors, while the Nvidia A100 80GB chip announced yesterday has 54 billion.

The CS-1 is also 200 times faster than the Joule supercomputer, which ranks 82nd on the world’s top 500 supercomputers, Feldman said in an interview with VentureBeat.

“It shows record performance,” Feldman said. “It also shows that wafer-scale technology has applications beyond AI.”


The results stem from the radical approach taken by California-based Cerebras, which built a chip with 400,000 AI cores instead of slicing the wafer into individual chips. That unusual design makes the task much easier, Feldman said, because the processors and memory sit close together with plenty of bandwidth connecting them. Questions remain about how well the approach works across different computing tasks.

A paper published as a result of Cerebras' work with federal labs says the CS-1 can deliver performance that is unmatched by any number of central processing units (CPUs) and GPUs, both of which are commonly used in supercomputers. (Nvidia's GPUs are now used in 70% of the top supercomputers.) Feldman added, "This is true no matter how big the supercomputer is."

Cerebras will demonstrate it at the SC20 supercomputing online event this week. The CS-1 beat the Joule supercomputer in a workload related to computational fluid dynamics, which simulates the movement of fluids in places like carburetors. The Joule supercomputer cost tens of millions of dollars to build, has 84,000 CPU cores spread across dozens of racks and consumes 450 kilowatts of power.

Above: Cerebras has about a half-dozen supercomputing customers.

In this demonstration, the Joule supercomputer used 16,384 cores, and the Cerebras computer was 200 times faster, according to Brian Anderson, a director at the Energy Lab. The Cerebras machine costs several million dollars and uses 20 kilowatts of power.

“For these workloads, the wafer-scale CS-1 is the fastest machine ever made,” Feldman said. “And it’s faster than any other combination or cluster of other processors.”

A single Cerebras CS-1 is 26 inches tall and occupies one-third of a rack. It is powered by Cerebras' WSE, the industry's only wafer-scale processing engine, and combines memory performance with massive bandwidth, low-latency inter-processor communication, and an architecture optimized for high-bandwidth computing.

The results come after months of work led by NETL machine learning and data science engineer Dirk Van Essendelft and Cerebras co-founder and chief architect for advanced technologies Michael James.

In September 2019, the Department of Energy announced a partnership with Cerebras that includes deployments with Argonne National Laboratory and Lawrence Livermore National Laboratory.

The Cerebras CS-1 was announced in November 2019. Built around the WSE, the CS-1 is 56 times the volume, has 54 times more cores, 450 times more on-chip memory, 5,788 times more memory bandwidth, and 20,833 times more fabric bandwidth than the leading GPU competitor, Cerebras says.

Above: The Cerebras system at Lawrence Livermore National Laboratory

Feldman noted that the CS-1 can complete calculations faster than real time, meaning it can start a simulation of a power plant’s reactor core when a reaction begins and complete it before the reaction ends.

"These dynamic modeling problems have interesting characteristics," Feldman said. "They scale poorly between CPU and GPU cores. In the language of computational scientists, they do not exhibit 'strong scaling.' This means that, above a certain point, adding more processors to a supercomputer does not yield additional performance gains."
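The scaling limit Feldman describes can be illustrated with a simple Amdahl's-law-style model: a fixed-size problem has a serial fraction that no number of processors can shrink, and real machines also pay a communication cost that grows with processor count. The sketch below is purely illustrative; the serial fraction and overhead constants are hypothetical values chosen to show the shape of the curve, not Cerebras or Joule measurements.

```python
# Illustrative strong-scaling model (hypothetical constants, not measured data):
# Amdahl's law plus a communication overhead that grows with processor count.
def speedup(n, serial_frac=0.01, comm_cost=1e-5):
    """Speedup of n processors over 1 processor on a fixed-size problem.

    Run time is modeled as: serial part + parallel part / n + per-processor
    communication overhead. Beyond some n, the overhead term dominates and
    adding processors makes the run slower, i.e. strong scaling breaks down.
    """
    time_n = serial_frac + (1.0 - serial_frac) / n + comm_cost * n
    return 1.0 / time_n

# Speedup first rises with core count, then flattens and falls.
for n in (1, 16, 256, 4096, 16384):
    print(f"{n:6d} cores -> {speedup(n):6.1f}x")
```

In this toy model the speedup peaks at a few hundred cores and then declines, which is the "above a certain point, adding more processors does not yield additional performance gains" behavior the quote describes.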

Cerebras has raised $450 million and has 275 employees.




*Disclaimer: This article represents the author's personal opinion. Semiconductor Industry Observer reprints it only to convey a different point of view, not to endorse or support it. If you have any objections, please contact Semiconductor Industry Observer.


This is the 2497th issue of content shared by "Semiconductor Industry Observer". Welcome to follow.
