
A British startup's new chip idea could speed up computers 100-fold

Last updated: 2019-06-13

Figure 1: IPU accelerator card launched by Graphcore, a British artificial intelligence chip hardware design startup


Rather than building just another number-cruncher, British artificial intelligence (AI) chip design startup Graphcore is developing "brains" for computers, and these brains are better at guessing.


Simon Knowles, chief technology officer of Graphcore, smiles as he sketches his vision for the future of machine learning on a whiteboard. With a black marker, he dots and diagrams the "nodes" of the human brain that handle its contemplative, thinking work. His startup is trying to emulate these neurons and synapses in the next generation of computer processors, which the company is betting will help "mechanize intelligence."


AI is often thought of as complex software that mines massive data sets, but Knowles and his co-founder, Graphcore CEO Nigel Toon, believe the computers that run it remain a bigger hurdle. Sitting in an airy office in the British port city of Bristol, Knowles and Toon say the problem is that the chips themselves (which are classified as central processing units (CPUs) or graphics processing units (GPUs) based on their function) don’t “think” in any recognizably human-like way.


Whereas the human brain uses intuition to simplify certain problems, like identifying an approaching friend, a computer might try to analyze every pixel of that person’s face and compare it to a database of billions of images before attempting to say hello. This level of precision made sense when computers were primarily calculators, but it’s wildly inefficient for AI, requiring a lot of energy to process all the relevant data.


In 2016, Knowles and the more business-minded Toon founded Graphcore to build a chip with "not so precise" computing at its core, which they call the Intelligence Processing Unit (IPU). "The concepts in your brain are quite vague," Knowles said. "It is actually a collection of very approximate data points that allows you to have precise ideas." Knowles' English accent and frequent giggles invite comparisons to the headmaster of Hogwarts in the Harry Potter books.


There are all sorts of theories about why human intelligence developed the way it did. But for machine learning systems, which need to process large, irregular, unorganized structures of information (i.e., graphs), building chips specifically designed to connect data points like brain nodes may be the key to the continued evolution of AI. “We want to build a high-performance computer that can crunch numbers in a very imprecise way,” Knowles said.
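To make the "graph" idea concrete, here is a minimal illustrative sketch in plain Python (not Graphcore's Poplar software; the class and names are invented for this example) of a computation expressed as nodes connected by edges, the kind of irregular structure an IPU is meant to chew through:

```python
# Illustrative only: a miniature "computational graph" in plain Python.
# Each node holds an operation; its edges point at the upstream nodes
# whose results it consumes -- work that can fan out across many cores.

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # function applied at this node
        self.inputs = inputs  # edges to upstream nodes or constants

    def evaluate(self):
        # Pull values along incoming edges, then apply this node's operation.
        args = [n.evaluate() if isinstance(n, Node) else n for n in self.inputs]
        return self.op(*args)

# Build a small graph: (2 + 3) * (2 + 4)
add1 = Node(lambda a, b: a + b, 2, 3)
add2 = Node(lambda a, b: a + b, 2, 4)
prod = Node(lambda a, b: a * b, add1, add2)

print(prod.evaluate())  # -> 30
```

The two addition nodes have no edge between them, so in principle they could be evaluated on separate cores at the same time; that independence is what graph-shaped workloads expose.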


In other words, Graphcore is developing a "brain" for computers that, if its co-founders are right, will process information more like humans do, rather than through brute-force number crunching. "For decades, we've been telling machines what to do step by step, but we're not doing that anymore," explains Toon, describing how Graphcore's chips let machines learn: "It's like going back to the 1970s, when the microprocessor first came out, and we needed to reinvent Intel."


Investor Hermann Hauser, co-founder of Arm Holdings Plc, which controls the most widely used chip design, is betting that Knowles and Toon's IPU will usher in the next wave of computing: "This has only happened three times in computer history: CPUs in the 1970s, GPUs in the 1990s, and Graphcore's IPU is the third," he said.


Figure 2: IPU server racks at Graphcore’s office


Graphcore grew out of a series of seminars that Hauser organized in 2011 and 2012 at the Royal Society, the scientific fellowship whose members have included Isaac Newton and Charles Darwin. In the plush dining hall of King's College, AI experts, neuroscientists, statisticians and zoologists debated the impact of advanced computing on society.


Hauser believes Knowles "has a brain the size of the Earth," yet Knowles felt out of place in that "ivory tower," even though his career began at Cambridge University. After graduating in the 1980s, Knowles studied early neural networks at a British government research lab. He then co-founded the wireless processor startup Element 14 and sold it to Broadcom for $640 million in 2000.


Soon after, Knowles teamed up for the first time with Toon, who also had experience with semiconductor startups. In 2002, they founded Icera, a mobile chipmaker that they sold to Nvidia for $436 million less than a decade later. Neither was ready to retire ("Neither of us was very good at golf," Toon says), and they were discussing other ideas when Knowles left for the lecture series at Cambridge. "I was the scruffy guy in the room with a chimney hat who just wanted to build something," Knowles recalls. "You know: 'Forget thermodynamics, I want to build a steam engine!'"


When Steve Young, a Cambridge professor of information engineering who later sold a speech-processing company to Apple whose technology is now used in Siri, gave a talk on the limits of computational conversational systems, Knowles peppered him with questions about energy efficiency. "I asked him about the precision of the numbers used in the algorithm, which seemed a little off topic to Steve," Knowles says. But in silicon, he stressed, "the precision of the numbers is critical as a determinant of energy."


A few days later, Steve Young emailed Knowles to say that his students had looked into the matter and discovered that they had been using 64 bits of data for each calculation. They realized that they could perform the same functions with 8 bits, as Knowles had suggested, at the cost of some precision. When the computer has less math to do, it can use the energy saved to crunch more numbers. It's a bit like the human brain switching from calculating the GPS coordinates of a restaurant to just remembering its name and neighborhood.
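As a rough illustration of the trade-off Young's students found, the sketch below (plain Python with NumPy, not anything from Graphcore or Cambridge) quantizes 64-bit values down to 8 bits, performs the same dot product, and reports the small loss of accuracy alongside the 8x smaller memory footprint:

```python
# Minimal sketch: 8-bit arithmetic trades a little accuracy for an
# 8x smaller footprint, which is where the energy savings come from.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=1000)       # float64 by default
activations = rng.normal(size=1000)

def quantize(x):
    """Symmetric quantization: map the float range onto int8 [-127, 127]."""
    scale = np.abs(x).max() / 127.0
    return np.round(x / scale).astype(np.int8), scale

qw, sw = quantize(weights)
qa, sa = quantize(activations)

# Multiply-accumulate in integers, then rescale back to float.
exact = weights @ activations
approx = (qw.astype(np.int32) @ qa.astype(np.int32)) * sw * sa

print(f"float64 result: {exact:.4f}")
print(f"int8 result:    {approx:.4f}")
print(f"relative error: {abs(exact - approx) / abs(exact):.2%}")
print(f"memory: {weights.nbytes} bytes vs {qw.nbytes} bytes")
```

The answer drifts by a fraction of a percent, but every operand now costs one byte instead of eight to store and move.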


"If we built a processor better suited to this kind of work, we could get a thousand times more performance," Knowles said. Steve Young and others were so impressed that Knowles and Toon decided they had to create Graphcore. They began raising money to develop the idea as early as 2013 and unveiled the company to the world in 2016.


The semiconductor industry is currently debating the sustainability of Moore's Law, the 1960s observation that the number of transistors on a chip doubles roughly every two years. Graphcore's leaders are more concerned with a related principle called Dennard scaling, which held that as transistors shrink and their density increases, the power drawn per unit of chip area stays constant.
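For context, here is the standard back-of-the-envelope form of Dennard scaling (a textbook summary added here, not a formula from the article):

```latex
% Dynamic power of a CMOS block: P = C V^2 f.
% Classic Dennard scaling: shrink feature size by a factor k, so
% C -> C/k, V -> V/k, and f -> k f. Power per transistor then falls as
\[
  P' \;=\; \frac{C}{k}\left(\frac{V}{k}\right)^{2}(kf)
       \;=\; \frac{C V^{2} f}{k^{2}}
       \;=\; \frac{P}{k^{2}},
\]
% which exactly offsets the k^2 increase in transistor density, keeping
% power per unit area constant. Once supply voltage stopped scaling,
% that cancellation broke down and denser chips began running hotter.
```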


But that principle no longer holds: adding more transistors to a chip now means the chip gets hotter and uses more energy. To mitigate the problem, many chipmakers design their products so that they never use all of their processing power at once, running only the parts needed to support the application. The areas of the chip left switched off are called "dark silicon."


Knowles and Toon argue that unless circuits are fundamentally redesigned for efficiency, heat will become a major obstacle to making phones and laptops faster in the coming years. "We had to start from scratch, which almost never happens in chip design," said Daniel Wilkinson, who is responsible for Graphcore's chip architecture.


The challenge for the team of dozens of engineers was to design a chip that could harness all of its processing power at once while consuming less power than a state-of-the-art GPU. One of the biggest energy costs in silicon is moving and retrieving data, and historically processors have been separate from memory. Shuttling data back and forth between these components is "very energy intensive," Knowles says. Graphcore therefore designed what Knowles calls a "homogeneous fabric," which mixes the chip's logic with its memory so that far less energy is spent transferring data to other hardware.
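The sketch below uses rough, commonly cited per-operation energy figures for silicon of that era (illustrative assumptions, not Graphcore measurements) to show why fetching data from external memory dominates the energy budget, which is the motivation for mixing memory into the fabric:

```python
# Back-of-the-envelope sketch with illustrative energy figures only:
# the point is the ratio between on-chip and off-chip data access.
PJ = 1e-12  # one picojoule, in joules

energy_per_op = {
    "32-bit float multiply":    3.7 * PJ,   # the arithmetic itself is cheap
    "32-bit on-chip SRAM read":  5.0 * PJ,  # memory sitting beside the logic
    "32-bit off-chip DRAM read": 640 * PJ,  # data fetched from external memory
}

ops = 1e12  # a trillion operand accesses, a modest AI workload

for name, e in energy_per_op.items():
    print(f"{name:27s}: {e / PJ:6.1f} pJ each, {e * ops:8.1f} J per 10^12 ops")

# The ~100x gap between SRAM and DRAM accesses is why an architecture that
# spreads memory across the die, next to the processor tiles, spends so
# much less energy than one that shuttles tensors to and from DRAM.
```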


Over more than three years, Knowles and Toon ran hundreds of computer simulations of candidate chip layouts, eventually settling on a design with 1,216 processor cores, which Knowles describes as "a lot of little islands of processors" that spread the energy around. The resulting IPU debuted in 2018, a sleek-looking microchip with nearly 24 billion transistors that can access data at a fraction of the power a GPU needs. "Each chip uses 120 watts of power, about the same as a bright incandescent light bulb," Toon said, standing in a messy electronics lab at the Bristol headquarters and running his finger over the IPU's mirrored surface.


To test the prototype chip, the team fed it a standard training dataset containing millions of images labeled with common objects (fruit, animals, cars). An engineer then queried the IPU with a photo of his own cat, Zeus, and within an hour the computer had not only identified it correctly but also described Zeus' appearance. "The IPU was able to recognize that it was a tabby cat," Knowles said.


Since that first test, the IPU has gotten faster and can now recognize more than 10,000 images per second. The goal is for the chip to digest and reason over far more complex models of data, allowing the system to understand what a cat is at a more fundamental level. "We don't tell the machine what to do; we just describe how it should learn and give it a lot of examples and data," said Knowles. "It doesn't actually need supervision. The machine is exploring what it should do."


Figure 3: Graphcore’s first chip, Colossus


On the fifth floor of Graphcore's office, bulky industrial air conditioners blow cold air into the company's data server room, and curtains swing back and forth, letting in Bristol's unusual mid-May sunshine. Although the chips are installed in boxy servers the size of refrigerators and are very energy-efficient, the machines still generate a lot of heat. These IPU server racks can deliver 64 petaflops, the equivalent of 183,000 iPhone Xs running at top speed at the same time. Knowles and Toon nicknamed their IPU "Colossus," after the world's first programmable electronic computer, which the British government developed during World War II to crack encrypted German messages.


Graphcore has raised $328 million from investors including BMW, Microsoft and Samsung, and the company was valued at $1.7 billion in December. Graphcore declined to comment on specific applications for its chips, citing nondisclosure agreements, but given its investors many use cases seem obvious: self-driving cars, Siri-like voice assistants and cloud server farms. Knowles, though, is most interested in applications that could change humanity, such as the complex analysis scientists need for climate change and medical research, where the IPU could have an outsized impact.


To help large corporate customers work out how to build the next generation of computers around the chips, Graphcore provides server blueprints and bundles its products with free software tools. "We'll give you a recipe for computer design and sell you the ingredients," Toon said. The IPU relies on so-called parallel computing. Traditionally, writing a program means specifying what each processor should do, but as the number of processors multiplies (large Graphcore systems include about 5 million processor cores in total and can run nearly 30 million programs at a time), that task has outgrown manual programming: the processors must be programmed automatically so they can execute independently.


In layman's terms, Graphcore splits huge computing tasks into small data problems, each of which is processed separately on these "processor islands" and then synchronized like a Marine Corps marching band, sharing what they learn at the most efficient moment.
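A toy version of that split-compute-synchronize pattern, written in plain Python rather than Graphcore's actual tools, might look like this:

```python
# Toy sketch of the bulk-synchronous pattern described above: split a big
# job into shards, let each "island" compute independently, then
# synchronize and combine what they learned.
from concurrent.futures import ProcessPoolExecutor

def island_step(shard):
    """Each worker crunches its own slice of the data on its own."""
    return sum(x * x for x in shard)

def run(data, n_islands=4):
    # 1. Split the large task into small, independent problems.
    shards = [data[i::n_islands] for i in range(n_islands)]
    # 2. Compute phase: every island works on its shard in parallel.
    with ProcessPoolExecutor(max_workers=n_islands) as pool:
        partials = list(pool.map(island_step, shards))
    # 3. Synchronization phase: results are exchanged and combined only
    #    once everyone has finished -- the "most efficient moment."
    return sum(partials)

if __name__ == "__main__":
    print(run(list(range(1_000_000))))
```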


Tobias Jahn, chief investor at BMW's venture capital arm, envisions Graphcore chips being used in the company's data centers and perhaps its cars. "BMW is interested in making Graphcore a large-scale, global silicon supplier," he said. Self-driving cars must perform a multitude of critical tasks at once, and because relying on the cloud introduces delays, they are a key market for products like the IPU. Hauser, the Arm Holdings co-founder, estimates that each driverless car might need two IPUs. Graphcore says it expects $50 million in revenue in 2019.


Big-name competitors are also rushing into the space. Electric car maker Tesla Inc recently filed a patent for its own AI chip, and Google last year unveiled a microprocessor designed for machine learning. Nvidia has been revamping its main GPU chip design to make it less precise but more efficient, more like Graphcore’s approach.


“Everyone else is knocking on Nvidia’s door,” said Alan Priestley, an analyst at market research firm Gartner. “Graphcore has a big advantage, but it’s still a very small competitor compared to Nvidia’s market share. So while their IPUs may be better than Nvidia’s GPUs for these workloads, the risk they run is that customers often choose ‘good enough’ rather than ‘excellent.’”


If, as promised, the IPU enables machines 100 times more powerful than today's computers, another major challenge will be ethical. Toon and Knowles are wary of the dangers, especially of how these technologies could be misused for weapons and surveillance. Ultimately, though, they say it will be up to governments to set limits. "Mechanical power helped us invent airplanes and cars, but it also helped invent tanks," Knowles noted. "Over time, society is going to have to find a balance between good and evil."


For now, Graphcore is focused on developing more software to show customers the power of the IPU, while scaling the business to the point where it can eventually go public. The company opens a bottle of champagne for each major milestone, such as the $50 million financing round at the end of 2017 and $10 million in sales orders in 2018. Signs of that growth are everywhere in Graphcore's office, and the champagne bottles keep getting bigger.


They always toast with Pol Roger, a champagne they treat as a symbol of their hope that they may be helping to create Britain's first tech giant. "It starts with Pol Roger and ends with Pol Roger," said Knowles, who recently polished off a 9-liter bottle. "When you have an IPO, you open the biggest bottle of champagne you can," he said.


Source: Huanqiu.com

