Intel China Research Institute Song Jiqiang: Evolution of chip technology in the intelligent era | Commemorating the 60th anniversary of the invention of the integrated circuit
Text | Ren Ran
Report from Leiphone.com (leiphone-sz)
Leifeng.com: On October 11, 2018, an academic conference commemorating the 60th anniversary of the invention of the integrated circuit was held at Tsinghua University in Beijing. Wang Yangyuan, an academician of the Chinese Academy of Sciences, Xu Juyan, an academician of the Chinese Academy of Engineering, Wei Shaojun, a professor at Tsinghua University, and other leading figures in China's semiconductor industry delivered reports or speeches at the conference.
Among them, Song Jiqiang, director of Intel China Research Institute, gave a speech titled "Evolution of Chip Technology in the Intelligent Era", which reviewed the evolution of Moore's Law over the past 60 years and analyzed the future directions of neuromorphic chips and quantum chips. Leifeng.com edited and condensed the speech without changing its original meaning.
Song Jiqiang: Ever since integrated circuits came into use, chips have supported the operation of the entire world. They first helped us improve productivity, letting us work faster and more smoothly; later they let us do all kinds of things on the move, with the devices in our hands knowing where we are and who we are. This has brought great changes to our lives.
We have now entered an era of intelligent computing, in which devices not only know where we are and who we are, but can also understand us through their "eyes", "ears" and even "brains".
In fact, in addition to Moore's Law proposed by Gordon Moore, "Bell's Law" proposed by Gordon Bell is also very famous in the integrated circuit industry. The content of Bell's Law is: the mainstream category of computing devices will undergo a major change approximately every 10 years, the size will be reduced by an order of magnitude, and the number of users will increase by at least an order of magnitude.
From the 1970s through the 1980s and into the 1990s, Bell's Law matched reality closely; then, in 2007, the iPhone's smartphone form factor and new interaction model made the smartphone the mainstream computing device, built around the usage model of smart connected devices.
However, as computing devices continue to develop, the era of intelligent computing has begun. Chips are evolving toward small size, low power consumption, wireless communication and wireless charging, and can be embedded in many products such as driverless cars and smart homes, filling everyday life with intelligent connected devices. In such an era of the Internet of Everything, there may well no longer be a single mainstream device category.
So what remains unchanged? These devices still collect, transmit and process data, then mine and integrate that data with intelligent algorithms to deliver greater value. Chips exist to serve users, and Intel's goal is to provide end-to-end support for this data flow through chips, rather than focusing only on PC chips.
Before we talk about the technological evolution of future chips, let's look at what has changed in the past 10 years. The following content is taken directly from Intel's public talk at the US government's ERI (Electronics Resurgence Initiative) summit three months ago.
First, 10 years ago Intel made a roadmap forecast, including estimates of new materials and new processes. Looking back now, most of the directions in that forecast were correct. In the process of advancing Moore's Law, about 80% of the work rests on materials innovation: we need to study not only how to make devices smaller, but also how to build them in different ways. The other 20% of the work is essentially progress in chemical processes, such as atomic layer deposition and atomic layer etching.
After so many attempts, we have gained a lot, but we have also found many errors: graphene transistors did not appear as expected, and many predictions with specific dates were generally inaccurate. For example, the sentence "Silicon lattice is ~ 0.5nm, hard to imagine good devices smaller than 10 lattices across – reached in 2020" actually means that as the process becomes smaller and smaller, it becomes increasingly difficult for us to control and produce semiconductor chips, not that the silicon semiconductor process will end in 2020.
According to Moore's Law, the miniaturization of CMOS will continue, but it will be achieved through different methods such as materials and chemical processes. More importantly, we can also stack transistors through three-dimensional design. In addition, there are some new functions and new circuit control methods that can keep Moore's Law going.
However, this is not enough. There will be many types of applications in the future. How can we quickly meet different applications? We have entered the 7nm era, and the speed of process changes is slowing down. To quickly respond to many different applications, we need heterogeneous integration methods and new data processing algorithms such as AI.
Intel's summary of the past years is that we have not reached the physical limit yet. We have made 2nm wide transistors and 5nm wide connections, but it is not enough to just make components smaller. The most important issue is how to produce tens of billions or hundreds of billions of transistors at the same time under precise processes.
In addition, we have to deal with various changing needs. We need to understand how to quickly solve these problems through various integration methods, various new architectures, and new data processing methods, rather than relying solely on CMOS miniaturization technology. Moore's Law will continue to evolve, but it will move forward in different forms and ways.
If we map Intel's past research onto its product lines, we can see many results at different nodes. A direct example is the high-k metal gate at the 45nm node, which used a new chemical process and new materials to build a device with a new structure. The 3D FinFET transistor at the 22nm node is another example. We now have a clear picture of transistor manufacturing at the 5nm node.
Intel has been working with industry and academia to track the evolution of semiconductor technology and to evaluate the performance of semiconductor devices every year. Each point on the chart represents a new device, and the two axes are power consumption and switching performance. We want devices to sit in the lower-left corner of the chart, while the upper-right corner holds the emerging spin-based magnetic devices. The stability and switching speed of magnetic devices are still worse than those of today's electrical devices, but judging from the trend of the past few years, we have found some promising ways to improve their switching performance and optimize circuit connections.
At the same time, through these studies, we can further discover how to better use transistors in circuits and how to combine new transistors with new architectures and new functions. Intel concluded from the statistical chart that CMOS is still in a very good position at present, and its power consumption and performance are better than most semiconductor components. At least in the last 10 years, we still have to manufacture chips mainly based on CMOS, and other new technologies can be mixed with CMOS to improve performance, reduce power consumption or reduce prices.
Some people may ask: why do some seemingly good technologies fail to succeed in industry? From Intel's perspective on Moore's Law, this can be explained by the "user value triangle": economic benefit is a stronger driver than technical benefit. In the process of advancing Moore's Law, Intel also solves the problem of economic benefit in many different ways.
For example, in the development of computer systems, as CPU speeds rose rapidly, memory ran into problems of insufficient capacity, insufficient bandwidth and excessive latency. How should these be solved? The industry has long known how to reduce memory latency, but doing so is very expensive and far less economical than simply increasing memory capacity and density. So, from the standpoint of economic benefit, the industry ultimately chose to focus on the capacity problem.
On the other hand, 3D memory is a very difficult technology. Driven by economics, people finally successfully developed 3D NAND technology and 3D XPoint technology. Therefore, cooperation with industry and academic partners can solve many conflicts between technology and economic benefits that we encounter.
In terms of the heterogeneous integration mentioned above, Intel also has many technical achievements, which we call hybrid heterogeneous design. We can package dies made at different nodes in 2D/2.5D/3D while preserving interconnect bandwidth and reducing power consumption. This is also a key technology by which Intel continues to advance Moore's Law.
(Leifeng.com Note: The hybrid heterogeneous design here is the technology called EMIB (Embedded Multi-die Interconnect Bridge) announced by Intel at last year's Hot Chips conference)
We now know that a great diversity of terminal devices is just around the corner, but terminal devices are constrained by size, power consumption and price, so their computing headroom is limited. Ambient computing, or edge computing, is a scenario where computing power can be deployed to better effect. At the same time, cloud computing keeps growing in scale, and even a small slice of it represents enormous computing power: capturing even 0.1% of that demand is enough to justify developing a custom chip. That is why Alibaba and Google are both developing their own cloud computing processors.
Intel is currently promoting several of these areas at the same time. With so many different technologies moving forward together, we can have a very diverse future.
Judging from the DARPA curve, we are in the process of moving from the second stage to the third stage. For an intelligent system, we hope that its perception, learning, abstraction and reasoning abilities are strong. Only in this way can we consider it a truly intelligent system.
However, through deep learning, we have only raised the system's perception and learning capabilities to a relatively high level, while its abstraction and reasoning capabilities are even worse than the previous stage. The third stage of the DARPA curve actually aims to solve the problem of how to integrate these four capabilities and support such an intelligent system through hardware.
Let's take a real-life example. We call the system that integrates perception, learning, abstraction, and reasoning capabilities an "intelligent autonomous system," and a typical example of this system is driverless cars. Driverless cars must be able to see road signs in the environment and receive various signal instructions; they must also be able to guide actions based on what they observe and their own knowledge, and select exits, ramps, intersections, etc. in real time and accurately.
We can abstract these capabilities into three categories: perception, decision-making and action. The perception layer relies mainly on multi-channel vision, possibly plus multi-beam 3D lidar, so it requires strong parallel computing capability; the decision layer works mainly on the abstract information produced by the perception layer and must reason over knowledge and rules, so it requires strong serial computing capability; and the action layer must complete execution by a fixed point in time, so it requires strong real-time processing capability.
These three parts need to work simultaneously, and if we want this system to have the ability to learn and adapt, it must also be able to turn more unknown world states into known ones. Therefore, if we want to achieve the ambitious goal of intelligent computing, our chip industry still has a lot to do.
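To make the three-layer split more concrete, here is a minimal Python sketch of a perceive-decide-act loop for an autonomous system; all function names, gains, thresholds and the 10ms deadline are invented for illustration and do not come from Intel or any real vehicle stack.

```python
# Minimal sketch of a perceive -> decide -> act loop for an autonomous system.
# All function names, gains and thresholds are illustrative only.
import time
from concurrent.futures import ThreadPoolExecutor

def detect_lanes(frame):
    # perception: per-camera, highly parallel work (stubbed result)
    return {"lane_offset_m": 0.3}

def detect_signs(frame):
    # perception: another parallel channel (stubbed result)
    return {"speed_limit_kph": 60}

def decide(percepts, rules):
    # decision: serial reasoning over abstracted facts plus knowledge/rules
    limit = percepts["signs"]["speed_limit_kph"]
    return {"target_speed_kph": min(limit, rules["max_speed_kph"]),
            "steer": -percepts["lanes"]["lane_offset_m"] * rules["steer_gain"]}

def act(command, deadline_s=0.01):
    # action: must complete within a hard real-time deadline
    start = time.time()
    # ... send `command` to the actuators here ...
    assert time.time() - start < deadline_s, "missed real-time deadline"

rules = {"max_speed_kph": 50, "steer_gain": 0.5}
frame = object()  # stand-in for one camera image
with ThreadPoolExecutor() as pool:  # run perception channels concurrently
    lanes = pool.submit(detect_lanes, frame)
    signs = pool.submit(detect_signs, frame)
    percepts = {"lanes": lanes.result(), "signs": signs.result()}
act(decide(percepts, rules))
```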
Continuing along the existing, incremental line of research, we believe reconfigurable computing is a must. General-purpose and serial computing can be handled by the CPU, but parallel computing needs dedicated devices. Taking the FPGA as an example, we can use it to hardware-accelerate highly parallel computation. By combining general-purpose and customized hardware, we can provide acceleration for the diverse applications of the future.
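As a rough illustration of combining general-purpose and customized hardware, the sketch below routes data-parallel work to a hypothetical accelerator path and keeps control-heavy serial work on the CPU; `accelerator_parallel` is a placeholder emulated on the host, not a real FPGA API.

```python
# Sketch of routing work between a general-purpose CPU path and a
# hypothetical accelerator path (e.g., an FPGA offload).
import numpy as np

def cpu_serial(task):
    # control-heavy, branchy work stays on the CPU
    return sum(x * x for x in task)

def accelerator_parallel(task):
    # data-parallel work suits FPGA/GPU-style offload; emulated here
    # on the host with vectorized NumPy
    a = np.asarray(task)
    return float((a * a).sum())

def dispatch(task, parallel_fraction):
    # crude heuristic: offload only when most of the work is data-parallel
    # and the task is large enough to amortize the transfer cost
    if parallel_fraction > 0.8 and len(task) > 1_000:
        return accelerator_parallel(task)
    return cpu_serial(task)

print(dispatch(list(range(10_000)), parallel_fraction=0.95))
```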
With such a hardware foundation, we also need to consider how programmers will use it. Often, new chips fail not because of performance or technology, but because programmers cannot use them well, or do not know how to use them at all. They also need to be tied into system software so that the system can switch seamlessly to the accelerator; and most importantly, they need stronger security.
What if we jump out of the previous incremental thinking and use revolutionary thinking to solve intelligent computing? First, we need to change the computing model.
The traditional computing model is to draw a flowchart first and then program according to it; it relies on human thinking to solve the problem and write the program, and at that stage programmers were the most valuable contributors. Now, for perception tasks, programmers no longer know how to describe the perception computation, nor can they draw its flowchart; but we have large amounts of clearly labeled data, and we can train a computation through deep learning models. At this stage, the value of data scientists and algorithm engineers has multiplied.
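A small, self-contained contrast between the two models (with synthetic data, not from the talk): a hand-written rule that a programmer might derive from a flowchart, versus the same decision learned from labeled examples by gradient descent.

```python
# (1) a hand-written rule vs (2) the same computation learned from data.
import numpy as np

def handwritten_rule(x):
    return 1 if x > 0.5 else 0          # explicit human-designed threshold

# Labeled examples: readings above ~0.5 are labeled abnormal (1).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=200)
y = (X > 0.5).astype(float)

# Learn the decision with logistic regression trained by gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(w * X + b)))   # sigmoid prediction
    w -= 0.5 * np.mean((p - y) * X)      # cross-entropy gradient step
    b -= 0.5 * np.mean(p - y)

learned_rule = lambda x: 1 if 1 / (1 + np.exp(-(w * x + b))) > 0.5 else 0
print(handwritten_rule(0.7), learned_rule(0.7))   # both classify 0.7 as 1
```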
Going one step further, if we want to truly handle multiple computations such as perception, learning, abstraction, and reasoning like the human brain, we need to study neuromorphic computing. Neuromorphic computing can mimic the structure of the human brain, allowing multiple computing processes to proceed simultaneously, and can interact with the outside world to continue learning through observation and feedback.
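For intuition, here is a minimal leaky integrate-and-fire neuron of the kind neuromorphic hardware models in silicon; the leak, weight and threshold values are arbitrary, and real chips such as Intel's Loihi add on-chip learning rules on top of many such neurons.

```python
# A single leaky integrate-and-fire (LIF) neuron over a spike train.
def lif_neuron(input_spikes, leak=0.9, weight=0.4, threshold=1.0):
    v, out = 0.0, []
    for s in input_spikes:          # event-driven: state updates per timestep
        v = v * leak + weight * s   # leak, then integrate the incoming spike
        if v >= threshold:          # fire when membrane potential crosses
            out.append(1)
            v = 0.0                 # reset after the spike
        else:
            out.append(0)
    return out

print(lif_neuron([1, 1, 1, 0, 0, 1, 1, 1]))   # -> [0, 0, 1, 0, 0, 0, 0, 1]
```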
Even more powerful is quantum computing. Quantum computing can perform highly concurrent large-scale calculations through entangled quantum bits, but the current problem is that the state of quantum entanglement is extremely unstable and the calculation process is prone to errors. What we need to solve in the future is the error rate of quantum computing.
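For a feel of what "entangled qubits" means computationally, the NumPy sketch below simulates a two-qubit Bell state and samples measurements; it is a noiseless toy statevector model, so it ignores exactly the error problem described above, and it is not a real quantum SDK.

```python
# Build a Bell pair with a Hadamard and a CNOT, then sample measurements.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard gate
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])         # control = first qubit
I = np.eye(2)

state = np.array([1, 0, 0, 0], dtype=complex)         # |00>
state = CNOT @ (np.kron(H, I) @ state)                # (|00> + |11>)/sqrt(2)

probs = np.abs(state) ** 2
rng = np.random.default_rng(1)
samples = rng.choice(["00", "01", "10", "11"], size=10, p=probs)
print(samples)   # only "00" and "11" appear: the two qubits are correlated
```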
Intel has corresponding work under way in both areas. Let's first look at the neuromorphic chip: it is a non-von-Neumann chip that fully integrates storage and compute units, simulates the connections between neurons, and is controlled asynchronously. Neuromorphic chips can learn on-chip, supporting unsupervised, supervised, self-supervised and reinforcement learning modes. Intel already has 14nm and 10nm neuromorphic chip samples and is cooperating with domestic universities and enterprises to advance their development.
In quantum computing, Intel is researching two directions. One is the approach based on superconducting qubits that is currently widely used in academia; Intel has run a large number of experiments on 7-, 17- and 49-qubit chips. At the same time, Intel is also studying how to make users want to adopt new technologies when they emerge, and what prevents early adopters from becoming mass users.
These thoughts have gone beyond the thinking of chip technology itself. We hope that everyone will work with us to promote development in the post-Moore's Law era.