In 2012, two major events shook the AI world. The first, chronologically, was the debut of Google Brain, Google's long-standing research group: a deep learning network nicknamed "Google Cat" that could recognize cats, with a recognition rate of 74.8%, which was 0.8 percentage points higher than the 74% achieved by the previous year's winning algorithm in the well-known ImageNet image recognition competition.
But Google's glory lasted only a few months. In December 2012, the winner of the latest ImageNet competition was announced: deep learning master Geoffrey Hinton and his students had used the convolutional neural network AlexNet to push recognition accuracy to 84%, kicking off the AI revolution of the following decade. Google Cat was buried in the dust of history.
Hinton with two students, 2012
It wasn't just the ImageNet-winning model itself that shocked the industry. This neural network, which needed 14 million images and a total of 262 petaFLOPs of floating-point computation to train, used only two Nvidia GeForce GTX 580s over a week of training. For reference, Google Cat had used 10 million images, 16,000 CPUs, and 1,000 computers [1].
It is rumored that Google also quietly entered the competition that year, and the shock it felt showed directly in its subsequent actions:
Google spent $44 million to acquire Hinton's team and immediately placed large orders with Nvidia for GPUs for AI training, while Microsoft, Facebook, and other giants rushed to buy as well.
Nvidia became the biggest winner, with its stock price rising 121 times in the next 10 years.
An empire was born.
But two dark clouds gradually gathered over the empire.
Google, even as it stocked up on Nvidia hardware, made a stunning debut with AlphaGo three years later and defeated world champion Ke Jie in 2017.
Keen observers noticed that the chip driving AlphaGo was no longer an Nvidia GPU but Google's self-developed TPU.
Three years after that, a similar story played out again. Tesla, once regarded by Jensen Huang as a benchmark customer, also bid farewell to Nvidia's GPUs: it first launched the NPU-centered FSD chip for its cars, then unveiled the D1 chip for building AI training clusters. Nvidia had now lost, one after the other, two of the most important customers of the AI era.
By 2022, the global IT cycle had entered a downturn. Major cloud computing companies cut their data-center GPU procurement budgets, the cryptocurrency mining boom cooled off, and U.S. chip export restrictions made it impossible to sell high-end cards such as the A100 and H100 in China. Nvidia's inventory surged, and its stock price fell by two-thirds from its peak.
Then ChatGPT arrived at the end of 2022, and GPUs were once again snapped up as fuel for the "alchemy" of large models. Nvidia got a breather, but a third dark cloud followed. On April 18, 2023, the well-known technology outlet The Information broke the news:
Microsoft, the instigator of this AI wave, is secretly developing its own AI chip [2].
The chip, code-named Athena, is manufactured by TSMC on an advanced 5nm process, and Microsoft's R&D team for it has grown to nearly 300 people. The goal is obvious: replace the expensive A100/H100, provide a computing engine for OpenAI, and eventually grab a slice of Nvidia's cake through Microsoft's Azure cloud service.
Microsoft is currently the largest buyer of Nvidia's H100, and it was even rumored that it might book out the H100's entire production capacity. The breakup signal from Microsoft was a bolt from the blue: even in Intel's darkest days, none of its customers "dared" to design their own CPUs (except Apple, which does not sell its chips externally).
Although Nvidia currently dominates roughly 90% of the AI computing market with its combination of GPU + NVLink + CUDA, the first cracks in the empire have appeared.
01
GPU not designed for AI
From the beginning, GPUs were not designed for AI.
In October 1999, NVIDIA released the GeForce 256, a graphics processing chip built on TSMC's 220-nanometer process and integrating 23 million transistors. NVIDIA coined the acronym "GPU" from Graphics Processing Unit and billed the GeForce 256 as "the world's first GPU," cleverly defining a new category and planting the term in users' minds to this day.
At the time, artificial intelligence had been dormant for many years, especially in the field of deep neural networks. Future Turing Award winners such as Geoffrey Hinton and Yann LeCun were still warming the academic bench, never imagining that their careers would be completely transformed by a chip originally built for gamers.
What was the GPU born for? Images. More precisely, it was born to relieve the CPU of the drudgery of graphics rendering. The basic principle of rendering is to break each frame into pixels and run them through several stages, such as vertex processing, primitive processing, rasterization, fragment processing, and pixel operations, before finally displaying the result on screen.
The processing pipeline from pixels to a finished image. Source: Graphics Compendium
Why call this drudgery? Do a simple arithmetic exercise:
Suppose the screen has 300,000 pixels. At a frame rate of 60 fps, 18 million pixel renderings must be completed every second, each involving the five steps above, or roughly five instructions. In other words, the CPU would have to execute about 90 million instructions per second just to keep the screen drawn. For reference, Intel's highest-performance CPU at the time managed only about 60 million operations per second.
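As a quick sanity check of the arithmetic above (the figure of five instructions per pixel is the article's own simplification), the numbers work out as follows:

```python
# Back-of-the-envelope estimate of the per-second rendering workload,
# using the article's simplified assumption of 5 instructions per pixel.
pixels_per_frame = 300_000       # assumed screen resolution
frames_per_second = 60           # target frame rate
instructions_per_pixel = 5       # one per pipeline stage (a simplification)

pixel_renderings_per_second = pixels_per_frame * frames_per_second
instructions_per_second = pixel_renderings_per_second * instructions_per_pixel

print(f"{pixel_renderings_per_second:,} pixel renderings per second")  # 18,000,000
print(f"{instructions_per_second:,} instructions per second")          # 90,000,000
```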
It is not that the CPU is weak; it is simply optimized for scheduling and control. Most of its die area goes to control logic and caches, leaving only about 20% for compute units. In a GPU, by contrast, more than 80% of the die area is compute units, giving it massive parallel throughput and making it far better suited to fixed-step, repetitive, tedious work such as drawing images.
Internal structure of a CPU versus a GPU; the green areas are compute units
It took a few more years for AI researchers to realize that hardware with these characteristics was also well suited to training deep learning models. Many classic deep neural network architectures had been proposed as early as the second half of the 20th century, but for lack of hardware powerful enough to train them, much of the research stayed on paper and progress stagnated for a long time.
The starting gun fired in October 1999 eventually brought GPUs to artificial intelligence. Training a deep network means passing each input through layer after layer of functions and parameters until an output value emerges. Like graphics rendering, this boils down to enormous numbers of matrix operations, which happen to be exactly what GPUs do best.
A typical deep neural network architecture. Source: Towards Data Science
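To make the point concrete, here is a minimal sketch (the layer sizes and activation are arbitrary choices, not from the article) of a forward pass through a small fully connected network, which is nothing more than a chain of matrix multiplications:

```python
import numpy as np

# A tiny fully connected network: every layer is a matrix multiply
# followed by a nonlinearity. Sizes here are arbitrary.
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 784))      # a batch of 64 input vectors
W1 = rng.standard_normal((784, 256))    # layer 1 weights
W2 = rng.standard_normal((256, 10))     # layer 2 weights

h = np.maximum(x @ W1, 0)               # matrix multiply + ReLU
y = h @ W2                              # another matrix multiply
print(y.shape)                          # (64, 10)
```

Training repeats such multiplications, plus their gradients, billions of times, which is why hardware built for parallel matrix math matters so much.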
Graphics workloads, however, while enormous in volume, consist mostly of fixed steps. Once deep neural networks are applied to decision-making, they involve branching structures and other complications, and the parameters of every layer must be continually revised based on positive and negative feedback from massive amounts of data.
These differences planted hidden risks for how well GPUs would adapt to AI down the road.
Kumar Chellapilla, today a general manager of AI/ML at Amazon, was the first researcher to take the bait. In 2006 he used an Nvidia GeForce 7800 graphics card to implement a convolutional neural network (CNN) and found it ran four times faster than on a CPU, the earliest known attempt to use GPUs for deep learning [3].
Kumar Chellapilla and the NVIDIA GeForce 7800
Chellapilla's work did not attract widespread attention, largely because programming GPUs at the time was dauntingly complex. But in 2007 Nvidia launched the CUDA platform, which dramatically lowered the barrier for developers to train deep neural networks on GPUs and gave deep learning believers new hope.
Then in 2009, Andrew Ng and colleagues at Stanford published a groundbreaking paper [6] in which GPUs, offering roughly 70 times the computing power of CPUs, cut AI training time from weeks to hours. The paper pointed the way for the hardware side of artificial intelligence: GPUs dramatically accelerated AI's journey from paper to reality.
It is worth noting that Andrew Ng joined Google Brain in 2011 and was one of the leaders of the Google Cat project mentioned at the beginning. Why Google Brain ultimately did not use GPUs is unknown to outsiders, but around the time Ng left Google for Baidu, rumors circulated that it was because Google's attitude toward GPUs was ambivalent.
After countless explorations, the baton was finally handed to deep learning master Hinton, and the clock had turned to 2012.
That year, Hinton and two students, Alex Krizhevsky and Ilya Sutskever, designed a deep convolutional neural network, AlexNet, and planned to enter it in the ImageNet competition. The problem was that training AlexNet on CPUs could take several months, so they turned to GPUs.
The GPU that proved pivotal in the history of deep learning was the famous "nuclear bomb graphics card," the GTX 580. As the flagship of Nvidia's then-latest Fermi architecture, the GTX 580 was packed with 512 CUDA cores (108 in the previous generation). While its computing power leapt ahead, its outsized power consumption and heat output also earned Nvidia the nickname "Nuclear Bomb Factory."
One man's arsenic is another man's honey. Compared with the "smooth ride" of training neural networks on a GPU, the heat problem was hardly worth mentioning. Hinton's team completed the programming on Nvidia's CUDA platform, and with the support of two GTX 580 cards, training on 14 million images took only one week. AlexNet won the championship.
Thanks to the influence of the ImageNet competition and of Hinton himself, AI researchers everywhere grasped the importance of GPUs almost overnight.
Two years later, Google entered ImageNet with its GoogLeNet model and took the championship with 93% accuracy, using Nvidia GPUs. That year, the number of GPUs used across all participating teams soared to 110. Outside the competitions, the GPU became the "must-have purchase" of deep learning, sending Jensen Huang a steady stream of orders.
This let Nvidia shake off the shadow of its disastrous failure in mobile. After the iPhone's release in 2007, the smartphone chip market expanded rapidly, and Nvidia tried to grab a share from Samsung, Qualcomm, and MediaTek, but the Tegra processor it launched came to nothing because of heat problems. In the end, it was the field of artificial intelligence, itself rescued by the GPU, that handed Nvidia a second growth curve.
But GPUs were, after all, not designed for training neural networks, and the faster artificial intelligence developed, the more those problems were exposed.
For example, however different GPUs and CPUs may be, both fundamentally follow the von Neumann architecture, in which storage and computation are separate. The efficiency bottleneck this separation creates is tolerable in graphics, where the processing steps are relatively fixed and can be hidden by more parallelism, but it is deadly in neural networks full of branching structures.
Every additional layer or branch in a neural network requires another trip to memory to store data for later use, and the time this takes is unavoidable. In the era of large models especially, the bigger the model, the more memory accesses it performs, and the energy spent on moving data ends up many times higher than the energy spent on the computation itself.
A simple metaphor: the GPU is a muscle-bound strongman (lots of compute units), but for every task he receives he has to walk back and consult the manual (memory). As models grow larger and more complex, the time the strongman spends on actual work shrinks, and he wears himself out flipping through manuals.
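A rough way to quantify the imbalance is to compare, for a single fully connected layer, the arithmetic performed against the bytes that must move through memory (the layer sizes and 16-bit data type below are illustrative assumptions):

```python
# Arithmetic-intensity estimate for one fully connected layer:
# FLOPs performed vs. bytes of weights/activations moved. Sizes are illustrative.
batch, d_in, d_out = 1, 4096, 4096
bytes_per_value = 2                                   # assuming 16-bit values

flops = 2 * batch * d_in * d_out                      # multiply-accumulate count
bytes_moved = bytes_per_value * (d_in * d_out + batch * (d_in + d_out))

print(f"FLOPs: {flops:,}")                            # 33,554,432
print(f"Bytes moved: {bytes_moved:,}")                # ~33.6 million
print(f"FLOPs per byte: {flops / bytes_moved:.2f}")   # ~1.0
# With a batch of 1 (typical of token-by-token inference), only about one
# arithmetic operation is performed per byte fetched, so the chip spends
# much of its time waiting on memory rather than computing.
```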
Memory is only one of the many "discomforts" GPUs suffer in deep neural network applications. Nvidia recognized these problems from the start and quickly began "hot-rodding" the GPU to better fit AI workloads, while well-informed AI players quietly tried to exploit the GPU's flaws to pry loose a corner of Jensen Huang's empire.
An offensive and defensive battle began.
02
The secret war between Google and Nvidia
Faced with overwhelming demand for AI computing power and the GPU's inherent flaws, Jensen Huang pursued two sets of countermeasures in parallel.
The first was to keep piling on raw computing power, in the spirit of "he who has the compute has boundless magic."
In an era when demand for AI computing power doubles roughly every 3.5 months, computing power is the carrot dangled in front of AI companies, which curse Jensen Huang's razor-sharp product segmentation even as they scramble like devoted suitors to snap up every bit of Nvidia's production capacity.
The second was to gradually resolve the mismatch between GPUs and AI workloads through incremental innovation.
These problems include, but are not limited to, power consumption, the memory wall, bandwidth bottlenecks, low-precision computation, high-speed interconnects, and model-specific optimization. Starting in 2012, Nvidia sharply accelerated the pace of its architecture updates.
After releasing CUDA, Nvidia adopted a unified architecture to serve both graphics and computing. The first such generation debuted in 2007 and was named Tesla, not because Jensen Huang wanted to flatter Elon Musk, but as a tribute to the physicist Nikola Tesla (the generation before it was the Curie architecture).
Since then, each generation of Nvidia's GPU architecture has been named after famous scientists, as shown in the figure below.
With each architectural iteration, Nvidia kept piling on computing power while making improvements that stopped short of tearing up the design.
For example, the second-generation Fermi architecture of 2011 suffered from strained heat dissipation, so the third-generation Kepler architecture of 2012 shifted the overall design philosophy from high performance to power efficiency to ease the heat problem; to address the "all brawn, no brains" problem described above, the fourth-generation Maxwell architecture of 2014 added more logic control circuitry for finer-grained control.
To adapt to AI workloads, Nvidia's modified GPUs were, in a sense, becoming more and more CPU-like. Just as the CPU's excellent scheduling ability comes at the expense of raw compute, Nvidia had to show restraint in stacking compute cores. But no matter how much a GPU burdened with general-purpose duties is improved, it struggles to match a specialized chip in AI scenarios.
The first to strike at Nvidia was Google, which had also been the first to buy GPUs at scale for AI computing.
After flexing its muscles with GoogLeNet in 2014, Google stopped publicly entering image recognition competitions and instead plotted the development of AI-specific chips.
In 2016, Google struck first with AlphaGo. After its victory over Lee Sedol, it promptly unveiled its self-developed AI chip, the TPU, catching Nvidia off guard with an architecture "born for AI."
TPU stands for Tensor Processing Unit.
If Nvidia's GPU tinkering amounts to robbing Peter to pay Paul, the TPU hands as much of the chip as possible over to computation by fundamentally reducing its storage and data-movement requirements. It does this in two main ways:
The first is quantization.
Modern computation usually uses high-precision data that takes up a lot of memory, yet most neural network calculations do not actually need the precision of 32-bit or 16-bit floating point. The essence of quantization is to approximate 32-bit or 16-bit numbers with 8-bit integers, keeping accuracy acceptable while cutting storage requirements.
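A minimal sketch of the idea, using simple symmetric linear quantization (this illustrates the general technique, not the TPU's exact scheme):

```python
import numpy as np

# Symmetric linear quantization: approximate float32 values with int8
# plus a single scale factor, then recover them approximately.
weights = np.random.default_rng(0).standard_normal(8).astype(np.float32)

scale = np.abs(weights).max() / 127.0                      # map the range onto int8
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
recovered = q.astype(np.float32) * scale

print("max abs error:", float(np.abs(weights - recovered).max()))
print("storage:", weights.nbytes, "bytes ->", q.nbytes, "bytes")   # 32 -> 8
```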
The second is the systolic array, that is, the matrix multiplication array, which is one of the most critical differences between the TPU and the GPU. Put simply, neural networks require vast numbers of matrix operations. A GPU has to break a matrix computation down, step by step, into many vector computations, returning to memory to store the intermediate results of each group until all the vector computations are finished and can be combined into the layer's output.
In the TPU, thousands of compute units are wired directly together into a matrix multiplication array that performs matrix calculations natively. Apart from loading the data and parameters at the start, there is no need to touch the storage units, which sharply cuts the frequency of memory access, greatly speeds up the TPU's computation, and significantly reduces both energy consumption and physical footprint.
Comparison of CPU, GPU, and TPU memory access times
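The following toy simulation shows the data flow the text describes: the weights stay resident in the array while inputs stream through and partial sums accumulate inside it, so no intermediate results go back to memory (the array size and scheduling details are simplified assumptions, not Google's actual design):

```python
import numpy as np

# Toy "weight-stationary" systolic matrix multiply: B's weights are held in
# the array, rows of A stream across it, and partial sums flow down the
# columns, so intermediate results never leave the array.
def systolic_matmul(A, B):
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N))
    for m in range(M):                  # stream one row of A at a time
        partial = np.zeros(N)           # partial sums travelling down the columns
        for k in range(K):              # cell row k multiplies by its stored weights
            partial += A[m, k] * B[k, :]
        C[m] = partial                  # the finished row pops out of the array
    return C

A = np.random.rand(4, 3)
B = np.random.rand(3, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)
```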
Google developed the TPU remarkably quickly: design, verification, mass production, and deployment into its own data centers took only 15 months. In tests, the TPU's performance and power efficiency on AI workloads such as CNNs, LSTMs, and MLPs far outstripped Nvidia's GPUs of the same period.
All the pressure suddenly landed on Nvidia.
Being stabbed in the back by a major customer stings, but Nvidia was not about to stand there and take the beating, and a tug-of-war began.
Five months after Google unveiled the TPU, Nvidia launched the Pascal architecture on a 16nm process. On one hand, the new architecture introduced the famous NVLink high-speed bidirectional interconnect, greatly increasing connection bandwidth; on the other, it borrowed the TPU's quantization approach, improving the computational efficiency of neural networks by lowering data precision.
In 2017, Nvidia launched Volta, its first architecture designed specifically for deep learning, which introduced the Tensor Core, a unit dedicated to matrix operations. Its 4x4 multiply array looks a bit shabby next to the TPU's 256x256 systolic array, but it was the compromise required to preserve flexibility and generality.
4x4 matrix operation implemented by TensorCore in NVIDIA V100
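The sketch below mimics, in plain NumPy, the kind of primitive a Tensor Core executes in hardware: D = A x B + C on small tiles, with low-precision inputs and higher-precision accumulation (a software analogy with assumed tile handling, not Nvidia's implementation):

```python
import numpy as np

# Software analogy of the Tensor Core primitive: D = A @ B + C on 4x4 tiles,
# with float16 inputs accumulated in float32. A large matrix multiply is
# carved into many such tile operations.
def tensor_core_mma(a_tile, b_tile, c_tile):
    a16 = a_tile.astype(np.float16)            # low-precision multiplicands
    b16 = b_tile.astype(np.float16)
    return a16.astype(np.float32) @ b16.astype(np.float32) + c_tile  # fp32 accumulate

def tiled_matmul(A, B, tile=4):
    M, K = A.shape
    _, N = B.shape
    C = np.zeros((M, N), dtype=np.float32)
    for i in range(0, M, tile):
        for j in range(0, N, tile):
            for k in range(0, K, tile):        # accumulate over the K dimension
                C[i:i+tile, j:j+tile] = tensor_core_mma(
                    A[i:i+tile, k:k+tile], B[k:k+tile, j:j+tile], C[i:i+tile, j:j+tile])
    return C

A = np.random.rand(8, 8).astype(np.float32)
B = np.random.rand(8, 8).astype(np.float32)
assert np.allclose(tiled_matmul(A, B), A @ B, atol=1e-2)   # fp16 rounding tolerance
```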
NVIDIA executives declared to customers:
"Volta is not an upgrade of Pascal, but a completely new architecture."
Google, too, was racing against time. After 2016 it updated the TPU three times in five years, launching TPUv2 in 2017, TPUv3 in 2018, and TPUv4 in 2021, and threw its numbers in Nvidia's face [4]:
TPU v4 is 1.2 to 1.7 times faster than Nvidia's A100 while consuming 1.3 to 1.9 times less power.
Google does not sell TPU chips externally and continues to buy Nvidia GPUs in volume, which keeps the AI chip rivalry between the two a contest fought in the shadows rather than in the open. But Google does deploy TPUs in its own cloud platform to sell AI computing services to the outside world, which undeniably shrinks Nvidia's potential market.
Google CEO Sundar Pichai demonstrates TPU v4
While the two fought in the shadows, artificial intelligence kept advancing at full speed. In 2017 Google proposed the revolutionary Transformer model, and OpenAI soon built GPT-1 on top of it. A large-model arms race broke out, and demand for AI computing power entered its second acceleration since the arrival of AlexNet in 2012.
Sensing the new trend, Nvidia launched the Hopper architecture in 2022, introducing a hardware-level Transformer acceleration engine for the first time and claiming it could speed up the training of Transformer-based large language models by a factor of nine. On top of Hopper, Nvidia built the "most powerful GPU on the planet," the H100.
The H100 is Nvidia's ultimate Frankenstein's monster. On one hand, it packs in AI optimization techniques such as quantization, matrix computation (Tensor Core 4.0), and the Transformer engine; on the other, it is loaded with Nvidia's traditional strengths, such as 7,296 CUDA cores, 80GB of HBM memory, and NVLink 4.0 interconnect running at up to 900GB/s.
With the H100 in hand, Nvidia can breathe easy for now: no mass-produced chip on the market is more powerful.
The secret tug-of-war between Google and Nvidia has also been a story of mutual achievement: Nvidia has absorbed plenty of innovations from Google, while Google's cutting-edge AI research has benefited enormously from Nvidia's GPU advances. Together the two have pushed the cost of AI computing down to where large language models are, on tiptoe, within reach. Those now in the spotlight, such as OpenAI, likewise stand on the shoulders of these two.
But sentiment is sentiment and business is business. The offensive and defensive battle over the GPU has made the industry more certain of one thing:
GPUs are not the optimal solution for AI, and custom application-specific integrated circuits (ASICs) have the potential to break Nvidia's monopoly.
The crack has opened, and Google is naturally not the only one chiseling away at it.
Especially now that computing power has become the most certain demand of the AGI era, everyone wants a seat at the table where Nvidia is eating.
03
A widening crack
Besides OpenAI, two other companies have broken out in the current AI boom. One is the AI image generation company Midjourney, whose command of diverse painting styles frightens countless carbon-based artists; the other is Anthropic, founded by former OpenAI staff, whose conversational bot Claude has traded blows with ChatGPT.
Neither company, however, bought Nvidia GPUs to build its own supercomputer; both rely on Google's computing services.
To prepare for the explosion in AI computing demand, Google built a supercomputer (the TPU v4 Pod) from 4,096 TPUs interconnected by its self-developed optical circuit switches (OCS). It is used not only to train Google's own large language models such as LaMDA, MUM, and PaLM, but also to offer cheap, high-quality computing services to AI startups.
Google's TPU v4 Pod supercomputer
Tesla is also building its own supercomputer. After launching the in-car FSD chip, Tesla unveiled to the world in August 2021 the Dojo ExaPOD supercomputer, built from 3,000 of its own D1 chips, which are manufactured by TSMC on a 7nm process. Those 3,000 D1 chips made Dojo, by computing power, the fifth-largest computer in the world.
Yet even the two of these combined cannot match the impact of Microsoft's self-developed Athena chip.
Microsoft is one of Nvidia's largest customers. Its Azure cloud service has purchased at least tens of thousands of high-end A100 and H100 GPUs, which will have to support not only ChatGPT's enormous conversational load but also AI features across Bing, Microsoft 365, Teams, GitHub, SwiftKey, and other products.
Run the numbers and the "Nvidia tax" Microsoft must pay is astronomical, making self-developed chips all but inevitable. It is much like when Alibaba added up Taobao's and Tmall's future demand for cloud computing, databases, and storage, found an equally astronomical figure, and resolutely began backing Alibaba Cloud with a sweeping internal campaign to "de-IOE" (move off IBM, Oracle, and EMC).
Cutting costs is one side of it; vertical integration to create differentiation is the other.
In the smartphone era, Samsung produced and sold its own application processors, memory, and screens, which contributed greatly to its becoming the global Android hegemon. When Google and Microsoft design their own silicon, they likewise perform chip-level optimization for their own cloud services to set themselves apart.
So although Google's and Microsoft's AI chips, like Apple's and Samsung's, will not be sold externally, both companies will use "AI computing power as a cloud service" to absorb some of Nvidia's potential customers. Midjourney and Anthropic are early examples, and more small companies (especially at the AI application layer) will choose cloud services in the future.
The global cloud computing market is highly concentrated: the top five providers (Amazon AWS, Microsoft Azure, Google Cloud, Alibaba Cloud, and IBM) hold more than 60% of it, and all of them are building their own AI chips. Among them, Google is moving fastest, IBM has the deepest reserves, Microsoft carries the biggest impact, Amazon keeps the tightest secrecy, and Alibaba faces the greatest difficulties.
For China's tech giants designing their own chips, the fate of Oppo's Zeku unit casts a shadow over every new entrant. Overseas giants doing in-house silicon, by contrast, can use money to build the talent and technology supply chains they need: when Tesla developed the FSD chip it recruited Silicon Valley legend Jim Keller, and for the TPU Google brought in Turing Award winner David Patterson, the inventor of the RISC architecture.
Beyond the giants, some small and medium-sized companies are also trying to take a bite of Nvidia's cake, such as Graphcore, once valued at US$2.8 billion; China's Cambricon also belongs in this category. The table below lists the better-known AI chip design startups around the world.
The difficulty for AI chip startups is that they lack the deep pockets of the giants for sustained investment, and they cannot, like Google, be their own producer and customer. Unless their technical route is unique or their advantages exceptionally strong, they stand essentially no chance in a head-on fight with Nvidia, whose cost and ecosystem advantages can dispel nearly every customer concern.
Startups' impact on Nvidia is limited; Jensen Huang's real worry remains the big customers who refuse to behave.
Of course, the giants still cannot do without Nvidia. Even with the TPU in its fourth generation, Google continues to buy GPUs in bulk to supply computing power alongside it; and even with the formidable Dojo supercomputer, Musk still chose to buy 10,000 GPUs from Nvidia.
But Jensen Huang has already tasted the fair-weather friendship of big customers, courtesy of Musk. In 2018, Musk publicly declared that Tesla would develop its own automotive chip (it was using Nvidia's DRIVE PX at the time); Huang was grilled by analysts on a conference call and left in an awkward spot. Musk issued a "clarification" afterwards, but a year later Tesla still walked away from Nvidia without a backward glance [5].
The giants show no mercy when it comes to cutting costs. In the PC era, Intel sold its chips to manufacturers, but consumers retained real freedom of choice, which is why PC makers had to advertise "Intel Inside." In the cloud era, the giants can hide all the underlying hardware: when a customer buys, say, 100 TFLOPS of computing power in the future, can they tell which part came from a TPU and which from a GPU?
So Nvidia must ultimately face the question: the GPU was indeed not born for AI, but will the GPU be the optimal answer for AI?
Over the past 17 years, Jensen Huang has pulled the GPU out of the single scenario of gaming and image processing and turned it into a general-purpose computing tool: riding the mining wave when mining boomed, embracing the metaverse when the metaverse was hot, and now embracing AI. In each new scenario, the GPU keeps being "hot-rodded" in search of a balance between generality and specialization.
Looking back over Nvidia's past two decades, it has launched a long line of technologies that changed the industry:
the CUDA platform, Tensor Cores, RT Cores (ray tracing), NVLink, the cuLitho platform (computational lithography), mixed precision, Omniverse, the Transformer engine, and more.
These technologies have helped Nvidia transform from a second-tier chip company into the industry's largest by market capitalization, an inspiring feat.
But every era deserves its own computing architecture. Artificial intelligence is advancing so quickly that breakthroughs can be measured in hours. If AI is to penetrate human life the way PCs and smartphones once did, computing costs may need to fall by 99%, and GPUs may not be the only answer.
History tells us that no matter how prosperous an empire, it must still beware of that inconspicuous crack.
That concludes the article. Thank you for reading.
References
[1] ImageNet Classification with Deep Convolutional Neural Networks, Krizhevsky, Sutskever & Hinton
[2] Microsoft Readies AI Chip as Machine Learning Costs Surge, The Information
[3] High Performance Convolutional Neural Networks for Document Processing
[4] Google's Cloud TPU v4 provides exaFLOPS-scale ML with industry-leading efficiency
[5] Tesla's AI ambitions, Yuanchuan Research Institute
[6] Large-scale Deep Unsupervised Learning using Graphics Processors
Author: He Luheng/Boss Dai
Editor: Boss Dai
Visual design: Shu Rui
Editor in charge: Li Motian
This article is reproduced from Silicon Research Society. If you have any questions, please contact us at info@gsi24.com.