Do we really need more AI chips?
Source: Zhihu. Author: Max Lv. Our thanks to the author.
Once anything enters a bubble, people inevitably start worrying about when it will burst, and AI chips are now widely acknowledged to be in one.
From DianNao, the ASPLOS'14 work behind Cambricon, to Google's TPUv3 today, AI chips have achieved great success in just five years. Riding the explosive growth in demand for AI compute and the chorus proclaiming the end of Moore's Law, domain-specific architecture looks like the only way forward.
But as countless giants and start-ups design one similar AI chip after another, we have to answer this question: do we really need so many AI chips?
Software complexity
One unavoidable problem in the rapid development of AI chips is the exponential growth of software complexity. Many companies take two years or less to build a chip, only to find that it takes even longer to support the various frameworks, keep pace with advances in algorithms, and adapt to platforms ranging from mobile phones to data centers. If the window for deployment and mass production is missed, the chip will be obsolete soon after it is finished.
Unlike general-purpose architectures, specialized architectures such as AI chips must be designed together with their software stack and its optimization. Chip companies are routinely too optimistic when estimating the cost of software adaptation and tuning, hoping that middleware and compilers will solve every problem. In reality, companies from Intel to Google to Nvidia pour large numbers of software engineers into adapting to various platforms and hand-optimizing network performance. Among startups, it is common to see chips that taped out long ago yet whose delivery keeps being delayed.
Fundamentally, as we keep pushing the potential of a chip architecture, abstraction at the software layer becomes harder and harder, because models and parameters of the underlying architecture have to leak into the upper-level abstraction. The common practice today is to build middleware between the chip architecture and the upper-level software, but the cost of developing such middleware is routinely underestimated. A while ago, a former classmate at a chip startup asked me how much manpower and time it would take to develop an inference middleware like TensorRT. That is not an easy question to answer, so I asked how many resources they had for the project.
Surprisingly, his boss had budgeted only three or four people, on the assumption that they already had a low-level compiler and a set of high-level model-conversion tools, so an architecture-abstraction middleware of this kind would not take much effort. I suppose such an investment could produce a functionally complete product, but I do not believe the final product would hit its performance targets in real applications. After all, a chip is not made just to run benchmarks such as ResNet-50.
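To make the scope concrete, here is a minimal, hypothetical sketch of the layers such a middleware has to cover: graph import, target-aware optimization passes, and per-architecture code generation. The names (Node, Graph, fuse_conv_bn, Backend) are illustrative and are not TensorRT's API; the point is that every new operator, fusion pattern, and chip generation multiplies the work hiding behind each layer.

```python
# Hypothetical sketch of an inference middleware's skeleton; names are illustrative only.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Node:
    op: str                               # e.g. "conv2d", "batch_norm", "relu"
    inputs: list = field(default_factory=list)

@dataclass
class Graph:
    nodes: list                           # imported from an exchange format such as ONNX

def fuse_conv_bn(graph: Graph) -> Graph:
    """One of many target-aware passes: fold batch-norm into the preceding conv."""
    fused, skip = [], set()
    for i, n in enumerate(graph.nodes):
        if i in skip:
            continue
        nxt = graph.nodes[i + 1] if i + 1 < len(graph.nodes) else None
        if n.op == "conv2d" and nxt is not None and nxt.op == "batch_norm":
            fused.append(Node("conv2d_bn_fused", n.inputs))
            skip.add(i + 1)
        else:
            fused.append(n)
    return Graph(fused)

class Backend:
    """Per-architecture code generation; each ISA generation needs its own."""
    def compile(self, graph: Graph) -> Callable:
        ops = [n.op for n in graph.nodes]
        return lambda x: f"ran {ops} on input {x}"   # stand-in for real kernels

def build_engine(graph: Graph, backend: Backend) -> Callable:
    for pass_fn in (fuse_conv_bn,):        # real stacks run dozens of such passes
        graph = pass_fn(graph)
    return backend.compile(graph)

if __name__ == "__main__":
    g = Graph([Node("conv2d"), Node("batch_norm"), Node("relu")])
    engine = build_engine(g, Backend())
    print(engine("image_tensor"))
```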
Fragmentation
Writing one set of code that runs on every platform has been a long-standing wish of software engineers. The fragmentation brought by AI chips with different architectures will greatly dampen their enthusiasm for putting AI into real software products. Unlike what past experience suggests, the poor interpretability of deep learning introduces many unexpected defects. A common example: a proprietary model gives satisfactory results on a local CPU, but its results degrade noticeably once it is deployed to a particular device. How do you debug such problems, who is responsible for debugging them, what tools do you use, and can the debugging engineer even get access to the proprietary model? These questions are hard to answer.
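As an illustration, below is a minimal sketch of the kind of cross-backend consistency check a deployment team ends up writing just to localize such a discrepancy. Here run_cpu and run_device are placeholders for whatever inference APIs the two stacks actually expose, not any specific vendor's interface.

```python
# Hedged sketch: compare a reference backend against a device backend on the same inputs.
import numpy as np

def compare_backends(run_cpu, run_device, inputs, rtol=1e-3, atol=1e-5):
    """Feed identical inputs to both backends and report which ones diverge."""
    mismatches = []
    for i, x in enumerate(inputs):
        ref = np.asarray(run_cpu(x))
        out = np.asarray(run_device(x))
        if not np.allclose(ref, out, rtol=rtol, atol=atol):
            err = float(np.max(np.abs(ref - out) / (np.abs(ref) + atol)))
            mismatches.append((i, err))
    return mismatches

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((8, 8)).astype(np.float32)

    # Stand-ins: the "CPU" path runs in float32, the "device" emulates a lower-precision path.
    run_cpu = lambda x: x @ w
    run_device = lambda x: (x.astype(np.float16) @ w.astype(np.float16)).astype(np.float32)

    batch = [rng.standard_normal((4, 8)).astype(np.float32) for _ in range(5)]
    print(compare_backends(run_cpu, run_device, batch))
```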
Fragmentation also shows up in the way proprietary architectures often give up forward compatibility in order to squeeze out absolute performance. As mentioned above, the middleware faces a fragmented set of AI software frameworks on one end and generation after generation of chip architectures on the other. How do you maintain several partially incompatible instruction set architectures at the same time and guarantee that every software update fully covers every device? There is no answer other than investing more manpower. A common counterargument is to offer only short-term (2-3 year) software support, as today's consumer-grade chips do. But in the typical application areas of AI chips, such as smart cameras, industrial intelligence, and autonomous driving, a chip's life cycle can run as long as ten years. It is hard to imagine how large a company has to be to provide that kind of lasting technical support. And if a startup may not even survive two or three years, how can anyone confidently design its products into a mass-produced consumer car?
AI chips are just a transitional product
Speaking as a software engineer, I firmly believe that custom AI processors will prove to be a transitional product. A unified, programmable, and highly concurrent architecture should be the direction we pursue. Looking back over the past twenty years, we have watched the market for minicomputers with proprietary architectures shrink, watched graphics processors evolve into general-purpose vector processors, and we can see our phone and PC platforms converging as well. There is good reason to believe that pouring resources into custom AI chips today is by no means a good investment.