
Xilinx rides the wind, Vitis AI breaks the wave, good things come in pairs

Latest update: 2021-09-01 19:54





This summer,

Sisters Who Make Waves

and Nothing But Thirty both took off.

The smart home appliances in Gu Jia's home

turned plenty of onlookers green with envy in seconds.


But in fact,

humanity's exploration of artificial intelligence goes far beyond smart appliances.

What we want to talk about today, however, is

the undercurrent behind this flourishing of AI.


AI training vs. AI inference: which will win?


The core of "AI productization" is turning AI models into production-ready AI applications, and the arrival of this trend has greatly accelerated the field's development. As productization deepens, revenue from AI inference is expected to surpass AI training revenue before long.

Who will meet the growing demand for AI inference?


As the computing power required by AI models grows by orders of magnitude, demand for AI inference hardware has surged. With Moore's Law slowing, architectural innovation has become the new source of hope. Only domain-specific architectures (DSAs) can keep hardware on pace with the growing demand for AI inference: DSA represents the future of computing, tailoring adaptive hardware to each type of workload to achieve the highest operating efficiency.

Who can solve the problem of “AI productization”?

What DSA really means for AI inference is that each AI model we encounter calls for a slightly different, sometimes completely different, DSA, because every model is most efficient on an architecture customized for it. At the same time, AI use cases are growing rapidly: classification, object detection, segmentation, speech recognition, and recommendation engines are just some of the applications that have already been productized, and new ones emerge every day. Within each application, still more models are developed, either to improve accuracy or to simplify the model. Xilinx FPGAs and adaptive computing devices can adapt to the most advanced AI networks, from the hardware architecture up to the software layer, within a single node and a single device, saving substantial time and cost in bringing products to market.


Compared with high-end GPUs, Xilinx FPGAs and adaptive computing devices offer 8 times the internal memory, with a memory hierarchy that is fully customizable by the user. These capabilities are now accessible through the Vitis unified software platform, which brings AI and software development together and makes it easier for developers to accelerate their applications with C++/Python, AI frameworks, and libraries.
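To make that developer flow concrete, here is a minimal Python sketch of running inference through the Vitis AI runtime (VART), roughly as it would look on a Xilinx board with a DPU. The model file resnet50.xmodel, the random input batch, and the float32 data type are placeholder assumptions for illustration; a real deployment uses a model compiled by the Vitis AI compiler, with preprocessing and quantization scaling matched to that model.

```python
# Minimal sketch of Vitis AI inference via the VART Python API.
# Assumes the Vitis AI runtime (xir, vart) is installed on the target board
# and "resnet50.xmodel" is a placeholder for a DPU-compiled model.
import numpy as np
import xir
import vart

# Load the compiled model graph and pick out the DPU subgraph.
graph = xir.Graph.deserialize("resnet50.xmodel")
subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
dpu_subgraph = next(s for s in subgraphs
                    if s.has_attr("device") and s.get_attr("device").upper() == "DPU")

# Create a runner bound to that subgraph.
runner = vart.Runner.create_runner(dpu_subgraph, "run")

# Query the expected input/output tensor shapes from the runner.
in_tensor = runner.get_input_tensors()[0]
out_tensor = runner.get_output_tensors()[0]

# Placeholder input: a random batch matching the model's input shape.
# Quantized models typically expect int8 data with a fixed-point scale;
# float32 is used here only to keep the sketch simple.
input_data = np.random.rand(*tuple(in_tensor.dims)).astype(np.float32)
output_data = np.empty(tuple(out_tensor.dims), dtype=np.float32)

# Submit the job asynchronously and wait for completion.
job_id = runner.execute_async([input_data], [output_data])
runner.wait(job_id)

print("Top-1 class index:", int(np.argmax(output_data)))
```

The same runner can also be created from C++ through the VART API; the point of the sketch is that the DSA underneath is reprogrammed per model while the application code stays at the framework and library level.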

So...

Xilinx won the award again!



Recently, at the first China Artificial Intelligence Excellence and Innovation Award ceremony held by Electronics Enthusiasts, Xilinx and its Vitis AI won the "Most Influential Brand" and "Most Innovative Product" awards. Founded in 1984, Xilinx has been going strong for more than 30 years, and together with Vitis AI it keeps riding the wind and breaking the waves!



 