
Vitis AI 2.5 takes AI acceleration to the next level

Latest update time:2022-06-22 12:21

Author: Guo Bingqing

AMD Software and AI Product Marketing Manager


Vitis AI, a popular AI acceleration development platform, officially released its latest version on June 15.


AI is playing an increasingly important role across workloads and device platforms. With strong market demand for Vitis AI from the data center to the edge, AMD Xilinx has focused on enriching and strengthening its capabilities to deliver faster AI acceleration. This article provides an overview of the new features and optimizations in Vitis AI 2.5.



The Vitis AI 2.5 model library adds popular NLP models and more CNN models, including BERT-base, Vision Transformer, end-to-end OCR, and the SuperPoint and HFNet models for SLAM scenarios. Through its acquisition of Xilinx, AMD has brought together complementary software and hardware strengths: Vitis AI 2.5 now supports 38 base and optimized models from the ZenDNN (Zen Deep Neural Network) library targeting AMD EPYC server processors. More AMD CPU users can therefore expect faster AI acceleration through Vitis AI.



The previous version, Vitis AI 2.0, introduced the Whole Graph Optimizer (WeGO) for the first time and received positive feedback from the developer community. By integrating the Vitis AI stack with mainstream AI frameworks, WeGO provides a convenient path for deploying AI models on the cloud Deep Learning Processor Unit (DPU). Vitis AI 2.5 extends WeGO with support for the PyTorch and TensorFlow 2 frameworks, and adds 19 new examples, covering image classification, object detection, and segmentation, to help users deploy AI models more smoothly on data center platforms.



The excellent AI acceleration performance of the AMD Xilinx platform rests on a family of powerful AI acceleration engines and easy-to-use software tools. Today, we provide scalable Deep Learning Processor Units (DPUs) for mainstream FPGAs, adaptive SoCs, Versal ACAPs, and Alveo™ data center accelerator cards.


With the release of Vitis AI 2.5, the Versal DPU IP supports multiple compute units on the Versal AI Core series VC1902 device, as well as the Depthwise convolution + LeakyReLU operator combination. The Zynq UltraScale+ MPSoC DPU IP adds Pool and Depthwise convolution functions implemented by the arithmetic logic unit (ALU), HardSigmoid and HardSwish activations, and the Depthwise convolution + LeakyReLU operator combination; it also supports large-kernel MaxPool, large-kernel AveragePool, rectangular-kernel AveragePool, and 16-bit constant weights.
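For readers unfamiliar with these operators, here is a minimal NumPy sketch (purely illustrative math, not DPU or Vitis AI code) of what a Depthwise convolution + LeakyReLU combination computes, along with the HardSigmoid and HardSwish activations mentioned above:

```python
import numpy as np

def depthwise_conv2d(x, w, stride=1):
    """Depthwise 2D convolution (valid padding): each input channel
    is convolved with its own single-channel filter."""
    c, h, width = x.shape
    _, kh, kw = w.shape
    oh = (h - kh) // stride + 1
    ow = (width - kw) // stride + 1
    out = np.zeros((c, oh, ow))
    for ch in range(c):
        for i in range(oh):
            for j in range(ow):
                patch = x[ch, i*stride:i*stride+kh, j*stride:j*stride+kw]
                out[ch, i, j] = np.sum(patch * w[ch])
    return out

def leaky_relu(x, alpha=0.1):
    """LeakyReLU: pass positives through, scale negatives by alpha."""
    return np.where(x >= 0, x, alpha * x)

def hard_sigmoid(x):
    """Piecewise-linear approximation of sigmoid: clip(x/6 + 0.5, 0, 1)."""
    return np.clip(x / 6.0 + 0.5, 0.0, 1.0)

def hard_swish(x):
    """HardSwish: x * hard_sigmoid(x)."""
    return x * hard_sigmoid(x)

x = np.random.randn(3, 8, 8)   # 3 channels, 8x8 feature map
w = np.random.randn(3, 3, 3)   # one 3x3 filter per channel
y = leaky_relu(depthwise_conv2d(x, w))
print(y.shape)  # (3, 6, 6)
```

Fusing the convolution and activation into one operator combination, as the DPU IP does, avoids writing the intermediate feature map back to memory between the two steps.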


The data center DPU IP now supports larger-kernel Depthwise convolution (from 1x1 up to 8x8), AI Engine-based pooling, ElementWise addition and multiplication, and large-kernel pooling, meeting a wider range of cloud AI application development needs.
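As a small illustration of one of these pooling variants, the NumPy sketch below (again, the math only, not DPU code) shows average pooling with a rectangular kernel, where the kernel height and width differ:

```python
import numpy as np

def avg_pool2d(x, kh, kw, stride=None):
    """Average pooling over a 2D map with a possibly rectangular
    kernel (kh x kw). Stride defaults to the kernel size
    (non-overlapping windows)."""
    sh, sw = (kh, kw) if stride is None else stride
    h, w = x.shape
    oh = (h - kh) // sh + 1
    ow = (w - kw) // sw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = x[i*sh:i*sh+kh, j*sw:j*sw+kw].mean()
    return out

x = np.arange(24, dtype=float).reshape(4, 6)
y = avg_pool2d(x, 2, 3)   # rectangular 2x3 kernel
print(y)  # [[ 4.  7.] [16. 19.]]
```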


This article only briefly covers the main features of Vitis AI 2.5. For a complete introduction, please visit the official website.

To learn more about the AI model library, quantizer, optimizer, compiler, DPU IP, WeGO, and Whole Application Acceleration (WAA), please visit the Vitis AI zone.

Visit the Vitis AI GitHub repository now to get the latest tools and images and try it out!
