
CCF 2022: DPU Evaluation Technology White Paper Released [Attached White Paper Download]

Latest update time: 2022-07-31

The inaugural 2022 China Computer Federation Chip Conference, hosted by the China Computer Federation (CCF), was held in Nanjing from July 29 to 31, 2022. On the afternoon of July 30, Zhongke Yusu, a leading domestic DPU design and development company, organized a sub-forum on the theme "DPU Technology Trends and Applications". At this sub-forum, Zhongke Yusu, together with a number of companies and institutions, released a white paper on DPU evaluation technology, aiming to establish a fair DPU evaluation system for the industry.


DPU Technology Evaluation White Paper Released


As a leading domestic DPU developer, Zhongke Yusu previously led the completion of the industry's first DPU technology white paper and heads the DPU special working group of the New Generation Computing Standard Working Committee, making important contributions to DPU standardization. The release of the DPU evaluation white paper will likewise have far-reaching significance for the industry's evaluation system.


The metrics most commonly used to evaluate chips are PPA: performance, power, and area. These three dimensions can be used to compare the strengths and weaknesses of similar chip products. However, such a comparison is only meaningful when the chips are of the same type, for example server-class CPUs based on the x86 or Arm instruction set, or, even across instruction sets, CPUs of a comparable class that can run the same operating system. For chips of different categories, a PPA comparison is meaningless.
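To make the point concrete, the sketch below compares two hypothetical same-class CPUs on derived PPA ratios (performance per watt and per square millimeter). All figures are invented for illustration; the comparison only works because both chips can run the same workload and yield a comparable performance score, which is precisely the condition that is missing when comparing chips of different categories.

```python
# Illustrative only: PPA-style comparison of two hypothetical same-class CPUs.
# The scores below are invented; they are comparable only because both chips
# run the same workload under the same conditions.
chips = {
    "cpu_a": {"perf_score": 1000, "power_w": 150, "area_mm2": 400},
    "cpu_b": {"perf_score": 1200, "power_w": 180, "area_mm2": 450},
}

for name, c in chips.items():
    perf_per_watt = c["perf_score"] / c["power_w"]    # energy efficiency
    perf_per_mm2 = c["perf_score"] / c["area_mm2"]    # area (cost) efficiency
    print(f"{name}: {perf_per_watt:.2f} perf/W, {perf_per_mm2:.2f} perf/mm^2")
```

For a DPU, which offloads network, storage, and security functions rather than running a single benchmark workload, there is no single performance score to divide by, which is why a dedicated evaluation framework is needed.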


The DPU (Data Processing Unit) is a special-purpose processor that has emerged in recent years. It is data-centric and uses software-defined technology to support infrastructure-layer services such as resource virtualization, storage, security, and quality-of-service management. It is called the "third main chip" in the data center, after the CPU and GPU.


The emergence of the DPU is a milestone in heterogeneous computing. Like the GPU before it, the DPU is a typical example of application-driven architecture design; unlike the GPU, however, it sits lower in the stack and comes in more varied forms. The core problem the DPU addresses is decoupling computing, networking, and storage, which creates the conditions for specialization, thereby improving the efficiency of the entire computing system and reducing its total cost of ownership (TCO). The DPU thus marks another step in the evolution of computer architecture toward specialization.


DPU development has consequently been thriving in recent years, and many start-ups have entered the DPU chip field. However, DPUs from different manufacturers differ significantly in functionality: although they all fall under the general label of DPU, whether they really belong to the "same category" is debatable. Each vendor therefore emphasizes different performance dimensions, which makes establishing a fair DPU evaluation system a serious challenge.


Faced with these evaluation challenges, the technical white paper "Dedicated Data Processor (DPU) Performance Benchmark Evaluation Method and Implementation" was jointly written and released by the Institute of Computing Technology of the Chinese Academy of Sciences and the China Science and Technology Information Engineering Institute, and jointly edited by the National Key Laboratory of Processor Chip, the CCF Integrated Circuit Design Committee, and the China Society of Metrology and Testing Integrated Circuit Testing Committee.



The DPU evaluation technology white paper comprises seven chapters:


1. Introduction to DPU performance evaluation

2. DPU performance evaluation system framework and test process

3. Network-oriented benchmarks

4. Storage-oriented benchmarks

5. Computation-oriented benchmarks

6. Security-oriented benchmarks

7. Conclusion


The white paper defines the functions of current DPU products, accounts for differences in DPU usage environments, and attempts to establish a fair, open, comprehensive, and objective evaluation system for future DPU products. On the one hand it provides a reference for DPU users; on the other, it offers guidance for the standardization of future DPU products.
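As a purely illustrative sketch (not the white paper's actual method or metric definitions), the snippet below shows one way a test harness could record results along the four benchmark dimensions the white paper enumerates: network, storage, computation, and security. All field names and figures are assumptions made for this article.

```python
# Illustrative only: a toy record grouping DPU benchmark results by the four
# dimensions named in the white paper. Metric names and values are assumed,
# not taken from the white paper.
from dataclasses import dataclass, field


@dataclass
class DpuBenchmarkResult:
    device: str
    network: dict = field(default_factory=dict)      # e.g. throughput, latency, packet rate
    storage: dict = field(default_factory=dict)      # e.g. IOPS, bandwidth
    computation: dict = field(default_factory=dict)  # e.g. offloaded compute throughput
    security: dict = field(default_factory=dict)     # e.g. crypto/IPsec throughput


result = DpuBenchmarkResult(
    device="example-dpu",
    network={"throughput_gbps": 100, "latency_us": 5.0},
    storage={"rand_read_4k_iops": 1_000_000},
    security={"ipsec_gbps": 80},
)
print(result)
```

Under such a scheme, comparing two DPUs means comparing them dimension by dimension under the same test process, rather than collapsing everything into a single PPA-style score.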


Industry experts discuss DPU technology trends


The forum invited nearly ten guests to discuss DPU technology trends and industrial applications, including Li Xiaowei, Executive Deputy Director of the State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences; Kan Hongwei, head of Inspur's heterogeneous acceleration frontier innovation team; Song Qingchun, Senior Director of NVIDIA Networking Asia Pacific; Sun Xiaoning, Director of Tianyi Cloud Elastic Computing Products; Liu Luren, Senior R&D Expert at Tianyi Cloud; Wang Fu, Outstanding Architect in Baidu's Infrastructure Department and head of Taihang DPU R&D; Li Zhiqiang, Director of the Future Network Office of the Basic Research Institute of China Mobile Research Institute; Ban Yourong, Researcher at the Network and IT Technology Research Institute of China Mobile Research Institute; and She Pengfei, head of the Proprietary Technology Platform Architecture Team in the Information Technology Department of Huatai Securities.


Kan Hongwei, head of Inspur's heterogeneous acceleration frontier innovation team, said that the shift to next-generation processors and a new computing architecture has become an industry consensus, and the rise of the "third main chip" is unstoppable. The various xPU chips represented by the DPU deploy computing engines on the network and storage side to offload and redirect system load, bringing these functions into the basic I/O layer of the computing architecture, breaking away from the traditional CPU-centric model and making the computing path smoother and more efficient. Distributed xPU chips and the high-density pooled servers built on them are an effective way to balance process dependence, power consumption, computing power, and performance at the system level, so that every data hop generates more value. He also expressed the hope for more open-source, customizable xPU FPGA software and hardware full-stack platforms to build a bridge between academia and industry and jointly explore a next-generation parallel xPU computing architecture of "interconnection + pooling + independent multi-engine".


Sun Xiaoning and Liu Luren of Tianyi Cloud delivered a keynote speech entitled "The Past, Present and Future of DPU in Tianyi Cloud". Tianyi Cloud is the only centrally administered state-owned enterprise ranked in the top ten of the global public cloud market. It currently ranks first worldwide in operator cloud business scale, first in China's government public cloud market, and first in the hybrid cloud market. In 2021, Tianyi Cloud's revenue was RMB 27.9 billion, and its online customers exceeded 2 million. Tianyi Cloud has already launched an elastic bare-metal product based on its DPU 1.0.


Liao Yunkun, a doctoral student at the Institute of Computing Technology of the Chinese Academy of Sciences and an RDMA technology expert at Zhongke Yusu, introduced the circumstances that gave rise to the DPU and Zhongke Yusu's progress in DPU development. He explained that Zhongke Yusu has proposed the KPU, an SDA ("software-defined accelerator") computing architecture that forms the company's core technical route and addresses the fragmentation of dedicated processor design. The KPU architecture offers software-defined configurability, low design cost, and efficient computation. The company has also launched HADOS, a DPU software development platform: a dedicated software framework for the DPU hardware platform with good ecosystem compatibility and rich development and maintenance tools, supporting a variety of business types and greatly reducing the difficulty of application software development. In addition, Zhongke Yusu has independently developed the NOE network offload engine and the DOE data computing offload engine.


Li Zhiqiang, Director of the Future Network Office of the Basic Research Institute of China Mobile Research Institute, delivered a keynote speech entitled "Prospects for the Development of Future Networks Toward Computing-Network Integration". Li Zhiqiang pointed out that the economy and society are accelerating into a digital, intelligent era that requires strong computing support. Deep integration of network infrastructure and computing infrastructure enables new forms of coordinated scheduling of computing and networking. As the target stage of the computing power network, computing-network integration will drive the cross-fertilization of the computing and networking disciplines, potentially producing a large number of original technologies and promoting breakthroughs and integrated development in both fields. He further noted that many of these innovations will be innovations in future IP networks, and he looks forward to the industry exploring and promoting them together.


Song Qingchun, Senior Director of NVIDIA Networking Asia Pacific, spoke on the theme of "Using the DPU to Create a Cloud-Native Supercomputing Architecture". He first pointed out that traditional computing platforms compute on CPUs and GPUs, with the CPU both running services and managing infrastructure operations. The principle of a cloud-native supercomputing architecture is that computation happens wherever the data is: the network becomes a computing unit, the DPU offloads infrastructure operations, and storage becomes a new computing unit. Such an architecture unifies the computing and communication platforms and can use in-network computing to resolve communication bottlenecks such as latency and network congestion. He emphasized that the DPU will play a major role in accelerating cloud-native supercomputing, and introduced the use of NVIDIA DPUs to accelerate HPC and AI workloads.


Wang Fu, Outstanding Architect at Baidu and head of Taihang DPU R&D, then spoke on "Thinking about the Taihang DPU Architecture in the Cloud-Native Era". He believes that, looking from the cloud toward chips, semi-closed cloud computing offers a fast track for chip development; looking from the DPU toward the cloud, the DPU has become a core component of cloud computing. Leading cloud service providers develop their own DPUs mainly to overcome the limitations of data center management and virtualization and to maintain the advantages of their products. Baidu has been developing its DPU, the Taihang DPU, for two to three years, with a development and planning roadmap spanning versions 1.0, 2.0, and 3.0. The main features of the Taihang DPU are self-developed engines, an integrated hardware-software architecture, full virtualization offloading to reduce overhead, and integrated front-end and back-end development. Finally, he summarized the evolution of the DPU, from standard NIC to SmartNIC to CloudNIC, and toward a possible future CNIOE (Cloud Native IO Engine) concept, and mentioned several possible future forms of the DPU: ASIC/heterogeneous chips, eASIC/FPGA, and ASIC+FPGA.


Ban Yourong, project manager at the Network and IT Technology Research Institute of China Mobile Research Institute, gave a speech entitled "Research and Exploration of DPU in Computing Networks". He pointed out that the DPU is a key technology for the computing infrastructure of computing networks. Given that the DPU technical system is still immature, he put forward preliminary suggestions for software and hardware standardization. For DPU software, he recommended starting with standardized definitions of functions and interfaces across the five major subsystems of management, networking, storage, computing, and security. For DPU server hardware, he recommended standardizing the design points that require customization when DPUs are introduced into servers, guiding server vendors to complete designs in advance and breaking through the bottlenecks to large-scale adoption of the new technology.


Introduction to the 2022 CCF Chip Conference


The CCF Chip Conference was chaired by Academician Sun Ninghui of the Institute of Computing Technology, Chinese Academy of Sciences, and Academician Liu Ming of the Institute of Microelectronics, Chinese Academy of Sciences. Han Yinhe, Director of the CCF Fault Tolerance Committee, Shu Jiwu, Director of the CCF Information Storage Committee, and Wu Chenggang, Director of the CCF Architecture Committee, served as program chairmen. Li Huawei, Secretary-General of the CCF Integrated Circuit Design Committee, and Shi Longxing of Southeast University served as chairmen of the organizing committee.


The conference also invited important guests such as Wu Hanming, academician of the Chinese Academy of Engineering, Cui Tiejun, academician of the Chinese Academy of Sciences, Liao Xiangke, academician of the Chinese Academy of Engineering, Wei Shaojun, dual-appointed professor of Tsinghua University and Peking University, Liu Weiping, chairman of Beijing HuaDa Empyrean Technology Co., Ltd., Zhao Haijun, co-CEO and executive director of SMIC, and Hu Weiwu, researcher of the Institute of Computing Technology of the Chinese Academy of Sciences and chairman of Loongson Technology, to attend and deliver keynote speeches.


The conference brings together experts, scholars, and researchers engaged in chip-related research and technology development across computer science, microelectronics, electronic information, and other disciplines nationwide to share R&D experience and cooperation needs, promote industry-university-research collaboration, and build the broadest and most in-depth academic exchange platform for domestic and foreign research institutions, universities, and enterprises.


Attachment:


Reply "DPU Evaluation Technical White Paper" to this official account to download the technical white paper "Dedicated Data Processor (DPU) Performance Benchmark Evaluation Method and Implementation".


