
Why has the FPGA become so popular in the era of big data? This article explains it all

Latest update time: 2017-07-06

Data is becoming a new driving force for the progress of human society. Some experts predict that China's big data industry will grow at an average annual rate of more than 50% over the next five years, and that by 2020 China will hold 20% of the world's total data, making it the largest data resource country and a global data hub. But alongside this big data feast comes a growing pain: do we have the capacity to process and "digest" such massive volumes of data? Although the number of data centers has also grown rapidly in recent years, in the face of exponentially growing data processing workloads we still need to look for solutions in the underlying core hardware architecture.


The general-purpose CPU is the core of the traditional data center. However, its classic von Neumann architecture, built on instruction decoding and execution with shared memory, makes it well suited to complex, control-heavy processing tasks, while handling large volumes of parallel, repetitive data is not its strong point. Multi-core CPUs are one countermeasure, but they still cannot escape the limitations of the architecture. In addition, Moore's Law is approaching its ceiling, and relying on process improvements for performance gains is becoming harder and harder. Hence the concept of the heterogeneous processor was proposed. Simply put, the work the CPU is not good at is offloaded to other, better-suited devices, so that processing units with different architectures cooperate, each doing what it does best, improving overall efficiency. In heterogeneous data processing, the industry has differing views on which device is best "added" to the CPU; throughput, latency, power consumption, and flexibility are the usual evaluation criteria.
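The offloading idea above can be sketched as a simple dispatcher that routes each task to the device best suited for it. This is a toy illustration only; the device functions and the task "kind" field are assumptions for the sketch, not any real runtime API.

```python
# Toy sketch of the heterogeneous "offload" idea: a dispatcher routes each
# task to the device best suited for it. Device names and the "kind" field
# are illustrative assumptions.

def run_on_cpu(task):
    # Control-heavy, branchy work stays on the CPU
    return f"cpu:{task['name']}"

def run_on_accelerator(task):
    # Parallel, repetitive work is offloaded (a GPU or FPGA, in the
    # article's terms)
    return f"accel:{task['name']}"

def dispatch(task):
    if task["kind"] == "parallel":
        return run_on_accelerator(task)
    return run_on_cpu(task)

tasks = [
    {"name": "parse-config", "kind": "control"},
    {"name": "matrix-multiply", "kind": "parallel"},
]
print([dispatch(t) for t in tasks])
```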


Among heterogeneous processors, "CPU + GPU" is an important option. The GPU uses SIMD (single instruction, multiple data) execution, letting many execution units process different data in lockstep, which greatly improves parallel data processing capability and suits compute-intensive tasks. However, the GPU has a weakness: relatively high latency. Although the GPU achieves data parallelism, its pipelining is limited; because every computing unit must do the same thing at the same pace even when handling different data packets, input and output latency increases. GPU latency is typically on the order of milliseconds.
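The SIMD idea above can be shown with a minimal sketch: a scalar loop issues one multiply per element, while a vectorized operation applies the same multiply across the whole array at once. This sketch assumes NumPy as a stand-in for hardware SIMD lanes; real GPUs run thousands of such lanes in lockstep.

```python
import numpy as np

# SIMD-style data parallelism, illustrated with NumPy vectorization.
# One operation is applied to many data elements at the same pace.

def scale_scalar(data, k):
    # Scalar version: one element handled per "instruction"
    return [x * k for x in data]

def scale_simd(data, k):
    # Vectorized version: the same multiply is issued across the array
    return np.asarray(data) * k

data = list(range(8))
assert scale_scalar(data, 3) == list(scale_simd(data, 3))
```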



To overcome these problems, today's protagonist, the FPGA, enters the stage. An FPGA is a programmable logic device whose hardware functions can be redefined through programming as needed, making it extremely flexible. In an FPGA-based data processing architecture, once the function of each logic unit is defined, the work proceeds without instructions and without complex shared-memory scheduling and arbitration, escaping the constraints of the von Neumann architecture. In terms of latency, the FPGA's advantage is especially clear: it supports not only data parallelism but also pipeline parallelism. Different pipeline stages process different data packets, so data flows through without waiting, and latency can be as low as microseconds. In terms of throughput, the data processing acceleration of the new generation of FPGAs is theoretically comparable to that of GPUs. At the same time, thanks to continuing improvements in semiconductor technology, FPGA power consumption is also well controlled. As a result, the CPU + FPGA heterogeneous processor combination is favored by more and more people.
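Pipeline parallelism can be sketched in software with chained generator stages: each stage handles a different packet as data streams through, with no need to batch the whole input first. This is only an analogy under stated assumptions; in an FPGA each stage is a hardware block advancing every clock cycle, and the stage names here (parse, classify, emit) are illustrative.

```python
# Software analogy of FPGA pipeline parallelism: chained generator stages.
# Each packet streams through stage by stage instead of waiting for a batch.

def parse(packets):
    for p in packets:
        yield p.strip()          # stage 1: normalize the packet

def classify(packets):
    for p in packets:
        yield (p, len(p))        # stage 2: attach a classification (length)

def emit(packets):
    for p, n in packets:
        yield f"{p}:{n}"         # stage 3: format the output

raw = ["  alpha ", "beta", " gamma"]
# Packets flow through parse -> classify -> emit one at a time,
# like data moving through hardware pipeline stages.
result = list(emit(classify(parse(raw))))
assert result == ["alpha:5", "beta:4", "gamma:5"]
```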


Another technical option must be mentioned: the ASIC. In terms of raw performance, dedicated ASIC chips built for specific network data acceleration tasks are undoubtedly the most competitive in throughput, latency, and power consumption. However, two factors have kept them away from data center users: first, ASIC R&D and tape-out costs keep rising, so without sufficient volume there is no economic advantage; second, once data processing requirements change, a function-fixed ASIC is "wasted". With an FPGA there is no such worry: simply reprogram the device and redefine its function, which effectively protects the user's investment. This is the FPGA's advantage in flexibility.


Table 1. Performance comparison of several data processing architectures in computationally intensive tasks.


It can be said that in heterogeneous processing architectures, although each technology has its own strengths, the FPGA offers the most balanced performance across all criteria, maximizing the benefit to users. It is therefore not hard to understand why Intel was willing to pay a huge sum a year ago to acquire Altera, the world's second-largest FPGA manufacturer, a move that clearly endorsed the FPGA's future role in the data center. Meanwhile Xilinx, the leader of the FPGA industry, has become more active and visible in recent years: horizontally, it has joined AMD, ARM, Huawei, IBM, Mellanox, Qualcomm, and others to promote an open data acceleration architecture and build an ecosystem; vertically, it has partnered with Internet giants such as Amazon and Baidu, positioning the FPGA at the core of future data-intensive applications such as artificial intelligence, video processing, natural language processing, financial analysis, and network security. Clearly, the "fire" of big data has ignited the FPGA; whoever seizes the opportunity will catch fire in the big data craze.


Figure 1. Xilinx FPGAs are used in Baidu’s data centers and will support Baidu’s driverless cars in the future.


Figure 2: Tencent’s FPGA cloud server provides users with FPGA cloud rental services.







Copyright © 2005-2024 EEWORLD.com.cn, Inc. All rights reserved