Inspur launches "Metabrain" and new products to promote AI innovation

Published: 2019-04-19 | Source: 半导体行业观察 (Semiconductor Industry Observation)

Over the past few years, the rise of 5G and artificial intelligence (AI) has not only driven a boom in applications, algorithms, and related ASIC chips, but has also spurred innovation on the server side.


On one hand, the massive volumes of data generated by AI demand more computing power from servers; on the other, the new application scenarios enabled by 5G require servers to adapt accordingly. As China's leading server supplier, Inspur is leading and driving this change.


At the recent Inspur Cloud Data Center Partner Conference (IPF), themed "Converging Innovation", the company shared its views on the current state and future development of AI and servers.


Ranked third in the server market and number one in AI servers


At IPF 2018, Inspur set the goal of becoming the world's number one server company. That is a difficult target, but Inspur's server business grew 84.4% in 2018, 2.5 times the global server growth rate. Among the world's top three server manufacturers, third-ranked Inspur was the only one whose 2018 server shipments increased year-on-year over 2017. In China, Inspur is the undisputed leader in servers, with a 31% share far ahead of its competitors.


In AI servers, Inspur is the undisputed winner: data shows it holds 51.4% of the domestic market. Thanks to its early positioning in AI servers, the company has also rapidly grown into a leading global AI server supplier.




According to Wang Endong, an academician of the Chinese Academy of Engineering and executive president of Inspur Group, this was achieved mainly through the company's leading products and its cooperation with many customers. Wang Endong pointed out that Inspur has built a full-stack AI ecosystem, with diversified AI acceleration cards, more than 20 AI server models, a mature AI development PaaS platform, and strong AI framework optimization capabilities.


The company has also invested heavily in the artificial intelligence ecosystem, which has helped it establish extensive cooperation across the industry.



According to Wang Endong, Inspur launched more than 400 joint solutions with partners last year, with sales exceeding 20 billion yuan. The company now has more than 9,000 partners across industries, and revenue from partner cooperation grew by an astonishing 116% overall. In artificial intelligence specifically, Inspur worked with partners last year to develop more than 20 solutions for industry AI application scenarios, accumulating substantial experience and industry cases.


"Among the current top 100 AI companies, 80% are partners of Inspur, and they are all in-depth cooperation partners. They include BAT Internet companies, as well as emerging AI unicorns such as Fourth Paradigm. There are also a large number of companies that have cooperated with us," Wang Endong emphasized.


Release "Metabrain" to create the strongest AI server ecosystem


Even though Inspur has prepared plenty of "ammunition" for the AI server market, as Peng Zhen, vice president of Inspur Group, noted, AI is spreading rapidly into the communications, finance, broadcasting, medical, and manufacturing industries, which demands even more computing power.


In his view, intelligent computing is a transformation that everyone must think about deeply. That is why Inspur has chosen to use hardware reconstruction and software definition to support the development of the cloud, big data as the cognitive method underpinning the shift to intelligent computing, and deep learning optimization algorithms as the driving force for business transformation.




Based on this thinking, Inspur released its "Metabrain" platform and a range of other products at IPF 2019 to help the artificial intelligence industry take off.


According to Inspur, "Metabrain" is the carrier and embodiment of the company's full-stack artificial intelligence capabilities. It includes "tangible" products such as Inspur's world-leading scenario-specific AI infrastructure, diverse deep learning frameworks and tools, the AI PaaS platform, and the AutoML Suite, and it also embodies "intangible" capabilities such as the AI algorithm and system optimization services Inspur has accumulated over the years. The "meta" in the name (元, yuan) denotes the origin of all things, and the same character appears in the Chinese word for the neurons that are the basic elements of the brain's neural networks. According to Inspur's plan, "Metabrain" will provide the most fundamental, original innovation support for artificial intelligence, empower ecosystem partners, accelerate industrial AI, and promote the flourishing of the AI industry.


Let’s take a look at the composition of the Inspur Metabrain system:


Computing layer: the Inspur AI computing platform, ultra-high-speed AI accelerator cards, an ultra-low-latency RDMA network, and ultra-high-bandwidth parallel storage together provide extreme AI computing performance;


Framework layer: for the most popular framework, TensorFlow, Inspur provides the TensorFlow-Opt optimization framework, billed as the fastest training framework on the public cloud, together with the FPGA computing acceleration engine TF2; mainstream frameworks such as Caffe, Caffe-MPI, and MXNet are also supported;


PaaS layer: the newly developed AI PaaS platform targets enterprise AI training scenarios. It provides containerized deployment, visual development, and centralized management, effectively connecting the development environment with computing and data resources to improve development efficiency;


Algorithm layer: the newly developed AutoML Suite lets non-experts build high-accuracy network models with minimal effort, greatly lowering the threshold and cost of AI development and application (the underlying idea is sketched after this list). In the 2018 NIPS AutoML Challenge, the top international competition in automated machine learning, Inspur and teams from Beijing University of Posts and Telecommunications and Central South University achieved the third-best result in the world;


Service layer: professional AI optimization services, including system-level optimization of AI software and hardware, framework- and algorithm-level optimization, application consulting, and system design.
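
The AutoML Suite's own API is not described in the announcement, but the idea behind such tools, automatically trying candidate models and hyperparameters so that a non-expert only has to call one function, can be sketched generically. The snippet below is a minimal Python illustration using scikit-learn and random search; the function name auto_fit, the candidate models, and the trial budget are assumptions for illustration and are not part of Inspur's product.

```python
# Minimal sketch of AutoML-style automated model search (illustrative only;
# this is NOT the Inspur AutoML Suite API).
import random

from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def auto_fit(X, y, n_trials=10, seed=0):
    """Randomly sample candidate models and hyperparameters, return the best fit."""
    rng = random.Random(seed)
    candidates = [
        lambda: RandomForestClassifier(
            n_estimators=rng.choice([50, 100, 200]),
            max_depth=rng.choice([None, 8, 16]),
            random_state=seed,
        ),
        lambda: LogisticRegression(C=10 ** rng.uniform(-3, 2), max_iter=2000),
    ]
    best_model, best_score = None, -1.0
    for _ in range(n_trials):
        model = rng.choice(candidates)()                    # sample model + hyperparameters
        score = cross_val_score(model, X, y, cv=3).mean()   # evaluate by cross-validation
        if score > best_score:
            best_model, best_score = model, score
    return best_model.fit(X, y), best_score


if __name__ == "__main__":
    X, y = load_digits(return_X_y=True)
    model, score = auto_fit(X, y)
    print(f"best model: {type(model).__name__}, cv accuracy = {score:.3f}")
```

In a real AutoML system the search is far more sophisticated (neural architecture search, early stopping, distributed trials), but the user-facing contract is the same: one call in, one tuned model out.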


Other products include the F10A and F37X accelerators and the TF2 AI computing acceleration engine, the company's flagship AI acceleration offerings. Together, this hardware and software will provide strong support for developers.


Partnering with Intel to Promote Traditional Server Upgrades


In addition to AI servers and the related ecosystem products, traditional servers remain an area Inspur will continue to focus on. However, Peng Zhen also pointed out that as data centers grow in scale, they face a series of construction and management problems, which pose new challenges for servers.


In order to solve related problems, Inspur and Intel worked together to develop a high-density optimized four-way cloud platform.


Mr. Wang Fei, General Manager of Intel's Data Center Platform R&D and Architecture Division in China, pointed out that this 2U four-way platform, named Crane Mountain, uses Intel's second-generation Xeon Scalable processors (Cascade Lake). With four sockets it supports up to 112 CPU cores (28 cores per socket) and 48 memory DIMMs (12 per socket), providing substantial memory capacity. A single four-way node of this kind can meet many high-performance requirements at once.
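
The headline figures follow directly from the per-socket limits of Cascade Lake (up to 28 cores per CPU and 12 DIMM slots per socket across 6 memory channels). The short sketch below just works through that arithmetic; the 64 GB module size is an assumed example, not a figure from the announcement.

```python
# Back-of-the-envelope capacity of a 2U four-socket Cascade Lake node such as
# Crane Mountain. The DIMM size is an assumption for illustration only.
SOCKETS = 4
CORES_PER_SOCKET = 28    # top-bin 2nd-gen Xeon Scalable (Cascade Lake)
DIMMS_PER_SOCKET = 12    # 6 memory channels x 2 DIMMs per channel
DIMM_SIZE_GB = 64        # assumed module size, not from the announcement

total_cores = SOCKETS * CORES_PER_SOCKET      # 112 cores, as quoted
total_dimms = SOCKETS * DIMMS_PER_SOCKET      # 48 DIMM slots, as quoted
total_memory_tb = total_dimms * DIMM_SIZE_GB / 1024

print(f"{total_cores} cores, {total_dimms} DIMM slots, "
      f"{total_memory_tb:.1f} TB RAM with {DIMM_SIZE_GB} GB modules")
```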


"Such a high-density design can better achieve efficient management and reduce OPEX. According to their estimates, the CAPEX of the entire system can be reduced by 7 to 12 percentage points, which means that the OPEX can be reduced by 5 to 7 percentage points, which can significantly reduce the overall cost of the data center," Wang Fei emphasized. This system has new highlights and innovations in design, which was achieved by the engineers of Inspur and Intel.


Wang Fei said the design is optimized for virtual machines and supports a large memory capacity. To improve the system's heat dissipation efficiency and reduce OPEX, the CPUs are deliberately placed in a staggered layout. The front panel supports hot-swappable modules, which allows flexible configuration, simplifies after-sales operation and maintenance, and further reduces OPEX.


"In short, we believe that the four-way cloud platform design can better meet the new demands of infrastructure," Wang Fei added.


Peng Zhen also said that Inspur and Intel share an ambition: to replace two-way servers in the public cloud with four-way servers on a large scale, with the goal of replacing more than half of them. This is undoubtedly a very large and challenging task.


"Inspur contributed a four-way server design that has been widely used in the Internet to Intel. The two parties have jointly opened this design through OCP and contributed it to all our industry partners. We hope to use a mature design like Inspur to drive industry partners to promote the migration from two-way servers to four-way servers. This will be a huge change and will also be a huge challenge," Peng Zhen emphasized.


Some thoughts on the future of AI computing


Wang Endong repeatedly emphasized at the IPF 2019 summit that computing power is the productivity of modern society, and that artificial intelligence will be at the core of that computing power. However, as terminals and upstream applications change, how AI computing can satisfy customers, meet market demand, and better provide computing power support has become a key concern for the industry. Wang Endong said: "Based on Inspur's exploration and practice in AI technology, products, and models, we believe that to achieve this goal we must achieve openness, integration, and agility."
