The rise of 5G and artificial intelligence (AI) in recent years has not only driven a boom in applications, algorithms, and related ASIC chips, but has also spurred innovation on the server side.
On the one hand, the massive amounts of data generated by AI demand more computing power from servers; on the other hand, the new application scenarios brought by 5G require servers to adapt accordingly. As a leading server supplier in China, Inspur is driving and promoting this change.
At the recent Inspur Cloud Data Center Partner Conference (IPF), themed "Converging Innovation," the company shared its views on the current state and future development of AI and servers.
Ranked third in the server market and number one in AI servers
At IPF 2018, Inspur set the goal of becoming the world's number one server company. Although this is an ambitious goal, Inspur's server business grew 84.4% in 2018, 2.5 times the global server growth rate. Among the world's top three server manufacturers, third-ranked Inspur was the only one whose 2018 server shipments increased year-on-year over 2017. In China, Inspur is the undisputed leader in servers, with a 31% share that puts it far ahead of its competitors.
In AI servers, Inspur is the clear winner: data shows the company holds 51.4% of the domestic market for this business. Thanks to its early move into AI servers, it has also rapidly grown into a leading global AI server supplier.
Wang Endong, Academician of the Chinese Academy of Engineering and Executive President of Inspur Group
According to Wang Endong, an academician of the Chinese Academy of Engineering and executive president of Inspur Group, this was achieved mainly through the leading products Inspur has released and its cooperation with a wide range of customers. Wang Endong pointed out that Inspur has built a full-stack AI capability: diversified AI acceleration cards, more than 20 AI server models, a mature AI development PaaS platform, and strong AI framework optimization capabilities.
At the same time, Inspur has invested heavily in the artificial intelligence ecosystem, which has helped it establish extensive cooperation across the industry.
According to Wang Endong, Inspur launched more than 400 joint solutions with partners last year, with sales exceeding 20 billion yuan. The company now has more than 9,000 partners across different industries, and its partner-related business grew by an astonishing 116% overall. In artificial intelligence specifically, Inspur worked with partners last year to develop more than 20 solutions for industry AI application scenarios, accumulating considerable experience and industry cases.
"Among the current top 100 AI companies, 80% are partners of Inspur, and they are all in-depth cooperation partners. They include BAT Internet companies, as well as emerging AI unicorns such as Fourth Paradigm. There are also a large number of companies that have cooperated with us," Wang Endong emphasized.
Release "Metabrain" to create the strongest AI server ecosystem
Although Inspur has prepared plenty of "ammunition" for the AI server market, as Peng Zhen, vice president of Inspur Group, noted, AI is spreading rapidly into the communications, finance, broadcasting, medical, and manufacturing industries, which demands ever more computing power.
In his view, intelligent computing is a direction of transformation that everyone must think about seriously. That is why Inspur has made choices such as using hardware reconstruction and software definition to support the development of the cloud, using big data as a cognitive approach to support the shift toward intelligent computing, and using deep-learning-optimized algorithms to drive business transformation through intelligent computing.
Based on this thinking, Inspur released its "Metabrain" platform and a range of other products at IPF 2019 to help the artificial intelligence industry take off.
According to the company, "Metabrain" is the carrier and embodiment of Inspur's full-stack artificial intelligence capabilities. It includes Inspur's world-leading scenario-oriented AI infrastructure, diverse deep learning frameworks and tools, and the latest "tangible" products such as the AI PaaS platform and AutoML Suite, while also embodying "intangible" capabilities such as the AI algorithm optimization and system optimization services Inspur has accumulated over the years. The "meta" (yuan) in the name signifies the origin of all things, and the neuron is likewise the basic element of the brain's neural network. According to Inspur's plan, Metabrain will provide the most basic and original innovation support for artificial intelligence, empower ecosystem partners, accelerate the industrialization of AI, and promote the flourishing of the AI industry.
Let’s take a look at the composition of the Inspur Metabrain system:
Computing layer: Inspur's AI computing platforms, ultra-high-speed AI accelerator cards, ultra-low-latency RDMA networking, and ultra-high-bandwidth parallel storage together deliver extreme AI computing performance;
Framework layer: For the most popular framework, TensorFlow, Inspur provides the TensorFlow-Opt optimization framework, billed as the fastest training framework on the public cloud, as well as the FPGA computing acceleration engine TF2. Mainstream frameworks such as Caffe, Caffe-MPI, and MXNet are also supported (a minimal training sketch follows this list);
PaaS layer: The newly developed AI PaaS platform targets enterprise AI training scenarios. It supports containerized deployment, visual development, and centralized management, effectively connecting the development environment with computing and data resources to improve development efficiency;
Algorithm layer: The newly developed AutoML Suite enables non-experts to build highly accurate network models with minimal effort, greatly lowering the threshold and cost of AI development and application. In the 2018 NIPS Automatic Machine Learning Challenge, the top international competition in the field, Inspur and teams from Beijing University of Posts and Telecommunications and Central South University together achieved the third-best result in the world;
Service layer: Professional AI optimization services including AI software and hardware system-level optimization, AI framework and algorithm-level optimization, application consulting and system design.
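To make the framework layer concrete, below is a minimal sketch of the kind of data-parallel TensorFlow training job such a stack is meant to accelerate. It uses only stock tf.keras and tf.distribute APIs; the article does not describe the TensorFlow-Opt or TF2 interfaces, so nothing Inspur-specific is assumed here, and MNIST merely stands in for a real workload.

# Minimal sketch of a data-parallel TensorFlow training job (plain TensorFlow,
# no Inspur-specific APIs assumed).
import tensorflow as tf

# Mirror the model across all local GPUs (falls back to CPU if none are found).
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# MNIST stands in for whatever data a real training workload would use.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0

# Scale the batch size with the number of replicas, as is conventional
# for data-parallel training.
model.fit(x_train, y_train,
          epochs=2,
          batch_size=64 * strategy.num_replicas_in_sync)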
Other products include the F10A and F37X accelerator cards and the TF2 AI computing acceleration engine, the highlights of Inspur's AI acceleration portfolio. Together, this hardware and software will provide strong support for developers.
Partnering with Intel to Promote Traditional Server Upgrades
In addition to AI servers and related ecosystem products, traditional servers remain an area that Inspur will continue to focus on. However, Peng Zhen, vice president of Inspur Group, also pointed out that as data centers grow in scale, they will face a series of construction and management problems, which will pose new challenges for servers.
To address these problems, Inspur and Intel jointly developed a high-density, optimized four-way (four-socket) cloud platform.
Wang Fei, General Manager of Intel's Data Center Platform R&D and Architecture Division in China, pointed out that this 2U four-way platform, named Crane Mountain, uses Intel's second-generation Xeon Scalable processors (Cascade Lake). As a four-way platform, it can support up to 112 CPU cores and 48 DIMMs, providing substantial memory capacity, and a single four-way node of this kind can meet many high-performance requirements at once.
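As a rough sanity check on those headline figures, the per-socket numbers work out as below. The 28-core CPU and 64 GB DIMM values are illustrative assumptions, not specifications from the article.

# Back-of-the-envelope breakdown of the Crane Mountain specs quoted above.
# Assumptions (not stated in the article): 28 cores per CPU, 64 GB per DIMM.
sockets = 4
cores_per_cpu = 28            # assumed top-bin Cascade Lake core count
dimm_slots = 48
dimm_size_gb = 64             # assumed DIMM capacity

print("total cores:", sockets * cores_per_cpu)                    # 112, as quoted
print("DIMM slots per socket:", dimm_slots // sockets)            # 12
print("memory at 64 GB/DIMM:", dimm_slots * dimm_size_gb, "GB")   # 3072 GB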
"Such a high-density design can better achieve efficient management and reduce OPEX. According to their estimates, the CAPEX of the entire system can be reduced by 7 to 12 percentage points, which means that the OPEX can be reduced by 5 to 7 percentage points, which can significantly reduce the overall cost of the data center," Wang Fei emphasized. This system has new highlights and innovations in design, which was achieved by the engineers of Inspur and Intel.
Wang Fei said the design is optimized for virtual machines and supports a large memory capacity. To improve cooling efficiency and reduce OPEX, the engineers also deliberately staggered the CPU placement. The front panel supports hot-swappable modules, which allows flexible configuration, eases after-sales operation and maintenance, and further reduces OPEX.
"In short, we believe that the four-way cloud platform design can better meet the new demands of infrastructure," Wang Fei added.
Peng Zhen also said that Inspur and Intel share a plan: to replace two-way servers in the public cloud with four-way servers on a large scale, with the goal of replacing more than half of them. This is undoubtedly a very large and challenging undertaking.
"Inspur contributed a four-way server design that has been widely used in the Internet to Intel. The two parties have jointly opened this design through OCP and contributed it to all our industry partners. We hope to use a mature design like Inspur to drive industry partners to promote the migration from two-way servers to four-way servers. This will be a huge change and will also be a huge challenge," Peng Zhen emphasized.
Some thoughts on the future of AI computing
Wang Endong repeatedly emphasized at the IPF 2019 summit that computing power is the productivity of modern society, and that artificial intelligence will be at its core. However, as terminals and upstream applications change, how AI computing can satisfy customers and the market and provide better computing power support has become a key concern for the industry. Wang Endong said: "Based on Inspur's exploration and practice in AI technology, products, and models, we believe that achieving this goal requires openness, integration, and agility."