Leading the way in artificial intelligence: Intel’s unique methodology

Publisher: EE小广播 | Last updated: 2022-02-14 | Source: EEWORLD

As an industry leader, Intel is well positioned to lead the era of artificial intelligence applications


Abstract


  • Artificial intelligence is the fastest-growing computing workload, and its rising complexity places ever greater demands on compute, power, and bandwidth.

  • We are at a turning point: artificial intelligence is gradually breaking out of the data center. As the era of AI applications arrives, the future of artificial intelligence lies beyond the data center.

  • Intel uses its unique methodology to drive the development of artificial intelligence from the cloud to the client and the edge.

  • Intel's AI strategy is to accelerate the adoption of AI by lowering the barrier to entry. With Intel® Xeon® processors, a strong product portfolio, and an open software ecosystem, we can not only lead the development of AI but also shape broader industry trends, making AI easier for everyone to use.



Sandra Rivera

Executive Vice President and General Manager of Data Center and Artificial Intelligence Group at Intel Corporation


At its core, artificial intelligence (AI) is the ability of machines to recognize patterns and make accurate predictions based on them. As AI models become more sophisticated and complex, the need for more compute, memory, bandwidth, and power keeps growing.
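That recognize-patterns-then-predict loop can be made concrete with a deliberately simple sketch (plain Python, no Intel-specific libraries; the data and function names are illustrative, not from the article): a k-nearest-neighbor classifier that "learns" patterns from labeled points and predicts the label of a new one.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest
    labeled training points (Euclidean distance)."""
    neighbors = sorted(
        train,
        key=lambda item: math.dist(item[0], query)
    )[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy data: two clusters of 2-D points with labels.
train = [
    ((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
    ((5.0, 5.0), "B"), ((5.2, 4.9), "B"), ((4.8, 5.1), "B"),
]

print(knn_predict(train, (1.1, 0.9)))  # query near cluster A
print(knn_predict(train, (5.1, 5.0)))  # query near cluster B
```

Real AI models are vastly larger, but the workload shape is the same, which is why compute, memory, and bandwidth requirements scale with model complexity.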


AI is the fastest-growing computing workload and one of the four "superpowers" that Intel believes will have a transformative impact on the world. Although AI was born in the data center, I believe its future lies outside it: the era of AI applications on the client and at the edge has arrived. To extend AI from the cloud to the edge, the community needs more open and holistic solutions that accelerate and simplify the entire data-modeling and deployment pipeline. Our strategy is to repeat what the company has done for other major technology transitions in its history: open up to more customers, accelerate the adoption of AI, and drive larger-scale applications.


Intel is one of the few companies in the world positioned to lead it into the next era of AI. Our strong ecosystem and open software, combined with CPU, GPU, ASIC, and other architectures that can meet the specific needs of countless AI use cases, enable us to lead the market and lay a solid foundation for ubiquitous, open AI.


A range of leading architectures with AI features


When it comes to AI, many people immediately think of deep learning training and GPU performance. Because training is often massively parallel, GPUs have received much of the attention, but training is only one part of AI. Most practical AI solutions combine classic machine-learning algorithms with medium- and low-complexity deep learning models, and these capabilities are already integrated into modern CPUs such as Xeon.
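The "classic machine learning" the paragraph refers to is exactly the kind of workload that runs comfortably on a general-purpose CPU. As a minimal sketch (plain Python with illustrative toy data, not an Intel-optimized implementation), here is a one-dimensional logistic regression trained by gradient descent, one of the most common classic ML algorithms:

```python
import math

def train_logistic(xs, ys, lr=0.1, epochs=500):
    """Fit a 1-D logistic regression (weight w, bias b) with plain
    stochastic gradient descent on the log-loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid
            grad = p - y                              # dLoss/dlogit
            w -= lr * grad * x
            b -= lr * grad
    return w, b

def predict(w, b, x):
    """Return the predicted class (0 or 1) for input x."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0

# Toy data: inputs below ~3 belong to class 0, above to class 1.
xs = [0.5, 1.0, 1.5, 2.0, 4.0, 4.5, 5.0, 5.5]
ys = [0,   0,   0,   0,   1,   1,   1,   1]
w, b = train_logistic(xs, ys)
print(predict(w, b, 1.0), predict(w, b, 5.0))
```

Workloads like this have no need for massive parallelism, which is why a large share of production AI pipelines never leave the CPU.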

 



Currently, AI data pipelines mainly run on Xeon processors, and built-in acceleration and optimization software make them run faster still. Building on this, Sapphire Rapids improves overall AI performance by up to 30 times over the previous generation; at the same time, we are bringing more AI workloads to Xeon processors to reduce the need for discrete accelerators, further strengthening Xeon's competitiveness. For Intel products such as Xeon, AI capabilities and optimizations are not a new concept, and we plan to extend this approach by integrating AI into every product we deliver, across data centers, clients, the edge, graphics, and many other areas.


For the deep learning training that truly performs best on GPUs, we want customers to be free to choose the compute that best suits their AI workloads. Today's GPUs are proprietary and closed, but our Habana Gaudi AI processor for domain-specific training and our Ponte Vecchio GPU built for high-performance computing will be based on open industry standards. We are very pleased with Gaudi's progress so far: AWS announced general availability of Habana Gaudi-based DL1 instances in the fourth quarter of 2021, offering 40% better price-performance than existing GPU-based instances and performing well in early Gaudi tests.


Build a mature ecosystem to attract more customers


Specific models, algorithms, and requirements vary by use case and industry. For example, an autonomous-vehicle company must handle perception (using object detection, localization, and classification), high-definition mapping, and route planning, and must act in dynamic environments. A chatbot for technical-support software, by contrast, must understand the terminology of a specific company and industry to answer questions accurately. Likewise, AI software and hardware requirements vary by customer, market segment, workload, and design point: on-device, embedded, and client AI systems need low-latency inference under power and thermal constraints, and there is a growing need for AI developed in the cloud to be edge-aware, so that solutions built in the cloud can be deployed at the edge, and vice versa.


All of these factors are driving innovation across the board, from the data center to the network to the edge, and shaping system-level designs: high-bandwidth, large-capacity memory; fast interconnects; and intelligent software.




The biggest growth in the end-to-end AI pipeline is in the model-deployment and inference stages. Today, more than 70% of AI inference runs on Xeon processors, and one of the fastest-growing inference use cases is the intelligent edge, where Xeon has been established for many years.


Over the past eight months, I have been in close communication with key customers to gain a deeper understanding of their needs and workloads. These exchanges not only give us insight into the needs of high-impact customers such as cloud service providers, but also show how strategic partners help us understand where our product portfolio applies. Currently, tens of thousands of cloud instances run on Intel processors, and that number is growing faster than for any other architecture. Meanwhile, hundreds of billions of lines of code have been written for the x86 architecture, and the industry has installed hundreds of millions of Xeon processors. Intel is therefore uniquely positioned to drive the industry forward, both horizontally through industry standards and vertically in areas with more specialized needs, such as automation and healthcare.


An open software stack for AI developers


Hardware is only part of the solution, so our AI strategy adheres to a "software first" philosophy. That includes secure AI software components that let users take advantage of the unique software and security features of Xeon processors, such as confidential computing through Intel® Software Guard Extensions (Intel® SGX), which protects critical data and software while in use. Intel® SGX is the industry's first and most widely deployed hardware-based trusted execution environment for the data center. Building on it, our Xeon roadmap covers additional confidential-computing technologies that will further consolidate our technology leadership.


We have spent years optimizing the most popular open-source frameworks and libraries for CPUs, and we have the broadest portfolio of domain-specific accelerators built on open standards, making code easier to port and avoiding lock-in. To extend our technology leadership and keep driving innovation, we continue to invest deeply, aiming to create open AI that spans the cloud and data center to the client, the edge, and beyond.


Enabling Intel optimizations by default in AI frameworks is critical to driving large-scale silicon adoption, but we still need to serve every kind of AI developer: framework developers working at the bottom of the software stack, low-code or no-code subject-matter experts working higher up, and the engineering and operations personnel who deploy, run, train, and maintain AI models (MLOps). Although their roles differ greatly, every stage of the AI workflow shares a common goal: moving quickly from concept to the real world at the lowest cost and risk. That means developers need choice, and open solutions based on common frameworks that are easy to deploy and maintain.


To that end, we developed BigDL and OpenVINO™. BigDL enables large-scale machine learning on existing big-data infrastructure, while OpenVINO™, with hundreds of pre-trained models, accelerates and simplifies the deployment of inference across many different kinds of hardware. Through consistent standards and APIs, Intel provides developers working deep in the AI stack with composable, optimized building blocks, and provides low-code developers with optimized, productized tools and kits, helping AI developers thrive. Our continued work on AI acceleration and security will let us make these key computing elements broadly available across customers, market segments, and products.


Intel Promotes AI Everywhere


AI is already profoundly changing every industry, and in the future it is expected to improve the lives of everyone on the planet, but only if it can be deployed easily and at scale. We believe that lowering the barrier to entry for AI requires the right set of AI technologies. In practice, we have validated a successful model for accelerating the next era of AI innovation: by helping define the development environment through open-source work, we can develop and influence customer solutions and, through them, the entire industry. We predict that by 2026 the market for Intel AI logic silicon will exceed US$40 billion. We are pursuing this opportunity from a position of strength, and I am confident about the future.
