Intel Is Well Positioned to Lead the Era of Artificial Intelligence Applications
Abstract
Artificial intelligence is the fastest-growing computing workload, and its rising complexity places ever-greater demands on compute, power, and bandwidth.
We are at a turning point: artificial intelligence is breaking out of the data center. As the era of AI applications arrives, the future of AI lies beyond the data center.
Intel is using its distinctive approach to drive the development of artificial intelligence from the cloud to the client to the edge.
Intel's AI strategy is to accelerate the adoption of AI by lowering the barrier to entry for users. Building on Intel® Xeon® processors, Intel's broad product portfolio, and an open software ecosystem, we can not only lead the development of AI but also shape broader industry trends, making AI easier for everyone to use.
Sandra Rivera
Executive Vice President and General Manager of Data Center and Artificial Intelligence Group at Intel Corporation
At its core, artificial intelligence (AI) is the ability of machines to recognize patterns and make accurate predictions based on them. As AI models grow more sophisticated and complex, so does the need for compute, memory, bandwidth, and power.
AI is the fastest-growing computing workload and one of the four "superpowers" that Intel believes will have a transformative impact on the world. Although AI was born in the data center, I believe its future lies beyond it: the era of AI applications on the client and at the edge has arrived. To extend AI from the cloud to the edge, the community needs more open, holistic solutions that accelerate and simplify the entire data-modeling and deployment pipeline. Our strategy is to repeat what the company has done through other major technology transitions in history: open the technology to more customers, accelerate the adoption of AI, and drive larger-scale deployment.
Intel is one of the few companies in the world positioned to lead the next era of AI. With our strong ecosystem and open software, together with the CPU, GPU, ASIC, and other architectures needed to meet the specific demands of countless AI use cases, we can lead the market and lay a solid foundation for ubiquitous, open AI.
A range of leading architectures with AI capabilities
When it comes to AI, many people immediately think of deep-learning training and GPU performance. Because training is often massively parallel, GPUs have received a great deal of attention, but they are only part of the picture. Most AI solutions in practice combine classic machine-learning algorithms with low- and medium-complexity deep-learning models, and these capabilities are already integrated into modern CPUs such as Xeon.
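As a minimal illustration of the classic-ML side of such pipelines, here is a hedged sketch using scikit-learn on synthetic data (an assumed example, not an Intel-specific API); workloads of this kind train and serve comfortably on a general-purpose CPU:

```python
# A small classic-ML workload of the kind that runs well on CPUs:
# train a logistic-regression classifier on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Training-set accuracy; on this synthetic data it is typically high.
print(f"accuracy: {clf.score(X, y):.2f}")
```

Models like this, rather than massive deep networks, still make up a large share of deployed AI, which is why CPU-side performance matters across the pipeline.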
Currently, AI data pipelines run mainly on Xeon processors, and built-in acceleration and optimized software make them run faster still. Building on this, Sapphire Rapids improves overall AI performance by up to 30x over the previous generation; at the same time, we are bringing more AI workloads onto Xeon processors to reduce the need for discrete accelerators, further strengthening Xeon's competitiveness. For Intel products such as Xeon, AI capabilities and optimizations are nothing new, and we plan to extend this approach by integrating AI into every product we deliver across the data center, client, edge, graphics, and beyond.
For the deep-learning training that genuinely performs best on GPUs, we want customers to be free to choose the compute that best suits their AI workloads. Today's GPUs are proprietary and closed, but our domain-specific Habana Gaudi AI processor and the Ponte Vecchio GPU built for high-performance computing will be based on open industry standards. We are very pleased with Gaudi's progress so far: AWS announced general availability of Habana Gaudi-based DL1 instances in the fourth quarter of 2021, with up to 40% better price-performance than existing GPU-based instances, and early Gaudi usage tests have gone well.
Build a mature ecosystem to attract more customers
Specific models, algorithms, and requirements vary by use case and industry. An autonomous-vehicle company, for example, must handle perception (object detection, localization, and classification), high-definition mapping, and route planning, and take actions that adapt to a dynamic environment. A chatbot for technical-support software, by contrast, must understand the terminology of a specific company and industry to answer questions accurately. Likewise, AI software and hardware requirements vary by customer, market segment, workload, and design point. Device-side, embedded, and client AI systems need low-latency inference under power and thermal constraints. There is also a growing need for AI developed in the cloud to be edge-aware, so that solutions built in the cloud can be deployed at the edge, and vice versa.
All of these factors are driving innovation across the board, from the data center to the network to the edge, and impacting system-level hardware architectures such as high-bandwidth and large-capacity memory, fast interconnects, and intelligent software.
The biggest growth point in the end-to-end AI pipeline is in the model deployment and AI inference stages. Today, more than 70% of AI inference runs on Xeon processors, and one of the fastest growing AI inference use cases is the intelligent edge, where Xeon has been deeply involved for many years.
Over the past eight months, I have been in close contact with key customers to understand their needs and workloads more deeply. These conversations give us insight into high-impact customers such as cloud service providers, and show how strategic partners help us understand the key areas where our product portfolio applies. Today, tens of thousands of cloud instances run on Intel processors, growing faster than on any other architecture; hundreds of billions of lines of code have been written for the x86 architecture; and the industry has installed hundreds of millions of Xeon processors. Intel is therefore uniquely positioned to drive the industry forward, both horizontally through industry standards and vertically in more specialized areas such as automotive and healthcare.
An open software stack for AI developers
Hardware is only part of the solution, which is why our AI strategy has always been "software first." That includes secure AI software components that let users take advantage of the unique software and security features of Xeon processors, such as confidential computing through Intel® Software Guard Extensions (Intel® SGX) to protect critical data and software while in use. Intel® SGX is the industry's first, and most widely deployed, hardware-based trusted execution environment for the data center. Our Xeon roadmap covers further confidential-computing technologies that will consolidate this technical leadership.
We have spent years optimizing the most popular open-source frameworks and libraries for CPUs, and we have the broadest portfolio of domain-specific accelerators built on open standards, making code easier to port and avoiding lock-in. To extend our technical leadership and keep driving innovation, we continue to invest deeply in technology, aiming for open AI that spans the cloud and data center to the client, the edge, and beyond.
While enabling Intel optimizations by default in AI frameworks is critical to driving silicon adoption at scale, we still need to serve every kind of AI developer: framework developers working at the lower levels of the software stack, low-code and no-code subject-matter experts working higher up, and all the engineering and operations staff who deploy, run, train, and maintain AI models (MLOps). Although their roles differ greatly, every stage of the AI workflow shares one goal: moving from concept to the real world quickly, at the lowest cost and risk. That means developers need choice, and open solutions based on common frameworks that are easy to deploy and maintain.
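As a rough sketch of what "optimizations enabled by default" means in practice (assuming a stock PyTorch build, where the oneDNN backend is exposed as `torch.backends.mkldnn`), no special flags are needed for the optimized CPU kernels to be used:

```python
import torch

# Stock PyTorch builds ship with the oneDNN (formerly MKL-DNN) backend;
# on x86 CPUs it is picked up automatically for ops like matmul and conv.
print("oneDNN available:", torch.backends.mkldnn.is_available())

# A small CPU inference pass -- the optimized kernels apply by default,
# with no accelerator or framework configuration required.
model = torch.nn.Sequential(
    torch.nn.Linear(256, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
).eval()

with torch.no_grad():
    out = model(torch.randn(1, 256))
print("output shape:", tuple(out.shape))
```

The point of the strategy described above is that a developer writing ordinary framework code like this gets the CPU optimizations transparently, without opting in.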
On this basis we developed BigDL and OpenVINO™. BigDL supports large-scale machine learning on existing big-data infrastructure, while OpenVINO™, with hundreds of pre-trained models, accelerates and simplifies the deployment of inference across many kinds of hardware. Through consistent standards and APIs, Intel gives developers working on the underlying AI stack composable, optimized building blocks, and gives low-code developers optimized, productized tools and kits, helping AI developers thrive. We continue to deepen our work on AI acceleration and security, which will let us bring these key computing elements to all customers, market segments, and products.
Intel Promotes AI Everywhere
AI is already profoundly changing every industry, and in the future it can improve the life of everyone on the planet, but only if it can be deployed easily and at scale. We believe that lowering the barrier to entry for AI requires the right set of AI technologies. Through practice, we have validated a successful model for accelerating the next era of AI innovation: by helping define the development environment through open-source work, we can shape customer solutions and, in turn, the entire industry. We project that by 2026 the market for Intel AI logic silicon will exceed US$40 billion. We are well equipped to seize this opportunity, and I am confident about the future.
Latest update time: 2024-11-16 13:40