MWC22 | Intel launches a new version of OpenVINO to enable developers to accelerate AI inference

Publisher: EE小广播 | Updated: 2022-02-28 | Source: EEWORLD



The Intel OpenVINO toolkit has received a major upgrade that makes it easier to accelerate AI inference performance.




Latest news: Since launching OpenVINO™ in 2018, Intel has helped hundreds of thousands of developers dramatically improve AI inference performance and extend their applications from the edge to the enterprise and the client. On the eve of Mobile World Congress 2022 in Barcelona, Intel launched a new version of the Intel® Distribution of OpenVINO toolkit. The new features are based largely on developer feedback from the past three and a half years and include a greater selection of deep learning models, more device portability options, and higher inference performance with fewer code changes.


Adam Burns, vice president of OpenVINO Developer Tools in Intel's Network and Edge Group, said: “The latest release of OpenVINO 2022.1 builds on feedback from hundreds of thousands of developers over more than three years to simplify and automate optimization work. The latest version adds hardware auto-discovery and automatic optimization, so software developers can achieve optimal performance on every platform. This software, combined with Intel silicon, delivers a significant AI return on investment and is easily deployed into Intel-based solutions in user networks.”


About OpenVINO: The Intel Distribution of OpenVINO toolkit for high-performance deep learning is built on oneAPI and helps users deploy more accurate real-world results into production systems faster, across a range of Intel platforms from edge to cloud. Through a simplified development workflow, OpenVINO enables developers to deploy high-performance applications and algorithms in the real world.
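As a rough illustration of that simplified workflow (not taken from the announcement), the minimal sketch below uses the OpenVINO 2022.1 Python API to load a model and run a single inference; the model path and the input shape are placeholders rather than anything shipped with the toolkit.

```python
# A minimal OpenVINO 2022.1 inference sketch; the model path and input shape
# are placeholders for whatever model is being deployed.
import numpy as np
from openvino.runtime import Core

core = Core()                                # runtime entry point
model = core.read_model("model.xml")         # also accepts ONNX / PaddlePaddle files
compiled = core.compile_model(model, "CPU")  # compile for a specific device

# Dummy input assumed to match the model's expected input shape.
data = np.random.rand(1, 3, 224, 224).astype(np.float32)

request = compiled.create_infer_request()    # synchronous inference
results = request.infer([data])              # inputs mapped by position
print(results[compiled.output(0)].shape)
```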


Why it matters: Edge AI is transforming every industry, enabling new and enhanced use cases in manufacturing, health and life sciences, retail, and security. According to Omdia, global edge AI chipset revenue will reach $51.9 billion by 2025, driven by growing enterprise demand for edge AI inference. Edge inference reduces latency and bandwidth requirements and improves performance, meeting the growing need of emerging IoT devices and applications for timely processing.


At the same time, developers’ workloads are growing and changing, requiring simpler, more automated processes and tools with built-in intelligence to optimize performance from build through deployment.

About OpenVINO 2022.1 features: With these new features, developers can more easily adopt, maintain, optimize, and deploy code across a wider range of deep learning models. Highlights include:


Updated, easier-to-use API


Fewer code changes when converting from frameworks: precision formats are now preserved, so models no longer require layout conversion.

A simpler way to accelerate AI: the Model Optimizer’s API parameters have been reduced to minimize complexity (see the conversion sketch after this list).

Train with inference in mind: OpenVINO training extensions and the Neural Network Compression Framework (NNCF) provide optional model training templates that can further improve performance while maintaining accuracy for action recognition, image classification, speech recognition, question answering, and translation.
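As a hedged illustration of the slimmer conversion step (not part of the announcement), the sketch below invokes the Model Optimizer command-line entry point, mo, which ships with the OpenVINO developer tools, with only an input model and an output directory; the ONNX file name is a placeholder.

```python
# Sketch: converting a framework model to OpenVINO IR with a minimal set of
# Model Optimizer arguments ("mo" ships with the OpenVINO developer tools).
import subprocess

subprocess.run(
    ["mo", "--input_model", "resnet50.onnx", "--output_dir", "ir"],
    check=True,  # raise if the conversion step fails
)
# The resulting ir/resnet50.xml can then be loaded with
# openvino.runtime.Core().read_model("ir/resnet50.xml").
```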


Wider model support


Support for a wider range of natural language processing models and use cases such as text-to-speech and speech recognition: dynamic input shape support better enables the BERT family and Hugging Face Transformers (see the sketch after this list).


Optimization and support for advanced computer vision: the Mask R-CNN family is now further optimized, and support for double-precision (FP64) models has been introduced.


Direct support for PaddlePaddle models: Model Optimizer can now import PaddlePaddle models directly, without first converting them to another framework.
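The hedged sketch below (not part of the announcement) illustrates two items from this list: reading a PaddlePaddle model directly with the runtime, and reshaping a model to a dynamic input shape for variable-length NLP inputs. File names, the input index, and the sequence-length bound are placeholders.

```python
# Sketch: direct PaddlePaddle import and a dynamic input shape for NLP models.
from openvino.runtime import Core, Dimension, PartialShape

core = Core()

# A PaddlePaddle model is read directly -- no intermediate conversion step.
paddle_model = core.read_model("inference.pdmodel")   # placeholder file name

# A BERT-style model reshaped so the batch size is fully dynamic and the
# sequence length may vary between 1 and 512 tokens at inference time.
bert_model = core.read_model("bert.xml")               # placeholder file name
bert_model.reshape({0: PartialShape([Dimension(), Dimension(1, 512)])})
compiled = core.compile_model(bert_model, "CPU")
```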


Portability and performance


Smarter device usage without code changes: AUTO device mode automatically discovers the system’s available inference hardware and matches it to the model’s requirements, so applications no longer need to know their compute environment in advance (see the sketch after this list).


Expert optimization built into the toolkit: automatic batching improves device performance by tuning throughput settings to the developer’s system configuration and deep learning model, delivering scalable parallel processing and optimized memory usage.


Built for 12th Gen Intel® Core™: supports the hybrid architecture and delivers enhanced capabilities for high-performance inference on the CPU and integrated GPU.
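A hedged sketch of these portability features follows (the model path is a placeholder): compiling with the AUTO device so the runtime selects suitable hardware at load time, and passing a throughput-oriented performance hint so the runtime can apply batching and stream settings where supported.

```python
# Sketch: let the runtime choose the device and tune for throughput.
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder IR path

# "AUTO" discovers available inference devices (CPU, integrated GPU, ...) at
# load time; the THROUGHPUT hint lets the runtime pick batching/stream settings.
compiled = core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "THROUGHPUT"})
```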


About edge adoption: With a “write once, deploy anywhere” approach, developers write an application or algorithm once and then deploy it across a wide range of Intel architectures, including CPU, iGPU, Movidius VPU, and GNA. As data volumes explode, Intel builds software that enables developers to process data more intelligently, solve challenges, and transform business models. As a result, new and unique AI inference workloads are increasingly being adopted at the edge and extended to the enterprise and the client.
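As a hedged sketch of the “write once, deploy anywhere” idea (the model path is a placeholder), the same model and application code can be compiled for whichever Intel devices the runtime discovers on a given machine, without per-device changes.

```python
# Sketch: compile the same model, unmodified, for each Intel device the
# runtime discovers on this machine (e.g., "CPU", "GPU", "MYRIAD", "GNA").
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder IR path

for device in core.available_devices:
    compiled = core.compile_model(model, device)
    print("compiled for", device)
```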


Zeblok’s AI Platform-as-a-Service, AI-MicroCloud, is a cloud-to-edge MLDevOps platform that lets customers mix and match AI independent software vendors at scale to deliver edge AI applications, with support for full-lifecycle deployment. With Intel OpenVINO software integrated into AI-MicroCloud, Intel processors will significantly boost AI inference performance and minimize the cost per insight. Zeblok’s AI-MicroCloud platform is currently being evaluated to support specific network topologies in cities around the world.


“Our mission is to think about cost per insight,” said Mouli Narayanan, founder and CEO of Zeblok. “By using Intel processors, we have enabled cost-effective and energy-efficient AI inference and generated a very high return on investment. This new version of OpenVINO will create even greater value for our ecosystem.”


American Tower has built six edge data centers and plans to build more; the company recently acquired CoreSite to accelerate 5G edge deployment. It is working with Intel and Zeblok to provide customers with a complete turnkey solution.


“With American Tower’s edge infrastructure, Intel’s OpenVINO deep learning capabilities, and Zeblok’s AI platform as a service, we can deliver a complete intelligent solution to the market,” said Eric Watko, vice president of innovation at American Tower.

