
OpenVINO™ DevCon 2023 is back! Intel inspires developers’ unlimited potential with innovative products

Latest update time: 2023-06-03

To mark the fifth anniversary of the OpenVINO tool suite, Intel launched the OpenVINO DevCon China Series Workshop 2023, a program of monthly workshops designed to help developers learn systematically and improve steadily. Against this backdrop, Intel successfully held the first OpenVINO DevCon 2023 event under the theme "Fifth Anniversary and New Features" and released the new, more powerful Intel® OpenVINO™ 2023.0 at the event, aiming to further simplify AI developers' workflows and improve deployment efficiency.




Every innovative breakthrough in AI technology brings new opportunities and challenges to developers. In the wave of AIGC, Intel continues to empower developers by creating and iterating innovative products, represented by the OpenVINO tool suite, to resolve development problems across AI application scenarios, optimize the development experience, and fully unleash developers' innovative vitality.


--Dr. Zhang Yu

Chief Technology Officer, Network and Edge Group, Intel China

Senior Principal AI Engineer at Intel



As a deep learning inference tool, OpenVINO has helped hundreds of thousands of developers significantly improve AI inference performance. With just a few lines of code it delivers high-performance "write once, deploy anywhere" capability and automatically selects the optimal hardware configuration, improving development efficiency. Because OpenVINO enables efficient and accurate inference of trained neural network models on different hardware platforms, it has been widely adopted in education, retail, healthcare, industry, and other fields, bringing great value to industry customers through efficient deep learning inference.

For example, in a test-paper grading scenario, the handwriting that students and teachers write on dot-coded test papers is captured in real time by smart pens and uploaded to cloud servers. A cloud rendering server renders the strokes into images, which are then processed by an object detection model and an OCR model to convert the written content into text. Finally, the recognition results are matched against the question bank to generate a homework report, which teachers can use to provide personalized instruction. In this pipeline, the model quantized with OpenVINO's accuracy-aware quantization runs inference more than ten times faster than the floating-point model, while its recall and accuracy remain on par with the floating-point model, meeting the accuracy requirements while achieving real-time inference.
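
To make such a speed-up concrete, the sketch below (not the customer's actual code) shows one way to compare the average latency of a floating-point model and its INT8-quantized counterpart with the OpenVINO Runtime Python API; the IR file names are placeholders and a static input shape is assumed.

```python
# A rough latency comparison between an FP32 model and its INT8-quantized
# counterpart using the OpenVINO Runtime Python API. The IR file names are
# placeholders and a static input shape is assumed.
import time
import numpy as np
import openvino.runtime as ov

core = ov.Core()

def mean_latency_ms(xml_path, runs=100):
    compiled = core.compile_model(core.read_model(xml_path), "CPU")
    dummy = np.random.rand(*compiled.input(0).shape).astype(np.float32)
    compiled([dummy])                      # warm-up inference
    start = time.perf_counter()
    for _ in range(runs):
        compiled([dummy])
    return (time.perf_counter() - start) / runs * 1000

print("FP32:", round(mean_latency_ms("ocr_det_fp32.xml"), 2), "ms")
print("INT8:", round(mean_latency_ms("ocr_det_int8.xml"), 2), "ms")
```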


Since the launch of the OpenVINO™ tool suite in 2018, Intel has paid close attention to market demand and future development trends, iterating continuously to extend support from computer vision to natural language processing models while making the suite higher-performing, easier to use, more flexible, more open, and more comprehensive. The OpenVINO™ 2023.0 release announced at this event adds the following improvements:


More integrations, minimized code changes: OpenVINO 2023.0 makes it easier to move from a trained model to deployment. Developers no longer need to convert TensorFlow models offline before optimizing them; the conversion happens automatically. A standard TensorFlow model can be loaded directly into OpenVINO Runtime or OpenVINO Model Server, though offline conversion to the OpenVINO format is still encouraged when maximum performance is required.
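
As a rough illustration of this workflow, the snippet below sketches loading a TensorFlow SavedModel directly with the OpenVINO Runtime Python API; the SavedModel directory name is a placeholder.

```python
# A minimal sketch of loading a standard TensorFlow model directly into
# OpenVINO Runtime, without an offline conversion step. The SavedModel
# directory name is a placeholder.
import openvino.runtime as ov

core = ov.Core()

# OpenVINO 2023.0 can read TensorFlow formats (e.g. a SavedModel directory)
# directly through read_model / compile_model.
model = core.read_model("my_tf_savedmodel")

# "AUTO" lets OpenVINO pick the best available device for this model.
compiled = core.compile_model(model, "AUTO")

print("Inputs :", [inp.any_name for inp in compiled.inputs])
print("Outputs:", [out.any_name for out in compiled.outputs])
```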


Broader model support: OpenVINO 2023.0 supports a wider range of generative AI models (CLIP, BLIP, Stable Diffusion 2.0, etc.), text-processing models (GPT, Transformer-based models, etc.), and other key models (Detectron2, PaddleSlim, RNN-T, etc.). Developers no longer need to switch to static input shapes when taking advantage of GPUs (CPU support was enabled in 2022), giving them more flexibility in their code. In addition, the Neural Network Compression Framework (NNCF) is now available as an option for quantization; by compressing the data in a model, it makes large-model performance gains easier to achieve.
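
The following is a minimal post-training quantization sketch using NNCF's nncf.quantize API; the FP32 model path, the 1x3x224x224 input shape, and the random calibration data are placeholders standing in for a real model and calibration set.

```python
# A minimal post-training quantization sketch with NNCF. Model paths, the
# input shape, and the random calibration data are placeholders.
import numpy as np
import nncf
import openvino.runtime as ov

core = ov.Core()
model = core.read_model("model_fp32.xml")        # placeholder FP32 IR model

# Dummy calibration data standing in for a real validation set; the
# 1x3x224x224 shape is an assumption for this example.
calibration_items = [np.random.rand(1, 3, 224, 224).astype(np.float32)
                     for _ in range(300)]

def transform_fn(item):
    # Convert one dataset item into the model's expected input format.
    return item

calibration_dataset = nncf.Dataset(calibration_items, transform_fn)

# Post-training quantization: produces an INT8 model.
quantized_model = nncf.quantize(model, calibration_dataset)
ov.serialize(quantized_model, "model_int8.xml")
```

A few hundred representative samples are typically enough for calibration, and the resulting INT8 IR can then be deployed in the same way as the original model.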


Excellent portability and performance: the CPU device plugin now provides thread scheduling on 12th Gen Intel® Core™ processors and newer. Based on application priority, developers can choose to run inference on E-cores, P-cores, or both, optimizing for performance or for energy savings as needed. Additionally, OpenVINO defaults to the best-performing format regardless of which plugin a developer uses. OpenVINO 2023.0 also improves model caching on GPU, with more efficient model loading and compilation.
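
As a rough sketch of how these controls might be used from Python, the snippet below enables the model cache and asks the CPU plugin to prefer P-cores; the property names ("CACHE_DIR", "PERFORMANCE_HINT", "SCHEDULING_CORE_TYPE") and the model path reflect our reading of the 2023.0 release and should be checked against the official documentation.

```python
# A minimal sketch of steering CPU inference to a core type on hybrid
# 12th-Gen+ Core processors, plus enabling the model cache. The property
# names below are assumptions based on the 2023.0 release, and the model
# path is a placeholder.
import openvino.runtime as ov

core = ov.Core()
core.set_property({"CACHE_DIR": "./model_cache"})   # faster reload/compile

model = core.read_model("model.xml")

# Prefer performance cores (P-cores) for latency-sensitive inference;
# "ECORE_ONLY" would instead favor energy savings.
compiled = core.compile_model(
    model, "CPU",
    {"PERFORMANCE_HINT": "LATENCY", "SCHEDULING_CORE_TYPE": "PCORE_ONLY"},
)
```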


In addition to the continuous iteration of OpenVINO, Intel also recently released a public beta of the "Intel® Developer Cloud for the Edge" platform. The platform gives developers access to hardware resources such as Intel's latest-architecture CPUs, GPUs, VPUs, and FPGAs, and lets them call software resources such as the latest OpenVINO tool suite without any configuration. It also supports both containerized and bare-metal application deployment, ensuring that developers obtain real performance data, accelerating the development, validation, and deployment of AI solutions, and improving application development efficiency and product selection. The "Intel® Developer Cloud for the Edge" platform will continue to optimize the user experience in China, introduce local reference implementations and related edge devices, and expand its underlying hardware to support more users and meet their diverse testing needs.


With the support of these innovative products, Intel is also working closely with industry ecosystem partners and actively contributing to standards bodies and open-source organizations.



In China, Intel has promoted a number of open-source projects, including OpenVINO and oneAPI, which give local developers and enterprises more choices and support. At the same time, we cooperate closely with domestic manufacturers, innovative enterprises, and research institutions to explore and practice applications and innovation in networking and edge computing, and we actively participate in developer activities and technology summits to offer developers more opportunities for technical exchange and learning.


--Wang Shen

OpenVINO Product Marketing and Developer Ecosystem Director, Asia Pacific

Network and Edge Group, Intel



Intel has always believed that developers are the core driving force behind innovation and growth in the AI industry. Going forward, Intel will continue to uphold its ecosystem philosophy of "benefiting all things like water, without contention", embrace the power of openness and open source, help developers innovate at the edge on Intel hardware and software, support the large-scale adoption of AI, and promote the digital and intelligent upgrade of thousands of industries.









© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its affiliates. Other names and brands mentioned in this article are the property of their respective owners.

