Intelligent robots have become a major trend for the future across industries, and their research and application are attracting growing attention. Intelligent robot technology can now be found in nearly every aspect of human life.
Intel China Research Institute has been conducting robotics research for more than three years and has achieved encouraging results, from early innovation on the hardware platform, to the release of an adaptive human-robot interaction capability library, and most recently the Robotics 4.0 White Paper published with partners in Nanjing at the end of June this year (download address: http://robotplaces.mikecrm.com/kdQ63Ad).
Song Jiqiang, Director of Intel China Research Institute
At the Robot Hero Conference, Dr. Song Jiqiang, director of Intel China Research Institute, delivered a keynote speech entitled "Towards the Robot 4.0 Era of Cloud-Edge-End Integration", sharing how the institute views the Robot 4.0 era and what technologies Intel can provide for cloud-edge-end integration. Dr. Song came to robotics from an IT background and believes that AI, 5G communications, and new computing models can all contribute greatly to the field's development.
The following content is compiled from the transcript of Dr. Song Jiqiang's speech:
Keyword for the future: Robot 4.0
First, Dr. Song Jiqiang analyzed what Robot 4.0 is. In 2017, Intel, IDC (International Data Corporation), and the China Academy of Information and Communications Technology jointly released the white paper "Robot 3.0: A New Ecology in the Era of Artificial Intelligence", which divided the development of robots into three eras. The robot industry has existed for nearly 90 years, since the 1930s, while true systematization and integration with IT began about 50 years ago. The first stage was mainly about automating robots with electricity and motors, essentially electromechanical work. Robots at that stage had no environmental perception: they were stronger than humans, performed repetitive work more reliably, and never tired, but they had to be physically isolated to ensure safety, because they could not tell whether what was in front of them was a person or a piece of material.
Robots of the 2.0 era added various sensors so they could perceive their surroundings and avoid accidents; we call this stage the digital era. Even today, a large share of the robots across the industry remain at the 1.0 or 2.0 stage.
The third is the intelligent era. Advances in deep learning have made technologies such as vision and speech affordable enough to apply widely, improving the capabilities of robots. Each of these stages has arrived faster than the last: the first took decades, the second more than ten years, and the third may evolve into the next generation within just a few years.
When will that next generation begin? Looking at market trends, total sales of industrial and service robots have grown over the past seven or eight years, but the growth rate of service robots has begun to slow. Volumes are still rising, yet growth is no longer accelerating, essentially because the dividends of this wave of AI have largely been spent: vision and voice capabilities have been explored nearly to their limits. New technical factors are urgently needed so that artificial intelligence can combine with other technologies to generate greater thrust.
Where are the new growth points for artificial intelligence?
Currently, AI can improve robots' personalization and enable some natural interaction, but it is far from perfect: its capabilities are still limited, its confidence levels are often low, and many tasks lack sufficient data for training.
The advantage of 5G is obvious: it can combine the robot's own capabilities with those of the cloud, and even bring human capabilities into the system. Many of us now talk about "humans in the loop". People and robots are not connected directly but through the network and communication. With the help of the cloud, a robot's computing and storage can be expanded, which requires a good network to support it. So among the new development opportunities, 5G can help robots extend their capabilities, enhance human-robot interaction, and truly make "cloud in the loop" and "humans in the loop" work well.
How can the combination of AI and 5G bring leapfrog development to the field of robotics? Let us use a three-stage rocket model to explain.
The first stage is to find key scenarios, whether industrial, commercial, household, or in fields such as elderly care and healthcare, where users genuinely feel that robots reduce costs and increase efficiency, or make possible things that were not possible before, achieving breakthrough applicability.
In these scenarios, AI alone turns out to be insufficient; it must rely on humans to supplement its capabilities. As with autonomous driving, there are levels and a gradual progression: the human's augmentation and supplementation decrease over time as the machine becomes more and more autonomous. In this process, a seamless and efficient way of putting humans in the loop must be built, which is what we call human augmentation.
Once human augmentation plus the machines' own capabilities reach usability, the solution must be scaled up. Scaling up means more robots providing more services at lower marginal cost, rather than repeating the earlier development and design process for every new robot. This requires harnessing the power of the cloud and the network to better integrate robots with edge and cloud computing, reducing a robot's production, usage, and operating costs. Only then can scale, and the greater benefits it brings, be achieved.
Cloud-based intelligent robots
In 2010, Google proposed Cloud Robotics, which connected front-end robots to back-end data centers in industrial and specialized scenarios and moved some of the work to the cloud to solve specific problems.
First, the robot's local perception problem must be solved so that the cloud, combining other sensors and inputs, can see the overall picture. Second, the intelligence, computing, and storage of front-end devices are limited, so cloud computing and cloud storage are needed to augment them. Third, when the robot needs to be operated remotely to do something, remote control must be possible.
A cloud brain connected to the robot body can at least accomplish these tasks, but this model cannot meet diverse needs: the division of work between front end and back end has to be customized, and network latency becomes a serious problem, so it is not suitable for many large-scale applications. What is a better solution?
With 5G, edge computing becomes practical. Previously, the network between the device and the cloud only transmitted data; it was a transmission channel with neither storage nor computing capability. With 5G, the network itself gains these capabilities, which is edge computing: compute can be built into the access network, and it can also be built at the edge of the cloud. 5G allows many combinations of network latency, data transmission rate, and data protection, deployed according to the needs of a specific application. We believe this is a very good model, and as 5G enters commercial deployment and mobile edge computing gains standards and equipment, the robotics field should make full use of this technology.
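To make the idea concrete, here is a minimal sketch of how an application might choose where to place a task, on the robot, on an access-network edge node, or in the cloud, according to its latency budget and compute needs. The site names, latency figures, and the pick_site helper are illustrative assumptions only, not part of any Intel or 5G/MEC API:

```python
from dataclasses import dataclass

@dataclass
class ComputeSite:
    name: str
    round_trip_ms: float   # typical network round-trip to this site
    capacity_tops: float   # rough compute capacity available there

# Illustrative numbers only: on-robot, access-network edge, and cloud sites.
SITES = [
    ComputeSite("robot", 0.0, 2.0),
    ComputeSite("edge", 10.0, 50.0),
    ComputeSite("cloud", 80.0, 1000.0),
]

def pick_site(latency_budget_ms: float, compute_needed_tops: float) -> ComputeSite:
    """Pick the most capable site whose round-trip still fits the latency budget."""
    feasible = [s for s in SITES
                if s.round_trip_ms <= latency_budget_ms
                and s.capacity_tops >= compute_needed_tops]
    if not feasible:
        raise RuntimeError("no site satisfies this task's constraints")
    return max(feasible, key=lambda s: s.capacity_tops)

# A tight motion-control loop stays local; heavy perception can move to the edge.
print(pick_site(latency_budget_ms=5, compute_needed_tops=1).name)    # -> robot
print(pick_site(latency_budget_ms=30, compute_needed_tops=20).name)  # -> edge
```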
Building the robots of the future
Here are two examples, the first from the industrial field. The highest level in industry today is called Industry 4.0, which aims at flexible manufacturing: for example, robotic equipment from multiple manufacturing processes automatically reorganizing into a production line according to the needs of an order. This requires good cooperation between robots. In the past everything was pre-programmed and defined, but in flexible manufacturing the program cannot be fully designed in advance; the delivery location and timing, for instance, may deviate from plan. How can these robots cooperate? First, their perception must be strong enough and real-time enough. Second, the communication between them must carry clear real-time guarantees, preferably within the local network of an edge-computing node, because the cloud cannot guarantee real-time communication. Edge computing is therefore a necessary path to truly realizing the flexible, personalized, and intelligent manufacturing of Industry 4.0.
In the consumer field, robots serve different scenarios and different people, and manufacturers cannot prepare for all of them in advance; they can only equip robots with the ability to learn and understand new scenarios. Robot manufacturers therefore provide capabilities rather than fixed functions. A robot that can learn can build a map of a new environment, construct an understanding of it, and recognize the objects and people it serves; for an unfamiliar object, it can use active interaction and a small amount of data to recognize and learn the new object, forming an adaptive service capability.
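As a rough illustration of this kind of adaptive capability, the sketch below shows how a robot might register a new object from just a few examples gathered through active interaction and later recognize it by embedding similarity. The embed placeholder stands in for any pretrained feature extractor, and the class and threshold are hypothetical, not Intel's actual capability library:

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder feature extractor; in practice a pretrained vision model would go here."""
    v = image.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

class FewShotObjectMemory:
    """Stores a few example embeddings per object name and classifies by cosine similarity."""
    def __init__(self, threshold: float = 0.8):
        self.prototypes: dict[str, np.ndarray] = {}
        self.threshold = threshold

    def learn(self, name: str, examples: list[np.ndarray]) -> None:
        # Average a handful of example embeddings (e.g. views gathered by asking the user)
        # into a single prototype vector for this object.
        vectors = np.stack([embed(x) for x in examples])
        proto = vectors.mean(axis=0)
        self.prototypes[name] = proto / np.linalg.norm(proto)

    def recognize(self, image: np.ndarray) -> str:
        if not self.prototypes:
            return "unknown"
        query = embed(image)
        name, score = max(((n, float(query @ p)) for n, p in self.prototypes.items()),
                          key=lambda kv: kv[1])
        return name if score >= self.threshold else "unknown"

# Usage: after active interaction ("what is this?"), store a few views, then recognize later.
memory = FewShotObjectMemory()
cup_views = [np.random.rand(32, 32, 3) for _ in range(3)]
memory.learn("cup", cup_views)
print(memory.recognize(cup_views[0]))  # likely "cup"
```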
As robots gradually enter people's daily lives, much of the data they collect is privacy-sensitive. First, there is no need to upload large amounts of data to the cloud; second, uploading it would raise privacy issues. Such data should be kept isolated on the edge server, with the user authorizing the robot to learn from and use it. This is also why edge computing is so important for commercial and household robots.