Focus on Vision Guided Robotics (VGR)

Publisher: 睿智之光 | Last updated: 2021-08-16 | Source: eefocus | Keywords: Robot

We’ve all seen videos of robots assembling cars at a rapid pace with little human intervention. Industrial robots like these have reduced costs and increased productivity in nearly every area of manufacturing, but they have one major drawback—they can’t “see.” Programmed to repeat the exact same movements over and over again, they can’t detect and manipulate objects that vary in shape, size, and color, or that are touching and stacked on top of each other. Therefore, if the product changes or a new product is added to the line, the robot must be reprogrammed. If product components are delivered to the line via traditional hoppers and vibrating tables, the vibratory feeders must be modified.

 

Dealing with chaos

Now, a new generation of robots guided by advanced machine vision is enabling robots to do far more than the repetitive tasks common in mass production. Driven by smaller, more powerful, and cheaper cameras and other vision sensors, increasingly sophisticated robotic algorithms, and processors with machine vision-specific hardware accelerators, these vision-guided robotics (VGR) systems are rapidly transforming manufacturing and fulfillment processes.

 

VGR makes robots more adaptable and easier to implement in industries with frequent new product introductions and short production cycles, including medical device and pharmaceutical manufacturing, food packaging, agricultural applications, life sciences, and more.[1]

 

For example, a leading global automaker that operates a large plant in China uses Teledyne DALSA’s GEVA 1000 vision system to ensure that robots on two assembly lines firmly grasp parts and place them on fast-moving conveyors. In the past, parts were lifted and placed by hand; automation has increased productivity by about six times. Systems like this suit environments where clutter cannot be avoided or is too costly to eliminate, or where line speeds are too fast for workers. Advanced systems can even handle the most challenging VGR application of all: picking randomly distributed objects of different sizes, shapes, and weights out of bins in factories and distribution centers (such as Amazon’s large network of automated fulfillment centers).

 

Random bin picking

Picking parts at random from a bin is extremely challenging because the VGR system must locate and grab specific parts in a cluttered environment. As the robot takes parts from the bin, the remaining parts may shift position and change orientation. The system must identify the correct objects, determine the order in which to pick them, and calculate how to grab, lift, and place them without colliding with other objects or the bin walls. This requires high-performance machine vision hardware, sophisticated software, and enough computing power to process large amounts of visual data in real time.
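The selection step described above is often a scoring heuristic: prefer parts that lie high in the bin, are not covered by others, and have a reachable gripper pose. The sketch below is illustrative only — the `Candidate` fields and the 0.3 occlusion cutoff are assumptions, not any vendor’s actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    part_id: int
    z_top: float      # height of the part's topmost surface (m)
    occlusion: float  # fraction of the part covered by others (0..1)
    reachable: bool   # a collision-free gripper pose was found

def next_pick(candidates):
    """Pick-ordering heuristic: prefer exposed, high-lying,
    reachable parts, as in random bin picking."""
    viable = [c for c in candidates if c.reachable and c.occlusion < 0.3]
    if not viable:
        return None  # nothing pickable: re-image or agitate the bin
    return max(viable, key=lambda c: c.z_top - c.occlusion)

picks = [Candidate(1, 0.12, 0.0, True),
         Candidate(2, 0.15, 0.5, True),   # high but half-buried
         Candidate(3, 0.10, 0.1, True)]
print(next_pick(picks).part_id)  # part 1: exposed and near the top
```

In a real cell this loop repeats after every pick, since removing one part disturbs the rest — which is why the article stresses real-time processing.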


Machine vision hardware ranges from compact smart cameras with integrated vision processors (such as Teledyne DALSA’s BOA Spot) to sophisticated laser and infrared sensors and high-resolution, high-speed cameras.

 

What about 3D vision?

VGR systems typically use more than one type of sensor to build a 3D image. For example, a robot with a 3D area sensor can locate and grab randomly placed parts in a bin. A 2D camera then instantly detects the orientation of each part so that the robot can place them correctly on a conveyor.
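For the 2D orientation step, a standard machine-vision technique is to recover a part’s principal-axis angle from the second-order central moments of its segmented blob. A minimal pure-Python sketch (a production system would use an optimized vision library):

```python
import math

def orientation_deg(points):
    """Principal-axis angle of a blob from its second-order central
    moments -- how a 2D camera can recover part orientation."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    mu20 = sum((x - cx) ** 2 for x, _ in points) / n
    mu02 = sum((y - cy) ** 2 for _, y in points) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in points) / n
    return math.degrees(0.5 * math.atan2(2 * mu11, mu20 - mu02))

# An elongated part lying diagonally in the image:
blob = [(i, i) for i in range(10)]
print(round(orientation_deg(blob)))  # 45
```

The robot uses this angle to rotate its wrist before placing the part on the conveyor in the correct orientation.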

 

Some VGR systems combine 3D time-of-flight (ToF) scanning with snapshot 3D image capture, achieving resolutions that handle a wider range of objects than a scanning system alone, without the camera motion that traditional snapshot systems require. ToF scanning determines depth by measuring the time light from a laser takes to travel between the camera and the surface of an object, and has the advantage of working in any lighting conditions.
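The ToF relationship itself is simple geometry: the pulse travels out and back, so the measured round-trip time corresponds to twice the depth. A minimal sketch:

```python
C = 299_792_458.0  # speed of light (m/s)

def tof_depth_m(round_trip_ns):
    """Depth from time of flight: the light pulse travels to the
    surface and back, so distance is c * round-trip time / 2."""
    return C * round_trip_ns * 1e-9 / 2

# A return pulse arriving ~6.67 ns after emission is ~1 m away:
print(round(tof_depth_m(6.67), 3))
```

The nanosecond timescale is why ToF sensors need dedicated timing hardware rather than general-purpose processors.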

 


Structured light 3D systems, such as Microsoft's Kinect sensor for video games, project an invisible infrared light pattern onto an object and then generate a 3D depth image by detecting distortions of that light pattern using a 2D camera. This process can be used for 3D mapping of multiple objects in a picking bin.
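The depth recovery behind structured light (and stereo) is triangulation: the pattern’s apparent shift, or disparity, between the projector’s and camera’s viewpoints is inversely proportional to depth. The focal-length and baseline figures below are illustrative assumptions:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulation: depth z = f * b / d, where f is the focal
    length in pixels, b the projector-camera baseline in meters,
    and d the observed pattern shift in pixels."""
    return focal_px * baseline_m / disparity_px

# e.g. a 600 px focal length, 7.5 cm baseline, 45 px shift:
print(round(depth_from_disparity(600.0, 0.075, 45.0), 3))  # 1.0 (m)
```

Computing this per pixel over the whole pattern yields the 3D depth image used to map the contents of a picking bin.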

 

Powerful hardware and algorithms

These advanced vision systems are able to process large amounts of data using hardware accelerators such as FPGA processors and application-specific integrated circuits (ASICs). This enables them to process thousands of SKUs on production lines and in order fulfillment applications.

 

A key component of advanced VGR systems is the algorithms that prevent the robot and its end-of-arm gripper from colliding with the sides of the bin or other objects. This interference avoidance software must be very powerful because a different path needs to be planned each time an item is removed from a bin, and parts are often stacked and difficult to distinguish.
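At its core, such interference checking repeatedly tests candidate gripper poses against obstacle geometry along the planned path; the cheapest primitive is an axis-aligned bounding-box overlap test. The box dimensions below are illustrative:

```python
def boxes_overlap(a, b):
    """Axis-aligned bounding-box overlap test: two boxes intersect
    only if their extents overlap on every axis."""
    return all(a_min < b_max and b_min < a_max
               for (a_min, a_max), (b_min, b_max) in zip(a, b))

gripper  = [(0.00, 0.10), (0.0, 0.1), (0.0, 0.2)]  # (min, max) per axis, m
bin_wall = [(0.09, 0.12), (-0.5, 0.5), (0.0, 0.3)]
print(boxes_overlap(gripper, bin_wall))  # True -> reject this pose
```

A real planner runs thousands of such checks (against finer geometry than boxes) for every pick, which is part of the computing load the article attributes to these systems.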

 

Looking ahead

The growing availability of VGR software, including the robot- and sensor-agnostic open-source Robot Operating System (ROS), will enable robotics integrators to more quickly and easily provision VGR systems and introduce new, more powerful sensors as they become available.

 

At the same time, machine vision and robotics vendors are working closely together to make VGR easier to use. For example, machine vision vendors have developed tools that make it easier for engineers to model and optimize sensors for robotic cells. They are also developing Windows-based VGR systems that are easy for end customers to use.

 

As a result of these innovations, nearly 50% of robots in consumer electronics (above board level) and other light assembly in Asia now use VGR. As random picking technology quickly becomes a flexible, well-understood, and interchangeable commodity, it can be adopted by small and medium-sized companies that want to reduce human intervention and improve safety, quality, and productivity.

