Rapidly build mobile robot prototypes through a graphical development platform


In a broad sense, robots mainly include three categories: mobile robots, manipulators, and educational robots. Manipulators and educational robots already have relatively mature industry solutions, while mobile robots are complex in structure and flexible in application. Currently, they are not commercialized to a high degree and are mainly in the stage of cutting-edge research. They have always been the focus of scientists and engineers. This article will mainly discuss the rapid prototyping and development of mobile robots.

Introduction to Mobile Robots

Mobile robots have a wide range of applications, as shown in Figure 1: from military and aerospace systems such as unmanned aerial vehicles (UAVs), unmanned underwater vehicles (UUVs), and unmanned ground vehicles (UGVs), to industrial and agricultural equipment such as harvesting robots and intelligent farming machinery, to household service robots. Because application fields and operating environments vary, robots need a corresponding degree of autonomy, which in turn poses different technical challenges for their development. Fully autonomous robots typically involve key technologies such as control systems, self-localization, real-time vision, and multi-sensor fusion, while teleoperated robots focus more on bidirectional force-feedback control, virtual environment modeling, and force-perception interfaces.

Figure 1 Application areas of mobile robots


Although robots are classified in many ways and rely on different key technologies depending on the application scenario, they share common structures and components: a robot is a comprehensive system that integrates many electromechanical subsystems working in coordination. Because mobile robots are structurally complex and flexibly applied, and even though off-the-shelf software and hardware solutions exist for some subsystems, rapidly integrating those subsystems and verifying overall functionality early in the design has become a key link that determines the success or failure of a robot design.

Frontier Methods in Robotics Design: Graphical System Design
Competitions such as the Google Lunar X PRIZE, FIRST (For Inspiration and Recognition of Science and Technology), RoboCup, and the Defense Advanced Research Projects Agency (DARPA) challenges have driven innovation in robotics, and innovative developers have pushed the field toward a frontier method: graphical system design. With the LabVIEW graphical programming platform, robotics experts can rapidly prototype complex robotic systems without worrying about low-level implementation details, concentrating instead on the engineering problem at hand.


Robot design usually involves the following areas of work, as shown in Figure 2.

Figure 2 Robot design platform


● Sensor connection: connect to gyroscopes, CCD cameras, photoelectric sensors, ultrasonic sensors, and other devices to acquire and process information.

● Control design and simulation: design the robot's control algorithms based on the working environment and application requirements.

● Embedded control: the embedded control system is the "brain" of the robot, making control decisions based on its algorithms to carry out tasks such as coordination, information processing, and motion planning.

● Motion control (actuators): according to specific work instructions, carry out the robot's servo control and motion execution through the drive controller, encoders, and motors (a minimal control-loop sketch follows this list).

● Network communication and control: the communication network between the robot's subsystems carries out distributed control and real-time control.
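The motion-control task above is the most self-contained of the five. As a minimal sketch only, assuming hypothetical hardware-access callbacks (read_encoder_velocity, command_motor) rather than any NI API, the following Python loop shows the kind of PID velocity servo that typically runs on the drive controller:

```python
# Minimal PID wheel-velocity servo loop (illustrative sketch only; on real
# hardware this logic would run on the drive controller or an FPGA target).
import time

class PID:
    def __init__(self, kp, ki, kd, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.out_min, min(self.out_max, out))  # clamp to actuator range

def servo_loop(read_encoder_velocity, command_motor, target_rad_s, dt=0.01):
    """Run a 100 Hz velocity servo; the two callbacks are hypothetical
    hardware-access functions supplied by the platform."""
    pid = PID(kp=0.8, ki=2.0, kd=0.01)
    while True:
        measured = read_encoder_velocity()   # wheel speed in rad/s from the encoder
        effort = pid.update(target_rad_s, measured, dt)
        command_motor(effort)                # normalized motor command in [-1, 1]
        time.sleep(dt)
```

In practice such a loop runs at a fixed rate on an FPGA or real-time target, which is exactly the deployment path the graphical platform's code generation is meant to cover.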


In the past, because each field demands deep specialist knowledge, teams of mechanical engineers, electrical engineers, and programmers drove robotics development independently, each using their own traditional tools. LabVIEW and NI hardware provide a single, versatile platform that unifies robotics development with a standard set of tools usable by all robot designers.


With LabVIEW, designers can develop advanced robots without becoming computer experts or professional programmers. For example, a student with limited LabVIEW and machine vision experience needed only a few hours to design an algorithm that let a robot track a red ball using an IEEE 1394 camera and the NI Vision Development Module. Engineers can use LabVIEW and NI hardware to rapidly design and prototype complex algorithms in a powerful graphical programming language; deploy control algorithms to PCs, FPGAs, microcontrollers, or real-time systems through code generation; and connect to almost any sensor or actuator. The LabVIEW and NI hardware platform also supports interfaces such as CAN, Ethernet, serial, and USB, making it easy to build the communication networks of a robot system. Domain experts, not just specialist engineers, can now become robot designers.
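The red-ball anecdote maps onto only a few lines of image processing. The sketch below reproduces the idea in Python with OpenCV rather than the NI Vision Development Module the student used, purely to show the scale of the task; the camera index and HSV thresholds are assumptions:

```python
# Track a red ball by HSV color thresholding (OpenCV sketch; the original
# project used an IEEE 1394 camera with the NI Vision Development Module).
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # assumed camera index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two ranges (thresholds assumed).
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    m = cv2.moments(mask)
    if m["m00"] > 0:  # mark the centroid of the red region, if any
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        cv2.circle(frame, (cx, cy), 8, (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```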

Example Analysis
This example shows how Virginia Tech used NI LabVIEW to design a fully autonomous ground vehicle to participate in the DARPA Urban Challenge.


The DARPA Urban Challenge requires a fully autonomous ground vehicle that can navigate an urban environment on its own. Over the course of the race, the vehicle must travel 60 miles in 6 hours, passing through roads, intersections, and parking lots under a variety of traffic conditions. At the start of the competition, participants are given a road network map and a mission file specifying checkpoints that must be visited in a given order.


To reach the checkpoints as quickly as possible, the vehicle must weigh the speed limits of candidate roads, possible congestion, and other traffic conditions. While driving, it must obey traffic rules, drive safely and yield at intersections, handle interactions with other vehicles properly, and avoid both static and dynamic obstacles at speeds of up to 30 miles per hour.


The Virginia Tech team had 12 months to develop a fully autonomous ground vehicle, and they divided the work into four main parts: the basic platform, the perception system, decision planning, and the communication architecture.


Each part was developed on NI's hardware and software platform: NI hardware interfaces with the existing vehicle systems and provides an operator interface, while the LabVIEW graphical programming environment was used to develop the system software, including the communication architecture, sensor processing and object recognition algorithms, laser rangefinder- and vision-based road detection, driving behavior control, and the low-level vehicle interface.


1 Basic Platform
Virginia Tech's Odin, shown in Figure 3, is a 2005 Ford Escape hybrid SUV modified for autonomous driving. An NI CompactRIO system interacts with the Escape's control systems, commanding the throttle, steering, and brakes by wire. Students developed the path-curvature and speed control system with the LabVIEW Control Design and Simulation Module and implemented it on the CompactRIO hardware using the LabVIEW Real-Time and FPGA Modules, establishing a self-contained vehicle control platform. They also used the LabVIEW Touch Panel Module and an NI TPC-2006 touch screen to build a user interface mounted on the console.

Figure 3 Virginia Tech's Odin
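As a rough illustration of what a path-curvature controller computes (a sketch under assumed parameters, not the team's code), the snippet below converts a commanded path curvature into a front-wheel steering angle using the standard kinematic bicycle model; the wheelbase and steering limit are assumed values:

```python
# Map a commanded path curvature (1/m) to a steering angle using the
# kinematic bicycle model: tan(delta) = wheelbase * curvature.
# Illustrative sketch only; wheelbase and limit are assumed values.
import math

WHEELBASE_M = 2.62     # assumed wheelbase for an SUV-class vehicle
MAX_STEER_RAD = 0.55   # assumed mechanical steering limit

def curvature_to_steering(kappa):
    delta = math.atan(WHEELBASE_M * kappa)
    return max(-MAX_STEER_RAD, min(MAX_STEER_RAD, delta))

# Example: a left turn of 20 m radius (curvature = 1/20).
print(curvature_to_steering(1.0 / 20.0))  # ~0.13 rad of front-wheel angle
```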


2 Perception System
To satisfy the rules of the Urban Challenge, Odin must localize itself, detect the surrounding road conditions and available driving lanes, identify all obstacles in its path, and properly classify obstructing vehicles. To meet these requirements, Odin carries a variety of sensors: three four-plane laser rangefinders (LRFs) mounted on the bumpers, four LRFs and two computer-vision cameras on the roof rack, and a high-precision GPS/inertial measurement unit (GPS/IMU).


For each type of perception requirement, multiple sensors are used for maximum fidelity and reliability. To keep sensor fusion flexible, the planning software never consumes raw sensor data; instead it works from sensor-independent perception information generated by task-specific components. For example, the localization component uses the LabVIEW Kalman filter to track the vehicle's position and orientation; the road detection component uses the NI Vision Development Module to combine camera and LRF data to determine the road surface and the lanes of nearby road segments; and the object classification component processes LRF data in LabVIEW to detect and classify obstacles, then predict the paths and next actions of dynamic obstacles and other vehicles.
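To make the localization step concrete, here is a minimal one-dimensional constant-velocity Kalman filter in Python. The team used LabVIEW's built-in Kalman filter over GPS/IMU data; this sketch shows only the predict/update structure, and every noise parameter in it is an assumption:

```python
# Minimal 1-D constant-velocity Kalman filter showing the predict/update
# cycle the localization component applies to GPS/IMU data.
# Illustrative sketch only; all noise parameters below are assumed values.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition for [position, velocity]
H = np.array([[1.0, 0.0]])             # we observe position only (e.g. GPS fix)
Q = np.diag([0.01, 0.1])               # assumed process noise
R = np.array([[1.0]])                  # assumed measurement noise

x = np.array([[0.0], [0.0]])           # initial state estimate
P = np.eye(2)                          # initial covariance

def kalman_step(z):
    """One predict/update cycle given a position measurement z (meters)."""
    global x, P
    x = F @ x                          # predict the state forward one step
    P = F @ P @ F.T + Q
    y = np.array([[z]]) - H @ x        # innovation (measurement residual)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y                      # correct the estimate
    P = (np.eye(2) - K @ H) @ P
    return x.ravel()                   # [estimated position, estimated velocity]

for z in [0.9, 2.1, 2.9, 4.2]:         # noisy fixes of a vehicle near 10 m/s
    print(kalman_step(z))
```

Odin's actual filter would track position and heading in more dimensions, but the predict/correct cycle is the same.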

Figure 4 Odin system composition framework


3 Decision Planning
The route planning component uses the A* search algorithm to determine which road segments Odin should traverse to visit all checkpoints. The driving behavior component uses a behavior-based LabVIEW state machine to obey traffic rules and guide the vehicle along the planned route. The motion planning component performs an iterative trajectory search to avoid obstacles while keeping the vehicle on the ideal trajectory. Finally, the decision system passes the motion sequence to the vehicle control interface, which converts it into drive control signals.
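The A* search itself is a textbook graph algorithm. A compact generic version in Python looks like the following; the toy road network and the zero heuristic (which reduces A* to Dijkstra's algorithm) are illustrative assumptions, with edge weights standing in for distance or expected travel time:

```python
# Generic A* search over a weighted graph, as used for route planning
# (textbook sketch; the graph and heuristic below are toy assumptions).
import heapq

def a_star(graph, start, goal, heuristic):
    """graph: {node: [(neighbor, edge_cost), ...]}; returns a node path or None."""
    open_set = [(heuristic(start), 0.0, start, [start])]
    best_g = {start: 0.0}              # cheapest known cost to reach each node
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(open_set, (ng + heuristic(nbr), ng, nbr, path + [nbr]))
    return None

# Toy road network: weights could encode distance or expected travel time.
roads = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)]}
print(a_star(roads, "A", "D", heuristic=lambda n: 0))  # ['A', 'B', 'C', 'D']
```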


4 Communication Architecture
The entire communication framework was developed in LabVIEW and implements the Society of Automotive Engineers (SAE) AS-4 Joint Architecture for Unmanned Systems (JAUS) protocol. Each software module is a JAUS component, and all interaction between modules goes through the LabVIEW framework. Each module can run asynchronously as an independent component under Windows or Linux. The complete communication architecture spans multiple programming languages; thanks to LabVIEW's openness, LabVIEW modules can easily be called from, or interfaced with, other programming environments.
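The architectural pattern described here, independent modules exchanging messages asynchronously, can be sketched generically as follows. This is not an implementation of the JAUS protocol, only an illustration of the idea using Python threads and queues, with invented component names:

```python
# Illustration of the asynchronous message-passing pattern described above.
# NOT an implementation of SAE AS-4 JAUS; component names are invented.
import queue
import threading
import time

class Component:
    """An independently running software module with its own message inbox."""
    def __init__(self, name, bus):
        self.name, self.bus = name, bus
        self.inbox = queue.Queue()
        bus[name] = self.inbox
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, dest, message):
        self.bus[dest].put((self.name, message))  # asynchronous, non-blocking

    def _run(self):
        while True:
            sender, message = self.inbox.get()
            print(f"{self.name} received {message!r} from {sender}")

bus = {}                                          # maps component name -> inbox
perception = Component("perception", bus)
planner = Component("planner", bus)
perception.send("planner", {"obstacle_at": (12.0, 3.5)})
time.sleep(0.1)                                   # let the planner thread print
```

Because each component owns its inbox and runs in its own thread, no module ever blocks another, which is the property that lets the real modules run asynchronously across operating systems.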


5 Advantages of Using LabVIEW
The LabVIEW platform provides an intuitive, easy-to-use debugging environment that allowed the development team to watch the running code in real time, making hardware-in-the-loop debugging convenient. The LabVIEW development environment also let the team build system prototypes quickly, shortening the design cycle. In addition, LabVIEW's seamless connection to hardware proved essential for key operations such as sensor processing and vehicle control. Given the complexity of the Urban Challenge and the short development time, these factors played a key role in the team's success.

Summary
Graphical system design is essential to continued acceleration of innovation in robotics; complex traditional tools can hinder progress. LabVIEW provides a comprehensive, scalable platform spanning the design, prototyping, and deployment phases, so engineers can spend less time on minor implementation details and more on the robot itself. On one platform they can program controllers ranging from microcontrollers to FPGAs; send and receive signals from nearly any sensor and actuator; design and simulate dynamic control systems; and implement interfaces to remotely monitor or control robots. By providing a unified platform for all robot designers, the LabVIEW graphical system design platform encourages more sophisticated robot designs.
