
How to build an autonomous robot using the DonkeyCar platform

Source: Internet    Publisher: 走马观花    Keywords: robot    Updated: 2024/12/10

    Inspired by the work of Plantvillage.psu.edu and iita.org, we wanted to build an autonomous robot using the DonkeyCar platform.

    The robot can move around a farm environment without damaging existing plants or soil, and it uses object detection to find diseased crops and mark them in an environmentally friendly way. Traditionally, large farms have to be checked by hand, with workers using their phones to mark diseased crops, which takes a lot of time and energy. In addition, the phones being used do not necessarily have all the features needed to do the job efficiently, or the work has to wait for someone with the right equipment. A unified robotic platform that operates around the farm solves these problems and makes marking much faster. That speed also makes it easier to share the platform between multiple farms.

    Challenges:

    Keep the size/weight of the robot small enough so as not to damage the crop itself.

    Navigate without damaging existing crops.

    Find a way to safely mark diseased crops.

    Find a dataset and farm to test the platform.

    Background:

    In addition to finding common ground on approach, technology, and timing, we also had to set up the framework of the project itself: meeting schedules, repositories, meeting tools, and so on. Essentially, all the components of a professional project had to be in place, except without the pay. We had no budget, but that was not a problem because we shared a common vision and the will to execute. We all had a great time, and it was an amazing learning experience.

    Build the bot:

    Work on the chassis, autonomous navigation, and image classification began immediately and progressed well. However, we encountered significant unexpected challenges and delays related to the chassis and drive systems.

    In short, we didn't anticipate how varied the terrain would be between test greenhouses, and motors, wheels, wiring, controls, etc. that performed well in scenario A were overwhelmed in scenario B. Through a multitude of modifications we eventually dialed in a chassis that worked in all of our environments. This put a strain on our time and budget, but the final product exceeded our initial minimum viable configuration goals. The final design at the time of submission is described below.

    Camera pole:

    To be able to watch the raised plant beds, and to allow a possible upgrade to a movable camera that can view the tops and bottoms of the tomato plants, we built a camera pole from a carbon fiber pole found at a garage sale. The pole holds two 3D-printed clamps for the navigation and classification cameras. We also added 1.2 V solar lights to the pole, as well as a 12 V multi-color status light at the top. Yes, that is a repurposed pill container painted black on top of the pole. The end result is pretty cool!

    The cameras are Raspberry Pi cameras connected to two different Pis powered by a USB charger. The reason for using two Pis is that both classification and navigation use neural networks that require a lot of processing power. Also, the classification camera must point at the plants, while the navigation camera must point forward.

    There must also be a light on top of the pole as an indicator. When looking for a bright enough RGB light, we found that they cost upwards of $100, so we made our own using the light from a speaker, with a small plastic bag for reflection, enclosed in an empty medicine bottle. Since the light requires 12 V and the Arduino outputs 5 V, we drive it through a relay. The connection requires a common ground with the Arduino plus three signal wires, which we put on pins 7, 8 and 11 of the Arduino. We can step through the RGB spectrum of this light by using the analogWrite function to give different values to the three wires. Note that for correct coloring all three pins need to be written each time, otherwise the value previously written to any one pin may produce unexpected results.
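
    The write-up does not include the Arduino code for the status light. A minimal sketch of the idea, using the pins mentioned above (7, 8 and 11), an illustrative setStatusColor() helper of our own, and example colors, might look like the following. Note that analogWrite needs PWM-capable pins and the exact Arduino board is not stated.

    // Minimal sketch for driving the 12 V status light (powered through a relay board).
    // Pins 7, 8 and 11 follow the wiring described above; the setStatusColor()
    // helper and the example color values are illustrative assumptions.

    const int PIN_R = 7;   // first color channel
    const int PIN_G = 8;   // second color channel
    const int PIN_B = 11;  // third color channel

    void setStatusColor(int r, int g, int b) {
      // Write all three channels every time; leaving one channel at its old
      // value can produce an unexpected mixed color.
      analogWrite(PIN_R, r);
      analogWrite(PIN_G, g);
      analogWrite(PIN_B, b);
    }

    void setup() {
      pinMode(PIN_R, OUTPUT);
      pinMode(PIN_G, OUTPUT);
      pinMode(PIN_B, OUTPUT);
    }

    void loop() {
      setStatusColor(0, 255, 0);   // e.g. green = "navigating normally"
      delay(1000);
      setStatusColor(255, 0, 0);   // e.g. red = "diseased plant detected"
      delay(1000);
    }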

    Chassis:

    Our experiments with plastic chassis, both wheeled and tracked, at StoneCoop and GrowingHope farms proved unsuccessful: both options would dig into the sandy soil that the plants favor. We stripped a lot of the plastic gears on one of the makeshift chassis versions before upgrading to metal gears and components that could handle higher currents.

    We ultimately chose the MountainArk SR13 chassis for its powerful motors and large wheels, and assembled it following the kit's instructions.

    We modified the MountainArk by adding a platform to separate the computing hardware from the power supply, and gave the Farmaid a touch of style with a custom-painted lightweight shell and a unique logo.

    After assembling the chassis, we needed to power the motors. Although the chassis comes with a battery box, we decided to use a 12 V lithium polymer battery because we already had one available and had used it with the old chassis. The motors are connected to the battery through a junction box to handle the higher current draw.

    We originally used a stock L298 motor controller that we had on hand, but found that its current rating was too low for the 320 RPM motors we now had. So we switched to IBT-2 motor controllers donated by another makerspace member. The catch with the IBT-2 is that it can only control one motor, so we had to wire up four of them.

    The details of the IBT-2 can be seen here: http://www.hessmer.org/blog/2013/12/28/ibt-2-h-bridge-with-arduino/ . To save wiring, we spliced the PWM lines on each side: the L_PWM headers of the two left controllers are connected together, as are their R_PWM headers, and the same is done for the two right controllers.

    Another space-saving technique we used was to connect all of the motor controllers' enable pins directly to the 5 V from the Arduino.

    After this, the only motor-related pins we needed to connect to the Arduino are the PWM pins. On the left side, we connect the R_PWM of the left motors to pin 6 on the Arduino and the L_PWM to pin 5. Because the R_PWM pins of the two left controllers are spliced together, as are their L_PWM pins, a forward command to one will move both left wheels forward, and a reverse command will reverse them both. The same splicing is done on the right side, where R_PWM is connected to pin 9 of the Arduino and L_PWM to pin 10.

    For collision detection, we first tried a Garmin LiDAR that one of our group members had, but we had a hard time getting it to work, so we decided to use an SR04 ultrasonic sensor.
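
    A minimal sketch of how the spliced PWM pairs can be driven from the Arduino, using the pin numbers described above (5 and 6 for the left side, 9 and 10 for the right). The driveSide() helper, the example speeds, and the assumption that R_PWM corresponds to "forward" on both sides are our own illustration.

    // Skeleton tank-drive helpers for the spliced IBT-2 PWM pairs.
    // Pin numbers follow the wiring described above; the function name,
    // speeds, and forward/reverse orientation are illustrative assumptions.

    const int LEFT_RPWM  = 6;   // spliced R_PWM of both left controllers
    const int LEFT_LPWM  = 5;   // spliced L_PWM of both left controllers
    const int RIGHT_RPWM = 9;   // spliced R_PWM of both right controllers
    const int RIGHT_LPWM = 10;  // spliced L_PWM of both right controllers

    // Drive one side: positive speed = forward, negative = reverse (range -255..255).
    void driveSide(int rpwmPin, int lpwmPin, int speed) {
      if (speed >= 0) {
        analogWrite(lpwmPin, 0);
        analogWrite(rpwmPin, speed);
      } else {
        analogWrite(rpwmPin, 0);
        analogWrite(lpwmPin, -speed);
      }
    }

    void setup() {
      pinMode(LEFT_RPWM, OUTPUT);
      pinMode(LEFT_LPWM, OUTPUT);
      pinMode(RIGHT_RPWM, OUTPUT);
      pinMode(RIGHT_LPWM, OUTPUT);
      // The IBT-2 enable pins are tied directly to the Arduino's 5 V rail,
      // so there is nothing to enable in software.
    }

    void loop() {
      driveSide(LEFT_RPWM, LEFT_LPWM, 150);    // both left wheels forward
      driveSide(RIGHT_RPWM, RIGHT_LPWM, 150);  // both right wheels forward
      delay(2000);
      driveSide(LEFT_RPWM, LEFT_LPWM, -150);   // pivot in place
      driveSide(RIGHT_RPWM, RIGHT_LPWM, 150);
      delay(1000);
    }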

    We also added another sensor later on, but because of the way its timer interrupt is used, we cannot use it while manually controlling the robot. Note that we also wrote another Arduino routine that uses only the sensor to move the robot between obstacles, but this does not fit the behavioral cloning approach. A sketch of that routine is shown below.
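
    The sensor-only routine is not included in the write-up. A self-contained sketch of the idea, with assumed trigger/echo pins (2 and 3) and an assumed 40 cm threshold, reusing the same PWM pins as above, might look like this:

    // Sketch of a sensor-only obstacle-avoidance loop: read the SR04 with
    // pulseIn() and pivot away when something is closer than a threshold.
    // Trigger/echo pins and the threshold are illustrative assumptions.

    const int LEFT_RPWM  = 6;
    const int LEFT_LPWM  = 5;
    const int RIGHT_RPWM = 9;
    const int RIGHT_LPWM = 10;
    const int TRIG_PIN   = 2;   // assumed SR04 trigger pin
    const int ECHO_PIN   = 3;   // assumed SR04 echo pin

    long readDistanceCm() {
      digitalWrite(TRIG_PIN, LOW);
      delayMicroseconds(2);
      digitalWrite(TRIG_PIN, HIGH);   // 10 us pulse starts a measurement
      delayMicroseconds(10);
      digitalWrite(TRIG_PIN, LOW);
      long duration = pulseIn(ECHO_PIN, HIGH, 30000UL);  // echo time in microseconds
      return duration / 58;           // convert to centimetres (0 on timeout)
    }

    void setSide(int rpwmPin, int lpwmPin, int speed) {   // speed: -255..255
      analogWrite(rpwmPin, speed > 0 ?  speed : 0);
      analogWrite(lpwmPin, speed < 0 ? -speed : 0);
    }

    void setup() {
      pinMode(LEFT_RPWM, OUTPUT);  pinMode(LEFT_LPWM, OUTPUT);
      pinMode(RIGHT_RPWM, OUTPUT); pinMode(RIGHT_LPWM, OUTPUT);
      pinMode(TRIG_PIN, OUTPUT);   pinMode(ECHO_PIN, INPUT);
    }

    void loop() {
      long d = readDistanceCm();
      if (d > 0 && d < 40) {
        setSide(LEFT_RPWM, LEFT_LPWM, -120);   // obstacle ahead: pivot away
        setSide(RIGHT_RPWM, RIGHT_LPWM, 120);
      } else {
        setSide(LEFT_RPWM, LEFT_LPWM, 150);    // clear path: drive straight
        setSide(RIGHT_RPWM, RIGHT_LPWM, 150);
      }
      delay(100);
    }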

    Driving:

    Since we couldn't use a standard DonkeyCar-style chassis, as it would not be able to drive in our environment, we had to write our own driving code. For this we drew on two inspirations: DonkeyCar's own approach and a series of videos by the YouTuber Sentdex. The driving model is based on DonkeyCar, except that instead of regression with a mean-squared-error loss, we use classification over 7 button commands based on the camera images. We also converted it to a fully convolutional neural network to make it faster and consistent with newer research. In testing we found that the model constantly output a pressed button, unlike in training, where a key was pressed only at intervals. To fix this we later added some code to the Arduino script to output the time elapsed between button presses.
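
    That Arduino code is not shown in the write-up. A minimal fragment of the idea might look like the following, assuming drive commands arrive over the serial port as single characters; the command format and baud rate are our assumptions.

    // Illustrative fragment for logging the time between received drive commands,
    // assuming one character per command arrives over the serial port.

    unsigned long lastCommandMs = 0;

    void setup() {
      Serial.begin(115200);
    }

    void loop() {
      if (Serial.available() > 0) {
        char command = Serial.read();
        unsigned long now = millis();
        unsigned long elapsed = now - lastCommandMs;  // time since previous press
        lastCommandMs = now;

        // Echo the command together with the elapsed time so it can be recorded
        // alongside the camera frames during training.
        Serial.print(command);
        Serial.print(',');
        Serial.println(elapsed);

        // ...apply the command to the motors here...
      }
    }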

    Disease classification:

    For classification, we used the MobileNet-SSD model because of its relatively small size and the fact that there was already a method for deploying it in an Android app. We acquired the data by recording 5-10 second videos and wrote a script to extract images from them; the videos themselves were placed in folders named after the disease and the plant. We made sure to take these videos under different conditions and at different locations. The entire training dataset consisted of about 2000 images. We also made a website to showcase the output of the classification, as well as an overall map of the greenhouse and the health of its plants. The website uses XML data to create this grid.
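
    The frame-extraction script itself is not shown, and its language is not stated. A minimal sketch of the same idea in C++ with OpenCV, where the command-line interface, file naming, and the every-Nth-frame rate are our assumptions, could look like this:

    #include <opencv2/opencv.hpp>
    #include <cstdio>
    #include <cstdlib>
    #include <string>

    int main(int argc, char** argv) {
        if (argc < 3) {
            std::printf("usage: extract_frames <video> <output_prefix> [every_n]\n");
            return 1;
        }
        const std::string videoPath = argv[1];
        const std::string prefix    = argv[2];  // e.g. a folder named after plant and disease
        const int everyN = (argc > 3) ? std::atoi(argv[3]) : 5;

        cv::VideoCapture cap(videoPath);
        if (!cap.isOpened()) {
            std::printf("could not open %s\n", videoPath.c_str());
            return 1;
        }

        cv::Mat frame;
        int index = 0, saved = 0;
        while (cap.read(frame)) {
            // Keep one frame out of every N so a 5-10 s clip yields a handful
            // of training images instead of hundreds of near-duplicates.
            if (index % everyN == 0) {
                std::string name = prefix + "_" + std::to_string(saved++) + ".jpg";
                cv::imwrite(name, frame);
            }
            ++index;
        }
        std::printf("saved %d frames from %s\n", saved, videoPath.c_str());
        return 0;
    }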
