
Design of a tomato sorting machine based on Raspberry Pi

Source: Internet | Publisher: 小胖友 | Keywords: Sorting Machine | Updated: 2024/12/17

Manual sorting is one of the most laborious and time-consuming tasks. Whether it is fruits, vegetables, or anything else, sorting by hand takes a lot of manpower and time. So, in this tutorial, we will build a tomato sorter that can differentiate between red tomatoes and green tomatoes.

Components required for sorting machine

Hardware

Raspberry Pi

Pi Camera Module

2 × servo motors

Software

Edge Impulse Studio

Getting Started with Edge Impulse

To train a machine learning model with Edge Impulse on the Raspberry Pi, create an Edge Impulse account, verify the account, and start a new project.


Installing Edge Impulse on the Raspberry Pi

To use Edge Impulse on the Raspberry Pi, you first have to install Edge Impulse and its dependencies. Use the following commands to install Edge Impulse on the Raspberry Pi:

curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -

sudo apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps

sudo npm install edge-impulse-linux -g --unsafe-perm

Now run Edge Impulse using the following command:

edge-impulse-linux

You will be asked to log in to your Edge Impulse account. You will then be asked to select a project and finally select a microphone and camera to connect to that project.


Now that Edge Impulse is running on the Raspberry Pi, we have to connect the Pi Camera module to the Pi for image acquisition. Connect the Pi Camera as shown below:

[Image: Pi Camera module connected to the Raspberry Pi]
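Before collecting data, it is worth confirming that the Pi can actually see the camera. The short sketch below is not part of the original build; it is a minimal check, assuming the camera appears as OpenCV device 0 (on the legacy camera stack the V4L2 driver must be loaded for OpenCV to find the Pi Camera):

import cv2

cam = cv2.VideoCapture(0)  # device 0 is an assumption; change it if you have several cameras
ret, frame = cam.read()    # ret is False when no frame could be captured
if ret:
    print("Camera OK, frame size: %dx%d" % (frame.shape[1], frame.shape[0]))
    cv2.imwrite("test_frame.jpg", frame)  # save one frame for visual inspection
else:
    print("No frame captured - check the ribbon cable and camera configuration")
cam.release()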

Create a dataset

As mentioned earlier, we use Edge Impulse Studio to train our image classification model. To do this, we must collect a dataset with samples of objects that we wish to classify using the Pi camera. Since the goal is to classify red and green tomatoes, you need to collect some sample images of red and green tomatoes in order to differentiate between the two.

You can collect samples via your phone or the Raspberry Pi board, or you can import existing datasets into your Edge Impulse account. The easiest way to load samples into Edge Impulse is to use your phone, as described below. To do this, you must connect your phone to Edge Impulse.
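Alternatively, if you already have labeled tomato images on disk, the Edge Impulse CLI uploader can import them in bulk. This is only a sketch of that route: it assumes the separate edge-impulse-cli npm package is installed, and the file names below are hypothetical:

sudo npm install -g edge-impulse-cli

edge-impulse-uploader --category training --label "Red Tomato" red_tomato_*.jpg

edge-impulse-uploader --category training --label "Green Tomato" green_tomato_*.jpg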

To connect your phone, click on Devices, then click on Connect New Device.


In the next window, click on "Use your phone" and a QR code will appear. Scan the QR code with Google Lens or another QR code scanner app. This will connect your phone to Edge Impulse Studio.


After connecting your phone to Edge Impulse Studio, you can load samples. To do this, click on Data acquisition. On the Data acquisition page, enter the label name, select Camera as the sensor, and click Start Sampling.


This will save the tomato images to the Edge Impulse cloud. Take 50 to 60 images from different angles. Once you have uploaded those samples, set the label to "Green Tomato" and collect another 50 to 60 images. In addition to the green and red tomato samples, also collect some samples for an uncertain class, where there is nothing in the frame.


These samples are used for training the model; in the next step, we will collect the test data. The test data should be at least 20% of the training data.

Training the model

With our dataset ready, we will now create an impulse for the data. To do this, go to the Create impulse page.


On the Create impulse page, click on Add a processing block and then click on the Add button next to the Image block; this processing block normalizes the image data and reduces the color depth. After that, add the Transfer Learning (Images) learning block, which provides a pre-trained image classification model that we will fine-tune for our tomato recognition task. Then click on Save Impulse.


Next, go to the "Image" sub-item under the "Impulse design" menu, click on the "Generate features" tab, and then click the green "Generate features" button.


After that, click the "Transfer learning" sub-item under the "Impulse design" menu and click the "Start training" button at the bottom of the page. Here we use the default MobileNetV2 model; you can choose a different model if necessary.


Training the model will take some time. When it finishes, the training performance is shown. For me, the accuracy was 75% and the loss 0.58. We can now test the trained model: click on the "Live classification" tab in the left menu, and then take a sample image using the Raspberry Pi camera.

Deploy the trained model on Raspberry Pi

After the training process is complete, we can deploy the trained Edge Impulse image classification model to the Raspberry Pi. There are two ways to do this. The first is to use the edge-impulse-linux-runner command, which automatically compiles the trained model with full hardware acceleration, downloads it to the Raspberry Pi, and starts classification without writing any code. The second is to download the model file and then use the Python SDK example for image classification.

The first method is very simple. Enter the following command in the terminal window:

edge-impulse-linux-runner

If the edge-impulse-linux command is still running, press Ctrl+C to stop it, then enter the command above. If the device is already assigned to a project and you want to clear it to start a new one, use the following command:

edge-impulse-linux-runner --clean

This connects the Raspberry Pi to the Edge Impulse cloud, downloads the recently trained model, and starts the video stream. The results are displayed in the terminal window.


You can also open the video stream in a browser using the Raspberry Pi's IP address. But since our goal is to build a red and green tomato sorter, we will use the second method, the Python SDK example. Download the model file with:

edge-impulse-linux-runner --download modelfile.eim

Now clone this repository to get Python examples for object classification, speech recognition, and more:

git clone https://github.com/edgeimpulse/linux-sdk-python

Here, we will classify red and green tomatoes, so we will use the classify.py example from the examples folder of this repository. Run this code using the following command:

python3 classify.py modelfile.eim

where modelfile.eim is the name of the trained model file. Make sure this file is in the same folder as the code.

After running the code, it will print the probability of the detected objects as shown in the following figure:

[Image: classification scores for each label printed in the terminal]
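For reference, the core of what classify.py does with the Python SDK is quite small. The following stripped-down sketch (not the full example) assumes modelfile.eim sits next to the script and a single camera on port 0:

#!/usr/bin/env python3
# Minimal sketch of the Edge Impulse Python SDK classification loop.
from edge_impulse_linux.image import ImageImpulseRunner

MODEL = "modelfile.eim"  # assumed to be in the same folder as this script
CAMERA_PORT = 0          # assumed single camera

with ImageImpulseRunner(MODEL) as runner:
    model_info = runner.init()                         # load the model
    labels = model_info['model_parameters']['labels']  # e.g. ['Green Tomato', 'Red Tomato', 'Uncertain']
    for res, img in runner.classifier(CAMERA_PORT):    # grab frames and classify them
        if "classification" in res["result"]:
            for label in labels:
                print('%s: %.2f' % (label, res['result']['classification'][label]))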

Now we have to make some adjustments to the code so that we can move the servos based on the detection. The predicted label and score are stored in the label and score variables, so we append the score values to an array and then assign them to three variables (green, red, and uncertain) so that we can easily compare them and move the servos accordingly. The complete code with all the changes is provided at the end of the document.

for label in labels:
    score = res['result']['classification'][label]
    print('%s: %.2f' % (label, score), end='')
    data.append(score)
print('', flush=True)
green = round(data[0], 2)
red = round(data[1], 2)
uncertain = round(data[2], 2)
if (green >= 0.45 and frame_count % 10 == 0):
    while (green >= 0.35):
        pwm1.ChangeDutyCycle(12.0)   # point the sorting servo toward the green box
        time.sleep(0.500)
        pwm1.ChangeDutyCycle(2.0)    # close
        time.sleep(0.250)
        pwm.ChangeDutyCycle(7.0)     # feeder servo pushes the next tomato through
        time.sleep(0.450)
        pwm.ChangeDutyCycle(2.0)
        green = 0.01
if (red >= 0.50 and frame_count % 10 == 0):
    while (red >= 0.50):
        pwm1.ChangeDutyCycle(7.0)    # point the sorting servo toward the red box
        time.sleep(0.500)
        pwm1.ChangeDutyCycle(2.0)
        time.sleep(0.250)
        pwm.ChangeDutyCycle(7.0)
        time.sleep(0.450)
        pwm.ChangeDutyCycle(2.0)
        red = 0.01

Raspberry Pi sorting machine circuit diagram

To move the tomatoes, we connected two servo motors to the Raspberry Pi. One servo is used to move the tomatoes one by one and the second servo is used to place the tomatoes into their respective boxes.

[Circuit diagram: two servo motors connected to the Raspberry Pi]

As shown in the circuit diagram, the first servo is connected to GPIO 25 and the second servo is connected to GPIO 17 of the Raspberry Pi. Both servos are powered by the 5V and GND pins of the Raspberry Pi.
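Before running the full classifier, it may help to test both servos on their own. The sketch below is not from the original code; it is a minimal check using the same BCM pins and the same 50 Hz PWM setup as the final program. At 50 Hz, duty cycles of about 2.0, 7.0, and 12.0 correspond approximately to one end, the middle, and the other end of a typical hobby servo's travel, which is why those values appear throughout the sorting code:

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(25, GPIO.OUT)  # first servo (feeds tomatoes one by one)
GPIO.setup(17, GPIO.OUT)  # second servo (drops tomatoes into boxes)

pwm = GPIO.PWM(25, 50)    # 50 Hz, as in the final code
pwm1 = GPIO.PWM(17, 50)
pwm.start(7)              # start near the middle position
pwm1.start(7)

try:
    for duty in (2.0, 7.0, 12.0):  # sweep both servos through three positions
        pwm.ChangeDutyCycle(duty)
        pwm1.ChangeDutyCycle(duty)
        time.sleep(1)
finally:
    pwm.stop()
    pwm1.stop()
    GPIO.cleanup()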

Build the Sorter Setup

Now, with the training and coding parts done, let's move on to the next part: building the complete setup for sorting tomatoes. We used 2 mm thick white sunboard and two servo motors. The first servo motor moves the tomatoes one by one, and the second servo motor drops the tomatoes into the correct box according to their color. After connecting all the parts together, the sorting machine looks like this:

[Image: assembled tomato sorting setup]

Now to test the setup, place some tomatoes in the tray, place a tomato under the camera, and launch the code on the Raspberry Pi.


The complete working of this project is shown in the video below. Apart from sorting tomatoes based on their color, the same approach could also sort them based on whether they are rotten or not. If you have any questions, drop them in the comment section or start a discussion on our forum.

Code

#!/usr/bin/env python

import cv2
import os
import sys, getopt
import signal
import time
from edge_impulse_linux.image import ImageImpulseRunner
import RPi.GPIO as GPIO

runner = None
show_camera = False
frame_count = 0

servo_pin = 25  # first servo: feeds tomatoes one by one
servo1 = 17     # second servo: drops tomatoes into boxes
GPIO.setmode(GPIO.BCM)
GPIO.setup(servo_pin, GPIO.OUT)
GPIO.setup(servo1, GPIO.OUT)

# Set up the PWM processes
pwm = GPIO.PWM(servo_pin, 50)  # 50 Hz (20 ms PWM period)
pwm1 = GPIO.PWM(servo1, 50)
pwm.start(7)                   # start PWM at 90 degrees
pwm1.start(7)
pwm1.ChangeDutyCycle(2.0)
pwm.ChangeDutyCycle(2.0)       # close

def now():
    return round(time.time() * 1000)

def get_webcams():
    port_ids = []
    for port in range(5):
        print("Looking for camera on port %s: " % port)
        camera = cv2.VideoCapture(port)
        if camera.isOpened():
            ret = camera.read()[0]
            if ret:
                backendName = camera.getBackendName()
                w = camera.get(3)
                h = camera.get(4)
                print("Found camera %s (%s x %s) on port %s" % (backendName, h, w, port))
                port_ids.append(port)
            camera.release()
    return port_ids

def sigint_handler(sig, frame):
    print('Interrupted')
    if runner:
        runner.stop()
    sys.exit(0)

signal.signal(signal.SIGINT, sigint_handler)

def help():
    print('python classify.py <path_to_model.eim> <camera port ID, only required if there is more than 1 camera>')

def main(argv):
    frame_count = 0
    try:
        opts, args = getopt.getopt(argv, "h", ["--help"])
    except getopt.GetoptError:
        help()
        sys.exit(2)
    for opt, arg in opts:
        if opt in ('-h', '--help'):
            help()
            sys.exit()
    if len(args) == 0:
        help()
        sys.exit(2)

    model = args[0]
    dir_path = os.path.dirname(os.path.realpath(__file__))
    modelfile = os.path.join(dir_path, model)
    print('MODEL: ' + modelfile)

    with ImageImpulseRunner(modelfile) as runner:
        try:
            model_info = runner.init()
            print('Loaded runner for "' + model_info['project']['owner'] + ' / ' + model_info['project']['name'] + '"')
            labels = model_info['model_parameters']['labels']
            if len(args) >= 2:
                videoCaptureDeviceId = int(args[1])
            else:
                port_ids = get_webcams()
                if len(port_ids) == 0:
                    raise Exception('Cannot find any webcams')
                if len(args) <= 1 and len(port_ids) > 1:
                    raise Exception("More than one camera was found. Add the camera port ID as a second argument to this script")
                videoCaptureDeviceId = int(port_ids[0])

            camera = cv2.VideoCapture(videoCaptureDeviceId)
            ret = camera.read()[0]
            if ret:
                backendName = camera.getBackendName()
                w = camera.get(3)
                h = camera.get(4)
                print("Camera %s (%s x %s) on port %s has been selected." % (backendName, h, w, videoCaptureDeviceId))
                camera.release()
            else:
                raise Exception("Unable to initialize the selected camera.")

            next_frame = 0  # limited to ~10 fps here

            for res, img in runner.classifier(videoCaptureDeviceId):
                if (next_frame > now()):
                    time.sleep((next_frame - now()) / 1000)
                # print('Classification run response', res)
                data = []
                frame_count = frame_count + 1
                print("Frame count:", frame_count)
                if "classification" in res["result"].keys():
                    print('Result (%d ms.) ' % (res['timing']['dsp'] + res['timing']['classification']), end='')
                    for label in labels:
                        score = res['result']['classification'][label]
                        # print(score)
                        print('%s: %.2f ' % (label, score), end='')
                        data.append(score)
                    print('', flush=True)
                    green = round(data[0], 2)
                    red = round(data[1], 2)
                    uncertain = round(data[2], 2)
                    print(green, red, uncertain)
                    if (green >= 0.25 and frame_count % 10 == 0):
                        while (green >= 0.25):
                            pwm1.ChangeDutyCycle(12.0)  # sorting servo toward the green box
                            print("Green Tomato Detected")
                            time.sleep(0.500)
                            pwm1.ChangeDutyCycle(2.0)   # close
                            time.sleep(0.250)
                            pwm.ChangeDutyCycle(7.0)    # feeder servo pushes the next tomato
                            time.sleep(0.450)
                            pwm.ChangeDutyCycle(2.0)
                            green = 0.01
                            # time.sleep(2)
                    if (red >= 0.50 and frame_count % 10 == 0):
                        while (red >= 0.50):
                            pwm1.ChangeDutyCycle(7.0)   # sorting servo toward the red box
                            print("Red Tomato Detected")
                            time.sleep(0.500)
                            pwm1.ChangeDutyCycle(2.0)
                            time.sleep(0.250)
                            pwm.ChangeDutyCycle(7.0)
                            time.sleep(0.450)
                            pwm.ChangeDutyCycle(2.0)
                            red = 0.01
                            # time.sleep(2)
                    else:
                        time.sleep(0.01)
                    # print('%s: %.2f' % (green, red, uncertain), end='')
                    if (show_camera):
                        cv2.imshow('edgeimpulse', img)
                        if cv2.waitKey(1) == ord('q'):
                            break
                elif "bounding_boxes" in res["result"].keys():
                    print('Found %d bounding boxes (%d ms.)' % (len(res["result"]["bounding_boxes"]), res['timing']['dsp'] + res['timing']['classification']))
                    for bb in res["result"]["bounding_boxes"]:
                        print('%s (%.2f): x=%d y=%d w=%d h=%d' % (bb['label'], bb['value'], bb['x'], bb['y'], bb['width'], bb['height']))
                next_frame = now() + 100
        finally:
            if runner:
                runner.stop()
        # frame_count = 0

if __name__ == "__main__":
    main(sys.argv[1:])
    cv2.destroyAllWindows()
