1. Introduction to the topic
Topic analysis:
(1) The first and most fundamental requirement is to make the car follow the line. This work uses TI's MSP430F5529 development board as the main control chip, a DRV8701E full-bridge driver circuit to drive the motors, and a K210 camera for line tracking.
(2) The second requirement is digit recognition. Vision modules such as OpenMV, K210 and Raspberry Pi can all be used for this. OpenMV's template matching is neither fast nor accurate enough, and we have no experience with the Raspberry Pi. Therefore, this work uses the K210 to photograph and sample the simulated site to build a ward-number recognition training set; the set is preprocessed and a laptop GPU is used to train a yolo2 object-detection model, which finally achieves high-accuracy recognition of the ward numbers.
2. Solution design and justification
This task mainly involves microcontroller control, camera-based line tracking, and camera-based digit recognition. The selection and justification of each solution follows.
2.1 Selection of the main controller
Option 1: Use the low-power 16-bit MSP430 LaunchPad to save resources. It can handle the operations needed in this work, such as motor driving and key input, and a main control board for this MCU is already on hand. The MCU alone cannot search for the line, but since line tracking and digit recognition are handled by the camera, this controller was selected.
Option 2: Use the STM32F1 series. This MCU has faster processing, more ports and a moderate price; it is commonly used in automatic control systems and can meet the control requirements of this task. However, the basic control of the car does not demand high performance, so it was not selected for now.
2.2 Selection and justification of the car model
Plan 1: A four-wheel model steered by a servo. Servo steering is flexible and easy to control: only closed-loop speed control of the rear wheels and closed-loop steering of the front wheels are needed to complete the tracking task. However, this model is hard to control when reversing or turning 180 degrees, which works against the requirement of not touching the black line.
Plan 2: Use two DC gear motors with Hall-encoder speed feedback plus a universal wheel, converting a three-wheel chassis into the model we use. This design offers a high degree of freedom, which helps us build a chassis better adapted to the task: we can adjust the center of gravity ourselves and run closed-loop speed and direction control, and controlling two motors is relatively simple.
Option 3: Use an Arduino car kit widely sold as a teaching aid. It comes with detailed assembly tutorials; the car is light, can easily do photoelectric-sensor tracking and parking, and its gear motors can hold position on a ramp without any control algorithm. The kit is highly integrated and convenient, but it has no encoder for speed measurement, so the speed loop cannot be closed, and it also reduces the opportunity for hands-on practice.
2.3 Selection of the sensor module
Option 1: Use the OpenMV camera, which integrates an STM32 processor. This module is easy to use and reduces the load on the main controller, but it is somewhat expensive and its processing power is limited; we judged that it would at best barely meet the requirements, so it was not selected for now.
Option 2: Use the K210 development board and its onboard camera. The camera resolution is sufficient, the accuracy and processing power are strong, and the price is reasonable. Crucially, the chip has a hardware neural-network accelerator (KPU) for running deep-learning models, so it can handle both line tracking and digit recognition. This module was therefore selected.
Option 3: Use the OpenART development kit. It is also powerful and has been used in the AI category of college student competitions; it ships with three neural-network inference engines, each with its own strengths: NN, NNCU and TF. It could probably complete the task, but we are not familiar enough with the kit and might run into insurmountable problems when deploying the neural network, so it was not selected for now.
2.4 Selection of the room-number recognition algorithm
Solution 1: Multi-template matching. Template matching is relatively simple to implement, but the applicable scenarios are very limited. It can handle static recognition at a fixed point, but while the car is moving it is hard to obtain good matches; the more precise the matching needs to be, the more templates and code are required, so the final effect is not ideal.
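For context, multi-template matching on OpenMV-style firmware would look roughly like the sketch below. This is a hedged illustration of the approach we decided against, not code from this work: the template files, threshold and search settings are assumptions.

```python
# Minimal OpenMV-style multi-template matching sketch (illustrative only).
# Assumes grayscale templates "/1.pgm" ... "/8.pgm" exist on the flash/SD card.
import sensor, image
from image import SEARCH_EX

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # find_template() works on grayscale frames
sensor.set_framesize(sensor.QQVGA)       # small frames keep the matching time acceptable
sensor.skip_frames(time=2000)

# One template per room number; handling more viewing angles needs even more templates.
templates = {n: image.Image("/%d.pgm" % n) for n in range(1, 9)}

while True:
    img = sensor.snapshot()
    for number, tpl in templates.items():
        # Exhaustive search; the 0.70 threshold is an assumed value that needs tuning.
        r = img.find_template(tpl, 0.70, step=4, search=SEARCH_EX)
        if r:
            img.draw_rectangle(r)
            print("matched room number:", number)
```

Because every template is scanned against every frame, the frame rate drops quickly as templates are added, which is one reason this approach scales poorly while the car is moving.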
Option 2: Use OCR. Room-number recognition is digit recognition and can be treated as text recognition; OCR is a widely used deep-learning approach to text recognition. Verifying the idea with code on a laptop is feasible, but the solution depends on multiple open-source libraries and is not suited to a microcontroller scenario; deploying OCR on the available small system board would be very difficult, so this algorithm was not considered.
Option 3: Run a yolo2 object-detection model on the K210. This solution is not the simplest to set up, since a suitable yolo2 digit-recognition model has to be trained by ourselves, but the approach is entirely feasible. The K210 is powerful enough to achieve the goal well, so this method was selected.
3. Hardware circuit part
MSP430F5529 main control board
This board was designed before the electronics design contest. The original concern was that the problem statement might restrict us to TI boards, so an MSP430 main control board was laid out in EasyEDA (立创EDA).
3D view of the main board
Block diagram of the main board design
(1) Power supply module
Since the modules on the main board require different operating voltages, separate power circuits convert the battery voltage to 3.3 V, 5 V and 6 V to supply the different modules, and several power pins are brought out to supply modules off the main board.
(2) Interface circuits
A large number of peripheral interfaces are reserved on the main board so that other peripherals can be connected conveniently, e.g. UART, I2C, encoder, driver-board, OLED/TFT and servo interfaces.
(3) Debugging circuits
During tuning, many parameters of the car (such as the PID gains) need to be adjusted constantly. Re-flashing the program for every parameter change would waste a great deal of time, so key circuits such as a five-way navigation key and DIP switches were added; together with an OLED or TFT display they make parameter tuning convenient.
(4) MSP430 interface circuit
Since an off-the-shelf MSP430 minimum system board is used, only a socket for it needs to be reserved on the main board.
(5) Other circuits
Buzzer module: mainly used to signal that a special track element has been recognized; for example, when the camera detects a crossroad, the buzzer beeps once to indicate successful recognition.
LED power indicators: indicate the output voltages of the power supply module.
74LVC244A buffer circuit: provides isolation and buffering.
4. Software part
4.1 PID speed and direction control algorithm
The PID algorithm forms the control quantity from a linear combination of the proportional (Proportional), integral (Integral) and differential (Derivative) terms of the error, and uses this quantity to control the plant; a controller of this kind is called a PID controller.
Put simply, error = setpoint given by the user - current value read back from the sensor; the error is passed through the P, I and D terms, and the results are summed to form the output.
In the car, the speed loop uses incremental PID and the direction loop uses positional PID. The update rules, as written in the firmware, are:
Incremental: PWM += PID[KP] * iError + PID[KD] * (iError - dir->LastError);
Positional: PWM = PID[KP] * iError + PID[KI] * sptr->SumError + PID[KD] * (iError - sptr->LastError);
Therefore, by sampling the current speed with the encoders and using the PWM duty cycle as the output, the PID algorithm closes the speed loop so that the car keeps a constant speed under different conditions. Using the difference between the line coordinate from the camera and the image-centre coordinate as the error, the car can correct its direction in time.
PID code excerpt:
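The original code screenshot is not reproduced here. As a minimal illustrative sketch of the two update rules above (written in Python for readability; the actual firmware runs in C on the MSP430F5529, and the class names, output limits and variable names below are assumptions):

```python
# Illustrative sketch of the two PID forms used above (not the original C firmware).

class IncrementalPID:
    """Speed loop: the output is accumulated, PWM += Kp*e + Kd*(e - e_last)."""
    def __init__(self, kp, kd, out_limit):
        self.kp, self.kd = kp, kd
        self.last_error = 0.0
        self.out = 0.0
        self.out_limit = out_limit

    def update(self, target, measured):
        error = target - measured                      # setpoint minus encoder speed
        self.out += self.kp * error + self.kd * (error - self.last_error)
        self.last_error = error
        self.out = max(-self.out_limit, min(self.out_limit, self.out))
        return self.out


class PositionalPID:
    """Direction loop: PWM = Kp*e + Ki*sum(e) + Kd*(e - e_last)."""
    def __init__(self, kp, ki, kd, out_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.sum_error = 0.0
        self.last_error = 0.0
        self.out_limit = out_limit

    def update(self, target, measured):
        error = target - measured                      # image centre minus line centre
        self.sum_error += error
        out = (self.kp * error + self.ki * self.sum_error
               + self.kd * (error - self.last_error))
        self.last_error = error
        return max(-self.out_limit, min(self.out_limit, out))
```

In use, the speed loop would be updated once per encoder sampling period with the target and measured wheel speeds, and the direction loop once per camera frame with the image-centre coordinate and the detected line coordinate.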
4.2 Simple line-following algorithm
For line following we use a self-designed single-color-channel method.
The K210 performs room-number recognition and line tracking at the same time, and the image read from its camera is in RGB565 format. We split the image into three color channels: red, green and blue. The guide line is red, so using only the red channel screens out most of the environmental interference. The red-channel image is then binarized to obtain a usable line-following image. We measure the width of the line: from the line width at different positions, special locations (crossroads) can be detected, and combined with the room-number recognition this also improves the tracking accuracy on parts of the route.
Line-following code excerpt:
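The original code screenshot is not reproduced here. The sketch below illustrates the red-channel idea in MaixPy-style MicroPython; the threshold, scan rows and crossroad width are assumptions that would need tuning, and a real implementation would more likely work on the raw image buffer for speed rather than call get_pixel() per pixel:

```python
# MaixPy-style sketch of red-channel line following (illustrative values throughout).
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)       # camera delivers RGB565 frames
sensor.set_framesize(sensor.QVGA)         # 320 x 240
sensor.skip_frames(time=2000)

RED_THRESHOLD = 150                       # red-channel cut-off (0..255), tuned on the track
SCAN_ROWS = (200, 160, 120)               # image rows sampled from near to far
CROSS_WIDTH = 200                         # a very wide "line" is treated as a crossroad

def scan_row(img, y):
    """Return (line centre x, line width) on one row using only the red channel."""
    xs = [x for x in range(img.width())
          if img.get_pixel(x, y)[0] > RED_THRESHOLD]   # get_pixel() -> (r, g, b)
    if not xs:
        return None, 0
    return (xs[0] + xs[-1]) // 2, xs[-1] - xs[0] + 1

while True:
    img = sensor.snapshot()
    centres, widths = [], []
    for y in SCAN_ROWS:
        cx, w = scan_row(img, y)
        if cx is not None:
            centres.append(cx)
            widths.append(w)
    if centres:
        error = img.width() // 2 - sum(centres) // len(centres)  # fed to the direction PID
        at_crossroad = max(widths) > CROSS_WIDTH                  # width-based crossroad flag
        print(error, at_crossroad)
```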
4.3 Room-number recognition algorithm
For room-number recognition we use an object-detection (yolo2) model; YOLO is an accurate deep-learning detection algorithm.
Using YOLO for detection is fairly straightforward and involves three main parts: the convolutional layers, the detection layer, and the NMS filtering stage:
1. Resize the image to 224 x 224 as the input to the neural network
2. Run the network to obtain bounding-box coordinates, objectness confidences and class probabilities
3. Apply non-maximum suppression (NMS) to filter the boxes
All of these stages are deployed on the K210. The main difficulty lies in training the network model: we collected more than two thousand photos of simulated room numbers and finally used more than eight hundred of them as the training set.
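For reference, running a trained yolo2 model on the K210's KPU under MaixPy typically follows the pattern below. This is a hedged sketch, not the actual deployment code: the model flash address, anchor values, thresholds and class ordering are assumptions.

```python
# MaixPy-style sketch of yolo2 inference on the K210 KPU (illustrative parameters).
import sensor, lcd, KPU as kpu

lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((224, 224))           # matches the 224 x 224 network input above
sensor.run(1)

classes = ["1", "2", "3", "4", "5", "6", "7", "8"]   # the 8 room-number labels
anchors = (1.08, 1.19, 3.42, 4.41, 6.63, 11.38,
           9.42, 5.11, 16.62, 10.52)       # example anchors, not the trained values

task = kpu.load(0x300000)                  # kmodel assumed to be flashed at this address
kpu.init_yolo2(task, 0.5, 0.3, 5, anchors) # probability threshold, NMS threshold, 5 anchors

while True:
    img = sensor.snapshot()
    objects = kpu.run_yolo2(task, img)     # conv + detection layers on the KPU, NMS on the CPU
    if objects:
        for obj in objects:
            img.draw_rectangle(obj.rect())
            print(classes[obj.classid()], obj.value())   # label and confidence
    lcd.display(img)
```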
Since there are 8 room numbers in total, i.e. 8 labels, the data set is far from sufficient per class, so tuning the training parameters is particularly important: with the parameters slightly off, the model either overfits and only recognizes the digits from fixed angles, or underfits and recognizes nothing at all.
5. Test plan and test results
5.1 Test methods
5.1.1 Speed control: with the car running on the wooden board, observe the speed response curve returned over Bluetooth, then adjust the PID parameters by bisection.
5.1.2 Time control: measure the time needed to reach the finish line several times with different parameters and speeds, then keep the best parameters and speed to obtain the shortest run time.
5.2 Test data
5.2.1 Number recognition
We deployed a neural network on the K210 to recognize the digits required by the task and built a large self-made digit-recognition data set, as shown in the figure.
Time was limited, so the data set is actually far from sufficient, and particularly good training parameters are needed to compensate for its shortcomings. The following are the training parameters we consider well tuned.
In the end, the recognition rate of the trained model reached more than 90%, so the training result is satisfactory. The error visualization is shown in the figure.
Even when the visually similar digits 1 and 7 are presented together, the trained model can still tell them apart.
5.2.2 PID tuning
The speed response curve from the virtual oscilloscope is shown in the figure below.
6. Summary
In this work, the MSP430F5529 LaunchPad is used as the main control chip to meet the debugging and control needs. Through simplification and optimization of the modules, the mechanical structure of the car is compact and reliable. Two gear motors and a universal wheel handle the motion control. The K210 development board photographs and samples the simulated site to build a ward-number recognition training set; the set is processed and a laptop GPU is used to train a yolo2 object-detection model, finally achieving high-accuracy recognition of the ward numbers.
7. Physical display
Appendix:
Demo video: