[Image Recognition Classification & Motion Detection & Analog Signal Processing System Based on Raspberry Pi 400, Part 5] Project Summary & Documentation - 11.03 Update
Author: Li Gong
Forum nickname: donatello1996
Introduction
This system is based on the Raspberry Pi 400 keyboard computer, connected over I2C to the MPU6050, BMP280, and AHT20 sensors and over USB to a driver-free UVC HD camera. It provides image recognition and classification, motion detection, analog signal processing, signal peak detection, signal peak statistics, and an MP3 player. It is a general-purpose, multi-function project: one or more of these functions can be combined and developed further to suit the needs of embedded projects on the market, making full use of the Raspberry Pi 400's CPU computing performance and graphics display performance.
This report covers the system hardware block diagram, the program architecture block diagram, and the operating results. It does not include the source code, which is sent separately to the official Digi-Key organizers as a tar archive.
System Hardware Block Diagram
The hardware block diagram is as follows. The hardware side is very simple: the USB host port connects the UVC camera, the I2C1 bus connects the three I2C sensors, the HDMI port drives the display for the QT interface and MP3 playback, and Ethernet/WiFi carries a TCP-based HTTP web server. Below is a photo of the actual setup:
System software block diagram (process/project architecture)
The project is divided into two processes (sub-projects): main and qtproj. The main project is responsible only for camera acquisition and display plus tflite AI image recognition and classification; the qtproj project is responsible only for I2C data acquisition and graphical display, the MP3 player, the signal Fourier transform, signal peak detection, and signal peak statistics.
Main project:
pthread_mutex_init(&pmt , NULL);
pthread_create(&tid_grab_mjpeg , NULL , Thread_V4l2_Grab_Mjpeg , NULL);
pthread_detach(tid_grab_mjpeg);
pthread_create(&tid_tcp_web_recv , NULL , Thread_TCP_Web_Recv , NULL);
pthread_create(&tid_tcp_web_recv_tflite , NULL , Thread_TCP_Web_Recv_Tflite , NULL);
// pthread_create(&tid_tcp_web_send , NULL , Thread_TCP_Web_Send , NULL);
//pthread_create(&tid_tcp_web_send , NULL , Thread_TCP_Web_Send_Only_MJPEG , NULL);
pthread_create(&tid_tflite , NULL , Thread_Tflite , NULL);
pthread_create(&tid_tcp_web_send , NULL , Thread_TCP_Web_Send_Only_JPEG_File , NULL);
pthread_create(&tid_tcp_web_send_tflite , NULL , Thread_TCP_Web_Send_Only_JPEG_File_Tflite , NULL);
qtproj project:
pthread_create(&tid_line_chart , nullptr , Thread_Line_Chart , this);
connect(this , SIGNAL(Signal_Raw_data()) , this , SLOT(Raw_data_CounterUpdate_Line_Chart()));
connect(this , SIGNAL(Signal_Raw_data_Collection()) , this , SLOT(Raw_data_Collection_CounterUpdate_Line_Chart()));
Function description of each part
Software function design
My work has nine functions:
- The first two functions are camera image acquisition/display and AI recognition and classification of objects in the camera image. Both are served through the HTTP web server: after the Raspberry Pi 400 runs the C++ main process, the raw camera images can be viewed at IP:50011 and the AI-classified images at IP:50012. The JPEG image codec library libjpeg62-turbo needs to be installed.
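On Debian-based Raspberry Pi OS this is typically done with apt (the exact package names can vary slightly between releases):
sudo apt-get update
sudo apt-get install libjpeg62-turbo libjpeg62-turbo-dev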
The source code is as follows:
Camera image capture thread:
void * Thread_V4l2_Grab_Mjpeg(void *arg)
{
pic_data pic_temp;
while(1)
{
int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
// Note: VIDIOC_STREAMON normally needs to be issued only once, before the capture loop
if(ioctl(fd_video , VIDIOC_STREAMON , &type) < 0)
{
printf("Unable to start capture.\n");
break;
}
struct v4l2_buffer buff;
buff.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buff.memory = V4L2_MEMORY_MMAP;
if(ioctl(fd_video , VIDIOC_DQBUF, &buff) < 0)
{
printf("camera VIDIOC_DQBUF Failed.\n");
usleep(1000*1000);
break;
}
pthread_mutex_lock(&pmt);
memcpy(pic_tmpbuffer , pic.tmpbuffer , buff.bytesused);
//pic_tmpbuffer = pic.tmpbuffer;
pic.tmpbytesused = buff.bytesused;
pic_tmpbytesused = pic.tmpbytesused;
//if(build_file == true)
//{
int jpg_fd = open(MJPEG_FILE_NAME , O_RDWR | O_CREAT , 00700);
if(jpg_fd == -1)
{
printf("open ipg Failed!\n ");
break ;
}
int writesize = write(jpg_fd , pic.tmpbuffer , pic.tmpbytesused);
printf("Write successfully size : %d\n" , writesize);
close(jpg_fd);
//}
pthread_cond_broadcast(&pct);
pthread_mutex_unlock(&pmt);
printf("pic.tmpbytesused size : %d\n",pic.tmpbytesused);
if(ioctl(fd_video , VIDIOC_QBUF, &buff) < 0)
{
printf("camera VIDIOC_QBUF Failed.\n");
usleep(1000*1000);
break;
}
}
return NULL;
}
Thread for AI processing of images:
void * Thread_Tflite(void *arg)
{
clock_t start, finish;
double totaltime;
//VideoCapture cap(0);
//cap.set(CAP_PROP_FRAME_WIDTH, inpWidth);
//cap.set(CAP_PROP_FRAME_HEIGHT, inpHeight);
// OpenCV DNN: load the TensorFlow SSD MobileNet model (the folder is named model_tflite)
String weights = "./model_tflite/frozen_inference_graph.pb";
String prototxt = "./model_tflite/ssd_mobilenet_v1_coco.pbtxt";
Net net = readNetFromTensorflow(weights, prototxt);
//while (cap.read(frame))
while (1)
{
pthread_mutex_lock(&pmt);
pthread_cond_wait(&pct , &pmt);
Mat frame = imread("/home/proj/1.jpeg");
pthread_mutex_unlock(&pmt);
start = clock();
Size frame_size = frame.size();
Size cropSize;
if (frame_size.width / (float)frame_size.height > WHRatio)
{
cropSize = Size(static_cast<int>(frame_size.height * WHRatio),
frame_size.height);
}
else
{
cropSize = Size(frame_size.width,
static_cast<int>(frame_size.width / WHRatio));
}
Rect crop(Point((frame_size.width - cropSize.width) / 2,
(frame_size.height - cropSize.height) / 2),
cropSize);
Mat blob = blobFromImage(frame, 1. / 255, Size(300, 300));
//cout << "blob size: " << blob.size << endl;
net.setInput(blob);
Mat output = net.forward();
//cout << "output size: " << output.size << endl;
Mat detectionMat(output.size[2], output.size[3], CV_32F, output.ptr<float>());
frame = frame(crop);
float confidenceThreshold = 0.50;
for (int i = 0; i < detectionMat.rows; i++)
{
float confidence = detectionMat.at<float>(i, 2);
if (confidence > confidenceThreshold)
{
size_t objectClass = (size_t)(detectionMat.at<float>(i, 1));
int xLeftBottom = static_cast<int>(detectionMat.at<float>(i, 3) * frame.cols);
int yLeftBottom = static_cast<int>(detectionMat.at<float>(i, 4) * frame.rows);
int xRightTop = static_cast<int>(detectionMat.at<float>(i, 5) * frame.cols);
int yRightTop = static_cast<int>(detectionMat.at<float>(i, 6) * frame.rows);
ostringstream ss;
ss << confidence;
String conf(ss.str());
Rect object((int)xLeftBottom, (int)yLeftBottom,
(int)(xRightTop - xLeftBottom),
(int)(yRightTop - yLeftBottom));
rectangle(frame, object, Scalar(0, 255, 0), 2);
//cout << "objectClass:" << objectClass << endl;
String label = String(classNames[objectClass]) + ": " + conf;
//cout << "label"<<label << endl;
int baseLine = 0;
Size labelSize = getTextSize(label, FONT_HERSHEY_SIMPLEX, 0.5, 1, &baseLine);
rectangle(frame, Rect(Point(xLeftBottom, yLeftBottom - labelSize.height),
Size(labelSize.width, labelSize.height + baseLine)),
Scalar(0, 255, 0), -1);
putText(frame, label, Point(xLeftBottom, yLeftBottom),
FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 0, 0));
}
}
finish = clock();
totaltime = (double)(finish - start) / CLOCKS_PER_SEC * 1000.0;
cout << "Time taken to recognize this frame: " << totaltime << "ms" << endl;
pthread_mutex_lock(&pmt_tflite);
imwrite("/home/proj/2.jpeg" , frame);
pthread_cond_broadcast(&pct_tflite);
pthread_mutex_unlock(&pmt_tflite);
waitKey(1);
}
//cap.release();
//waitKey(0);
return 0;
}
Thread for sending image files to HTTP WEB server:
void * Thread_TCP_Web_Send_Only_JPEG_File(void *arg)
{
while(1)
{
if(flag_keep_alive && flag_post_once)
{
flag_post_once = 0;
HTTP_Send_Jpeg_File_Stream(fd_socket_conn , "/home/proj/1.jpeg" , &pmt , &pct);
}
}
}
void * Thread_TCP_Web_Send_Only_JPEG_File_Tflite(void *arg)
{
while(1)
{
if(flag_keep_alive_tflite && flag_post_once_tflite)
{
flag_post_once_tflite = 0;
HTTP_Send_Jpeg_File_Stream(fd_socket_conn_tflite , "/home/proj/2.jpeg" , &pmt_tflite , &pct_tflite);
}
}
}
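HTTP_Send_Jpeg_File_Stream() itself is not reproduced in this post. Purely as a sketch of how such a helper could work under the signature used above (the header construction and buffering here are my assumptions, not the original implementation): it waits on the condition variable until the producer broadcasts that the JPEG on disk is complete, copies the file while still holding the lock, then sends it as one HTTP response:
void HTTP_Send_Jpeg_File_Stream(int fd_conn , const char *path ,
                                pthread_mutex_t *pmt , pthread_cond_t *pct)
{
    pthread_mutex_lock(pmt);
    pthread_cond_wait(pct , pmt);        // frame on disk is stable after the broadcast
    int fd = open(path , O_RDONLY);      // needs <fcntl.h>, <unistd.h>, <sys/stat.h>
    struct stat st;
    char *body = NULL;
    long size = 0;
    if (fd >= 0 && fstat(fd , &st) == 0)
    {
        size = st.st_size;
        body = (char *)malloc(size);
        if (body != NULL && read(fd , body , size) != size)
        {
            free(body);
            body = NULL;
        }
    }
    if (fd >= 0)
        close(fd);
    pthread_mutex_unlock(pmt);           // file copied, release the producer
    if (body == NULL)
        return;
    char header[128];
    int n = snprintf(header , sizeof(header) ,
        "HTTP/1.1 200 OK\r\nContent-Type: image/jpeg\r\nContent-Length: %ld\r\n\r\n" , size);
    write(fd_conn , header , n);
    write(fd_conn , body , size);
    free(body);
}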
- Reading atmospheric pressure with the BMP280 and detecting motion with the MPU6050 both use the same method, namely ioctl() access to the Raspberry Pi 400's I2C1 bus.
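The I2C_Device_Read()/I2C_Device_Write() helpers called below are not reproduced in this post. A minimal sketch of what they could look like on top of the standard Linux /dev/i2c-1 character device (the global file descriptor and error handling are my assumptions):
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
#include <stdint.h>

static int fd_i2c = -1;   // e.g. opened once at startup: fd_i2c = open("/dev/i2c-1", O_RDWR);

// Read len bytes starting at register reg of the device at 7-bit address addr.
int I2C_Device_Read(uint8_t addr , uint8_t *buf , int len , uint8_t reg)
{
    if (ioctl(fd_i2c , I2C_SLAVE , addr) < 0)    // select the slave address
        return -1;
    if (write(fd_i2c , &reg , 1) != 1)           // set the register pointer
        return -1;
    return (read(fd_i2c , buf , len) == len) ? 0 : -1;
}

// Write len bytes to register reg of the device at 7-bit address addr.
int I2C_Device_Write(uint8_t addr , const uint8_t *data , int len , uint8_t reg)
{
    uint8_t tmp[33];                             // register byte + up to 32 data bytes
    if (len > 32 || ioctl(fd_i2c , I2C_SLAVE , addr) < 0)
        return -1;
    tmp[0] = reg;
    for (int i = 0 ; i < len ; i++)
        tmp[1 + i] = data[i];
    return (write(fd_i2c , tmp , 1 + len) == 1 + len) ? 0 : -1;
}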
BMP280 initialization:
int BMP280_Init()
{
uint8_t ret , i2c_read_data[2];
I2C_Device_Read(I2C_ADDR_BMP280 , &ret , 1 , 0xd0);
if (ret == 0x58)
{
I2C_Device_Read(I2C_ADDR_BMP280 , i2c_read_data , 2 , 0x88);
dig_t1 = i2c_read_data[1] << 8 | i2c_read_data[0];
I2C_Device_Read(I2C_ADDR_BMP280 , i2c_read_data , 2 , 0x8a);
dig_t2 = i2c_read_data[1] << 8 | i2c_read_data[0];
I2C_Device_Read(I2C_ADDR_BMP280 , i2c_read_data , 2 , 0x8c);
dig_t3 = i2c_read_data[1] << 8 | i2c_read_data[0];
I2C_Device_Read(I2C_ADDR_BMP280 , i2c_read_data , 2 , 0x8e);
dig_p1 = i2c_read_data[1] << 8 | i2c_read_data[0];
I2C_Device_Read(I2C_ADDR_BMP280 , i2c_read_data , 2 , 0x90);
dig_p2 = i2c_read_data[1] << 8 | i2c_read_data[0];
I2C_Device_Read(I2C_ADDR_BMP280 , i2c_read_data , 2 , 0x92);
dig_p3 = i2c_read_data[1] << 8 | i2c_read_data[0];
I2C_Device_Read(I2C_ADDR_BMP280 , i2c_read_data , 2 , 0x94);
dig_p4 = i2c_read_data[1] << 8 | i2c_read_data[0];
I2C_Device_Read(I2C_ADDR_BMP280 , i2c_read_data , 2 , 0x96);
dig_p5 = i2c_read_data[1] << 8 | i2c_read_data[0];
I2C_Device_Read(I2C_ADDR_BMP280 , i2c_read_data , 2 , 0x98);
dig_p6 = i2c_read_data[1] << 8 | i2c_read_data[0];
I2C_Device_Read(I2C_ADDR_BMP280 , i2c_read_data , 2 , 0x9a);
dig_p7 = i2c_read_data[1] << 8 | i2c_read_data[0];
I2C_Device_Read(I2C_ADDR_BMP280 , i2c_read_data , 2 , 0x9c);
dig_p8 = i2c_read_data[1] << 8 | i2c_read_data[0];
I2C_Device_Read(I2C_ADDR_BMP280 , i2c_read_data , 2 , 0x9e);
dig_p9 = i2c_read_data[1] << 8 | i2c_read_data[0];
return 0;
}
return -1;
}
BMP280 reads air pressure value:
float BMP280_Read_Pressure()
{
I2C_Device_Read(I2C_ADDR_BMP280 , &msb , 1 , 0xf7);
I2C_Device_Read(I2C_ADDR_BMP280 , &lsb , 1 , 0xf8);
I2C_Device_Read(I2C_ADDR_BMP280 , &xlsb , 1 , 0xf9);
pres = (msb * 65536 | lsb * 256 | xlsb) >> 4;
I2C_Device_Read(I2C_ADDR_BMP280 , &msb , 1 , 0xfa);
I2C_Device_Read(I2C_ADDR_BMP280 , &lsb , 1 , 0xfb);
I2C_Device_Read(I2C_ADDR_BMP280 , &xlsb , 1 , 0xfc);
temp = (msb * 65536 | lsb * 256 | xlsb) >> 4;
var1 = (temp / 16384.0 - dig_t1 / 1024.0)*(dig_t2);
var2 = ((temp / 131072.0 - dig_t1 / 8192.0)*(temp / 131072.0 - dig_t1 / 8192.0))*dig_t3;
temp = var1 + var2;
temp /= 5120.0;
var1 = (temp / 2.0) - 64000.0;
var2 = var1 * var1*(dig_p6) / 32768.0;
var2 = var2 + var1 * (dig_p5)*2.0;
var2 = (var2 / 4.0) + ((dig_p4)*65536.0);
var1 = (dig_p3)*var1*var1 / 524288.0 + (dig_p2)*var1 / 524288.0;
var1 = (1.0 + var1 / 32768.0)*(dig_p1);
pres = 1048576.0 - pres;
pres = (pres - (var2 / 4096.0))*6250.0 / var1;
var1 = (dig_p9)*pres*pres / 2147483648.0;
var2 = pres * (dig_p8) / 32768.0;
pres = pres + (var1 + var2 + (dig_p7)) / 16.0;
return pres;
}
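A minimal calling sequence under these definitions (a sketch; note that the BMP280 also has to be taken out of sleep mode by writing ctrl_meas, register 0xF4, before it produces measurements, which is assumed to happen elsewhere):
if (BMP280_Init() == 0)        // chip ID matched 0x58 and calibration was read
{
    while (1)
    {
        printf("pressure = %.1f Pa\n" , BMP280_Read_Pressure());
        usleep(500 * 1000);    // sample twice per second
    }
}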
MPU6050 initialization:
void MPU6050_Init()
{
uint8_t write_data[1];
write_data[0] = 0x07;
I2C_Device_Write (0x68 , write_data , 1 , SMPLRT_DIV);
//Write to sample rate register
write_data[0] = 0x01;
I2C_Device_Write (0x68 , write_data , 1 , PWR_MGMT_1);
// Write to power management register
write_data[0] = 0;
I2C_Device_Write (0x68 , write_data , 1 , CONFIG);
// Write to Configuration register
write_data[0] = 24;   // 0x18: gyro full-scale range ±2000 °/s
I2C_Device_Write (0x68 , write_data , 1 , GYRO_CONFIG);
// Write to Gyro Configuration register
write_data[0] = 0x01;
I2C_Device_Write (0x68 , write_data , 1 , INT_ENABLE);
//Write to interrupt enable register
}
MPU6050 reads accelerometer and gyro values:
I2C_Device_Read(0x68 , mpu6050_read_data , 2 , ACCEL_XOUT_H);
accx = mpu6050_read_data[0] << 8 | mpu6050_read_data[1];
I2C_Device_Read(0x68 , mpu6050_read_data , 2 , ACCEL_YOUT_H);
accy = mpu6050_read_data[0] << 8 | mpu6050_read_data[1];
I2C_Device_Read(0x68 , mpu6050_read_data , 2 , ACCEL_ZOUT_H);
accz = mpu6050_read_data[0] << 8 | mpu6050_read_data[1];
I2C_Device_Read(0x68 , mpu6050_read_data , 2 , GYRO_XOUT_H);
gyrox = mpu6050_read_data[0] << 8 | mpu6050_read_data[1];
I2C_Device_Read(0x68 , mpu6050_read_data , 2 , GYRO_YOUT_H);
gyroy = mpu6050_read_data[0] << 8 | mpu6050_read_data[1];
I2C_Device_Read(0x68 , mpu6050_read_data , 2 , GYRO_ZOUT_H);
gyroz = mpu6050_read_data[0] << 8 | mpu6050_read_data[1];
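The concatenated register bytes are two's-complement 16-bit values, so before use they should be reinterpreted as signed and divided by the sensitivity of the configured range. With GYRO_CONFIG = 24 (0x18, ±2000 °/s) as set above and the accelerometer at its default ±2 g range, the scaling would be:
// Reinterpret as signed 16-bit, then scale to physical units
float ax_g   = (int16_t)accx  / 16384.0f;   // ±2 g      -> 16384 LSB per g
float gx_dps = (int16_t)gyrox / 16.4f;      // ±2000 °/s -> 16.4 LSB per °/s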
- The collected signal is displayed in a chart. Here I display both a simulated superposition of three sinusoids and a real waveform captured from a signal generator:
Figure 1 is the time-domain signal and Figure 2 is the frequency-domain signal. The time-domain path needs no detailed description; only the code that generates the frequency-domain signal is posted:
int FFTW_Mag_Test(float raw_data[] , float fft_data[] , int index_raw_data , int counts = 400)
{
signed short lX , lY;
float X , Y , Mag;
fftw_complex in[counts] , outf[counts] , outb[counts];
fftw_plan p;
int i , j , max_index = 0;
float max = 0;
float lBufOutArray_Real[N] = {0};
float lBufOutArray_Unreal[N] = {0};
float lBufMagArray[N] = {0};
//in = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * N);
//outf = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * N);
//outb = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * N);
for(i = 0 ; i < N ; i++)
{
in[0] = raw_data[i + index_raw_data];
in[1] = 0.0;
}
p = fftw_plan_dft_1d(N , in , outb , FFTW_BACKWARD , FFTW_ESTIMATE);
fftw_execute(p);
for(j = 0 ; j < N ; j++)
{
lBufOutArray_Real[j] = outb[j][0];
lBufOutArray_Unreal[j] = outb[j][1];
}
for(i = 0 ; i < N / 2 ; i ++)
{
X = lBufOutArray_Real;
Y = lBufOutArray_Unreal;
Mag = sqrt(X * X + Y * Y);
fft_data = Mag;
}
fftw_destroy_plan(p);
// if(in != NULL)
// fftw_free(in);
// if(outf != NULL)
// fftw_free(outf);
// if(outb != NULL)
// fftw_free(outb);
return 0;
}
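A usage sketch (assuming the macro N is 400, matching the default counts; only the first N/2 = 200 magnitude bins are filled):
float raw[400] = {0} , mag[200] = {0};
// ... fill raw[] with time-domain samples ...
FFTW_Mag_Test(raw , mag , 0);   // mag[0..199] now holds the magnitude spectrum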
There is also a peak detection algorithm:
void Mag_Find_Peaks(float *src, float src_lenth, float distance, int *indMax, int *indMax_len, int *indMin, int *indMin_len)
{
int *sign = (int*)malloc(src_lenth * sizeof(int));
int max_index = 0,
min_index = 0;
*indMax_len = 0;
*indMin_len = 0;
for (int i = 1; i<src_lenth; i++)
{
float diff = src[i] - src[i - 1];
if (diff>0) sign[i - 1] = 1;
else if (diff<0) sign[i - 1] = -1;
else sign[i - 1] = 0;
}
for (int j = 1; j<src_lenth - 1; j++)
{
float diff = sign[j] - sign[j - 1];
if (diff<0) indMax[max_index++] = j;
else if (diff>0)indMin[min_index++] = j;
}
int *flag_max_index = (int *)malloc(sizeof(int)*(max_index>min_index ? max_index : min_index));
int *idelete = (int *)malloc(sizeof(int)*(max_index>min_index ? max_index : min_index));
int *temp_max_index = (int *)malloc(sizeof(int)*(max_index>min_index ? max_index : min_index));
int bigger = 0;
float tempvalue = 0;
int i, j, k;
// Peaks
for (int i = 0; i < max_index; i++)
{
flag_max_index[i] = 0;
idelete[i] = 0;
}
for (i = 0; i < max_index; i++)
{
tempvalue = -1;
for (j = 0; j < max_index; j++)
{
if (!flag_max_index[j])
{
if (src[indMax[j]] > tempvalue)
{
bigger = j;
tempvalue = src[indMax[j]];
}
}
}
flag_max_index[bigger] = 1;
if (!idelete[bigger])
{
for (k = 0; k < max_index; k++)
{
idelete[k] |= (indMax[k] - distance <= indMax[bigger] && indMax[bigger] <= indMax[k] + distance);
}
idelete[bigger] = 0;
}
}
for (i = 0, j = 0; i < max_index; i++)
{
if (!idelete[i])
temp_max_index[j++] = indMax[i];
}
for (i = 0; i < max_index; i++)
{
if (i < j)
indMax = temp_max_index;
else
indMax = 0;
}
max_index = j;
// Troughs
for (int i = 0; i < min_index; i++)
{
flag_max_index[i] = 0;
idelete[i] = 0;
}
for (i = 0; i < min_index; i++)
{
tempvalue = 1;
for (j = 0; j < min_index; j++)
{
if (!flag_max_index[j])
{
if (src[indMin[j]] < tempvalue)
{
bigger = j;
tempvalue = src[indMin[j]];
}
}
}
flag_max_index[bigger] = 1;
if (!idelete[bigger])
{
for (k = 0; k < min_index; k++)
{
idelete[k] |= (indMin[k] - distance <= indMin[bigger] && indMin[bigger] <= indMin[k] + distance);
}
idelete[bigger] = 0;
}
}
for (i = 0, j = 0; i < min_index; i++)
{
if (!idelete[i])
temp_max_index[j++] = indMin[i];
}
for (i = 0; i < min_index; i++)
{
if (i < j)
indMin = temp_max_index;
else
indMin = 0;
}
min_index = j;
*indMax_len = max_index;
*indMin_len = min_index;
free(sign);
free(flag_max_index);
free(temp_max_index);
free(idelete);
}
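A usage sketch (array sizes are assumptions; they just need to cover the worst case of one extremum per sample):
float mag[200];
int indMax[200] , indMin[200] , nMax = 0 , nMin = 0;
// ... fill mag[] with the FFT magnitude spectrum ...
Mag_Find_Peaks(mag , 200 , 5 , indMax , &nMax , indMin , &nMin);   // suppress peaks within 5 bins of a stronger one
for (int i = 0 ; i < nMax ; i++)
    printf("peak at bin %d , magnitude = %f\n" , indMax[i] , mag[indMax[i]]);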
The peak statistics results can be output in two ways: printed directly to the terminal with printf, or written to a file with fprintf:
struct Peak_Value
{
int peak_count;
};
struct Peak_Stat
{
bool index_valid = false;
float peak_value_max_temp = 0;
struct Peak_Value pv[100] = {0};
};
Peak_Stat ps_100[100];
void Show_FFT_Analysis_Sum()
{
uint8_t i , j;
printf("\n=begin=\n");
for(i = 0 ; i < 100 ; i++)
{
//printf("peakFs[%d] = %d\n" , i , peakFs);
if(ps_100.index_valid == true)
{
printf("x = %d ; y_max_temp = %f ;\n" , i , ps_100.peak_value_max_temp);
for(j = 1 ; j < 100 ; j++)
{
if(ps_100.pv[j].peak_count > 0)
{
printf("x = %d ; y = %d ; peak_count = %d ;\n" , i , j ,
ps_100.pv[j].peak_count);
}
}
printf("\n");
}
}
printf("=end=\n\n");
}
void Write_CSV_File_FFT_Analysis_Sum(int dph , int fyear , int fmon , int fmday , int fhour ,
int fmin , int fsec , uint32_t masp , uint32_t mass)
{
uint8_t i , j;
FILE *fp;
char filename[200];
sprintf(filename , "//home//peaklog//peaklog_%02d_%02d:%02d:%02d_dph%02d.log" , fmday ,
fhour , fmin , fsec , dph);
fp = fopen(filename , "w+");
for(i = 0 ; i < 100 ; i++)
{
if(ps_100[i].index_valid == true)
{
for(j = 1 ; j < 100 ; j++)
{
if(ps_100[i].pv[j].peak_count > 0)
{
fprintf(fp , "%d , %d , %d ,\n" , i , j ,
ps_100[i].pv[j].peak_count);
}
}
fprintf(fp , "\n");
}
}
fclose(fp);
}
The signal peak statistics result is a three-dimensional array: the two-dimensional array of FFT results extended along the time axis. The X axis is the peak position (frequency), the Y axis is the peak intensity at that frequency, and the Z axis counts how often each Y value recurs over time. With this algorithm, any subtle difference between two captures of the same target signal at the same time can be caught, while Z values that occur too rarely can be ignored. bool index_valid records whether the peak ever reached an intensity of at least 1 (1 if so, otherwise 0), and float peak_value_max_temp holds the maximum intensity seen for that peak. As shown in the figure, the data is written to a log file:
The first number in each row is the peak index (each peak corresponds to a frequency), the second is the peak intensity, and the third is how many times a peak of that intensity occurred. For example, 4 , 18 , 1 means that peak 4 reached intensity 18 once.
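For illustration, the accumulation that produces these rows could look like this (a sketch built on the struct definitions above; quantizing the magnitude into an integer intensity bucket is my assumption):
void Peak_Stat_Accumulate(int x , float magnitude)   // x = peak position (frequency bin)
{
    int y = (int)magnitude;                          // intensity bucket 1..99
    if (x < 0 || x > 99 || y < 1 || y > 99)
        return;
    ps_100[x].index_valid = true;                    // this peak has intensity >= 1
    if (magnitude > ps_100[x].peak_value_max_temp)
        ps_100[x].peak_value_max_temp = magnitude;   // track the maximum intensity
    ps_100[x].pv[y].peak_count++;                    // Z axis: occurrences of this (x , y) pair
}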
UDP receives the signal voltage value and processes it:
void MainWindow::This_Thread_Line_Chart()
{
static int count = 0 , count_recv = 0;
int ret , i;
float pres;
while(1)
{
ret = recvfrom(socklen_udp_recv , (short * )&buffer_u16 , sizeof(buffer_u16) ,
0 , (struct sockaddr *)&sockaddr_udp_recv , &udp_addr_len);
if(ret > 0 && buffer_u16[0] == 0xadad && buffer_u16[1] == 0 && buffer_u16[2] == 0xffff)
{
if(flag_ffw_coll_start == true && flag_ffw_coll_stop == false)
{
// ... (the remainder of the handler is omitted in this excerpt)
-MP3 Player:
static QStringList fileNames_temp , fileNames_norepeat;
void MainWindow::on_PB_MP3_SELECT_clicked()
{
QFileDialog *fileDialog = new QFileDialog(this);
fileDialog->setWindowTitle(QStringLiteral("Select mp3 file(s)"));
fileDialog->setDirectory("./");
fileDialog->setNameFilter(tr("File(*.mp3*)"));
fileDialog->setFileMode(QFileDialog::ExistingFiles);
fileDialog->setViewMode(QFileDialog::Detail);
QStringList fileNames;
if (fileDialog->exec())
{
fileNames = fileDialog->selectedFiles();
ui->LE_MP3->setText(fileNames[0]);
}
fileNames_temp += fileNames;
for(int i = 0; i < fileNames_temp.length(); i++)
{
if(!fileNames_norepeat.contains(fileNames_temp[i]))
{
fileNames_norepeat.append(fileNames_temp[i]);
}
}
fileNames_temp.clear();
fileNames_temp = fileNames_norepeat;
ui->listWidget->clear();
ui->listWidget->addItems(fileNames_norepeat);
delete fileDialog;
}
void MainWindow::on_PB_MP3_PLAY_clicked()
{
system("killall -KILL madplay &");
sleep(1);
QString qs1 = "madplay " + ui->LE_MP3->text() + " &";
system(qs1.toLatin1().data());
on_PB_MP3_PAUSE_clicked_1();
}
void MainWindow::on_PB_MP3_PAUSE_clicked_1()
{
system("killall -CONT madplay &");
ui->PB_MP3_PAUSE->setText("PAUSE");
disconnect(ui->PB_MP3_PAUSE,SIGNAL(clicked()),this,SLOT(on_PB_MP3_PAUSE_clicked_1()));
connect(ui->PB_MP3_PAUSE,SIGNAL(clicked()),this,SLOT(on_PB_MP3_PAUSE_clicked()));
}
void MainWindow::on_PB_MP3_PAUSE_clicked()
{
system("killall -STOP madplay &");
ui->PB_MP3_PAUSE->setText("CONT");
disconnect(ui->PB_MP3_PAUSE,SIGNAL(clicked()),this,SLOT(on_PB_MP3_PAUSE_clicked()));
connect(ui->PB_MP3_PAUSE,SIGNAL(clicked()),this,SLOT(on_PB_MP3_PAUSE_clicked_1()));
}
void MainWindow::on_listWidget_clicked(const QModelIndex &index)
{
ui->LE_MP3->setText(ui->listWidget->currentItem()->text());
}
void MainWindow::on_listWidget_doubleClicked(const QModelIndex &index)
{
ui->LE_MP3->setText(ui->listWidget->currentItem()->text());
on_PB_MP3_PLAY_clicked();
}
void MainWindow::on_PB_MP3_PREV_clicked()
{
if(ui->listWidget->currentItem() != nullptr)
{
int index = ui->listWidget->currentRow();
if(index > 0)
index --;
if(index < 0)
index = ui->listWidget->count() - 1;
ui->listWidget->setCurrentRow(index);
ui->LE_MP3->setText(ui->listWidget->currentItem()->text());
on_PB_MP3_PLAY_clicked();
}
}
void MainWindow::on_PB_MP3_NEXT_clicked()
{
if(ui->listWidget->currentItem() != nullptr)
{
int index = ui->listWidget->currentRow();
index ++;
if(index >= ui->listWidget->count())
index = 0;
ui->listWidget->setCurrentRow(index);
ui->LE_MP3->setText(ui->listWidget->currentItem()->text());
on_PB_MP3_PLAY_clicked();
}
}
This function is very simple and I will not elaborate on it.
Software function design details
- In fact, all the functions of the main project could be folded into the qtproj project, but that is unnecessary and would make the qtproj code extremely bloated and hard to debug. The camera acquisition and AI object recognition/classification implemented by main do not need an extra graphical interface, because the resulting images are served through the HTTP web server, so there is no reason to integrate them into qtproj.
- Camera acquisition/display and AI object recognition/classification require two sets of locks, namely
pthread_mutex_t pmt;
pthread_cond_t pct;
pthread_mutex_t pmt_tflite;
pthread_cond_t pct_tflite;
This is because the HTTP web server must wait until the camera completes a capture and the cached data is stable before serving the image; otherwise the picture tears or frames are lost. AI object recognition/classification needs its own set of locks for the same reason: its input is the camera image, and its output image must in turn wait for the input to be stable. The broadcast positions of the two lock sets can be the same or different; the handshake is sketched below.
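Schematically, each set of locks is the classic condition-variable handshake:
// Producer (camera / tflite thread): publish a complete frame, then wake all waiters
pthread_mutex_lock(&pmt);
/* ... write /home/proj/1.jpeg ... */
pthread_cond_broadcast(&pct);
pthread_mutex_unlock(&pmt);

// Consumer (web send thread): block until a complete frame exists
pthread_mutex_lock(&pmt);
pthread_cond_wait(&pct , &pmt);   // atomically releases pmt while waiting
/* ... read /home/proj/1.jpeg ... */
pthread_mutex_unlock(&pmt);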
- The qtproj project uses multithreading. If you do not use the friend class QProcess, you must pass the this pointer into the static thread entry function:
void * MainWindow::Thread_Line_Chart(void *args)
{
MainWindow * pthis = (MainWindow *)args;
pthis->This_Thread_Line_Chart();
return nullptr;   // the pthread entry function must return a void *
}
FILE *f;
void MainWindow::This_Thread_Line_Chart()
{
- QT Charts display must use the signal-slot mechanism, otherwise the process crashes. The chart data update step must be placed in the slot function; the trigger can be a timer overflow signal or a custom signal, as sketched below.
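Concretely, the worker thread only emits the signal, and the series data is touched in the slot on the GUI thread. A minimal sketch, assuming a QLineSeries member named series and a raw_data buffer (both names are placeholders):
// Worker thread: never touch the chart here, just notify
emit Signal_Raw_data();

// Slot, invoked on the GUI thread through the queued connection set up above
void MainWindow::Raw_data_CounterUpdate_Line_Chart()
{
    series->clear();
    for (int i = 0 ; i < 400 ; i++)
        series->append(i , raw_data[i]);   // safe: runs in the GUI thread
}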
- The useful FFT output is half the number of input samples: for an input of 400 points, the FFT yields 200 meaningful frequency bins. The FFT result of a real signal is mirror-symmetric, so you can define all 400 output bins, but the last 200 are the axisymmetric mirror image of the first 200.
Project Summary & Video Demonstration
Total demonstration video:
(video embedded in the original forum post)
The Raspberry Pi 400 uses the same Broadcom SoC as the Raspberry Pi 4 and has 4 GB of RAM. It has no problem running conventional multimedia workloads such as MP3/MP4 playback, UDP send/receive, HTTP image serving, and even the Fourier transform, with no lag at all. The GPU, however, is not top-tier, and rendering Vector3D objects still stutters a little. There is also no built-in NPU, so running tflite object classification on the CPU alone still lags. Overall the Raspberry Pi 400's performance is average but entirely adequate for commercialization. This project set out to tap the Pi 400's potential, and the resulting performance is acceptable.
[Integrated subway security control system based on Raspberry Pi 400] Material unboxing - Raspberry Pi 400
https://en.eeworld.com/bbs/thread-1210217-1-1.html
[Image recognition classification & motion detection & analog signal processing system based on Raspberry Pi 400, first post] MJPEG
https://en.eeworld.com/bbs/thread-1222121-1-1.html
[Image recognition classification & motion detection & analog signal processing system based on Raspberry Pi 400, second post] Use I2C bus to read MPU6050/BMP280 data and build QT program for graphical display (first video)
https://en.eeworld.com/bbs/thread-1222138-1-1.html
[Image recognition classification & motion detection & analog signal processing system based on Raspberry Pi 400, Part 3] Use QTcharts library built into QT program for signal processing/analysis/statistics (Second video)
https://en.eeworld.com/bbs/thread-1222150-1-1.html
[Image recognition classification & motion detection & analog signal processing system based on Raspberry Pi 400, fourth post] Try to make an MP3 player (third video)
https://en.eeworld.com/bbs/thread-1222158-1-1.html
[Image Recognition Classification & Motion Detection & Analog Signal Processing System Based on Raspberry Pi 400 Sixth Post - Supplement] USB Communication between Raspberry Pi 400 and STM32
https://en.eeworld.com/bbs/thread-1223084-1-1.html