Abstract:
This paper presents an FPGA-based rail detection method, covering both the construction of an embedded image-processing hardware platform and the study of the FPGA-based image-processing algorithms that run on it. An FPGA soft-core processor implements the basic image-processing steps, including image enhancement and restoration, edge detection, threshold segmentation, and connected-domain search, in order to extract the rail area from the image.
Keywords:
FPGA; embedded system; image processing; rail detection
1 FPGA-based embedded system development process
Designing an embedded system involves two main tasks: building the hardware platform and writing the application software. With FPGA technology, both tasks can be completed within the corresponding software tools. EDK (Embedded Development Kit) is Xilinx's tool suite for embedded system development; it mainly includes a hardware platform generator, a software platform generator, a simulation model generator, and software compilation and debugging tools. Its integrated development environment, XPS (Xilinx Platform Studio), makes it straightforward to develop and design embedded systems [1]. The design process is shown in Figure 1.
2 Hardware platform construction process
From an analysis of the system requirements, rail inspection is mainly a matter of image analysis and processing, which comprises three parts: image input, image processing, and result display. This project uses a Xilinx Spartan-3A xc3s700a FPGA development board produced by Yiyuan Company, with Xilinx tool version 10.1. Several image input channels are available: a USB interface, an RS232 serial port, a 100 Mbit/s Ethernet interface, direct download via the XMD debugging platform of the EDK suite, and so on. In this design, the image data are converted to .ELF format and programmed directly into Flash. Since real-time video-stream processing is not pursued here and the image must be accessed repeatedly, storing the source image in Flash is the most reasonable choice. Image processing is carried out by the MicroBlaze soft-core system running the detection program; the result is shown on an LCD screen driven by the VGA output of the TFT controller. The hardware platform is built as follows:
(1) Using the XPS application wizard, create a minimal system, configure the MicroBlaze soft-core parameters, and add the UART peripheral.
(2) Add IP cores and connect them to the corresponding buses, mainly the memory controllers, communication controllers, and GPIO.
(3) Add custom IP cores. Although Xilinx provides many free IP cores, they cannot cover every user design. The custom IP cores required for this project are a TFT_Controller for driving the LCD display and a Mux_logic IP for multiplexing the memory address and data buses. The PLB_TFT_Controller mainly generates the RGB signals and the line/field scanning and synchronization signals. The Mux_logic IP controls the multiplexing of the SDRAM and Flash buses: its inputs are the address bus, data bus, and enable signals generated by the SDRAM and Flash control IPs, and its outputs are the multiplexed address and data bus signals.
(4) Configure each IP core, interconnect the signals, and bring the ports that drive external hardware out to the top level. Allocate the address space and add the UCF constraint file.
(5) Generate hardware bitstream files and hardware driver files. The hardware structure schematic is shown in Figure 2.
3 Software Design Process
3.1 Rail Detection Principle
In this project, two main schemes for rail detection are considered [2]: detection based on edge features and detection based on regional features. (1) The edge-feature method first detects edge lines over the whole image and then extracts the target edges from the edge map through model or feature constraints. (2) Rail detection based on regional features uses regional statistics, that is, statistical characteristics of the rail area that distinguish it from the surrounding environment, to determine the rail region. Of the two methods, the former locates the rail line more accurately but depends heavily on the binarization threshold; the latter is more robust to noise but locates the rail line less precisely. This paper focuses on rail detection based on regional features.
The process of rail detection based on regional features is shown in Figure 3, which is divided into four steps:
(1) Reduce resolution. Before filtering, reduce the image resolution to eliminate image details and reduce the computational burden of subsequent processing.
(2) Filtering. Even after the resolution is reduced, the image still contains many abrupt outlier points, caused by the various electromagnetic signals present around the rails; the images captured by the camera are inevitably contaminated by Gaussian noise and system noise. Given these image characteristics, median filtering is used, as it is very effective at smoothing impulse noise while preserving sharp edges.
(3) Edge extraction. An edge detection operator examines the neighborhood of each pixel and quantifies the local grayscale change rate, including its direction. The Sobel operator is chosen because its direction can be adjusted flexibly through different coefficient kernels, it suppresses noise well, and it is widely applicable. Since the rail image varies little in the horizontal direction but extends strongly in the vertical direction, only the vertical edge response of the image is considered (a minimal C sketch of the filtering and edge-extraction steps follows this list).
(4) Connected-domain search. After binarization, the edge map contains both rail information and many non-rail edges. An eight-connected region search is used to label the independent connected regions and count them. The connected regions are then sorted by length until the two longest ones, corresponding to the rails, are found; the left and right rails are identified and marked, the area between them is filled, and the marked rail area is obtained.
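The filtering and edge-extraction steps can be summarized with a minimal C sketch. The function names, the fixed 320×240 resolution, and the threshold parameter below are illustrative assumptions; the actual filter() and edge() routines used in this project are the ones taken over from the OpenCV simulation described in Section 3.2.

#include <string.h>

#define W 320
#define H 240

/* Return the median of a 3x3 neighborhood (simple insertion sort of 9 values). */
static unsigned char median9(unsigned char v[9])
{
    for (int i = 1; i < 9; i++) {
        unsigned char key = v[i];
        int j = i - 1;
        while (j >= 0 && v[j] > key) { v[j + 1] = v[j]; j--; }
        v[j + 1] = key;
    }
    return v[4];                      /* middle value of the sorted neighborhood */
}

/* 3x3 median filter: smooths impulse noise while preserving sharp edges. */
static void median3x3(const unsigned char *src, unsigned char *dst)
{
    memcpy(dst, src, W * H);          /* keep border pixels unchanged */
    for (int y = 1; y < H - 1; y++)
        for (int x = 1; x < W - 1; x++) {
            unsigned char v[9];
            int k = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    v[k++] = src[(y + dy) * W + (x + dx)];
            dst[y * W + x] = median9(v);
        }
}

/* Vertical-edge Sobel: only the horizontal-gradient kernel is applied and the
 * result is binarized, because the rails extend mainly in the vertical direction. */
static void sobel_vertical(const unsigned char *src, unsigned char *dst, int thresh)
{
    memset(dst, 0, W * H);
    for (int y = 1; y < H - 1; y++)
        for (int x = 1; x < W - 1; x++) {
            int gx = -src[(y - 1) * W + x - 1] + src[(y - 1) * W + x + 1]
                     - 2 * src[y * W + x - 1]  + 2 * src[y * W + x + 1]
                     - src[(y + 1) * W + x - 1] + src[(y + 1) * W + x + 1];
            if (gx < 0) gx = -gx;
            dst[y * W + x] = (gx > thresh) ? 255 : 0;     /* binarize the response */
        }
}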
3.2 OpenCV simulation results
The program for this project is first implemented and simulated with OpenCV and then ported to the FPGA. OpenCV provides a rich set of image-processing algorithms, and its core is written in C; if handled properly, the code can be compiled and linked into an executable for algorithm porting without adding new external dependencies. The simulation uses only two OpenCV headers, "cv.h" and "highgui.h", mainly for their image loading and display functions, while the median filter, edge detection, and rail search functions are written by the authors. The simulation results are shown in Figure 4.
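As an illustration, the skeleton of such a simulation using only these two OpenCV headers might look as follows. The file name and window name are placeholders, and the user-written processing routines are only indicated by a comment; this is a sketch of the simulation setup, not the project's actual simulation code.

#include "cv.h"
#include "highgui.h"

int main(void)
{
    /* Load the source image as a single-channel grayscale image. */
    IplImage *src = cvLoadImage("Image03.BMP", CV_LOAD_IMAGE_GRAYSCALE);
    if (src == NULL)
        return -1;

    /* The median filter, edge detection and rail search are user-written;
     * OpenCV is only used for loading and displaying images. */
    IplImage *result = cvCloneImage(src);
    /* filter(...); edge(...); GetFeature(...); GetRailArea(...); */

    cvNamedWindow("rail detection", CV_WINDOW_AUTOSIZE);
    cvShowImage("rail detection", result);
    cvWaitKey(0);

    cvReleaseImage(&src);
    cvReleaseImage(&result);
    cvDestroyWindow("rail detection");
    return 0;
}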
3.3 FPGA program migration process
3.3.1 Image input and display [3]
This project converts the image data into .ELF file format and burns it to NOR-Flash. Click Program Flash Memory in the XPS menu and select automatic format conversion to start burning. You can also specify the location of the burned data. The data format conversion is completed using Matlab software. The program is as follows:
fid = fopen('pic.elf', 'w');      % Open the output file
img = imread('Image03.BMP');      % Read the image data
imshow(img);                      % Display the image
fwrite(fid, img.');               % Write the data (transposed so that rows are written in order)
fclose(fid);                      % Close the file
Since the image is grayscale, only its brightness values are read. The image resolution is 640×480. The data can be written with either the fprintf function or the fwrite function. Experiments show, however, that with fprintf the file is 302 kB and the displayed image is abnormal, whereas with fwrite the file is exactly 300 kB (640 × 480 = 307,200 bytes, one byte per pixel) and the image displays correctly. The difference arises because fwrite writes the raw binary byte values while fprintf writes formatted text, so the two functions produce files in different formats.
The image display works as follows: the data are read from Flash into BRAM one line at a time, each brightness value is shifted into the R, G, and B fields of the pixel word, and the line is then written from BRAM into the SDRAM video memory. This cycle repeats 480 times to display the whole image. Because the R, G, and B values are equal, the displayed image is grayscale. If the data are read directly from Flash into the SDRAM video memory, irregular, discontinuous black dots appear on each displayed line, and the display can even fail completely. The video memory is configured in the TFT_Controller IP; its size is 2 MB and its start address coincides with the SDRAM start address.
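A line-by-line copy of this kind might look like the following sketch. The base addresses, the 32-bit pixel layout, and the line stride are assumptions (in the real system the addresses come from the generated xparameters.h and the layout is fixed by the TFT controller IP), so this is not the exact driver code.

typedef unsigned char Xuint8;
typedef unsigned int  Xuint32;

#define IMG_W  640
#define IMG_H  480
/* Placeholder base addresses for the Flash image and the SDRAM video memory. */
#define FLASH_IMG  ((volatile Xuint8  *)0xA0000000)
#define VIDEO_MEM  ((volatile Xuint32 *)0x80000000)

void display_gray_image(void)
{
    Xuint8 line[IMG_W];                      /* line buffer held in BRAM */
    int x, y;

    for (y = 0; y < IMG_H; y++) {
        /* 1) read one line of brightness values from Flash into the line buffer */
        for (x = 0; x < IMG_W; x++)
            line[x] = FLASH_IMG[y * IMG_W + x];

        /* 2) replicate each 8-bit gray value into the R, G and B fields of a
         *    32-bit pixel word and write the line to the SDRAM video memory */
        for (x = 0; x < IMG_W; x++) {
            Xuint32 g = line[x];
            VIDEO_MEM[y * IMG_W + x] = (g << 16) | (g << 8) | g;
        }
    }
}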
3.3.2 Image processing program porting [3]
Because the development environments differ, the ported program runs on a standalone system, and the OpenCV simulation program needs some corrections. The FPGA software environment provides the C standard library functions along with a lightweight console output routine, so the output function print() and the dynamic memory allocation function malloc() can be used directly. Although printf() could also be used for console output, the program must fit into 32 KB of BRAM, and experiments show that printf() occupies roughly twice as much space as print(). In OpenCV, the cvShowImage(), cvReleaseImage(), and cvDestroyWindow() functions can be used directly to display images and release memory; in the ported program these have to be designed by hand. Here, a subplot() function displays four images on the screen (the reduced-resolution source image, the filtered image, the threshold-segmented image, and the rail detection image), and a DeleteAllPointElems() function releases the memory (a minimal sketch of subplot() is given after the listing). Other functions, such as the resolution reduction function dec(), the filtering function filter(), and the edge detection function edge(), can be taken from the OpenCV simulation without modification. The main program after porting is as follows:
int main()
{
    int x, y;                         // loop counters for reading the source image
    Xuint8 *p, data1;                 // destination pointer and current pixel value
    unsigned char *flbuf;             // pointer into the Flash image data

    print("\r\n-- Entering main() --\r\n");
    SourceImage = (Xuint8 *)malloc(640 * 480);
    DecImage    = (Xuint8 *)malloc(320 * 240);
    FilterImage = (Xuint8 *)malloc(320 * 240);
    EdgeImage   = (Xuint8 *)malloc(320 * 240);
    TrackImage  = (Xuint8 *)malloc(320 * 240);
    // Allocate memory space for the images
    if (SourceImage == NULL) {
        print("\r\n-- mem allo fail --\r\n");
        exit(1);
    }
    // Verify whether the space was allocated successfully
    XTft_Initialize(&Tft, TFT_DEVICE_ID);
    // TFT display initialization
    XromTftTestColor("black", 0);
    // Set the display background to black
    flbuf = (unsigned char *)Flash_BASEADDR;
    // Set the Flash image base address pointer
    p = SourceImage;                  // Set the source image pointer
    for (y = 0; y < 480; y++) {
        for (x = 0; x < 640; x++) {
            data1 = *flbuf++;
            *p++ = data1;
        }
    }
    // Read the source image data from Flash
    dec(SourceImage, DecImage);
    filter(DecImage, FilterImage, 320);
    edge(FilterImage, EdgeImage, 320);
    // Image resolution reduction, filtering, and edge processing
    int areanum = 0;
    GetFeature(EdgeImage, 320, 240, ConnLabel, pFeatures, &areanum);
    // Edge extraction, search for connected areas
    GetRailArea(320, 240, pFeatures, areanum, lowLeftRail, lowRightRail);
    // Search the rail area and get the left and right rails
    int i, j;
    for (i = 1; i <= areanum; i++) {
        DeleteAllPointElems(pFeatures[i]);
    }
    // Release memory space
    int Left, Right;
    for (i = 1; i < 240; i++) {
        Left  = lowLeftRail[i];
        Right = lowRightRail[i];
        if ((Left > 0) && (Right > 0)) {
            for (j = Left; j <= Right; j++) {
                *(TrackImage + i * 320 + j) = 255;
            }
        }
    }
    // Fill the area between the left and right rails
    subplot(DecImage, 1);
    subplot(FilterImage, 2);
    subplot(EdgeImage, 3);
    subplot(TrackImage, 4);
    // Display the 4 processed images
    print("-- Exiting main() --\r\n");
    return 0;
}
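The subplot() helper called above is not listed in the original program. A minimal sketch is given below, assuming a 640×480 video memory divided into four 320×240 quadrants (numbered 1 to 4, left to right, top to bottom) and the same grayscale-to-RGB expansion as in the display routine; the placeholder types and video-memory address are assumptions, not the project's actual implementation.

typedef unsigned char Xuint8;
typedef unsigned int  Xuint32;
/* Placeholder video-memory pointer, as in the display sketch above. */
#define VIDEO_MEM ((volatile Xuint32 *)0x80000000)

/* Copy a 320x240 grayscale image into one quadrant of the 640x480 display.
 * pos = 1..4 selects top-left, top-right, bottom-left, bottom-right. */
void subplot(const Xuint8 *img, int pos)
{
    const int qw = 320, qh = 240, screen_w = 640;
    int x0 = ((pos - 1) % 2) * qw;    /* quadrant origin inside the framebuffer */
    int y0 = ((pos - 1) / 2) * qh;
    int x, y;

    for (y = 0; y < qh; y++)
        for (x = 0; x < qw; x++) {
            Xuint32 g = img[y * qw + x];
            VIDEO_MEM[(y0 + y) * screen_w + (x0 + x)] = (g << 16) | (g << 8) | g;
        }
}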
The FPGA image processing result is shown in Figure 5.
This paper implements an FPGA-based rail detection algorithm. The OpenCV simulation is completed first, and the program is then ported to the FPGA hardware system. The rail area is detected successfully, and under certain conditions the rail can be intelligently extended. The results show that processing one 640×480 image takes about 30 seconds. For a real-time video-streaming application, the hardware platform would need to be streamlined to increase speed; hard-core processors and multi-core techniques could also be considered to raise the processing speed to the level required for real-time video-stream processing.