Optoelectronic hybrid pattern recognition, with its advantages of high-speed parallel processing and freedom from crosstalk, has become an important route to practical, real-time pattern recognition. It has been widely studied and applied in target recognition, fingerprint recognition, optical fiber detection, industrial parts recognition, and automobile license plate recognition [1-2], with good recognition results.
In practical applications, however, the target image to be identified must first undergo preprocessing and distortion processing. To meet the real-time processing requirements, this paper combines a joint transform correlation recognition system with dual-CPU digital signal processing and adopts an "FPGA + DSP + ARM" architecture to design a new type of optoelectronic hybrid image recognition system. The TMS320C6416 DSP and an FPGA handle the acquisition and processing of the target image, while an ARM9 processor captures the correlation power spectrum and performs target recognition, achieving fast and accurate distortion-invariant pattern recognition. The system is also intelligent and network-enabled.
The optoelectronic hybrid image recognition system processes 25 frames per second and therefore achieves truly dynamic image recognition, giving it good practical value.
1 Optoelectronic hybrid image recognition system
The optoelectronic hybrid image recognition system is built around an optoelectronic hybrid joint transform correlator. The structural block diagram of the system proposed and designed in this paper is shown in Figure 1.
The ARM9 processor S3C2440 and the DSP operate in master/slave mode, as do the DSP and the FPGA. The target image acquisition and processing module, composed of the DSP and FPGA, captures the target to be identified through camera 1 and passes it to the DSP, which completes the preprocessing and distortion processing of the target image. The DSP then outputs the joint input image, composed of the processed target image and the reference image, to the LCD TV in real time. Illuminated by the laser beam and passed through Fourier transform lens 3, the joint image forms its Fourier spectrum. After low-pass filtering, the required central spectrum is obtained [3]; it is received by camera 2 and enters the ARM9 processor S3C2440, which performs amplitude modulation and an inverse Fourier transform of the spectrum to obtain the cross-correlation result. Since the cross-correlation signal of a real target is strong while that of a false target is very weak, real and false targets can be distinguished by setting a threshold: a correlation result greater than the threshold is identified as a real target, and one below it as a false target. When a false target is detected, the DSP is instructed through the communication interface to continue image acquisition and processing for the next target until a real target is identified.
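The threshold decision described above amounts to comparing the strongest cross-correlation peak against a preset value. Below is a minimal sketch in C, assuming the correlation plane is available to the ARM as an 8-bit buffer; the buffer size, names, and threshold are illustrative, not values from the article.

```c
#include <stdint.h>

#define CORR_W 512          /* correlation-plane width  (illustrative) */
#define CORR_H 512          /* correlation-plane height (illustrative) */

/* Return 1 (real target) if the strongest cross-correlation peak
 * exceeds the preset threshold, otherwise 0 (false target). */
int classify_target(const uint8_t corr[CORR_H][CORR_W], uint8_t threshold)
{
    uint8_t peak = 0;

    for (int y = 0; y < CORR_H; y++)
        for (int x = 0; x < CORR_W; x++)
            if (corr[y][x] > peak)
                peak = corr[y][x];

    return peak > threshold;    /* real target: strong correlation peak */
}
```

When the function returns 0, the ARM signals the DSP over the communication interface to acquire and process the next target image.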
2 System Design
The optoelectronic image recognition system consists mainly of a target image acquisition and processing module and an optoelectronic joint transform correlation module. The TMS320C6416 DSP and FPGA complete the acquisition and processing of the target image, and the ARM9 processor S3C2440 completes the acquisition of the correlation power spectrum and target image recognition.
2.1 TMS320C6416
The C64x is the newest member of TI's C6000 DSP family. It adopts the VelociTI.2 architecture, with major improvements to the CPU functional units, the general-purpose register files, and their data paths. The C64x has eight independent functional units: six arithmetic logic units that support single 32-bit, dual 16-bit, or quad 8-bit operations in a single cycle, and two multipliers that support single-cycle dual 16 × 16-bit or quad 8 × 8-bit operations. Each general-purpose register file in the CPU contains 32 32-bit registers, supports 8- to 64-bit fixed-point data, and register A0 can also be used as a condition register. Two cross paths connect the register files, so each side can access the register file on the other side, and the C64x can also use non-aligned access instructions to load or store words and double words on any byte boundary.
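To make the dual 16-bit data operations concrete, the sketch below shows a dot product over 16-bit samples written so the pairs of multiply-accumulates map onto the C64x's dual 16 × 16 multipliers. It is plain, portable C; the mapping to packed operations (e.g. via the TI compiler's SIMD optimization or its `_dotp2` intrinsic) is an assumption about the toolchain, not something stated in the article.

```c
#include <stdint.h>
#include <stddef.h>

/* Dot product of two 16-bit vectors (e.g. pixel rows and filter taps).
 * On the C64x each pair of multiply-accumulates below can be executed
 * by one dual 16x16 multiply; this portable version only illustrates
 * the data layout.  The length n is assumed to be even. */
int32_t dotp16(const int16_t *a, const int16_t *b, size_t n)
{
    int32_t acc = 0;

    for (size_t i = 0; i < n; i += 2) {
        acc += (int32_t)a[i]     * b[i];      /* low  16-bit lane */
        acc += (int32_t)a[i + 1] * b[i + 1];  /* high 16-bit lane */
    }
    return acc;
}
```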
Compared with the C62x, the average computing power of the C64x per clock cycle has increased by a factor of 7.6. Since the C64x supports dual 16-bit and quad 8-bit data and runs at a higher clock frequency, its image processing capability is about 15 times that of the C62x. The C64x has a two-level on-chip memory structure for program and data. The first level consists of a program cache (L1P) and a data cache (L1D): L1P is a 16 KB direct-mapped cache with 512 sets of 32 B, and L1D is a 16 KB two-way set-associative cache with 128 sets of 64 B. The C64x also has a different memory bank structure from the C621x and C671x: its memory banks lie on 32-bit boundaries, so accesses to the same bank share the same three LSBs of the address bus. In addition, the C64x has rich peripheral resources, including a 64-channel enhanced direct memory access (EDMA) controller; external memory interfaces EMIFA/EMIFB with 64-bit/16-bit data buses; a 33 MHz, 32-bit PCI interface and a UTOPIA interface for asynchronous transfer mode; a 16-bit or 32-bit host port interface; and three multi-channel buffered serial ports.
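The stated cache geometries follow directly from the cache sizes and line sizes; the small C fragment below simply encodes that arithmetic as a check.

```c
/* L1P: 16 KB direct-mapped, 32 B lines      -> 16384 / 32       = 512 sets
 * L1D: 16 KB two-way set-associative, 64 B  -> 16384 / (2 * 64) = 128 sets */
enum {
    L1P_SIZE = 16 * 1024, L1P_LINE = 32,
    L1D_SIZE = 16 * 1024, L1D_LINE = 64, L1D_WAYS = 2,
    L1P_SETS = L1P_SIZE / L1P_LINE,               /* 512 */
    L1D_SETS = L1D_SIZE / (L1D_WAYS * L1D_LINE)   /* 128 */
};
```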
Its improved internal structure, strong parallel processing capability, and abundant peripheral resources give the C64x great potential in image processing. To improve the real-time performance of the system, this paper uses a TMS320C6416GLZ running at 400 MHz as the target image processing unit of the recognition system.
2.2 Target image acquisition and processing module
This module is implemented with the TMS320C6416 DSP and an FPGA operating in master/slave mode. The DSP processes the target image and controls the start of FPGA sampling, while the FPGA carries out the sampling control of the target image. The hardware structure is shown in Figure 2.
The video signal captured by the camera is first conditioned: it is clamped and amplified, and its synchronization signals are separated. The DSP then starts image sampling, i.e., it commands the FPGA to sample the image and monitors the sampling-complete signal from the FPGA by polling the external interrupt.
TI's TLC5510 is used for high-speed A/D sampling. The TLC5510 is an 8-bit, 20 MSPS parallel ADC with a 5 V supply and a maximum input range of 2 V. To keep the processing real-time, the system captures grayscale images only: the CCD frame rate is 30 Hz, the frame resolution is 512 × 512 pixels, and each pixel is quantized to 8 bits.
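As a quick sanity check on the sampling budget (using only the figures quoted above), the required pixel rate is well below the ADC's maximum rate:

```c
#include <stdio.h>

int main(void)
{
    const double width = 512, height = 512;    /* pixels per frame          */
    const double fps = 30.0;                   /* CCD frame rate from text  */
    const double adc_max_sps = 20e6;           /* TLC5510 maximum rate      */

    double pixel_rate = width * height * fps;  /* 8-bit samples per second  */

    printf("required sample rate : %.2f Msps\n", pixel_rate / 1e6); /* ~7.86 */
    printf("ADC headroom         : %.1fx\n", adc_max_sps / pixel_rate);
    return 0;
}
```

At roughly 7.86 Msps against a 20 MSPS converter, the ADC leaves about a 2.5× margin for blanking intervals and clocking overhead.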
Driven by the line (HS) and field (VS) synchronization signals and the clock, the FPGA generates the A/D sampling control signals that govern the sampling process. The FPGA also provides the memory addresses together with the chip-select and read/write control signals, and the digitized data are written into the FPGA's RAM at the generated address whenever RAM_W is asserted, ready for image preprocessing.
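The sampling control itself is implemented in the FPGA's logic; the C fragment below is only a behavioural model of that address generation, written to make the HS/VS-driven control flow concrete. The signal and buffer names are illustrative.

```c
#include <stdint.h>

#define LINE_PIXELS  512
#define FRAME_LINES  512

static uint8_t frame_ram[FRAME_LINES][LINE_PIXELS];  /* models the FPGA RAM */

/* Behavioural model of the FPGA sampling control: on every pixel clock
 * while VS and HS are active, latch the ADC output and advance the
 * write address; the store itself stands in for asserting RAM_W. */
void sample_pixel(int vs_active, int hs_active, uint8_t adc_data)
{
    static int line = 0, col = 0;

    if (!vs_active) {                 /* field blanking: restart the frame */
        line = col = 0;
        return;
    }
    if (!hs_active) {                 /* line blanking: move to next line  */
        if (col != 0) { line++; col = 0; }
        return;
    }
    if (line < FRAME_LINES && col < LINE_PIXELS)
        frame_ram[line][col++] = adc_data;   /* write under RAM_W */
}
```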
When sampling is complete, the FPGA generates an external interrupt request to the DSP, which enters interrupt processing: the FPGA provides the RAM address, and with RAM_R asserted the DSP reads the sampled data from the RAM into the synchronous dynamic memory (SDRAM) in EDMA mode. The SDRAM is organized as 4 banks × 512 K × 32 bit and clocked at 166 MHz, which satisfies the storage capacity and real-time requirements of the system. After the transfer is completed, the DSP starts the FPGA to sample the next frame, the FPGA re-enters its sampling control process, and the DSP performs preprocessing and distortion processing on the target image data.
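The DSP-side handling of the "sampling complete" interrupt can be outlined as below. This is a hedged sketch only: the memory-mapped addresses and control bit are placeholders, and the real design moves the block with the EDMA controller rather than the CPU copy shown here.

```c
#include <stdint.h>
#include <string.h>

#define FRAME_BYTES   (512u * 512u)            /* one 8-bit gray frame       */
#define FPGA_RAM_BASE 0xA0000000u              /* EMIF window (illustrative) */
#define FPGA_CTRL     (*(volatile uint32_t *)0xA0100000u)  /* illustrative   */
#define FPGA_START    0x1u                     /* "start next frame" bit     */

static uint8_t sdram_frame[FRAME_BYTES];       /* frame buffer in SDRAM      */

/* Called from the DSP's external-interrupt service routine once the
 * FPGA signals "sampling complete".  The real system performs this
 * move with EDMA; a plain copy keeps the sketch self-contained. */
void on_frame_ready(void)
{
    const uint8_t *src = (const uint8_t *)FPGA_RAM_BASE;

    memcpy(sdram_frame, src, FRAME_BYTES);     /* FPGA RAM -> SDRAM          */
    FPGA_CTRL = FPGA_START;                    /* let FPGA sample next frame */

    /* sdram_frame is now ready for preprocessing / distortion processing */
}
```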
After the target image data have been processed, the DSP outputs the joint input image, consisting of the processed target image and the reference image stored in ROM, in real time to the designated area of the LCD TV for optical information processing.
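In a joint transform correlator the target and reference are placed side by side in a single joint input frame. A minimal sketch of composing that frame is shown below; the image sizes and layout are illustrative, since the actual geometry follows the LCD used in the optical path.

```c
#include <stdint.h>
#include <string.h>

#define IMG_W   512
#define IMG_H   512
#define JOINT_W (2 * IMG_W)     /* target and reference side by side */

/* Build the joint input image sent to the LCD: the processed target in
 * the left half, the stored reference in the right half. */
void build_joint_image(uint8_t joint[IMG_H][JOINT_W],
                       const uint8_t target[IMG_H][IMG_W],
                       const uint8_t reference[IMG_H][IMG_W])
{
    for (int y = 0; y < IMG_H; y++) {
        memcpy(&joint[y][0],     target[y],    IMG_W);   /* left half  */
        memcpy(&joint[y][IMG_W], reference[y], IMG_W);   /* right half */
    }
}
```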
2.3 Automatic identification module
The automatic identification module is implemented with Samsung's ARM processor. The S3C2440 is a 32-bit RISC embedded processor based on the ARM920T core; the ARM core can run at up to 533 MHz, and 499 MHz is used here. Besides integrating three serial ports, an SD card controller, a USB host controller, an LCD controller, a NAND Flash controller, and a real-time clock, it also adds an industrial control bus (CAN), a camera controller (digital camera interface), and a PCMCIA interface (for peripherals such as a wireless network card or modem). A 96-pin bus slot brings out the CPU's local bus, so other bus devices can be connected for multi-party communication. The S3C2440 is widely used in industrial control, multimedia processing, consumer electronics, and network communication.
The interface block diagram of the S3C2440 is shown in Figure 3. The S3C2440 has a built-in camera controller that supports image input up to 4096 × 4096 pixels, so this system uses a 1.3-megapixel camera for video acquisition to obtain the joint spectrum image. The camera controller handles data conversion and storage of the spectrum image; the spectrum is then amplitude modulated and inverse Fourier transformed to obtain the cross-correlation result for discrimination.
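One common form of the amplitude modulation step is to binarize the captured joint power spectrum before the inverse transform, which sharpens the correlation peaks. The sketch below shows only that modulation step; the 2-D inverse FFT is assumed to be provided elsewhere, and `ifft2d` is a placeholder name, not an API from the article.

```c
#include <stdint.h>

#define SPEC_W 512
#define SPEC_H 512

/* Placeholder for an inverse 2-D FFT supplied by a separate routine or
 * library; it would be applied to the modulated spectrum afterwards. */
void ifft2d(const float *in, float *out_corr, int w, int h);

/* Amplitude modulation (binarization) of the captured joint power
 * spectrum: values above the threshold become 1, the rest 0. */
void modulate_spectrum(const uint8_t spec[SPEC_H][SPEC_W],
                       float mod[SPEC_H][SPEC_W], uint8_t threshold)
{
    for (int y = 0; y < SPEC_H; y++)
        for (int x = 0; x < SPEC_W; x++)
            mod[y][x] = (spec[y][x] > threshold) ? 1.0f : 0.0f;
}
```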
In Figure 3, the 64 MB NAND Flash (Samsung K9F1208) stores the application programs; the 2 MB NOR Flash (AMD AM29LV160DB) stores the bootloader and kernel; the 64 MB SDRAM is Hyundai's HY57V561620; and the 32 KB FRAM (ferroelectric memory) reduces frequent Flash operations, extends the life of the Flash, and prevents data loss on power-off.
As the main control processor, the S3C2440 also communicates with the host computer and can connect to the Internet through a network card, making the system intelligent and network-enabled. Data can also be accessed through the USB interface.
2.4 System software main process
The main working flow of the optoelectronic hybrid image recognition system is shown in Figure 4. After the ARM and DSP are initialized, the DSP program is loaded through the HPI port and the DSP is activated by an interrupt; once the DSP is running, it starts the FPGA, and the FPGA controls the A/D sampling chip to capture images in real time.
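A hedged outline of this ARM-side flow is sketched below. Every helper is a stub standing in for the real board drivers; the names are illustrative placeholders, not APIs from the article, and the loop mirrors the recognition cycle described in Section 1.

```c
#include <stdio.h>

/* Stubs for the board drivers; names and behaviour are illustrative. */
static void load_dsp_program_via_hpi(void) { puts("load DSP code over HPI"); }
static void interrupt_dsp(void)            { puts("interrupt DSP -> DSP starts FPGA"); }
static void capture_power_spectrum(void)   { puts("camera 2: grab correlation power spectrum"); }
static int  correlate_and_classify(void)   { return 1; /* pretend: real target */ }

int main(void)
{
    load_dsp_program_via_hpi();    /* ARM and DSP already initialised    */
    interrupt_dsp();               /* DSP then starts the FPGA sampling  */

    for (;;) {
        capture_power_spectrum();           /* correlation power spectrum */
        if (correlate_and_classify())
            break;                          /* real target recognised     */
        /* false target: tell the DSP (via the communication interface)
         * to acquire and process the next target image, then loop */
    }
    return 0;
}
```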
3 Conclusion
This paper studies and designs a new type of optoelectronic image recognition system based on dual-CPU technology. The system uses the TMS320C6416 and an FPGA to acquire and process target images, obtains the joint spectrum of the image through the optoelectronic joint transform correlator, and uses the S3C2440 to capture the correlation power spectrum and automatically recognize the target. The recognition system processes 25 frames/s, achieving recognition of truly dynamic images. Compared with traditional optoelectronic image recognition systems, it offers better real-time performance and accuracy, is intelligent and network-enabled, and has high practical value.