Design Method of CPLD Vision System Using Image Sensor

Publisher: liliukan | Last updated: 2011-08-03


A low-cost embedded vision system is built from a CMOS image sensor, a CPLD, an ARM7 microprocessor and SRAM. The CPLD recognizes the sensor's synchronization timing, which solves the strict timing-synchronization and bus-contention problems that arise when two controllers share one SRAM in an image acquisition system: a Mealy state machine written in Verilog controls the writing of image data into SRAM, and a multiplexer performs the bus switching that avoids bus conflicts. The image processing algorithms, implemented on the ARM, emphasize efficiency. The resulting system runs at 25 frames/s.

Keywords: OV6620, vision system, CMOS, EPM7128, LPC2214

Research on vision systems is currently a hot topic, and mature systems exist for reference. Most of them, however, are PC-based: after the image data is collected, the vision processing algorithms run on a PC, and the complexity of the algorithms and hardware limits their use in small embedded systems. With advances in embedded microprocessor technology, 32-bit ARM processors now offer high computing speed and strong signal processing capability, so they can replace the PC as the vision processor for simple vision algorithms. This article introduces an embedded vision system based on ARM and CPLD, in the hope of sharing some experience in embedded vision development.

1 System Solution and Principle

In the design of embedded vision, there are currently two mainstream solutions:

Solution 1 Image sensor + microprocessor (ARM or DSP) + SRAM
Solution 2 Image sensor + CPLD/FPGA + microprocessor + SRAM

Solution 1 has a compact structure and low power consumption. During image acquisition, however, recognizing the synchronization timing signals output by the image sensor relies on ARM interrupts, and during interrupt handling the microprocessor must perform a program jump, save the context, and so on [1], which reduces the acquisition speed. It is suitable for applications with low power budgets and modest acquisition-speed requirements.

Solution 2 uses a CPLD to recognize the image sensor's synchronization timing without microprocessor interrupts, which increases the acquisition speed, but the added CPLD raises the system's power consumption.

To combine the advantages of the two, the hardware here adopts "ARM + CPLD + image sensor + SRAM". This scheme makes full use of the programmability of the CPLD and, through software configuration, can also take on the character of Solution 1. Specifically:

① Power consumption is controllable. For applications with strict power budgets, the CPLD can be programmed to route the sensor's timing signals straight through to the ARM's interrupt pins, with only combinational logic on the bus connections; this reduces the CPLD's power consumption and yields the behavior of Solution 1. Where acquisition speed matters more than power, the CPLD's combinational and sequential logic can be fully exploited to recognize the sensor's synchronization signals and write the image data into SRAM.
② Device choices are flexible. In the hardware design all buses are routed through the CPLD, and in the software design the modules are packaged separately by function. With the CPLD as the hub, other components of the system can then be replaced without changing the CPLD program, which makes function upgrades easier.

As an application of this system, a visual tracking program was developed that tracks an object whose color contrasts sharply with the background. The centroid coordinates of the tracked object are computed from its color by processing the data collected by the CMOS camera in real time. The functions of each part of the system are described below.

2 System Hardware

2.1 Hardware composition and connection

The hardware of the system mainly consists of four parts: CMOS image sensor OV6620, programmable device CPLD, 512 KB SRAM and 32-bit microprocessor LPC2214.

The OV6620 is a CMOS image sensor produced by OmniVision. Its high performance and low power consumption make it well suited to embedded image acquisition, and it is the source of all image data in this system. The CPLD is Altera's EPM7128S, programmed in the Verilog hardware description language using Quartus II. The IS61LV5128 serves as the system's data buffer; its random-access interface is convenient for image processing code. With PLL (phase-locked loop) support, the LPC2214 runs at up to 60 MHz, providing the hardware basis for fast image processing.

OV6620 is integrated on a board with an independent 17 MHz crystal oscillator. It outputs three image synchronization timing signals: pixel clock PCLK, frame synchronization VSYNC and line synchronization HREF. At the same time, it can also output image data in RGB or YCrCb format through an 8-bit or 16-bit data bus.

In hardware design, there are two problems that need to be solved:

① Strict timing synchronization of image acquisition;
② Bus arbitration of dual CPUs sharing SRAM.

The key to the first problem is reading the OV6620's timing outputs accurately and in real time, and writing the image data into SRAM accordingly. The approach here is to let the CPLD recognize the timing signals and perform the writes: the CPLD detects signal edges in hardware, which is fast, and a Mealy state machine written in Verilog drives the SRAM writes, which is stable.

The problem of two controllers sharing the SRAM is solved by the connection scheme. Taking advantage of the CPLD's programmability, the OV6620's data bus, the LPC2214's address and data buses, and the SRAM's buses are all routed through the CPLD, and the interconnections are controlled by the CPLD program. As long as software guarantees mutual exclusion — only one controller (CPLD or LPC2214) drives the SRAM bus at any moment — bus conflicts are avoided. Hardware arbitration is thus guaranteed by software, implemented as a multiplexer programmed into the CPLD.

The connection relationship between the various components is shown in Figure 1.

Figure 1 System structure diagram

As shown in Figure 1, the microprocessor's buses are connected to the CPLD. Where power consumption is critical, it is only necessary to route, inside the CPLD, the OV6620's synchronization timing pins to the LPC2214 interrupt pins connected to the CPLD, and the system becomes the form of Solution 1; the CPLD pins then drive only combinational logic, which reduces power consumption. The working process of Solution 1 is described in reference [1]. For higher acquisition speeds, the source code of the CPLD part is available on the journal's website. The OV6620's output timing is shown in Figure 2.

Figure 2 OV6620 output timing diagram

In Verilog, rising-edge detection is implemented with an always block — for example, detecting the rising edge of the pixel clock cam_pclk.

Figure 3 Line graph obtained by line processing

Based on the results obtained, more information about the tracked object can be calculated:

① Calculate the area of the region. Compute the length l(n) of the tracking segment in each row n, then accumulate the l(n) to obtain the area S of the tracked region:

S = Σ l(n)

② Calculate the horizontal coordinate of the center of mass.


③ Calculate the vertical coordinate of the center of mass.


④ Identify the shape of the object. From the length of the tracking segment in each row, and the number of separate runs of tracking points within the same row, the shape of the object as seen by the camera can be determined. In particular, when detecting lines on a plane it is possible to tell whether the line branches, something the frame processing mode cannot do.
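The formulas for ①–③ appear only as figures in the original. Assuming each row n contributes a segment of length l(n) starting at column x_s(n), the standard forms would be (our reconstruction, not the original equations):

```latex
S = \sum_{n} l(n), \qquad
\bar{x} = \frac{1}{S}\sum_{n} l(n)\left(x_s(n) + \frac{l(n)-1}{2}\right), \qquad
\bar{y} = \frac{1}{S}\sum_{n} n\, l(n)
```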

It should be pointed out that although row processing yields more information about the tracked target, processing every row increases the load on the processor, so it is not as fast as frame processing.
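The row-processing computation described above can be sketched in C as follows. The per-row segment representation (start column plus length) and all names here are our assumptions, since the original listings are not reproduced:

```c
/* One detected run of tracking pixels per image row (hypothetical layout);
 * len == 0 means the row contains no tracking pixels. */
typedef struct {
    int start;   /* first column of the run */
    int len;     /* l(n): number of tracking pixels in row n */
} row_seg_t;

typedef struct {
    long   area;   /* S = sum of l(n) */
    double cx;     /* centroid column */
    double cy;     /* centroid row */
} blob_t;

static blob_t blob_from_rows(const row_seg_t *seg, int rows)
{
    long area = 0, sum_x = 0, sum_y = 0;
    int n;

    for (n = 0; n < rows; n++) {
        long l = seg[n].len;
        area  += l;
        /* Sum of the column indices inside the run:
         * start + (start+1) + ... + (start+l-1) = l*start + l*(l-1)/2 */
        sum_x += l * seg[n].start + l * (l - 1) / 2;
        sum_y += l * n;
    }

    blob_t b = { area, 0.0, 0.0 };
    if (area > 0) {
        b.cx = (double)sum_x / (double)area;
        b.cy = (double)sum_y / (double)area;
    }
    return b;
}
```

Accumulating in integers and dividing only once at the end keeps the per-row cost low, which matters given the processing-load concern noted above.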

4 Improving the System's Operating Speed

At present the system runs at 25 frames/s in frame processing mode, with color tracking as the demonstration algorithm. With pure image acquisition and no image processing, the system reaches the OV6620's maximum rate of 60 frames/s. The efficiency of the image processing code therefore has a large effect on the achievable frame rate. Some suggestions for improving program efficiency on ARM processors in general:

① Inlining improves performance by removing the overhead of a function call. If a function is not called from other modules, it is good practice to mark it static; otherwise the compiler must also emit a non-inline, out-of-line copy of the function.
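As an illustration (the helper name and fixed-point scaling are invented for this example):

```c
/* Hypothetical helper, local to this module. Marking it 'static' tells the
 * compiler it cannot be referenced from other translation units, so it is
 * free to inline every call and drop the out-of-line copy entirely. */
static int scale_pixel(int value, int gain_q8)
{
    return (value * gain_q8) >> 8;   /* Q8 fixed-point multiply */
}
```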
② In the ARM calling convention, the first four function arguments are passed in R0–R3; any further arguments are passed on the stack, which needs extra instructions and slow memory accesses. Limit the number of arguments to four or fewer where possible; if that is unavoidable, put the four most frequently used arguments first so they land in R0–R3.
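One common way to stay within four arguments is to group related values in a struct and pass a pointer; the type and field names below are invented for illustration:

```c
/* Hypothetical parameter block: six related values travel through one
 * pointer (a single register) instead of four registers plus stack slots. */
typedef struct {
    int x0, y0, x1, y1;   /* region of interest */
    int threshold;        /* tracking threshold */
    int channel;          /* color channel to test */
} roi_params_t;

/* The callee receives a single register argument. */
static int roi_width(const roi_params_t *p)
{
    return p->x1 - p->x0;
}
```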
③ In for(), while() and do…while() loops, count down to zero instead of counting up to a limit. For example:

for (loop = 1; loop <= total; loop++)   // ADD and CMP
Replace with:
for (loop = total; loop != 0; loop--)   // SUBS

The first method requires two instructions, ADD and CMP, while the second method only requires one instruction, SUBS.

④ ARM cores have no division hardware; division is usually implemented by a run-time library routine that takes many cycles. Some divisions are handled as special cases at compile time — for example, division by a power of 2 becomes a shift. Modulo arithmetic is usually written with the remainder operator "%", but if the modulus is not a power of 2 this again costs a division, wasting time and code space. The workaround is to replace it with an if() range check.

For example, if the range of count is 0~59:
count = (count+1) % 60;
replace it with the following statement:
if (++count >= 60)
count = 0;
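Wrapped as small helpers (our names), the two forms above can be checked against each other:

```c
/* Modulo form: on ARM7 this compiles to a library division call,
 * since 60 is not a power of 2. */
static int next_mod(int count)
{
    return (count + 1) % 60;
}

/* if() form: one increment, one compare, one conditional assignment —
 * no division. */
static int next_if(int count)
{
    if (++count >= 60)
        count = 0;
    return count;
}
```

Both return the same value for every count in 0–59; only the generated code differs.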

⑤ Avoid using large local structures or arrays. Consider using malloc/free instead.
⑥ Avoid using recursion.

Conclusion

This article has introduced an embedded vision system based on ARM and CPLD that performs color tracking. In the hardware design, image acquisition and image processing are separated, which makes function upgrades easier. The vision processing algorithms emphasize efficiency and real-time performance, with two processing modes to choose from according to need, and some suggestions for improving program efficiency were given. Compared with PC-based vision systems, this system has low power consumption and small size, and is suitable for fields such as mobile robots.
