
Playing with Zynq Series 44: [ex63] Image smoothing of the MT9V034 camera

1 System Overview

As shown in the figure, this is the block diagram of the entire video acquisition system. At power-on, the FPGA first initializes the registers of the CMOS sensor over the IIC interface. These basic initialization parameters, that is, the initialization data corresponding to each initialization address, are stored in a pre-configured FPGA on-chip ROM. Once initialization is complete, the CMOS sensor continuously outputs a standard RGB video data stream. The FPGA detects its synchronization signals, such as the pixel clock, line sync, and frame sync, to capture image data from the data bus in real time. The MT9V034 camera outputs a normal video stream with its default register values, so in this design the FPGA actually performs no IIC initialization.

Inside the FPGA, the captured video data first passes through a FIFO that converts the original 25 MHz synchronous data stream to 50 MHz. The data is then sent to the asynchronous FIFO that feeds the DDR3 write cache. Once this FIFO holds a certain amount of data, the data is written to DDR3 over the AXI HP0 bus. At the same time, the AXI HP0 bus also reads the image data cached in DDR3 back into a FIFO and finally sends it to the LCD driver module for display. The LCD driver module continuously issues read requests for image data and drives the LCD to display the video.

In addition to the DDR3 caching and display of the original image described above, this example also buffers multiple lines of the image and applies a smoothing operation before the original image is cached to DDR3, producing a new, smoothed image stream that is written to DDR3 over the AXI HP1 bus. The AXI HP1 bus likewise reads the processed image for display at the request of the LCD display module. Finally, on the VGA LCD monitor, the image on the left is the original image and the image on the right is the smoothed image.

2 Image smoothing and filtering

2.1 Basic Concepts

From a statistical point of view, noise whose statistical characteristics do not change with time is called stationary noise, while noise whose statistical characteristics do change with time is called non-stationary noise. If the noise amplitude is essentially constant but its location is random, it is called salt-and-pepper noise. If the amplitude itself is random, then depending on the amplitude distribution it is classified as either Gaussian noise or Rayleigh noise.
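The two noise families can be contrasted with a short NumPy sketch (illustrative only; the image size, noise sigma, and 10% corruption rate are arbitrary choices, not values from this project):

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.full((64, 64), 128.0)           # flat grey test image

# Gaussian noise: the amplitude is random, normally distributed
gauss = img + rng.normal(0.0, 10.0, img.shape)

# Salt-and-pepper noise: the amplitude is fixed (0 or 255), the location is random
sp = img.copy()
mask = rng.random(img.shape)
sp[mask < 0.05] = 0.0                    # "pepper"
sp[mask > 0.95] = 255.0                  # "salt"

print(round(gauss.std(), 1))             # close to the chosen sigma of 10
print(int((sp != 128.0).sum()))          # roughly 10% of the 4096 pixels corrupted
```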

Image filtering, that is, suppressing noise in the target image while preserving as much image detail as possible, is an indispensable step in image preprocessing. Its quality directly affects the effectiveness and reliability of subsequent image processing and analysis.

Eliminating noise components from an image is called smoothing or filtering. It is common for the energy of a signal or image to be mostly concentrated in the low and mid-frequency bands of the amplitude spectrum, while in the higher frequency bands, the information of interest is often drowned out by noise. Therefore, a filter that reduces the amplitude of high-frequency components can reduce the effects of noise.

There are two purposes of image filtering: one is to extract the features of the object as the feature pattern of image recognition; the other is to eliminate the noise mixed in when the image is digitized to meet the requirements of image processing. There are also two requirements for filtering processing: one is not to damage important information such as the outline and edge of the image; the other is to make the image clear and have a good visual effect.

Smoothing filtering is a spatial domain filtering technology for low-frequency enhancement. It has two purposes: one is blurring; the other is to eliminate noise. Smoothing filtering in the spatial domain is generally performed using a simple averaging method, which is to find the average brightness value of neighboring pixels. The size of the neighborhood is directly related to the smoothing effect. The larger the neighborhood, the better the smoothing effect. However, if the neighborhood is too large, smoothing will cause greater loss of edge information, making the output image blurred. Therefore, the size of the neighborhood needs to be reasonably selected.
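The neighborhood-size trade-off described above can be demonstrated with a small NumPy sketch (illustrative; the `mean_filter` helper and the 16x16 test image are invented for this example). A 7x7 box average spreads a sharp edge over a much wider transition band than a 3x3 one, i.e. more smoothing but more edge loss:

```python
import numpy as np

def mean_filter(img, k):
    """Average each pixel over a k x k neighborhood (k odd); border pixels are left as-is."""
    r = k // 2
    out = img.copy()
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            out[i, j] = img[i-r:i+r+1, j-r:j+r+1].mean()
    return out

# A sharp vertical edge: 0 on the left half, 255 on the right half
img = np.zeros((16, 16))
img[:, 8:] = 255.0

# Count pixels in one row that end up between the two extreme levels
edge_width = lambda im: int(np.sum((im[8] > 10) & (im[8] < 245)))
print(edge_width(mean_filter(img, 3)))   # → 2  (narrow transition band)
print(edge_width(mean_filter(img, 7)))   # → 6  (wider band: more edge loss)
```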

Regarding filters, a vivid metaphor: imagine the filter as a window containing weighting coefficients. To smooth an image with this filter, we slide the window across the image and, at each position, combine the pixels beneath it according to the weights. For an everyday example, consider the skin-smoothing function of beauty apps: if the bumps on our face are the noise, the filtering algorithm removes that noise and makes the skin look smooth in selfies.

2.2 Filtering algorithm

The various filtering algorithms are as follows:

Limiting filter method (also known as program judgment filter method)

Median filter

Arithmetic mean filtering

Gaussian filtering

Recursive average filtering (also known as sliding average filtering)

Median average filtering method (also known as pulse interference-proof average filtering method)

Amplitude-limited filtering

First-order lag filtering method

Weighted recursive average filtering method

Anti-jitter filtering method

Limiting and de-jittering filtering method

Kalman filter (non-extended Kalman)

2.3 Mean Filtering

The mean filter is a common filter in image processing, mainly used to smooth out noise. Its principle is to replace each pixel with the average value of the pixels around it, which smooths the noise.

For example, pixels 1 to 8 are the 8 pixels surrounding point (x, y). The simplest mean filter replaces the original value at (x, y) with the average of (x, y) and its 8 surrounding pixels.

The advantages of this filtering method are obvious: the algorithm is simple and fast. The disadvantage is that while it reduces noise, it also blurs the image, especially the edges and fine details of the scene.
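As a tiny worked example (a NumPy sketch, not from the original project), averaging a pixel with its 8 neighbors strongly suppresses an isolated outlier:

```python
import numpy as np

# 3x3 mean filter applied at one pixel: replace (x, y) by the average of
# itself and its 8 neighbours.
img = np.array([[10,  10, 10],
                [10, 100, 10],
                [10,  10, 10]], dtype=float)

smoothed_center = img.mean()             # (8*10 + 100) / 9
print(smoothed_center)                   # → 20.0, the 100 outlier is flattened
```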

2.4 Weighted mean filter

Since the center pixel and its surrounding pixels differ in importance, we can improve on the plain mean filter: by weighting the center more heavily, we obtain the smoothing effect while reducing, to some extent, the loss of image edges and details.

Based on 1/16 weighted mean filtering, our Matlab code is as follows:

clear
clc
I1 = imread('.\lena.jpg');
I  = im2double(I1);
[m,n,c] = size(I);
A = zeros(m,n,c);

% 3x3 kernel:        1 2 1
%          1/16  *   2 4 2
%                    1 2 1
% Apply the kernel to each color channel (R, G, B)
for k = 1:c
    for i = 2:m-1
        for j = 2:n-1
            A(i,j,k) = I(i-1,j-1,k) + I(i+1,j-1,k) + I(i-1,j+1,k) + I(i+1,j+1,k) ...
                     + 2*I(i+1,j,k) + 2*I(i-1,j,k) + 2*I(i,j+1,k) + 2*I(i,j-1,k) ...
                     + 4*I(i,j,k);
        end
    end
end
B = A/16;

% output
imwrite(B,'lena.tif','tif');
imshow('.\lena.jpg'); title('origin image'); figure
imshow('lena.tif'); title('image after average filter')

The filtering effect is as follows.

The Matlab source code, the original image Lena.jpg and the comparison image are stored in the project\zstar_ex63\matlab folder.
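For readers without Matlab, the same 1/16 weighted mean filter can be sketched in Python/NumPy (an illustrative re-implementation, not part of the project files; the 5x5 test image and the `weighted_mean_filter` name are invented for this example):

```python
import numpy as np

# The same 1/16 weighted-mean kernel as the Matlab code
kernel = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=float) / 16.0

def weighted_mean_filter(img):
    """Apply the 3x3 kernel to a 2-D image; border pixels stay 0,
    matching the Matlab version (A is zero-initialised)."""
    m, n = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(1, m - 1):
        for j in range(1, n - 1):
            out[i, j] = np.sum(img[i-1:i+2, j-1:j+2] * kernel)
    return out

img = np.full((5, 5), 100.0)
img[2, 2] = 228.0                        # a single bright outlier
out = weighted_mean_filter(img)
print(out[2, 2])                         # → 132.0, pulled toward its neighbours
```

Note that the center pixel keeps a weight of 4/16, so an outlier is attenuated but not averaged away as aggressively as with the plain mean filter.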

3 Image smoothing processing based on FPGA

The average_filter.v module in the project folder project\zstar_ex63\zstar.srcs\sources_1\new implements the 1/16 weighted mean filtering of the image. Its functional block diagram is as follows. Two FIFOs cache the previous two rows, so the three data streams entering the image-processing logic correspond to rows n-1, n, and n+1 of the image. The input data stream and the two FIFO outputs are aligned to the same column position, and registers cache the pixel values of the columns immediately before and after the current one. In this way, the center pixel can be processed synchronously with its neighbors in the adjacent columns and rows.
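The two-FIFO line-buffering idea can be modeled in a few lines of Python (a software illustration only, not the logic of average_filter.v; `line_buffer_windows` and the 3-pixel line width are made up for this sketch). Each incoming pixel of row n+1 pops the same-column pixels of rows n and n-1 out of the two line FIFOs, so all three rows are available in the same cycle:

```python
from collections import deque

def line_buffer_windows(pixels, width):
    """Model of the two-FIFO line buffer: for each incoming pixel of row n+1,
    emit the same-column pixels of rows n-1, n, and n+1."""
    buf1 = deque()                       # line FIFO caching the previous row
    buf0 = deque()                       # line FIFO caching the row before that
    triples = []
    for pix in pixels:
        buf1.append(pix)
        if len(buf1) > width:
            mid = buf1.popleft()         # row n, same column as pix
            buf0.append(mid)
            if len(buf0) > width:
                top = buf0.popleft()     # row n-1, same column
                triples.append((top, mid, pix))
    return triples

# Three 3-pixel rows streamed in raster order
stream = [0, 1, 2, 10, 11, 12, 20, 21, 22]
print(line_buffer_windows(stream, 3))    # → [(0, 10, 20), (1, 11, 21), (2, 12, 22)]
```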

4 Assembly Instructions

The MT9V034 camera module is connected to the Zstar Zynq development board through the Zstar ISB baseboard (P3), and the VGA board is likewise connected through the Zstar ISB baseboard. The VGA board also needs to be connected to a VGA display. The connection diagram is shown in the figure.

5 Board-level Debugging

This example corresponds to the ex63 example project; the prepared BOOT.bin is located in the project path "zstar_ex63\zstar.sdk\BOOT". You can also refer to the document "Playing with Zynq - Example: [ex51] Making the boot file BOOT.bin for a bare-metal program.pdf" to make a BOOT.bin containing the .bit file. Copy it to the TF card, insert the card into the slot on the Zstar development board, make the assembly connections, and power on. The VGA monitor then displays two images side by side: the left is the original image and the right is the smoothed image.



This content is originally created by EEWORLD forum user ove. If you want to reprint it or use it for commercial purposes, you must obtain the author's consent and indicate the source.

This post is from FPGA/CPLD
 
