
How to create a low-cost imaging system using FPGA and CMOS

Source: Internet | Publisher: 国民男神经 | Keywords: FPGA, CMOS image sensor | Updated: 2024/05/24

Not all imaging systems need to be expensive. Solutions can be created directly using cost-optimized FPGAs and CMOS image sensors.

Introduction

Developing embedded vision systems does not require expensive FPGAs or SoCs, large frame-buffer memories, or external cameras.

We can develop very powerful image processing systems using cost-optimized FPGAs/SoCs directly connected to CMOS sensors. This allows creating a solution that not only achieves cost targets but is also compact and energy-efficient.

Interfacing directly with a sensor is different from interfacing with a camera, as we have done previously. When we interface with a camera, we receive the video signal via HDMI, Camera Link, etc., which is fairly straightforward.

When we interface with an image sensor, we usually receive images in lower-level formats, such as MIPI or parallel video, and before we can receive any video we must first configure the imager to operate as we need.

Typically, imagers need to be configured via I2C or SPI, and the number of commands sent over the interface can be large.
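To give a sense of this configuration traffic, below is a minimal polled-mode sketch of a single register write using the low-level XIic API from the Xilinx driver. The 7-bit device address CAM_I2C_ADDR and the 16-bit register addressing are assumptions typical of imagers in this class, not values taken from the original project.

#include "xiic_l.h"     /* low-level polled AXI IIC API */
#include "xil_types.h"

#define CAM_I2C_ADDR 0x48   /* assumed 7-bit imager address */

/* Write one 16-bit value to a 16-bit register address over I2C.
 * Returns the number of bytes sent (4 on success). */
static unsigned cam_write_reg(UINTPTR iic_base, u16 reg, u16 value)
{
    u8 buf[4];
    buf[0] = (u8)(reg >> 8);     /* register address, high byte */
    buf[1] = (u8)(reg & 0xFF);   /* register address, low byte  */
    buf[2] = (u8)(value >> 8);   /* data, high byte             */
    buf[3] = (u8)(value & 0xFF); /* data, low byte              */
    return XIic_Send(iic_base, CAM_I2C_ADDR, buf, 4, XIIC_STOP);
}

A full imager bring-up is simply several hundred such writes played back in sequence.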

To demonstrate how we can integrate a sensor with a cost-optimized FPGA, in this project we will investigate integrating the:

TDNext 1.26 Megapixel Pmod

Arty S7-50

Since the Arty S7 does not provide HDMI or other video outputs directly on the board, this example will use an Avnet 10" touchscreen. However, this is optional. Another option for outputting the final image is the Digilent Pmod VGA. This Pmod can also be used to implement a very low cost solution.

Interfacing with the TDNext Pmod is very simple and can be divided into two elements: video and configuration.

The video interface consists of 10-bit pixels (split into 8 bits plus 2 LSBs), frame valid, line valid, a pixel clock, and a 24 MHz reference clock.

The configuration interface consists of an I2C connection to the imaging device and an I2C IO expander to generate a reset to the imager.

The solution is architected as follows: a soft-core processor will be used to configure the imager via I2C, while the image processing path will be implemented within the FPGA. Since this is a low-cost application, the solution will not implement an external frame buffer in DDR memory; instead, the image processing pipeline will be contained entirely within the FPGA.

The design will also use a soft-core processor to control video timing and other related configuration tasks of the image processing path.

Background

TDNext is a color imager, which means that the imager applies a Bayer pattern that filters the wavelengths at each pixel. This means that during the integration period, each pixel only accumulates photons of red, green, or blue wavelengths.

At the completion of the integration time, each pixel is read out as an 8-bit or 10-bit value, referred to as a RAW8 or RAW10 pixel. To recreate the color image, the values of surrounding pixels, which have captured different wavelengths, are combined using a debayer (demosaic) algorithm.
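To make the idea concrete, here is a minimal sketch of bilinear demosaicing for a single interior pixel of a RAW8 frame. The RGGB phase and frame layout are assumptions for illustration; this is not the exact algorithm inside the Sensor Demosaic IP.

#include <stdint.h>

/* Bilinear debayer for one interior pixel (x, y) of a RAW8 frame,
 * assuming an RGGB phase: even rows are R G R G..., odd rows G B G B... */
static void debayer_pixel(const uint8_t *raw, int width, int x, int y,
                          uint8_t *r, uint8_t *g, uint8_t *b)
{
    #define P(px, py) raw[(py) * width + (px)]
    int even_row = ((y & 1) == 0);
    int even_col = ((x & 1) == 0);

    if (even_row && even_col) {            /* red site */
        *r = P(x, y);
        *g = (P(x-1, y) + P(x+1, y) + P(x, y-1) + P(x, y+1)) / 4;
        *b = (P(x-1, y-1) + P(x+1, y-1) + P(x-1, y+1) + P(x+1, y+1)) / 4;
    } else if (!even_row && !even_col) {   /* blue site */
        *b = P(x, y);
        *g = (P(x-1, y) + P(x+1, y) + P(x, y-1) + P(x, y+1)) / 4;
        *r = (P(x-1, y-1) + P(x+1, y-1) + P(x-1, y+1) + P(x+1, y+1)) / 4;
    } else {                               /* green site */
        *g = P(x, y);
        if (even_row) {                    /* red row: R left/right, B above/below */
            *r = (P(x-1, y) + P(x+1, y)) / 2;
            *b = (P(x, y-1) + P(x, y+1)) / 2;
        } else {                           /* blue row: B left/right, R above/below */
            *b = (P(x-1, y) + P(x+1, y)) / 2;
            *r = (P(x, y-1) + P(x, y+1)) / 2;
        }
    }
    #undef P
}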

Vivado Build

The first thing we need to do is create the Vivado platform that will receive the image from the TDNext Pmod.

To create the block diagram, we will primarily use IP cores from the Vivado library, but we will use the camera interface block and the output block from the Avnet library.

The first step is to install the board definition files, which enables Vivado to understand the configuration of the Arty S7.

Once downloaded, these files should be installed in the Vivado directory under the following path:

<Installation path>/Vivado/<Version>/data/boards/board_files/

This will allow you to select the Arty S7 board as the target board for creating a new Vivado project.

Once the board is installed, the next step is to create a new project, block diagram, and build the MicroBlaze system.

With the MicroBlaze system up and running, the next step is to add the video processing pipeline. The processing chain will use the following IP blocks

CAM Interface - Interfaces with TDNext video interface

Video to AXIS - Convert parallel video to AXI Streaming format

Sensor Demosaic - Converts RAW pixel values representing R, G or B to 24-bit RGB format

Video Timing Generator - Generates video timing signals for output formats

AXI Stream to Video Out - Convert AXI Stream to parallel video

ZED ALI3 Controller - IP module to drive 10-inch touch screen

AXI IIC - Connects to the MicroBlaze; this will be used to configure the imager

AXI UART - Connects to the MicroBlaze; used to report system status to the user

If we use Pmod VGA, we do not need to use the ZED ALI3 Controller IP block.

Before we add the ZED ALI3 Controller and CAM Interface blocks, we need to reconfigure these IP cores so that they can be included in a Spartan 7 design. We do this from the IP Catalog view, selecting the desired IP core and clicking Edit in IP Packager.

This will open a new project and enable you to select the Compatibility tab and add support for Spartan 7 devices. Repackage the design and update the IP library in the Vivado project.

Once the IP is updated to support Spartan 7, we can complete the design. The completed block diagram should look like this.

Unlike the previous heterogeneous SoC based examples, this example does not use an external frame buffer, and therefore does not use a VDMA to read and write one. This requires a different configuration of the AXI Stream to Video Out block and the VTC.

Normally, the AXI Stream to Video Out block is configured as a master and the VTC is left free-running. In this method, however, the AXI Stream to Video Out block is configured as a slave and controls the VTC generator clock enable.

This approach allows the AXI Stream to Video Out block to control the timing of the syncs, enabling and disabling the VTC so that the output syncs match the timing of the video in the processing pipeline.

In AXI Stream, the start of a frame is indicated by TUser and the end of a line is indicated by TLast.

The key customizations of the IP blocks are:

Video input to AXI 4 stream

Sensor demosaic settings

AXI IIC Settings

I also included several integrated logic analyzers (ILAs) in the design to enable internal monitoring and debugging of the system status.

The total utilization of Arty S7-50 after the project is completed is shown in the figure below.

The remaining resources can be used to implement image processing algorithms developed with HLS. If we want to save resources, we can use a minimal-footprint MicroBlaze configuration and remove the ILAs.
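As an illustration of what such an HLS stage might look like, below is a minimal AXI4-Stream pass-through written in the Vivado HLS style. The function name, the 24-bit pixel type, and the pass-through behavior are assumptions for the sketch; a real algorithm would modify pix.data before forwarding it.

#include "ap_axi_sdata.h"
#include "hls_stream.h"

/* 24-bit RGB pixel with 1-bit TUSER (start of frame) and TLAST (end of line) */
typedef ap_axiu<24, 1, 1, 1> pixel_t;

/* Minimal HLS pipeline stage: reads one pixel from the input stream and
 * forwards it, preserving the SOF/EOL markers. */
void hls_stage(hls::stream<pixel_t> &in, hls::stream<pixel_t> &out)
{
#pragma HLS INTERFACE axis port=in
#pragma HLS INTERFACE axis port=out
#pragma HLS INTERFACE ap_ctrl_none port=return
    pixel_t pix = in.read();
    /* An actual algorithm (e.g. gamma or color correction) would
     * operate on pix.data here before the write below. */
    out.write(pix);
}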

Writing software in the SDK

After the Vivado hardware is generated, the next step is to write the application software that will configure the imager and the IP cores in the video processing path.

The software will do the following:

Initialize the AXI IIC, VTC, and interrupt controller

Set up the interrupt controller to handle the AXI IIC interrupts - this includes three interrupt service routines, one each for IIC transmit, receive, and status (a sketch of these handlers follows the main() listing below)

Configure the VTC timing for the 10" display

Reset the camera via I2C and light the LED on the Pmod

Detect the camera via I2C - we are looking for the MT9M114

Initialize the camera over the I2C link - this takes a few seconds to program all the commands

To initialize the imager, I have converted the Zynq-based library provided with the TDNext (MT9M114) example design into a format that can be used with the AXI IIC.
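The converted configuration data might take a form like the following. The struct layout, names, and example entry are illustrative assumptions, since the actual camera_initial.h contents are not reproduced in this article; the loop reuses the cam_write_reg helper sketched earlier.

#include "xil_types.h"

/* Hypothetical layout for the camera_initial.h configuration table */
typedef struct {
    u16 reg;   /* 16-bit register address */
    u16 value; /* value to write          */
} cam_reg_t;

static const cam_reg_t cam_init_table[] = {
    { 0x098E, 0x1000 }, /* example entry only */
    /* ... several hundred further entries from the vendor settings ... */
};

/* Replay the whole table over the I2C link */
static void initial_camera_sketch(UINTPTR iic_base)
{
    for (unsigned i = 0; i < sizeof(cam_init_table) / sizeof(cam_init_table[0]); i++)
        cam_write_reg(iic_base, cam_init_table[i].reg, cam_init_table[i].value);
}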

Once the camera is initialized, we will be able to see the video on the ILA connected to the AXI Stream video interface.

Monitoring the I2C communication on the back of the TDNext Pmod shows the communication between the Arty S7 and TDNext.

Once a camera is detected, the application will download several I2C camera configuration settings.
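A detection routine along the lines of Detect_Camera() could be sketched as below: write the 16-bit CHIP_ID register address (0x0000 on the MT9M114), then read back two bytes and compare against the expected ID of 0x2481. The polled low-level API and the CAM_I2C_ADDR address defined earlier are assumptions; the actual project uses the interrupt-driven XIic driver.

#include "xiic_l.h"
#include "xstatus.h"

/* Read the MT9M114 CHIP_ID register (0x0000) and check for 0x2481 */
static int detect_camera_sketch(UINTPTR iic_base)
{
    u8 addr[2] = { 0x00, 0x00 };  /* CHIP_ID register address */
    u8 id[2];

    /* Set the register pointer, holding the bus with a repeated start */
    XIic_Send(iic_base, CAM_I2C_ADDR, addr, 2, XIIC_REPEATED_START);
    XIic_Recv(iic_base, CAM_I2C_ADDR, id, 2, XIIC_STOP);

    return (((id[0] << 8) | id[1]) == 0x2481) ? XST_SUCCESS : XST_FAILURE;
}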

The progress will be reported using the AXI UART

Once the camera is initialized, we can use ILA to verify that the imager is producing video and that it is the resolution we configured.

We do this by using ILA and directly inspecting the received video in the FPGA.

The image above shows a line width of 1280 pixels, which is what we expect.

The received pixels are converted from parallel format to an AXI stream.

AXI Stream is a unidirectional bus used to transfer data from a master to a slave as a data stream; it does not contain an address channel. To control the flow and carry the video timing information over the AXI Stream, the following signals are used:

TReady - Asserted by the downstream device when it is ready to receive data

TValid - Asserted by the sending peripheral when the output data is valid

TUser - Asserted to mark the start of a frame

TLast - Asserted to mark the end of a line
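For reference, a single beat of this video stream can be modeled in C as below. This struct is purely illustrative of the signal roles; it is not a driver structure from any Xilinx library.

#include <stdint.h>

/* Illustrative model of one AXI4-Stream video beat */
typedef struct {
    uint32_t tdata;  /* 24-bit RGB pixel in the lower bits          */
    uint8_t  tvalid; /* master: data on tdata is valid              */
    uint8_t  tready; /* slave: ready to accept a beat               */
    uint8_t  tuser;  /* asserted on the first pixel of a frame (SOF) */
    uint8_t  tlast;  /* asserted on the last pixel of a line (EOL)   */
} axis_video_beat_t;

/* A beat transfers exactly when tvalid && tready on a rising clock edge. */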

The second ILA can be used to ensure that the AXI Stream is generated correctly.

Since we do not have a VDMA, it is important that the video output on the AXI Stream is one continuous block and that TValid is not asserted and deasserted during active pixels.

We can ensure that TValid is continuous by using the pixel clock throughout the image processing chain.

The library APIs used in this project are shown below. Apart from camera_initial.h, which contains the IIC configuration data, all header files are provided by Xilinx according to the hardware configuration.

Device addresses and identifiers

The main loop of the application can be seen below

/* Headers for the drivers used below; the exact set comes from the BSP */
#include <stdio.h>
#include <string.h>
#include "platform.h"
#include "xparameters.h"
#include "xstatus.h"
#include "xiic.h"
#include "xintc.h"
#include "xvtc.h"
#include "xv_demosaic.h"
#include "camera_initial.h" /* IIC configuration data for the imager */

/* Instances shared with the interrupt handlers (assumed file-scope globals) */
XIntc InterruptController;
XIic iic;
XV_demosaic mosaic;

/* Helper routines defined elsewhere in the application; int_dev and IIC_dev
   are defined alongside the device addresses and identifiers above */
int SetUpInterruptSystem(void);
void SendHandler(void *CallBackRef, int ByteCount);
void ReceiveHandler(void *CallBackRef, int ByteCount);
void StatusHandler(void *CallBackRef, int Event);
void PCA9534_CTRL(void);
void Detect_Camera(void);
void Soft_Reset_Camera(void);
void Initial_Camera(void);

int main()
{
    u32 Status;
    XIic_Config *iic_conf;
    XVtc VtcInst;
    XVtc_Config *vtc_config;
    XVtc_Timing vtcTiming;
    XVtc_SourceSelect SourceSelect;
    XV_demosaic_Config *mosaic_config;

    init_platform();
    printf("www.adiuvoengineering.com S7 Imager example\n");

    /* Initialize the Sensor Demosaic core */
    mosaic_config = XV_demosaic_LookupConfig(XPAR_XV_DEMOSAIC_0_DEVICE_ID);
    XV_demosaic_CfgInitialize(&mosaic, mosaic_config, mosaic_config->BaseAddress);

    /* Initialize the interrupt controller and interrupt system */
    XIntc_Initialize(&InterruptController, int_dev);
    SetUpInterruptSystem();

    /* Initialize the AXI IIC and register its interrupt handlers */
    iic_conf = XIic_LookupConfig(IIC_dev);
    Status = XIic_CfgInitialize(&iic, iic_conf, iic_conf->BaseAddress);
    if (Status != XST_SUCCESS) {
        printf("XIic initialization failed\n");
        return XST_FAILURE;
    }
    XIic_SetSendHandler(&iic, &iic, (XIic_Handler) SendHandler);
    XIic_SetRecvHandler(&iic, &iic, (XIic_Handler) ReceiveHandler);
    XIic_SetStatusHandler(&iic, &iic, (XIic_StatusHandler) StatusHandler);

    /* Configure the VTC with the timing for the 10" 1280 x 800 display */
    vtc_config = XVtc_LookupConfig(XPAR_VTC_0_DEVICE_ID);
    XVtc_CfgInitialize(&VtcInst, vtc_config, vtc_config->BaseAddress);
    vtcTiming.HActiveVideo = 1280;
    vtcTiming.HFrontPorch = 65;
    vtcTiming.HSyncWidth = 55;
    vtcTiming.HBackPorch = 40;
    vtcTiming.HSyncPolarity = 0;
    vtcTiming.VActiveVideo = 800;
    vtcTiming.V0FrontPorch = 7;
    vtcTiming.V0SyncWidth = 4;
    vtcTiming.V0BackPorch = 12;
    vtcTiming.V1FrontPorch = 7;
    vtcTiming.V1SyncWidth = 4;
    vtcTiming.V1BackPorch = 12;
    vtcTiming.VSyncPolarity = 0;
    vtcTiming.Interlaced = 0;

    /* Source every timing parameter from the generator registers */
    memset((void *)&SourceSelect, 0, sizeof(SourceSelect));
    SourceSelect.VBlankPolSrc = 1;
    SourceSelect.VSyncPolSrc = 1;
    SourceSelect.HBlankPolSrc = 1;
    SourceSelect.HSyncPolSrc = 1;
    SourceSelect.ActiveVideoPolSrc = 1;
    SourceSelect.ActiveChromaPolSrc = 1;
    SourceSelect.VChromaSrc = 1;
    SourceSelect.VActiveSrc = 1;
    SourceSelect.VBackPorchSrc = 1;
    SourceSelect.VSyncSrc = 1;
    SourceSelect.VFrontPorchSrc = 1;
    SourceSelect.VTotalSrc = 1;
    SourceSelect.HActiveSrc = 1;
    SourceSelect.HBackPorchSrc = 1;
    SourceSelect.HSyncSrc = 1;
    SourceSelect.HFrontPorchSrc = 1;
    SourceSelect.HTotalSrc = 1;

    XVtc_RegUpdateEnable(&VtcInst);
    XVtc_SetGeneratorTiming(&VtcInst, &vtcTiming);
    XVtc_SetSource(&VtcInst, &SourceSelect);
    XVtc_EnableGenerator(&VtcInst);

    /* Reset, detect, and configure the imager over I2C */
    XIic_Reset(&iic);
    PCA9534_CTRL();
    Detect_Camera();
    Soft_Reset_Camera();
    Initial_Camera();

    /* Configure and start the demosaic core */
    XV_demosaic_Set_HwReg_width(&mosaic, 0x500);
    XV_demosaic_Set_HwReg_height(&mosaic, 0x31f);
    XV_demosaic_Set_HwReg_bayer_phase(&mosaic, 0x1);
    XV_demosaic_EnableAutoRestart(&mosaic);
    XV_demosaic_Start(&mosaic);

    while (1) {
        /* video pipeline now runs autonomously */
    }

    cleanup_platform();
    return 0;
}
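The IIC handlers referenced in main() are not reproduced in the article. A minimal sketch of how they might look is shown below, following the pattern of the Xilinx XIic interrupt examples: each callback simply sets a flag that the transfer routines poll. The flag names and logic are illustrative assumptions, not the original implementation.

#include "xiic.h" /* for the XII_*_EVENT status codes */

/* Completion flags polled by the IIC transfer routines (assumed) */
static volatile int TransmitComplete;
static volatile int ReceiveComplete;
static volatile int BusNotBusy;

/* Called when an interrupt-driven send completes */
void SendHandler(void *CallBackRef, int ByteCount)
{
    (void)CallBackRef;
    (void)ByteCount;
    TransmitComplete = 1;
}

/* Called when an interrupt-driven receive completes */
void ReceiveHandler(void *CallBackRef, int ByteCount)
{
    (void)CallBackRef;
    (void)ByteCount;
    ReceiveComplete = 1;
}

/* Called on bus status events, e.g. when the bus goes idle */
void StatusHandler(void *CallBackRef, int Event)
{
    (void)CallBackRef;
    if (Event == XII_BUS_NOT_BUSY_EVENT)
        BusNotBusy = 1;
}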

Running the entire software application allowed me to capture the image below of my conference badge collection.

I need to tweak some settings to increase the integration time; however, the basic image processing pipeline is working as expected.

Conclusion

It is easy to create a vision processing system that works directly with an imager rather than a camera. This often allows a more cost-effective and potentially more responsive solution, as the processing chain is significantly shortened.
