In-depth analysis of common video signal transmission characteristics and conversion


1. Component Signal

The camera's optical system decomposes the light from the scene into the three primary colors: red, green and blue. Photosensitive devices then convert the three monochrome images into separate electrical signals. To mark the left edge and the top of the image, synchronization information is added to these signals. Between the camera and the display terminal, the synchronization information may be carried on the green channel, sometimes on all three channels, or even transmitted on one or two separate channels. The common ways of adding the synchronization signal, and how they are designated, are as follows:

- RGsB: The synchronization signal is attached to the green channel and transmitted through three 75Ω coaxial cables.
- RsGsBs: The synchronization signal is attached to the red, green and blue channels and transmitted through three 75Ω coaxial cables.
- RGBS: The synchronization signal is used as an independent channel and transmitted through four 75Ω coaxial cables.
- RGBHV: The synchronization signal is used as two independent channels, horizontal and vertical, and transmitted through five 75Ω coaxial cables.

RGB component video can deliver high-quality images from the camera to the display terminal, but transmitting such a signal requires at least three independent channels, each processed so that all of them maintain the same gain, DC offset, time delay and frequency response. The transmission characteristics of component video are as follows:

- Transmission medium: 3-5 shielded coaxial cables
- Transmission impedance: 75Ω
- Common connectors: 3-5 × BNC connectors
- Wiring standard: red = red primary (R) signal line, green = green primary (G) signal line, blue = blue primary (B) signal line, black = horizontal sync (H) signal line, yellow = vertical sync (V) signal line, common ground = shield braid (see attached figure VP-03)

2. Composite Video

Unequal gain or DC offset between the component video channels causes slight color shifts on the display. Likewise, length differences between the transmission cables, or the use of different transmission paths, introduce timing skew between the color channels, which blurs image edges and, in severe cases, produces multiple separated images.
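To put the timing-skew problem into numbers, here is a minimal sketch (in Python) that converts a cable-length mismatch between two of the color channels into a horizontal pixel offset; the velocity factor and pixel rate used here are typical assumed values, not figures from this article:

```python
# Rough estimate of the horizontal misregistration caused by unequal cable
# lengths between the R, G and B channels of a component feed.

C = 3.0e8               # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.66  # assumed velocity factor of a typical coaxial cable
PIXEL_RATE = 13.5e6     # assumed SDTV sampling/pixel rate, pixels per second

def skew_in_pixels(length_mismatch_m: float) -> float:
    """Convert a cable length mismatch (metres) into a pixel offset."""
    delay_s = length_mismatch_m / (C * VELOCITY_FACTOR)
    return delay_s * PIXEL_RATE

# A 10 m length difference between two of the cables:
print(f"{skew_in_pixels(10.0):.2f} pixel(s) of horizontal offset")  # ~0.68 px
```

Even a fraction of a pixel of skew between channels is enough to soften colored edges, which is why matched cable lengths matter.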

Inserting an NTSC or PAL encoder and decoder allows the video signal to be processed and transmitted on a single cable; this is composite video. The composite format is a compromise for long-distance transmission: chrominance and luminance share a bandwidth of 4.2 MHz (NTSC) or 5.0-5.5 MHz (PAL), so there is considerable crosstalk between them, and frequency response and timing must still be taken into account. Multiple encode/decode stages should be avoided. The transmission characteristics of composite video are as follows:

- Transmission medium: single shielded coaxial cable
- Transmission impedance: 75Ω
- Common connectors: BNC connector, RCA connector
- Wiring standard: pin = coaxial signal line, shell common ground = shield braid (see attached figure VP-01)
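As a rough illustration of the compromise described above, here is a minimal sketch of how a composite encoder puts luminance and quadrature-modulated chroma onto one cable; the subcarrier frequency and sampling rate are assumed example values (NTSC-style), not parameters taken from this article:

```python
import numpy as np

F_SC = 3.579545e6   # assumed NTSC-style color subcarrier frequency, Hz
FS = 27e6           # assumed sampling rate used only for this sketch, Hz

def encode_composite(y: np.ndarray, u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Quadrature-modulate the two color signals onto the subcarrier and add
    them to baseband luminance, so everything travels on a single cable."""
    t = np.arange(len(y)) / FS
    chroma = u * np.sin(2 * np.pi * F_SC * t) + v * np.cos(2 * np.pi * F_SC * t)
    return y + chroma  # luminance and chroma now share the same bandwidth

# Because luminance and chroma share the spectrum, imperfect separation at the
# decoder leaks chroma into luma (dot crawl) and luma into chroma (cross-color).
```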
3. Color difference signal (Y, R-Y, B-Y)

When video signals are processed for image transmission, RGB component video is not the most bandwidth-efficient method, because all three component signals require the same bandwidth.

Human vision is more sensitive to fine brightness detail than to fine color detail, so the full bandwidth can be devoted to the luminance information while the color-difference information is carried in the remaining, reduced bandwidth, improving the bandwidth utilization of the signal.

Processing the video signal components into luminance and color-difference signals reduces the amount of information that has to be transmitted. A full-bandwidth luminance channel (Y) carries the brightness detail of the video signal, while the bandwidth of the two color-difference channels (R-Y and B-Y) is limited to roughly half of the luminance bandwidth, which still provides sufficient color information. With this approach, the conversion between RGB and Y, R-Y, B-Y can be achieved with a simple linear matrix, and the bandwidth limitation of the color-difference channels is applied after the matrix. When the color-difference signals are converted back to RGB component video for display, the luminance detail is restored at full bandwidth, while the color detail is limited to an acceptable range.
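The linear matrix mentioned above can be sketched as follows; the luminance weights used here are the commonly published BT.601 values, given as an assumption for illustration rather than as the only possible coefficients:

```python
def rgb_to_color_difference(r: float, g: float, b: float):
    """Convert gamma-corrected R'G'B' values (0..1) into luminance and the two
    color-difference signals, using the BT.601 luminance weights."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # full-bandwidth luminance
    return y, r - y, b - y                  # (Y, R-Y, B-Y)

# The color-difference channels are low-pass filtered *after* this matrix,
# typically to roughly half of the luminance bandwidth.
y, ry, by = rgb_to_color_difference(1.0, 0.0, 0.0)  # pure red
print(y, ry, by)  # 0.299 0.701 -0.299
```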

Color-difference signals also come in different formats with different application ranges. In the commonly used composite PAL, SECAM and NTSC standards, the coding coefficients differ, i.e. the scaling applied to the R-Y and B-Y components is not the same.
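As a reference, the commonly published coding coefficients for the three standards can be summarized in a small sketch; these are standard textbook values listed here as an assumption, and they may be rounded differently in other sources:

```python
# Commonly published color-difference coding coefficients (reference values).
# Each entry shows how the two color signals are formed from (R - Y) and (B - Y).
CODING_COEFFICIENTS = {
    "PAL":   {"U": "0.493 * (B - Y)", "V": "0.877 * (R - Y)"},
    "SECAM": {"Db": "1.505 * (B - Y)", "Dr": "-1.902 * (R - Y)"},
    "NTSC":  {"I": "0.74 * (R - Y) - 0.27 * (B - Y)",
              "Q": "0.48 * (R - Y) + 0.41 * (B - Y)"},
}

for standard, signals in CODING_COEFFICIENTS.items():
    print(standard, signals)
```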

4. Digital Video (SDI)

Digital video also comes in many different formats used in different applications. Here it refers to serial digital video carried over the Serial Digital Interface, generally abbreviated as SDI.

After gamma correction, the RGB signal is transformed by a linear matrix into a luminance component Y and two color-difference components Pb and Pr. Since the human eye is more sensitive to changes in brightness detail than to changes in color, the luminance signal Y is given the wider bandwidth through the transmission system (5.5 MHz for SDTV). After low-pass filtering, the luminance signal is sampled at 13.5 MHz and the A/D converter produces a 10-bit, 13.5 Mword/s stream; the two color-difference signals go through the same process and their A/D converters produce two 10-bit, 6.75 Mword/s streams. The three channels are multiplexed into a 27 Mword/s, 10-bit parallel data stream (Y, Cb, Cr).

The 27 Mword/s, 10-bit parallel data stream is fed to a shift register (serializer), clocked and scrambled, and a 270 Mbit/s serial data stream (SDI) is formed in accordance with the television specification.
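The data rates quoted above follow directly from the 4:2:2 sampling structure; a short check of the arithmetic:

```python
# Arithmetic behind the SD-SDI data rate (BT.601-style 4:2:2 sampling).
Y_SAMPLE_RATE = 13.5e6   # luminance samples per second
C_SAMPLE_RATE = 6.75e6   # samples per second for EACH color-difference channel
BITS_PER_WORD = 10

words_per_second = Y_SAMPLE_RATE + 2 * C_SAMPLE_RATE  # 27 Mword/s multiplexed
serial_bit_rate = words_per_second * BITS_PER_WORD    # 270 Mbit/s after serializing

print(f"{words_per_second / 1e6:.0f} Mword/s parallel, {serial_bit_rate / 1e6:.0f} Mbit/s serial")
```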
5. Video format conversion

Different video formats determine the performance of the signal in terms of brightness, chroma, contrast, sharpness, clarity, maximum resolution and so on. From the above analysis of the various video formats, the quality levels can be roughly ranked from high to low as: DVI, SDI, RGBHV/VGA, S-Video, Video (composite), RF.

Among them, DVI digital video is currently the highest level, but its drawback is that it can only be transmitted over short distances (an effective distance of about 5 meters). SDI digital video has the advantages of being editable and of supporting longer transmission distances. RGBHV and VGA are in fact signals of the same level; they go by two different names because the signal components are carried differently. S-Video makes much better use of luminance than Video (short for composite video) and largely eliminates color crawl. The RF format is the lowest-level signal and is used only in surveillance and cable/public television.

In engineering applications, we often face many signal format conversion processes. What rules should be followed for these signal conversions of different formats? What effects will be produced in the end? It is generally believed that:

Conversion from a low-level format to a high-level format brings a fairly obvious quality improvement. The early line doublers (double-frequency scanners) and quadruplers, and the currently popular intelligent video regulators, all perform Video-to-RGBHV (composite-to-component) conversion, which significantly improves signal quality, because these products use multi-bit digital processing to preserve the signal's clarity, brightness and signal-to-noise ratio as faithfully as possible.
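As a very rough illustration of what a line doubler does, here is a minimal sketch of simple "bob" line doubling, assuming one field arrives as a 2-D array of scan lines; real products use far more sophisticated, often motion-adaptive, processing:

```python
import numpy as np

def line_double(field: np.ndarray) -> np.ndarray:
    """Turn one interlaced field (half the frame's lines) into a full
    progressive frame by repeating each field line - a simple line doubler."""
    return np.repeat(field, 2, axis=0)

# One 288-line PAL-style field becomes a 576-line progressive frame.
field = np.random.rand(288, 720)
frame = line_double(field)
print(frame.shape)  # (576, 720)
```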

DVI digital video is usually converted into SDI or RGBHV; some of the original clarity is lost in the conversion, but the content of the DVI signal can then be transmitted over long distances. Converting VGA into RGBHV brings no real quality improvement, since the two are at the same level, but it solves VGA's synchronization and matching compatibility problems and allows longer transmission distances.

Converting a high-level format to a low-level format (such as VGA to Video) causes serious losses in every aspect of the original signal: brightness, chroma, color, contrast, sharpness, clarity and maximum resolution. Today such conversion is of little value, but it did have uses in the early days, for example converting a computer's VGA output into Video for tape recording, display on a TV wall, or "capture" transmission in video conferencing.

6. Disadvantages of high-level to low-level video format conversion

6.1. Inherent scanning jitter

A standard video signal consists of a set of scan lines, not all of which are visible. In the NTSC format, there are 483 visible lines, while in the PAL and SECAM formats there are 576. Television video images with fewer lines are limited in their ability to display very small text or other intricate details. In contrast, computer display devices can have scan lines ranging from low resolution (≤480 lines) to high resolution (≥1280 lines). Many new computer display cards now allow users to choose from several different display resolutions. Obviously, the higher the resolution, the more perfect the details of text and images will appear.

Television signals are interlaced, meaning that each "picture" is actually made up of two half-frames, or fields, consisting of the odd and even lines. First the odd lines are scanned, then the beam is blanked, and then the even lines are scanned in between the odd lines. The alternating appearance and disappearance of the odd and even fields makes certain image content prone to visible jitter, especially thin horizontal lines.

In contrast, computer signals are non-interlaced, also known as "progressive scanning". All scan lines are scanned in a single pass from top to bottom and from left to right, with no division into odd and even fields. This eliminates the image jitter caused by interlaced scanning in television systems.
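The jitter on thin horizontal lines described above can be demonstrated by splitting a progressive frame into its two fields; in this minimal sketch, a feature exactly one scan line tall ends up in only one of the fields, so on an interlaced display it is drawn only every other field and appears to flicker:

```python
import numpy as np

def split_into_fields(frame: np.ndarray):
    """Split a progressive frame into the two fields (alternate scan lines)
    that an interlaced system transmits one after the other."""
    return frame[0::2, :], frame[1::2, :]

frame = np.zeros((480, 640))
frame[101, :] = 1.0                  # a horizontal line one scan line tall
field_a, field_b = split_into_fields(frame)
print(field_a.sum(), field_b.sum())  # 0.0 640.0 -> the line lives in one field only
```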

6.2. Signal format compatibility

NTSC, PAL and SECAM are the common standard television video signal formats; they specify the number of lines used to display an image, how color information is defined, and the scanning speed (i.e., the refresh rate). There are other formats that differ from these, such as composite video, S-Video and D1 (digital) video, but all of them have much in common. For example, they are all interlaced, with 483 (NTSC) or 576 (PAL and SECAM) visible scan lines, and they all have a fixed refresh rate. Two interlaced fields make up a frame; in the NTSC system frames appear 30 times per second (30 Hz), while in PAL and SECAM systems they appear 25 times per second (25 Hz).

Unlike television video, computer video signals do not follow a single mandatory standard. A wide range of resolutions and refresh rates can be selected, with refresh rates generally between 60 Hz and 85 Hz. Although computers do not display images in an interlaced manner, some graphics cards can output interlaced images. In any case, the way computer video signals convey color and brightness information to the monitor is the same: VGA, SVGA and Mac video formats all carry the red, green and blue information as separate component signals. This lets computers display a wide range of colors without distortion, while the most common television video format combines the red, green and blue information into a single composite signal (luminance plus chrominance) before conveying it to the display.

Conversion from a high-level format to a low-level format is generally performed by a scan converter. The concept sounds simple, but even if the design idea is easy to accept, many technical factors still have to be considered:

- The computer input compatibility of the scan converter
- The highest computer resolution supported
- Whether "genlock" is required
- The color sampling rate of the scan converter
- The quality of the scan converter's encoder
- The format of the output video signal
- Whether built-in test patterns are provided

Anyone familiar with computer resolutions knows that the television line count does not correspond to any standard computer resolution. Therefore, when such a converted signal is fed to a projector or display device, it causes compatibility problems, which show up as:

- Pixels are lost and most detail cannot be reproduced
- The image is stretched or distorted and only the outline of the information is reproduced
- The projector or display performs forced compatibility processing on the input image; this extra processing often degrades image quality (artificial processing, similar in effect to keystone correction)

Another limitation is the vertical refresh rate produced by the scan converter. Its output signal has a vertical refresh rate of at most 60 Hz or 50 Hz, depending on whether the output is NTSC or PAL/SECAM. Many projectors can accept and display higher refresh rates, which give better image quality, but when a scan converter is used the projector is limited to displaying the image at the lower refresh rate.

6.3. Loss of the projector's native resolution

LCD and DLP projectors and PDP display devices are often used together with scan converters or video regulators. These devices form images with a fixed grid of pixels, and the total number of those pixels is called the native (inherent) resolution.

Although many projectors can display images at resolutions lower than their native resolution, images shown at the native resolution have the highest quality. For example, a projector with a native resolution of 1024×768 can display an 800×600 image, but the result is not as good as a 1024×768 image: in a 1024×768 image every point maps one-to-one onto a physical pixel of the projector, so the display is very sharp, whereas an 800×600 image must be interpolated across the panel (with color compensation), which reduces clarity.
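A quick calculation shows why an 800×600 source cannot map cleanly onto a 1024×768 panel: the scale factor is fractional, so every source pixel has to be spread across physical pixels by interpolation, which softens the image (a minimal sketch):

```python
def scale_factor(source: tuple, native: tuple) -> tuple:
    """Horizontal and vertical scale factors needed to fill the native panel."""
    return native[0] / source[0], native[1] / source[1]

print(scale_factor((1024, 768), (1024, 768)))  # (1.0, 1.0)  -> one-to-one pixel mapping
print(scale_factor((800, 600), (1024, 768)))   # (1.28, 1.28) -> fractional mapping,
                                               #    interpolation is unavoidable
```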
