Two important trends are shaping the development of television: the move from standard definition to ever-higher resolutions, and 3D technology, which realizes stereoscopic vision. 3D in particular will be an important direction for television technology for a long time to come.
Well-shot, well-produced 3D looks excellent, with a strong sense of presence and a striking picture. Some 3D movies, however, are created in post-production by converting 2D footage to 3D with software. If the quality of the conversion is not assured, the result can look worse than the original 2D version. Technical means are therefore needed to ensure that 3D content is consistent with natural sensory perception.
How does the human eye perceive 3D?
The most important element of 3D is the sense of stereoscopic depth, which arises from the parallax between the left and right eyes. This parallax has an acceptable range; if it is exceeded, the viewing experience becomes unpleasant, and watching 3D for a long time can even cause dizziness or discomfort. If parallax or depth of field is not properly controlled when shooting and post-producing 3D film and television, similar discomfort results. There is no solid evidence that watching 3D causes lasting harm, but it can certainly be uncomfortable.
The most traditional way to view 3D is with red-cyan (anaglyph) glasses. Without the glasses, the picture looks blurred: a cyan image offset from a red one. The red component carries the left-eye picture and the cyan component the right-eye picture; the horizontal offset between them is the parallax that creates the 3D sensation. If the 3D picture is poorly made, the effect is very unpleasant.

Cinema 3D is not viewed with red-cyan glasses. 3D TVs at home typically use active shutter glasses: the display alternates frames, first a left-eye picture, then a right-eye picture. When the left-eye frame is shown, the glasses block the right eye, and vice versa, so each eye receives only its own image and the brain fuses the two into a 3D impression. There are many such 3D TVs on the market now. Cinemas generally use polarized projection instead, delivering the left-eye image with one polarization (for example horizontal) and the right-eye image with the other (vertical). Whichever method is used, the basis is the same: two images, one for each eye, exploiting the fact that the human pupils are separated by roughly 60-65 mm.
Figure 1: Naked-eye 3D effect.
Because the two eyes are separated, they receive slightly different images. The brain compares the signals from the two retinas, analyzes the positional differences between them, and fuses them into a stereoscopic impression; this is how the human visual system perceives 3D. A common question: can you still feel the 3D effect with one eye closed? True binocular depth requires both eyes, but a single eye can still infer three-dimensional information from 2D cues. Occlusion, for example, reveals which object is in front and which is behind. In 3D movies, images usually appear to recede behind the screen. In "Rio" there are several scenes where the parrot, learning to fly, rushes out toward the audience; that effect is rarely used and is crafted deliberately in post-production. Beijing TV's 3D experiments found that motion into the screen is easy for viewers to accept, while motion out of the screen is harder to tolerate. When producing 3D effects, the proportion of the image that projects outward should therefore be kept limited and never overdone.
Several depth cues can be perceived with a single eye. Shading and texture gradients convey depth: a bed of flowers looks like a flat patch from a distance, but individual flowers are easy to distinguish up close. Objects appear smaller the farther away they are, so relative size, or a change in size, also signals depth. Motion parallax is another cue: from a moving car, nearby objects appear to sweep past in the direction opposite to our travel. All of these can be perceived with one eye. Binocular disparity, the difference in an object's position as seen by the two eyes, requires both eyes. Together these cues give us a strong sense of distance and depth.
3D video recording
How do we record what the eyes see? A single camera captures only a two-dimensional image, so two cameras are used to simulate the two eyes, one capturing the left-eye view and one the right. There are two common rig arrangements, side-by-side (horizontal) and beam-splitter (vertical), and experiments can be run with either depending on the system. The camera separation is generally close to the human interpupillary distance of 60-65 mm, and it can be adjusted for close-ups or distant shots. One critical requirement is that the aperture, focal length, and brightness of the two cameras match; otherwise the two pictures are uncomfortable for the eyes to fuse. Many rigs now synchronize these settings automatically over a cable, but perfect consistency is hard to guarantee. Research is ongoing into how much displacement difference and brightness difference between the two cameras is acceptable before viewers notice; these tolerances will be central to future 3D test and measurement standards. Another issue arises with moving objects: a moving object must appear in both the left-eye and right-eye views. If it is visible to only one camera, the synthesized image looks very strange and cannot be fused. The backgrounds of the two views may differ somewhat, but moving subjects must fall within the area captured by both cameras.
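For a parallel side-by-side rig, the on-screen disparity of a point can be estimated from the interaxial distance, the lens focal length, and the point's distance from the rig. The following is a minimal sketch of that relationship; the function name and all numeric values are illustrative assumptions, not figures from the article:

```python
def disparity_px(baseline_m, focal_px, depth_m):
    """Horizontal disparity (in pixels) of a point at depth_m,
    for a parallel stereo rig: d = f * b / Z."""
    return focal_px * baseline_m / depth_m

# Illustrative values: 65 mm interaxial, 2000 px focal length.
b = 0.065
f = 2000.0
print(disparity_px(b, f, 2.0))   # near object -> larger disparity (65.0 px)
print(disparity_px(b, f, 20.0))  # far object  -> smaller disparity (6.5 px)
```

The inverse relationship between depth and disparity is why close-ups may call for a narrower interaxial distance than distant landscape shots.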
The other arrangement places the cameras vertically in a beam-splitter rig. The left-eye signal enters one camera directly, while the right-eye signal is reflected through a beam splitter; the reflected image is inverted and must be flipped back by a rotation circuit. Because the two signal paths are processed differently, it must be verified that the two images are captured at exactly the same time: a timing offset of even one or two frames ruins the final picture completely. One article explained why the vertical arrangement is used: camera bodies are relatively wide, so a horizontal side-by-side rig often cannot bring the two lenses as close together as 60-65 mm, whereas a beam-splitter rig allows the effective distance between the left and right cameras to be adjusted freely.
Both horizontal and vertical rigs raise the question of camera angle: should the cameras be parallel, or toed in (converged)? Parallel shooting keeps the optical axes aligned, which guarantees good horizontal geometry, but there is a problem: the human eyes normally converge on a point of interest. With parallel capture, convergence must be set in post-production by shifting and adjusting the two pictures, which is more difficult: the distance from camera to subject must be calculated, and each element must be positioned to determine whether it appears in front of or behind the screen. This involves considerable computation and is more troublesome.
3D, then, is produced by the left-right displacement of parallax. There are four types of parallax (see the figure below). With zero parallax, the point seen by the left eye and the point seen by the right eye coincide; the zero-parallax plane corresponds to the cinema or TV screen itself, and it serves as the reference for whether a picture element comes out of the screen or recedes into it. With positive parallax, the point seen by the right eye lies to the right of the point seen by the left eye, and the element appears behind the screen. With negative parallax, the point seen by the right eye lies to the left of the point seen by the left eye, and the element appears in front of the screen, producing a floating sensation or the impression of flying toward the viewer. Divergent parallax, where the separation would force the eyes to diverge outward, never occurs in normal vision and cannot arise when a real scene is filmed; it must be avoided when shooting and producing 3D.
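The four parallax types map onto perceived depth through a similar-triangles relationship between the eyes, the screen, and the fused point. A minimal sketch, where `e` is the interpupillary distance, `D` the viewing distance, and `p` the on-screen parallax (positive behind the screen, negative in front); the numeric values are illustrative assumptions:

```python
def perceived_depth(p_m, eye_sep_m=0.065, view_dist_m=3.0):
    """Distance from viewer to the fused point, by similar triangles:
    p / e = (Z - D) / Z  =>  Z = D * e / (e - p).
    p == e would place the point at infinity; p > e is divergent
    parallax, which the eyes cannot fuse."""
    if p_m >= eye_sep_m:
        raise ValueError("divergent parallax: eyes cannot fuse this")
    return view_dist_m * eye_sep_m / (eye_sep_m - p_m)

print(perceived_depth(0.0))      # zero parallax: on the screen (3.0 m)
print(perceived_depth(0.0325))   # positive: behind the screen (6.0 m)
print(perceived_depth(-0.065))   # negative: in front of the screen (1.5 m)
```

Note that as positive parallax approaches the interpupillary distance, the perceived depth runs off to infinity, which is why divergent parallax has no physical interpretation.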
Figure 2: Four different types of parallax effects.
When shooting, the screen plane must be kept in mind. For negative parallax, an object flying toward the viewer should not come too close to the eyes; if a bullet seems to strike right at the face, the eyes cannot adapt comfortably. Where should such a picture stop? Some studies suggest that roughly arm's length is a comfortable limit for negative parallax. The convergence point should not be shifted too often just for effect, or the result may suffer, and the depth range used should not be too large. At present these are only research findings; there is no standard defining the acceptable range, and many experiments and measurements remain to be done.
Because the content is shot with two cameras, the brightness and color of the two views must match; any noticeable difference is uncomfortable, since the left-eye and right-eye images must be fused together when viewed. How much vertical offset, brightness difference, and color difference can be tolerated? Although software and tools can keep the aperture and focus of the two cameras aligned, perfect consistency is impossible, and there is as yet no authoritative standard defining the acceptable range. A further problem is crosstalk: if the left and right channels are not completely separated, so that the left eye also sees part of the right-eye image, the result is visually confusing and very uncomfortable.
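Crosstalk is commonly quantified as the leaked luminance from the unintended eye's channel relative to the intended signal. A minimal sketch of that metric, with illustrative luminance values (the function name and numbers are assumptions, not figures from the article):

```python
def crosstalk_pct(intended_lum, leakage_lum, black_lum=0.0):
    """3D crosstalk as a percentage: the fraction of the unintended
    eye's image that leaks through, relative to the intended signal,
    both measured above the display's black level."""
    return (leakage_lum - black_lum) / (intended_lum - black_lum) * 100.0

# Illustrative measurement: intended white 200 cd/m^2, leaked image 4 cd/m^2.
print(crosstalk_pct(intended_lum=200.0, leakage_lum=4.0))  # 2.0 (% leakage)
```

On a display with low crosstalk the ghost image is invisible; as the percentage rises, the double image described above becomes apparent and fusion breaks down.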
Once a zero-parallax plane is set, it should not be switched constantly during shooting; doing so has a strong impact on the viewer. The zero-parallax plane for a given scene should be fixed, and it should not be changed back and forth within the same production: the eyes would have to refocus continually and would tire quickly. If it must change, it is best to pass through a flat 2D moment when transitioning from one setting to the next and then re-establish the stereoscopic depth; this gives the eyes an adaptation process and avoids discomfort. Within the same type of scene, do not change the zero-parallax plane.
3D video monitoring and measurement
Tektronix also offers solutions for testing the brightness, chromaticity, and alignment of 3D images. If the brightness difference between the left and right images is large, the 3D effect suffers badly. With the checkerboard test method, which tiles alternating blocks of the left-eye and right-eye images, any brightness difference between the two eyes is immediately visible; if brightness and chromaticity are well matched, the transitions between blocks look smooth. The aperture and lighting should be adjusted until both views are within the same range. The figure below shows an example of a left-right image error: because of the camera geometry, lens flare from the sun appears in the right-eye image but not in the left. If those two images were combined into 3D, the result would look very wrong; just as flare is avoided when shooting in 2D, it must be avoided in 3D. The test waveform also shows that the right-eye brightness is clearly higher than the left-eye brightness; the aperture should be adjusted to eliminate the flare.
Figure 3: Image errors for left and right eye: different brightness levels, different colors.
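A simple software version of the checkerboard brightness check can be sketched as follows, assuming the two views are available as NumPy luma arrays; the block size and the synthetic test values are illustrative assumptions:

```python
import numpy as np

def checkerboard(left, right, block=16):
    """Tile alternating blocks of the left and right luma images;
    a brightness mismatch shows up as a visible checker pattern."""
    h, w = left.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = ((yy // block + xx // block) % 2) == 0
    return np.where(mask, left, right)

def brightness_delta(left, right):
    """Mean luma difference between the two views (left minus right)."""
    return float(left.mean() - right.mean())

# Illustrative synthetic views: the right eye is 10 luma levels brighter,
# as in the flare example above.
left = np.full((64, 64), 100.0)
right = left + 10.0
print(brightness_delta(left, right))  # -10.0 -> right eye brighter
```

A well-matched pair yields a delta near zero and a checkerboard with no visible block boundaries; a nonzero delta tells the operator which camera's aperture to adjust.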