Maxim Blog | Understanding the Importance of Camera Interfaces to ADAS Systems
By Jim Harrison, Guest Blogger, Lincoln Technology Communications Division
The car is rapidly evolving into a secure, connected, self-driving robot that can sense its environment, think, and take autonomous actions. Perhaps even faster is the evolution of small, self-driving public vehicles – taxis, rideshares, or buses that can take us from public transit stations, city centers, or work areas to where we need to go (the last mile).
One example is the NAVYA ARMA self-driving electric bus, which was launched in October 2015. This small bus, capable of safely carrying 15 passengers at speeds of up to 28 miles per hour, is currently being tested or operated in many communities in Europe and the United States, and was unveiled on the streets of Las Vegas during CES 2017.
Whether it’s a small car or a public bus, an autonomous vehicle needs cameras, radar, or lidar to sense its surroundings. Using a combination of these sensors, the car’s advanced driver assistance system (ADAS) can detect the environment around the vehicle. Multiple video cameras (at least 5, and often up to 8) are key to the system. Cameras at the front and rear of the vehicle must be highly sensitive and responsive to help detect cross-traffic at intersections and impending collisions, and they are quickly becoming standard on many cars and SUVs. Together, the surround cameras provide reliable information for emergency brake assist, adaptive cruise control, blind-spot detection, reverse blind-spot warning, lane-departure warning/automatic lane keeping, and soon, traffic-sign recognition systems so you never exceed the speed limit.
For example, the latest hardware suite in Tesla vehicles uses the NVIDIA Drive PX 2 processing platform, which takes data from eight cameras, a combination of ultrasonic sensors, and a radar system. The platform can be expanded from an energy-efficient handheld module that supports AutoCruise to a powerful AI supercomputer that supports fully autonomous driving. The system can understand the situation around the vehicle in real time, accurately locate itself on a high-definition map, and plan a safe path forward. The system combines deep learning, sensor fusion, and panoramic vision to change the driving experience.
The performance of the camera system is critical for driver-assisted or autonomous vehicles. The cameras are, of course, distributed around the vehicle, often far from the CPU. Their resolution, dynamic range, and frame rate determine how far away the ADAS can see objects, how small an object it can detect, and how quickly that information can be delivered. Because the information these devices provide is safety-critical, high bit error rates cannot be tolerated, and the challenge is compounded by their very high data rates. In a surround-view system, each camera's video stream is typically 1280 × 800 pixels at a frame rate of 30 frames/s.
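As a rough sanity check on those link requirements, the per-camera payload can be computed from the figures above. This is a sketch: the resolution and frame rate come from the article, while the 12 bits per pixel is an assumed raw sensor depth, not a figure the article gives.

```python
# Rough per-camera bandwidth estimate for a surround-view stream.
# Resolution and frame rate are from the article; 12 bits/pixel is an
# assumed raw sensor depth (actual depth varies by camera).
width, height = 1280, 800        # pixels per frame
fps = 30                         # frames per second
bits_per_pixel = 12              # assumption: raw 12-bit sensor output

pixels_per_frame = width * height
payload_bps = pixels_per_frame * bits_per_pixel * fps

print(f"{payload_bps / 1e6:.0f} Mbps per camera")        # ≈ 369 Mbps
print(f"{8 * payload_bps / 1e9:.2f} Gbps for eight cameras")
```

Even before sync and coding overhead, eight such streams approach 3 Gbps in aggregate, which makes clear why ordinary automotive buses cannot carry the video.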
A large number of buses and networks are used in automobiles, including CAN, LIN, FlexRay, MOST, LVDS, and Ethernet. However, the data rate required by a video link rules out all of them except LVDS and Ethernet.
A better approach is Gigabit Multimedia Serial Link (GMSL), which carries uncompressed video as an alternative to Ethernet. GMSL offers roughly 10 times the data rate, about 50% lower cable cost, and better EMC performance than Ethernet. Maxim offers the MAX96707 and MAX96708 GMSL serializer/deserializer chips, which use current-mode logic (CML) for very high noise immunity and can transmit data over low-cost 50Ω coaxial cable or 100Ω twisted-pair cable at distances up to 15m. The devices work with megapixel cameras at serial bit rates up to 1.74Gbps. The camera data clock can range from 12.5MHz to 87MHz × 12 bits + H/V data, or from 36.66MHz to 116MHz × 12 bits + H/V data (using internal coding). The ICs share a 9.6kbps-to-1Mbps I²C control channel with each other and with external sources for setup and updates. Notably, the devices automatically retransmit control data when errors are detected. The control channel is multiplexed onto the serial link with or without the video channel.
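Using the serializer figures above, a quick calculation shows the payload a single GMSL link must carry at the top of the 12-bit pixel-clock range. This is a sketch: the ×12-bit product ignores the H/V sync bits and line-coding overhead, so the actual on-wire serial rate runs higher than this payload estimate.

```python
# Payload rate implied by the maximum 12-bit pixel clock, versus the
# 1.74Gbps serial ceiling quoted for the MAX96707/MAX96708.
# H/V sync bits and line-coding overhead are ignored, so the real
# on-wire rate is higher than this payload figure.
pclk_max_hz = 87e6            # top of the 12.5-87MHz pixel-clock range
bits_per_pixel = 12           # 12-bit parallel camera data
serial_limit_bps = 1.74e9     # quoted maximum serial bit rate

payload_bps = pclk_max_hz * bits_per_pixel
print(f"payload: {payload_bps / 1e9:.3f} Gbps")
print(f"fraction of 1.74Gbps limit: {payload_bps / serial_limit_bps:.2f}")
```

The gap between the raw payload and the 1.74Gbps ceiling is where the sync, control-channel, and coding bits ride.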
The MAX96707 serializer IC features programmable pre-/de-emphasis to drive longer cables. The device provides error detection for video and control data and has a crosspoint switch for dual-camera selection. The serial output offers programmable spread spectrum. The chip comes in a small, 24-pin, 4mm × 4mm TQFN package and runs from a 1.7V to 1.9V supply; maximum supply current is 88mA.
Figure 1. MAX96707 functional block diagram.
The MAX96708 deserializer tracks spread-spectrum serial input data, and its adaptive equalization greatly reduces the bit error rate. An output crosspoint switch adds flexibility. The IC's core supply range is 1.7V to 1.9V, and its I/O supply range is 1.7V to 3.6V. The device comes in a 32-pin, 5mm × 5mm TQFN package.
Figure 2. MAX96708COAXEVKIT# development kit.
Both chips operate over the -40°C to +115°C temperature range and provide ±8kV contact-discharge and ±15kV air-gap ESD protection in accordance with IEC 61000-4-2 and ISO 10605. Both devices meet the AEC-Q100 automotive standard. Evaluation kits are available from distributors (Figure 2).
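To see why bit error rate matters so much at these speeds, consider how often a given BER produces an error on a 1.74Gbps link. The line rate is from the article; the BER values below are illustrative assumptions, not specifications for these parts.

```python
# Mean time between bit errors at a given line rate and BER.
# 1.74Gbps is the article's quoted serial rate; the BER targets are
# assumed illustrative values, not device specifications.
line_rate_bps = 1.74e9

for ber in (1e-9, 1e-12, 1e-15):
    errors_per_second = line_rate_bps * ber
    seconds_between = 1 / errors_per_second
    print(f"BER {ber:.0e}: one bit error every {seconds_between:,.1f} s")
```

At gigabit rates, even a seemingly tiny BER of 1e-9 means more than one corrupted bit per second, which is why link-level error detection and retransmission of control data are essential in a safety system.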
Without a doubt, if I were designing an autonomous driving system, a key concern would be reliable communication with the cameras. I would carefully check the bit error rates of every camera link in the actual vehicle configuration and under worst-case noise conditions. With its error detection, automatic retransmission of control data, and compliance with industry standards, GMSL technology likely offers the best chance of success in this critical area.