Image sensors drive the development of embedded vision technology

Publisher: Hardware Lab | Last updated: 2019-05-10 | Source: EEWORLD | Author: Marie-Charlotte Leclerc, Teledyne e2v

New imaging applications are booming, from collaborative robots in Industry 4.0, to drones for firefighting or agriculture, to biometric facial recognition and handheld point-of-care medical devices in the home. A key factor in the emergence of these new applications is that embedded vision is more prevalent than ever before. Embedded vision is not a new concept; it simply describes a system that includes a vision setup controlling and processing data without an external computer. It has long been used in industrial quality control, the most familiar example being the "smart camera".

 

In recent years, affordable hardware from the consumer market has significantly reduced bill-of-materials (BOM) cost and product size compared with earlier computer-based solutions. For example, small system integrators or OEMs can now purchase single-board computers or system-on-modules such as the NVIDIA Jetson in small quantities, while larger OEMs can directly source image signal processors such as the Qualcomm Snapdragon or Intel Movidius Myriad 2. At the software level, commercial libraries speed up the development of dedicated vision systems and reduce configuration effort, even for low-volume production.

 

The second change driving the development of embedded vision systems is the advent of machine learning, which enables a neural network to be trained in the lab and then uploaded directly into a processor so that it can automatically recognize features and make decisions in real time.
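As a hedged illustration of that workflow, the sketch below converts a lab-trained network into a compact format for on-device inference using TensorFlow Lite; the SavedModel path and the choice of toolchain are assumptions for illustration, not something prescribed by the article.

```python
# Minimal sketch: packaging a lab-trained network for an embedded processor.
# Assumes a TensorFlow SavedModel exists at "saved_model/" (hypothetical path).
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model/")
# Default optimizations quantize weights, shrinking the model and speeding
# up inference on resource-constrained SoCs.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting flat buffer is what gets "uploaded into the processor".
with open("vision_model.tflite", "wb") as f:
    f.write(tflite_model)
```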

 

Being able to provide solutions for embedded vision systems is critical for imaging companies targeting these high-growth applications. Image sensors play an important role in the large-scale adoption of embedded vision because they directly affect the performance and design of these systems. The main drivers can be summarized as decreasing Size, Weight, Power and Cost, abbreviated "SWaP-C".

 

1. Reducing costs is crucial

What accelerates new embedded vision applications is reaching a price point that meets market demand, and the cost of the vision system is a major constraint in meeting that requirement.

 

1.1. Reducing optical costs


The first way to reduce the cost of a vision module is to shrink the product. There are two reasons: first, the smaller the image sensor's pixel size, the more chips can be made per wafer; second, the sensor can use smaller, lower-cost optics. Both reduce inherent costs. For example, Teledyne e2v's Emerald 5M sensor reduces the pixel size to 2.8 µm, allowing S-mount (M12) lenses to be used on a 5-megapixel global shutter sensor, which brings direct cost savings: an entry-level M12 lens costs about $10, whereas larger C-mount or F-mount lenses cost 10 to 20 times as much. Reducing size is therefore an effective way to lower the cost of embedded vision systems.
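A quick back-of-envelope check makes the optics argument concrete. The sketch below computes the active-area diagonal from the 2.8 µm pixel pitch; the 2560 × 2048 array size is an assumed figure for a 5-megapixel sensor, so consult the datasheet for exact values.

```python
import math

# Does a small-pixel 5 MP sensor fit an S-mount (M12) lens image circle?
pixel_um = 2.8          # pixel pitch from the article
cols, rows = 2560, 2048  # assumed 5 MP array size, for illustration only

width_mm = cols * pixel_um / 1000.0
height_mm = rows * pixel_um / 1000.0
diag_mm = math.hypot(width_mm, height_mm)

print(f"active area: {width_mm:.1f} x {height_mm:.1f} mm, diagonal {diag_mm:.1f} mm")
# ~9.2 mm diagonal sits within the image circle of many M12 lenses, whereas a
# larger-pixel 5 MP sensor would push the diagonal up and force a C-mount lens.
```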

For image sensor manufacturers, this reduced optical cost has a further design implication: generally, the cheaper the optics, the less ideal the angle of incidence on the sensor. Low-cost optics therefore require specific shifted microlenses to be designed above the pixels to compensate for distortion and focus light arriving at wide angles.


1.2. Low-cost sensor interface


In addition to optical optimization, the choice of sensor interface also indirectly affects the cost of the vision system. The MIPI CSI-2 interface, originally developed for the mobile industry by the MIPI Alliance, is the most suitable choice for achieving cost savings. It has been widely adopted by most ISPs and is beginning to be adopted in the industrial market, because it allows low-cost integration with system-on-chip (SoC) or system-on-module (SoM) products from companies such as NXP, NVIDIA, Qualcomm or Intel. A CMOS image sensor designed with a MIPI CSI-2 interface can transmit its data directly to the host SoC or SoM of the embedded system without any intermediate converter bridge, saving cost and PCB space. This advantage is even more pronounced in multi-sensor embedded systems (such as 360-degree panoramic systems).
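As a hedged sketch of what "no converter bridge" looks like in practice, the snippet below grabs frames from a MIPI CSI-2 sensor on an NVIDIA Jetson-class module via GStreamer; the nvarguscamerasrc element and pipeline layout are Jetson-specific assumptions and will differ on other SoCs.

```python
import cv2

# Illustrative capture from a MIPI CSI-2 sensor on a Jetson-class SoM.
# The sensor feeds the SoC's ISP directly; no interface bridge in the path.
pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, frame = cap.read()
if ok:
    print("captured frame:", frame.shape)
cap.release()
```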

These benefits are somewhat limited, however, because the MIPI interface is restricted to a connection distance of about 20 cm, which may not suit configurations where the sensor sits far from the host processor. In such configurations, a camera-board solution with an integrated longer-reach interface is a better option, at the expense of miniaturization. Off-the-shelf options exist: camera boards from industrial camera manufacturers (such as FLIR, AVT or Basler) are typically available with MIPI or USB3 interfaces, the latter reaching 3 to 5 meters or more.

 

1.3. Reducing development costs


Rising development costs are often a challenge when investing in new products; one-time development fees can run to millions of dollars and put pressure on time to market. For embedded vision this pressure is even greater, because modularity (i.e., the ability of a product to switch between multiple image sensors) is an important consideration for integrators. Fortunately, development expense can be reduced by providing a degree of cross-compatibility between sensors: for example, by defining component families that share the same pixel architecture for stable optoelectronic performance, by using a common optical center so a single front-end mechanical design can be shared, and by making PCB assemblies compatible to simplify evaluation, integration and the supply chain.

To simplify camera-board design, even across multiple sensors, there are two approaches to sensor packaging, illustrated in Figure 1 and in the sketch that follows it. Pin-to-pin compatibility is the preferred design for camera-board designers because it allows multiple sensors to share the same circuitry and controls, making assembly completely independent of the PCB design. The alternative is footprint-compatible sensors, which can also share one PCB, but the designer may then have to handle differences in interfacing and routing for each sensor.



 

Figure 1: Image sensors can be designed to be pin-to-pin compatible (left) or footprint-compatible (right) to enable proprietary PCB layout designs
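To make the modularity idea concrete, here is a minimal, hypothetical sketch of how board software might treat a pin-to-pin compatible sensor family as interchangeable parts; all sensor names, chip IDs and register addresses below are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical: a pin-compatible family shares one circuit and control
# scheme, so the board driver swaps parts by changing only a lookup table.
@dataclass(frozen=True)
class SensorVariant:
    name: str
    chip_id: int
    cols: int
    rows: int

FAMILY = {
    0x21: SensorVariant("family-2M", 0x21, 1920, 1080),
    0x51: SensorVariant("family-5M", 0x51, 2560, 2048),
}

def probe(read_reg) -> SensorVariant:
    """Identify the fitted family member: same PCB, same bus, same code."""
    chip_id = read_reg(0x0000)  # hypothetical chip-ID register address
    return FAMILY[chip_id]
```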


2. Energy efficiency enables greater autonomy


Small battery-powered devices are the most obvious beneficiaries of embedded vision, since requiring an external computer rules out any portable application. To reduce system energy consumption, image sensors now include a variety of features that enable system designers to save power.

 

From the sensor's perspective, there are several ways to reduce the power consumption of an embedded vision system without sacrificing acquisition frame rate. The simplest is to minimize the sensor's dynamic operation at the system level by using standby or idle modes for as long as possible. Standby mode reduces the sensor's power consumption to less than 10% of active mode by switching off the analog circuits. Idle mode halves power consumption while allowing the sensor to restart image acquisition within microseconds.
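The small sketch below uses the figures just quoted (standby under 10% of active power, idle at half) to show how aggressively duty-cycling the sensor cuts average power; the 1 W active figure and the duty cycle are illustrative assumptions.

```python
# Average sensor power for a duty-cycled system, per the figures above.
P_ACTIVE = 1.0              # W, assumed active-mode power (illustrative)
P_IDLE = 0.5 * P_ACTIVE     # idle mode halves consumption
P_STANDBY = 0.1 * P_ACTIVE  # standby drops below 10% of active

def average_power(active_frac: float, idle_frac: float) -> float:
    standby_frac = 1.0 - active_frac - idle_frac
    return (active_frac * P_ACTIVE
            + idle_frac * P_IDLE
            + standby_frac * P_STANDBY)

# Acquiring 10% of the time, idling 20%, standing by the rest:
print(f"{average_power(0.10, 0.20):.2f} W")  # ~0.27 W vs 1 W always-on
```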

 

Another way to build energy savings into the sensor design is to use a more advanced lithography node. The smaller the technology node, the lower the voltage required to switch the transistors; and since dynamic power consumption scales with the square of the supply voltage (P ∝ f × C × V²), this reduces power consumption. Pixels that were produced in a 180 nm process ten years ago have thus moved to 110 nm transistors, and the digital-circuit supply voltage has dropped from 1.9 V to 1.2 V. The next generation of sensors will use a 65 nm node, making embedded vision applications still more energy-efficient.
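A one-line calculation shows the size of this effect for the stated voltages; holding f and C equal is a simplifying assumption, and in practice capacitance also shrinks with the node, so the real saving is larger.

```python
# Dynamic power scales as P ∝ f·C·V².  Dropping the digital supply from
# 1.9 V (180 nm era) to 1.2 V (110 nm), with f and C held constant:
v_old, v_new = 1.9, 1.2
ratio = (v_new / v_old) ** 2
print(f"power ratio: {ratio:.2f}")  # ~0.40, i.e. roughly 60% less power
```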

 

Finally, choosing the right image sensor can reduce LED power consumption under certain conditions. Some systems must use active illumination, for example for 3D map generation, for freezing motion, or simply to improve contrast with sequential pulses at a specific wavelength. In these cases, reducing the image sensor's noise in low-light conditions allows lower power consumption: with a lower-noise sensor, engineers can reduce the LED current density or the number of LEDs integrated into the embedded vision system. In other cases, when image capture and the LED flash are triggered by an external event, choosing the right sensor readout structure can yield significant power savings. With a traditional rolling shutter sensor, the LEDs must stay fully on during the entire frame exposure, whereas a global shutter sensor allows the LEDs to be switched on for only part of the frame. Replacing a rolling shutter sensor with a global shutter sensor that offers intra-pixel correlated double sampling (CDS) therefore saves lighting costs while still matching the low noise of the CCD sensors used in microscopes.
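The following sketch quantifies the illumination saving with illustrative numbers: a rolling shutter forces the LED on for the exposure plus the row-by-row readout, while a global shutter needs it on only during the common exposure.

```python
# LED energy per frame, rolling vs global shutter.  All numbers are
# illustrative assumptions, not figures from any datasheet.
LED_POWER_W = 2.0
EXPOSURE_MS = 2.0
READOUT_MS = 10.0  # assumed rolling-shutter frame readout time

e_rolling = LED_POWER_W * (EXPOSURE_MS + READOUT_MS) / 1000.0
e_global = LED_POWER_W * EXPOSURE_MS / 1000.0

print(f"rolling: {e_rolling * 1000:.1f} mJ, global: {e_global * 1000:.1f} mJ")
# With these numbers the global shutter cuts illumination energy 6x per frame.
```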

 

3. On-chip functions pave the way for application-oriented vision systems


Some of the more radical extensions of embedded vision point toward fully customized image sensors that integrate all processing functions in a 3D stack (system-on-chip) to optimize performance and power consumption. The cost of developing such products is very high, however, and while fully custom sensors with that level of integration are not out of reach in the long run, today we are at a transitional stage: embedding certain functions directly into the sensor to reduce the computing load and speed up processing time.

 

For example, in barcode reading applications, Teledyne e2v has patented technology that embeds a proprietary barcode-identification algorithm in the sensor chip itself. The algorithm locates each barcode within the frame, allowing the image signal processor to concentrate only on those regions, improving data-processing efficiency.
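The on-chip algorithm itself is proprietary, but a hedged host-side analog of the same idea is sketched below: locate high-gradient candidate regions first, then hand only those crops to the decoder. Function names, kernel sizes and thresholds are illustrative choices, not the patented method.

```python
import cv2

# Host-side analog of ROI-first barcode reading: barcodes are dense with
# horizontal gradient, so segment high-gradient blobs and decode only those.
def barcode_rois(gray, min_area=2000):
    grad = cv2.convertScaleAbs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3))
    _, mask = cv2.threshold(grad, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # Close the gaps between bars so each barcode becomes one connected blob.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
if img is not None:
    for x, y, w, h in barcode_rois(img):
        roi = img[y:y + h, x:x + w]  # only these crops reach the decoder
```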
