New imaging applications are booming, from collaborative robots in Industry 4.0, to drones for firefighting or agriculture, to biometric facial recognition, to point-of-care handheld medical devices in the home. A key factor in the emergence of these new applications is that embedded vision is more prevalent than ever before. Embedded vision is not a new concept; it simply describes a system that captures, controls, and processes image data without an external computer. It has been widely used in industrial quality control, with the most familiar example being "smart cameras".
In recent years, the availability of affordable hardware from the consumer market has significantly reduced the bill of materials (BOM) cost and product size compared to earlier computer-based solutions. For example, small system integrators or OEMs can now purchase single-board computers or system-on-modules such as NVIDIA Jetson in small quantities, while larger OEMs can directly source image signal processors such as Qualcomm Snapdragon or Intel Movidius Myriad 2. At the software level, commercial software libraries speed up the development of dedicated vision systems and reduce configuration difficulty, even for small-volume production.
The second change driving the development of embedded vision systems is the advent of machine learning, which enables a neural network to be trained in the lab and then uploaded directly into a processor so that it can automatically recognize features and make decisions in real time.
Being able to provide solutions for embedded vision systems is critical for imaging companies targeting these high-growth applications. Image sensors play an important role in the large-scale introduction of embedded vision systems because they can directly affect the performance and design of embedded vision systems. The main driving factors can be summarized as: reducing size, weight, power consumption and cost, referred to as "SWaP-C" (decreasing Size, Weight, Power and Cost).
1. Reducing costs is crucial
What accelerates new embedded vision applications is reaching a price point that meets market demand, and the cost of the vision system is a major constraint in achieving this.
1.1. Saving on optics costs
The first way to reduce the cost of a vision module is to reduce the size of the product, for two reasons: first, the smaller the pixel size of the image sensor, the more chips can be manufactured per wafer; second, a smaller sensor can use smaller, lower-cost optical components. Both reduce intrinsic costs. For example, Teledyne e2v's Emerald 5M sensor reduces the pixel size to 2.8µm, allowing S-mount (M12) lenses to be used on a 5-megapixel global shutter sensor, bringing direct cost savings: an entry-level M12 lens costs about $10, while a larger C-mount or F-mount lens costs 10 to 20 times that. Reducing sensor size is therefore an effective way to cut the cost of an embedded vision system.
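To make the lens-mount argument concrete, a quick sketch can check whether a sensor fits an M12 image circle from its pixel pitch and resolution. The 2560 × 2048 resolution used below is an assumption for a generic 5-megapixel array, not a confirmed specification of the Emerald 5M:

```python
import math

def sensor_diagonal_mm(h_px, v_px, pixel_um):
    """Image-circle diameter the lens must cover: the sensor diagonal in mm."""
    return math.hypot(h_px, v_px) * pixel_um / 1000.0

# Assumed 5 MP array of 2560 x 2048 pixels at a 2.8 um pitch
diag = sensor_diagonal_mm(2560, 2048, 2.8)
print(f"required image circle: {diag:.2f} mm")  # ~9.2 mm
```

A diagonal near 9 mm sits comfortably within the image circle that typical M12 optics can cover, whereas larger pixels at the same resolution would push the diagonal toward C-mount territory.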
For image sensor manufacturers, this reduced optical cost has another impact on the design, because generally speaking, the lower the optical cost, the less ideal the angle of incidence to the sensor. Therefore, low-cost optics require the design of specific shifted microlenses above the pixels to compensate for the distortion and focus light from wide angles.
1.2. Low-cost sensor interface
In addition to optical optimization, the choice of sensor interface also indirectly affects the cost of the vision system. The MIPI CSI-2 interface, originally developed by the MIPI Alliance for the mobile industry, is the most suitable choice for cost savings. It has been widely adopted by most image signal processors and is now gaining traction in the industrial market because it enables low-cost integration with system-on-chip (SoC) or system-on-module (SoM) platforms from companies such as NXP, Nvidia, Qualcomm, or Intel. A CMOS image sensor with a MIPI CSI-2 interface can send its data directly to the host SoC or SoM of the embedded system without any intermediate converter bridge, saving cost and PCB space. This advantage is even more pronounced in multi-sensor embedded systems, such as 360-degree panoramic systems.
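When sizing a CSI-2 link, a common back-of-envelope check is whether the sensor's raw data rate fits the available lanes. The following sketch assumes a 10% protocol/blanking overhead and a 1.5 Gbps-per-lane link; both figures are illustrative assumptions, not values from the CSI-2 specification:

```python
import math

def csi2_lanes_needed(h_px, v_px, bits_per_px, fps, lane_gbps, overhead=1.1):
    """Minimum MIPI CSI-2 lane count for a given sensor stream.
    'overhead' covers protocol framing and blanking (assumed 10%)."""
    payload_bps = h_px * v_px * bits_per_px * fps * overhead
    return math.ceil(payload_bps / (lane_gbps * 1e9))

# Assumed 5 MP, 10-bit, 60 fps stream over 1.5 Gbps lanes
lanes = csi2_lanes_needed(2560, 2048, 10, 60, 1.5)
print(f"lanes required: {lanes}")  # 3
```

A designer would round up to the nearest lane configuration the host SoC actually exposes (typically 2 or 4 lanes).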
These benefits are somewhat limited, however, as the MIPI interface is restricted to a connection distance of about 20 cm, which may not suit configurations where the sensor sits far from the host processor. In those cases, a camera-board solution with an integrated longer-reach interface is a better option, at the expense of miniaturization. Off-the-shelf options exist: camera boards from industrial camera manufacturers (such as Flir, AVT, or Basler) are typically available with MIPI or USB3 interfaces, the latter reaching 3 to 5 meters or more.
1.3. Reduce development costs
Rising development costs are often a challenge when investing in new products; it can cost millions of dollars in one-time development fees and put pressure on time to market. For embedded vision, this pressure becomes even greater because modularity (i.e., the ability of a product to switch between multiple image sensors) is an important consideration for integrators. Fortunately, development expenses can be reduced by providing a degree of cross-compatibility between sensors, for example, by defining component families that share the same pixel architecture for stable optoelectronic performance, by having a common optical center to share a single front-end mechanism, and by compatible PCB assemblies to simplify evaluation, integration, and the supply chain.
To simplify camera board design (even for multiple sensors), there are two approaches to designing sensor packages. Pin-to-pin compatibility is the preferred design for camera board designers because it allows multiple sensors to share the same circuitry and controls, making assembly completely independent of PCB design. Another option is to use footprint-compatible sensors so that multiple sensors can be used on the same PCB, but this also means that they may have to deal with differences in the interface and routing for each sensor.
Figure 1: Image sensors can be designed to be pin-to-pin compatible (left) or footprint-compatible (right) to simplify PCB layout design
2. Energy efficiency enables greater autonomy
Tiny battery-powered devices are the applications that benefit most obviously from embedded vision, since relying on an external computer rules out portability altogether. To reduce system energy consumption, image sensors now include a variety of features that let system designers save power.
From a sensor perspective, there are several ways to reduce power consumption in an embedded vision system without sacrificing acquisition frame rate. The simplest approach is to minimize the dynamic operation of the sensor itself at the system level by using standby or idle modes as long as possible. Standby mode reduces the sensor's power consumption to less than 10% of active mode by turning off the analog circuits. Idle mode cuts power consumption in half and allows the sensor to restart image acquisition in microseconds.
Another way to build energy savings into the sensor design is to use an advanced lithography node. The smaller the technology node, the lower the voltage required to switch a transistor. Since dynamic power consumption is proportional to frequency times the square of the supply voltage (P ∝ f × V²), lowering the voltage reduces power quadratically. Pixels that were produced in 180 nm technology ten years ago have since moved to 110 nm, shrinking the transistors and reducing the digital supply voltage from 1.9 V to 1.2 V. The next generation of sensors will use 65 nm technology nodes, making embedded vision applications even more energy-efficient.
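The voltage numbers above can be checked directly against the P ∝ f × V² relation. At a fixed clock frequency, the power ratio between two nodes reduces to the square of the voltage ratio:

```python
def dynamic_power_ratio(v_new, v_old):
    """Dynamic power scales with the square of supply voltage (P ∝ f*V^2),
    so at fixed frequency the ratio between two designs is (V_new/V_old)^2."""
    return (v_new / v_old) ** 2

# Digital supply dropping from 1.9 V (180 nm era) to 1.2 V (110 nm)
r = dynamic_power_ratio(1.2, 1.9)
print(f"power ratio: {r:.2f}")  # ~0.40, i.e. ~60% less digital power
```

This is the voltage contribution alone; smaller transistors also lower switched capacitance, so the real-world saving is typically larger.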
Finally, choosing the right image sensor can reduce LED power consumption under certain conditions. Some systems must use active illumination, such as 3D map generation, freezing fast motion, or simply pulsing a specific wavelength to improve contrast. In these cases, reducing the image sensor's noise in low-light conditions allows lower illumination power: with a less noisy sensor, engineers can reduce the LED drive current or the number of LEDs integrated into the embedded vision system. In other cases, when image capture and the LED flash are triggered by an external event, choosing the right sensor readout architecture yields significant savings. With a traditional rolling shutter sensor, the LED must stay fully on for the entire frame, whereas a global shutter sensor allows the LED to be on for only a portion of the frame. Replacing a rolling shutter sensor with a global shutter sensor that has in-pixel correlated double sampling (CDS) can therefore save lighting costs while still matching the low noise of the CCD sensors used in microscopes.
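The rolling-versus-global shutter argument comes down to how long the LED must stay lit per frame. A simplified model (ignoring LED rise/fall times and any exposure overlap tricks) compares the required on-times:

```python
def led_energy_ratio(exposure_ms, readout_ms):
    """Fraction of rolling-shutter LED energy a global-shutter design needs.
    Simplified model: rolling shutter keeps the LED on for exposure plus the
    full frame readout so every row is lit; global shutter lights only the
    shared exposure window."""
    return exposure_ms / (exposure_ms + readout_ms)

# Assumed 1 ms strobe exposure against a 20 ms full-frame readout
ratio = led_energy_ratio(1.0, 20.0)
print(f"global shutter uses {ratio:.1%} of the LED energy")  # ~4.8%
```

With short strobes against a long readout, the global shutter design needs only a few percent of the illumination energy, which is often the dominant power draw in a battery-powered strobed system.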
3. On-chip functions pave the way for application-oriented vision systems
The most radical extension of embedded vision would be a fully customized image sensor integrating all processing functions in a 3D stack (a system-on-chip) to optimize performance and power consumption. However, the cost of developing such a product is very high, and while a fully customized sensor with this level of integration may become feasible in the long run, today we are at a transitional stage: embedding certain functions directly into the sensor to reduce the computing load and speed up processing time.
For example, in barcode reading applications, Teledyne e2v has patented technology that adds an embedded function containing a proprietary barcode recognition algorithm to the sensor chip. This algorithm finds the position of barcodes within each frame, allowing the image signal processor to focus only on those regions and improving data processing efficiency.
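The details of that patented algorithm are not public, but the general idea of on-chip ROI localization can be illustrated with a generic gradient-based heuristic: 1-D barcodes produce strong horizontal intensity gradients, so scoring image blocks by gradient energy points the downstream processor at candidate regions. This is purely an illustrative sketch, not Teledyne e2v's method:

```python
import numpy as np

def barcode_roi(img, block=32):
    """Toy barcode ROI finder: score each block by its mean absolute
    horizontal gradient (stripes -> large column-to-column differences)
    and return the (row, col) of the highest-scoring block."""
    f = img.astype(np.float32)
    gx = np.abs(np.diff(f, axis=1, prepend=f[:, :1]))  # same shape as img
    h, w = gx.shape
    best_score, best_pos = -1.0, (0, 0)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            score = gx[y:y + block, x:x + block].mean()
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos  # top-left corner of the most "stripe-like" block
```

A real on-chip implementation would stream this score row by row during readout rather than buffering the frame, but the principle — cheap local statistics steering the ISP's attention — is the same.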