Challenges and Solutions for Image Acquisition and Processing in Portable Designs


This four-part series delves into trends and design challenges in image acquisition and processing on mobile phones and other handheld device platforms. This part focuses on using software to enhance optical performance.

The invention of wafer-scale manufacturing technology has made it possible to produce extremely compact camera modules at very low cost. For physical reasons, very small camera modules perform worse than larger ones, although new lens structures enabled by wafer-scale manufacturing can correct some of their defects. Even so, this only maintains the status quo; it does not improve image quality or enhance the user experience.

Designers of high-resolution camera phones can no longer market their products to consumers simply by quoting pixel counts. As camera phones have become ubiquitous (more than 80% of phones have at least one camera), consumers have realized that image quality is not necessarily tied to the number of pixels. In fact, the striking images returned from the Mars rover were taken with a camera of only about 1 Mpixel. Similarly, designers of professional-grade cameras have long known that optics and software must work closely together to produce a higher-quality digital image.

As discussed in the first part of this series, customer demand has driven up both the image quality of new camera phones and the list of embedded features. Among the most anticipated are optical zoom, focus, and good low-light sensitivity, such as the ability to take photos without a flash. All of these features, especially zoom, are relatively simple to implement if module height and cost are unconstrained. Traditional zooming requires two lens groups to move relative to each other along the optical axis inside the camera. This can be accomplished with micro-actuators, but the resulting product is large, power-hungry, slow, and not built to survive the harsh environment of portable electronic devices, especially the "drop test." So how can camera module designers provide all of the features consumers demand and improve picture quality without compromising the product's form factor, reliability, and, most importantly, cost? The answer is software-enhanced optics.

Software-Enhanced Optics

Software-enhanced optics, or "smart optics," is a technique for correcting known optical effects during image processing. An optical effect can be an inherent defect that must be corrected, or a distortion that is intentionally introduced to enable a function or specific effect. If the goal is simply to improve image quality, then instead of investing in high-quality, high-precision optics, the known distortions introduced by a cheap lens module can be corrected in software. For example, if size and cost constraints mean that the corners of the image always exhibit the same degree of blur, software-enhanced optics can apply edge-sharpening algorithms to those corner areas. The user ends up satisfied with the image because the inherent imperfections of the camera module have been corrected or masked, and the result looks good across the whole frame. Better still, this correction can be completely transparent to the user, requiring no intervention.
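
To make the corner-sharpening example concrete, the sketch below applies an unsharp mask whose strength grows toward the image corners. It is a minimal, hypothetical Python/NumPy illustration, assuming a lens that is known to blur the corners more than the center; the radial weighting profile and gain values are invented and not taken from any real camera module.

```python
# A minimal sketch of the corner-sharpening example, assuming the lens is known
# to blur the corners more than the center. The radial weighting and gain are
# invented for illustration and are not taken from any real camera module.
import numpy as np
from scipy.ndimage import gaussian_filter

def corner_sharpen(img, max_gain=1.5, sigma=1.5):
    """Unsharp mask whose strength grows with distance from the image center."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    r /= r.max()                              # 0 at center, 1 at the far corner
    gain = max_gain * r**2                    # sharpen corners, spare the center
    blurred = gaussian_filter(img.astype(np.float64), sigma)
    out = img + gain * (img - blurred)        # add back high-frequency detail
    return np.clip(out, 0, 255).astype(np.uint8)

# Example on a synthetic 8-bit grayscale frame.
frame = (np.random.rand(480, 640) * 255).astype(np.uint8)
sharpened = corner_sharpen(frame)
```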

With this concept of software-enhanced optics, new opportunities emerge. The basic approach is to use a specialized lens to manipulate the light as it enters the camera, distributing it across the imager according to the desired function. The manipulated image is not used directly; it requires further correction by software. However, because the image has been manipulated in a known manner, it can be restored digitally, resulting in a high-quality output. Many features can be achieved in this way, including full optical zoom with no moving parts, extended depth of field, and low-F-number optics for low-light environments.

Software-enhanced optics solutions for optical zoom take advantage of the fact that, through a conventional lens, the density of useful information is not uniform across the field of view: the center contains more of it, while the edges contain less. An image sensor, however, is a regular two-dimensional array of pixels, which means that when a scene is framed, the center of the imager is sampled normally while the edges are effectively oversampled. Software-enhanced optics instead use a specially designed lens that distributes optical resolving power non-uniformly across the field of view, matching the quantization format of solid-state imagers. In effect, this is the opposite of the approach taken by nature: many animals with single-aperture eyes, especially birds, have a fairly standard "lens" but a non-uniform distribution of rods and cones across the retina. In both cases the result is a distorted image, but one that can be corrected because the design of the lens and the pixel distribution of the imager (or retina) are known.
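
The correction step can be sketched as a simple resampling, assuming the lens's radial mapping is known by design. In the hypothetical example below, a scene point at normalized field radius r lands on the sensor at r(1 - kr^2), so the center of the field is sampled more densely than the edges; the mapping and the coefficient k are illustrative assumptions, not real lens data.

```python
# A minimal sketch, assuming a hypothetical lens whose radial mapping is known
# by design: a scene point at normalized field radius r lands on the sensor at
# r * (1 - k * r**2), so the center of the field is sampled more densely than
# the edges. The mapping and the coefficient k are illustrative assumptions.
import numpy as np
from scipy.ndimage import map_coordinates

def undistort(raw, k=0.25):
    """Resample a (grayscale) raw frame back to a uniform field of view."""
    h, w = raw.shape
    yy, xx = np.indices((h, w), dtype=np.float64)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dy, dx = yy - cy, xx - cx
    r_norm = np.hypot(dy, dx) / np.hypot(cy, cx)   # 0 at center, ~1 at corners
    scale = 1.0 - k * r_norm**2                    # known lens mapping
    # For each output pixel, fetch the sensor sample the lens actually placed there.
    src_y = cy + dy * scale
    src_x = cx + dx * scale
    return map_coordinates(raw, [src_y, src_x], order=1, mode='nearest')

# Example on a synthetic frame; a real pipeline would feed the raw sensor image.
raw = np.random.rand(480, 640)
corrected = undistort(raw)
```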

To display the scene at normal (1x) magnification, the algorithm must compress the detail in the central area of the field of view, because this is where the specialized lens increases magnification and resolution. This compression does not reduce image quality; indeed, in such a design the software-enhanced lens solution can deliver the same quality as a traditional camera. When the user zooms in, the border of the image is cropped away, the magnified central area is retained, and the image is then corrected for distortion. This is the biggest difference from digital zoom: because the magnification is produced by the lens and fixed at the time of image acquisition, the zoomed image retains its full original resolution. Software-enhanced optics can achieve up to 3x zoom in this way.

Figure 1 is an example of using software-enhanced optics to provide zoom. Because the solution has no moving parts, it is physically compact, rugged, instantaneous, very low power, and therefore very low cost. It is also significantly better than digital zoom, which crops the image and expands it to fill the field of view, inevitably reducing resolution because the available information is spread over a larger area. At 3x digital zoom, almost 90% of the image information is discarded at acquisition, which is why digital zoom offers only modest magnification. The software-enhanced optics solution for zoom can be implemented with a fixed lens and a simple algorithm, making it adaptable to all imager technologies and all resolutions (from QCIF to >10 Mpixels), so it is expected to be widely adopted in camera phones in the near term.

Figure 1. Optical zoom using the OptiML Zoom software-enhanced optics solution (only the central area of the field of view is shown): (left) image before distortion correction; (center) image after distortion correction at 1x optical zoom; (right) image after distortion correction at 2x optical zoom. Source: Tessera
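
The roughly 90% figure quoted above for 3x digital zoom follows from simple arithmetic, as this short snippet shows.

```python
# The ~90% figure for 3x digital zoom follows from simple arithmetic: cropping
# the central 1/3 x 1/3 of the frame keeps only 1/9 of the sensor pixels.
zoom = 3
kept = 1 / zoom**2                 # fraction of pixels left after cropping
print(f"{zoom}x digital zoom keeps {kept:.1%} of the pixels "
      f"and discards {1 - kept:.1%}")        # keeps 11.1%, discards 88.9%
```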

Pictures taken with camera phones are often spur-of-the-moment affairs. Consumers do not stage their scenes, and will not take the time, or the trouble, to position the camera and themselves at the ideal distance from the subject. With such small optics, a traditional camera module can only focus on subjects within a certain range, typically 60 cm to tens of meters. Consumers who do not understand or respect this limitation are often dissatisfied with the pictures they take. The software-enhanced optics answer to this problem is extended depth of field, which keeps all details of the scene in focus as long as they lie between about 10 cm and infinity from the camera module. As with the software-enhanced zoom solution, this is achieved by combining a specialized lens with a simple algorithm. It involves no moving parts and is therefore rugged, reliable, instantaneous, and low power.

In a traditional camera module, the lens assembly is designed to focus a point of light and is therefore placed at a fixed distance from the imager. If the lens cannot focus, or the object is too close, each point of light spreads into a blur circle and the image looks smeared. The way the lens transforms a point source into a blur spot is described by a mathematical transformation called the point spread function. If the point spread function of the lens is known, the blur can be removed and the original scene recovered using digital signal processing. However, when only part of the image is out of focus, the blur cannot be reliably identified no matter what transformation is used. Software-enhanced optics solve this problem by defocusing the entire image in a controlled manner: the lens effectively produces an image with the same degree of blur regardless of the distance to the light source, which can then be deconvolved with a straightforward algorithm. The result is a much clearer image, with the foreground, mid-range, and background all in focus at the same time. Figure 2 shows a good example.
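
A minimal sketch of the restoration step, assuming the uniform blur kernel is known by design: Wiener deconvolution is used below as a generic stand-in for whatever algorithm a real solution employs, and the Gaussian PSF and noise constant are illustrative assumptions only.

```python
# A minimal sketch of the restoration step, assuming the (uniform) blur kernel
# is known by design. Wiener deconvolution is used here as a generic stand-in;
# the Gaussian PSF and the noise constant k are illustrative assumptions.
import numpy as np

def gaussian_psf(size=9, sigma=2.0):
    """Illustrative stand-in for the lens-specific point spread function."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def wiener_deconvolve(blurred, psf, k=0.01):
    """Frequency-domain Wiener filter: conj(H) / (|H|^2 + k)."""
    h, w = blurred.shape
    psf_pad = np.zeros((h, w))
    psf_pad[:psf.shape[0], :psf.shape[1]] = psf
    # Center the kernel at the origin so the result is not spatially shifted.
    psf_pad = np.roll(psf_pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    H = np.fft.fft2(psf_pad)
    B = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(B * np.conj(H) / (np.abs(H) ** 2 + k)))

# Usage: 'blurred_frame' would be the raw output of the extended-depth-of-field lens.
# restored = wiener_deconvolve(blurred_frame, gaussian_psf())
```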

Figure 2. A traditional lens can only focus on objects within a limited range, mostly at medium and long distances; a software-enhanced optics solution such as OptiML Focus achieves an extended depth of field, from 10 cm to infinity, without increasing the height or complexity of the camera module. Source: Tessera

One of the main complaints about camera phones is their low-light performance. In fact, this is only half true. Small camera modules, because of their shrinking pixel sizes, undoubtedly have lower optical sensitivity than digital still cameras. Pixel pitch fell from 2.2 μm in 2007 to 1.75 μm in 2008, is expected to reach 1.4 μm in 2009, and will eventually reach 1.1 μm; this trend has a significant impact on low-light performance and image quality. Simply put, as pixel size shrinks, sensitivity falls. From a more technical standpoint, the photodiode's ability to absorb photons and release electrons weakens as the pixel shrinks. Other consequences of small pixel size include reduced dynamic range and a lower signal-to-noise ratio. In practice, however, the perception that camera phones perform poorly in low light comes mainly from the growing number of pictures taken in dim environments, typically at night and in places such as clubs and restaurants, where the illumination is around 5 lux, far below the >350 lux of daytime outdoor scenes. As the light level drops, the image quality delivered by a digital imager deteriorates rapidly, with defects such as increased noise, lost detail, and incorrect colors.
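
The sensitivity trend can be put into rough numbers: to first order, the light a pixel collects scales with its area, as the short calculation below illustrates. This is a simplification that ignores microlenses, fill factor, and process improvements, so it should be read as a trend, not a measurement.

```python
# Rough arithmetic for the sensitivity trend: to first order, the light a pixel
# collects scales with its area. This ignores microlenses, fill factor, and
# process improvements, so it shows the trend rather than measured sensitivity.
for pitch in (2.2, 1.75, 1.4, 1.1):
    rel_area = (pitch / 2.2) ** 2
    print(f"{pitch:4} um pixel: {rel_area:.0%} of a 2.2 um pixel's area")
```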

One of the main reasons for the poor low-light performance of camera phones is that the F-number of the lens assembly cannot be changed; it is fixed at the time of manufacture. Most digital still cameras can open the aperture to compensate for the reduced number of photons in a dim scene, but a mechanically adjustable aperture would make the camera body large, fragile, slow to respond, and power-hungry. Simply lowering the F-number of a fixed-aperture camera to improve low-light sensitivity is not a good option either, because a larger aperture reduces the depth of field, making it difficult to obtain good image quality when the scene has depth. Standard camera phones therefore typically use an aperture between F/2.8 and F/2.4, mainly to maintain enough depth of field under normal lighting conditions. With a simple fixed aperture, the only remaining way to gather more light in dim conditions is a longer exposure time. However, that makes the image susceptible to motion blur and camera shake, and it is not an option for video capture, where the frame rate limits exposure times to about 67 milliseconds.
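
The trade-offs in this paragraph can be quantified with two rules of thumb: the light reaching the sensor scales as 1/N² for F-number N, and video exposure cannot exceed one frame period. The sketch below assumes a 15 fps frame rate, which is consistent with the 67 ms figure but not stated explicitly in the article; the F/1.75 value anticipates the fast-lens solution discussed below.

```python
# Rough numbers behind the aperture/exposure trade-off: illumination on the
# sensor scales as 1/N**2 for F-number N, and in video the exposure cannot
# exceed one frame period. The 15 fps frame rate is an assumption consistent
# with the 67 ms figure above, not a value stated in the article.
def relative_light(f_number, reference=2.8):
    """Light gathered relative to an F/2.8 lens."""
    return (reference / f_number) ** 2

print(f"F/2.4 vs F/2.8:  {relative_light(2.4):.2f}x the light")
print(f"F/1.75 vs F/2.8: {relative_light(1.75):.2f}x the light")

fps = 15                                     # assumed video frame rate
print(f"Max video exposure at {fps} fps: {1000 / fps:.0f} ms")   # ~67 ms
```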

"Speed" is a simple way to describe the ability of an optical system to deliver light to the imager. "Slow lenses" operate in good light conditions. This is because optics allow the use of small apertures and slow shutter speeds to achieve good depth of field. Taking pictures in poor light, or in good light conditions that require a fast shutter speed (such as sports mentioned below), requires a "fast lens". Therefore, the challenge is to provide a good relationship between lighting conditions, depth of field, shutter speed, and develop a fast lens that works well in low light scenes.

Software-enhanced optics offer a fully automatic solution for camera phones, allowing consumers to take clear pictures over a wider range of lighting conditions. The basic idea is to design a camera module with low-F-number optics, typically F/1.75, and restore the depth of field to normal using the extended-depth-of-field approach described above. Low-F-number optics enable an ultra-fast lens solution for both still photography and video capture. Signal processing compensates for the loss of contrast and then reduces noise in the final image while preserving the edges, details, and textures of the original. This is possible because the data written to the line buffer for the algorithm is based on pixel averaging, which improves the signal-to-noise ratio of the image by about 6 dB. The effectiveness of this approach can be seen by comparing the two images obtained from a 1.75 μm imager in Figure 3.
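
For intuition on the roughly 6 dB figure, the sketch below averages 2x2 blocks of uncorrelated noisy pixels, which halves the noise standard deviation (20·log10(2) ≈ 6 dB). The 2x2 block size is an assumption; the article states only the approximate gain.

```python
# A minimal sketch of the pixel-averaging idea: averaging 2x2 blocks of
# uncorrelated noisy pixels halves the noise standard deviation, i.e. roughly
# a 6 dB SNR gain (20*log10(2)). The 2x2 block size is an assumption; the
# article states only the approximate 6 dB figure.
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0
frame = signal + rng.normal(0.0, 10.0, size=(480, 640))   # flat scene + noise

# Average non-overlapping 2x2 blocks (a line-buffer-friendly operation).
binned = frame.reshape(240, 2, 320, 2).mean(axis=(1, 3))

gain_db = 20 * np.log10(frame.std() / binned.std())
print(f"SNR gain from 2x2 averaging: {gain_db:.1f} dB")    # ~6 dB
```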

Implementation

Software-enhanced optics combine a specialized lens with custom algorithms to deliver high-quality images in a way that is completely transparent to the user. Camera module designers, however, need to plan ahead for how these enhancements will be incorporated into a handset rather than bolted on afterwards. In principle, all that is required is a custom lens in the optical assembly, which can be manufactured with existing architectures and lens materials; the custom lens can even replace an existing one. Alongside the lens comes the image processing algorithm. The algorithms used in these solutions can be very small, typically on the order of 100,000 logic gates, which is small enough to embed in a CMOS imager, although this requires coordination with the imager manufacturer, and the die must then be paired with the correct optics.

Another way to deploy the algorithm is as software or firmware running on the image processor or the phone's application processor. Again, both options are technically simple but require close cooperation with traditional camera module suppliers. The benefits, however, are compelling enough that 3-megapixel camera phones with extended depth of field are already in production, with higher-resolution models, along with the zoom and ultra-fast lens solutions, expected in 2009.

While these software-enhanced optics solutions each improve the raw performance of highly miniaturized, low-cost camera modules, none of them adds features that improve the user's satisfaction with the shooting experience itself. That task falls not to the camera module designers but to the OEMs. One of the most common complaints about digital photography is the red-eye phenomenon, which explains why more than 80% of current digital still cameras implement red-eye reduction. Whether such features will appear on camera phones, and how they will be integrated, is discussed in the fourth part of this series.
