Many people assume that blindness means a complete loss of the ability to see. In fact, this is a misconception. The World Health Organization (WHO) defines blindness as vision loss so severe that a person cannot count fingers held up at a distance of 3 meters, even with glasses or contact lenses. Even people diagnosed as blind may therefore retain some degree of vision, and most can still perceive changes in contrast to varying degrees.
Improving vision for the visually impaired
Our team of scientists at the Department of Clinical Neurosciences at the University of Oxford is developing an innovative visual prosthesis: an electronic assistance system that supports the vision of visually impaired people.
We are currently trialling a new technology that exploits an individual's residual ability to perceive changes in contrast. We acquire and process image data from a head-worn camera's video feed to detect nearby objects of interest, such as people, signposts, or obstacles. Each detected object is reduced to a simple bright image rendered on a small set of LEDs in the head-worn display. Even with a very small number of LEDs, we can convey the location and type of obstacles in close proximity to the wearer of the device.
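The image-reduction step described above can be sketched in plain NumPy. This is not the article's LabVIEW implementation, and the 8×16 grid shape is an assumption (the article only states, later, that the array has 128 LEDs):

```python
import numpy as np

def frame_to_led_map(frame, led_rows=8, led_cols=16):
    """Reduce a grayscale camera frame to a low-resolution LED
    brightness map by block averaging. The 8x16 grid is an
    assumption consistent with a 128-LED array."""
    h, w = frame.shape
    # Crop so the frame divides evenly into LED-sized blocks.
    h_crop, w_crop = h - h % led_rows, w - w % led_cols
    blocks = frame[:h_crop, :w_crop].reshape(
        led_rows, h_crop // led_rows, led_cols, w_crop // led_cols)
    # The mean intensity of each block drives one LED's brightness (0-255).
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
led = frame_to_led_map(frame)
print(led.shape)  # (8, 16)
```

In the real system, a detection step would first isolate the object of interest; here the whole frame is reduced, which is the simplest version of the idea.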
Ultimately, we hope to package this technology into a pair of electronic glasses, which we affectionately call "Smart Specs". These glasses will enable more visually impaired people to live independently by helping them locate nearby objects and survey their surroundings. When mass-produced, Smart Specs should cost about the same as a modern smartphone, offering assistance comparable to that of a fully trained guide dog at a fraction of the cost.
Building a prosthetic simulation environment to verify our design
We started by simulating the functionality of a retinal prosthesis and exploring how to increase the amount of information a low-resolution display image can convey. We used LabVIEW and the NI Vision Development Module to develop the simulation software. The module supports a variety of camera types and provides ready-made image processing, image acquisition, display, and image recording functions, so we could acquire raw images quickly without extensive development. We have published our methods and results (van Rheede, Kennard, and Hicks, Journal of Vision, 2010).
In this first study, we proposed using machine vision to distil the important information in a video stream and recreate it as a bright, low-resolution image that might help people with minimal vision. This led to our ongoing research, which is based entirely on LabVIEW, NI-IMAQ, and the Vision Development Module.
We followed these steps to develop our system:
- Simulate blindness.
- Develop real-time image optimizations such as edge detection and contrast enhancement.
- Develop real-time object detection algorithms, and explore different methods of simplifying images into bright output suitable for people with severe visual impairment.
- Develop a fast face detection algorithm that feeds the simplified image output.
- Develop a real-time, orientation-independent text recognition algorithm.
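The edge-detection and contrast-enhancement steps above can be illustrated with a plain NumPy sketch. This is a stand-in for the NI Vision functions the team actually used, not their implementation:

```python
import numpy as np

def sobel_edges(img):
    """Approximate edge magnitude with 3x3 Sobel kernels, a
    simple stand-in for a machine-vision edge-detection step."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape, float)
    gy = np.zeros(img.shape, float)
    for i in range(3):
        for j in range(3):
            win = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)  # gradient magnitude per pixel

def stretch_contrast(img):
    """Linear contrast stretch to the full 0-255 range."""
    lo, hi = float(img.min()), float(img.max())
    return ((img - lo) / max(hi - lo, 1e-9) * 255).astype(np.uint8)
```

Boosting contrast after edge extraction matches the article's goal: residual vision is mostly sensitive to contrast, so the output should use the full brightness range.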
We performed a proof-of-principle study of the described technique on healthy controls (under simulated blindness) and on a blind participant, and found that both could use the system to find and identify objects in the environment that were previously invisible to them.
Using functions provided by the NI Vision Development Module, we implemented various processing algorithms, such as downsampling and a Gaussian-blur-based detail reduction, to process the acquired images. We also used several of the module's higher-level functions, such as pattern matching and optical character recognition, to detect visual objects of interest. But we were by no means limited to the functions provided by the module; for example, we created a face detection algorithm using functions from the colour-contrast functions palette.
Initially, test objects were presented to subjects via a commercial head-mounted display (HMD), but we soon realized that we could build a custom low-resolution display from an array of LEDs driven over a serial interface. To integrate this custom head-mounted display into the simulation system, we chose the NI USB-8451 I2C/SPI interface module, which lets our object recognition software generate a bright image display quickly. We can refresh all 128 LEDs in the array faster than the human eye can perceive flicker.
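Driving 128 LEDs over SPI typically means packing the frame into a byte stream for a shift-register chain. The sketch below shows one plausible packing (one bit per LED, on/off); the actual byte layout and any brightness modulation in the Oxford system are not described in the article, so this wiring is hypothetical:

```python
import numpy as np

def pack_led_frame(led_map, threshold=128):
    """Pack a 128-LED brightness map into 16 bytes for an SPI
    shift-register chain: one bit per LED, MSB first.
    Hypothetical layout; the article does not give the real one."""
    bits = (np.asarray(led_map).ravel() >= threshold).astype(np.uint8)
    return np.packbits(bits).tobytes()

frame = np.zeros((8, 16), dtype=np.uint8)
frame[3, :] = 255  # light one full row of LEDs
payload = pack_led_frame(frame)
print(len(payload))  # 16 bytes carry all 128 LED states
```

A 16-byte payload per frame is tiny, which is why the whole array can be refreshed well above the flicker-fusion rate even over a modest SPI clock.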
Benefits of NI Solutions
By using the USB-8451 interface module to both collect gyroscope data (I2C) and control the LED display (SPI), we minimized the amount of hardware required, which simplified system development and reduced cost. We also considered serial interface devices from other suppliers, but chose the USB-8451 because it integrated so easily into our system. In addition, the USB-8451 driver installs a large number of useful example programs, which further accelerated our development.
As the application development environment for our simulation software, we considered nothing other than LabVIEW. As an avid LabVIEW developer of 10 years, I have found that no other application development environment (ADE) provides the same fast, flexible development and debugging experience. In addition, LabVIEW's ready-made vision processing functions are convenient, easy to use, and highly productive, which was essential for meeting the needs of our project.
Technology Outlook
This technology has great potential. We can use colored LEDs to convey different kinds of information, so that the wearer can distinguish the importance of objects such as pedestrians or road signs. We can also modulate the brightness of the LED array to reflect the distance of the detected object.
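One simple way to realize this mapping is below. The specific class-to-colour assignments and the 5-meter range are illustrative assumptions, not details from the article:

```python
def led_signal(distance_m, obj_class, max_distance=5.0):
    """Map a detected object's distance to LED brightness
    (nearer -> brighter) and its class to a colour channel.
    Class names, colours, and range are illustrative only."""
    colour = {"pedestrian": "red", "sign": "green"}.get(obj_class, "white")
    # Clamp closeness to [0, 1]: 0 at max range, 1 at zero distance.
    closeness = max(0.0, min(1.0, 1.0 - distance_m / max_distance))
    return colour, round(255 * closeness)

print(led_signal(1.0, "pedestrian"))  # ('red', 204)
```

A linear distance-to-brightness ramp is the simplest choice; a perceptually uniform (e.g. logarithmic) ramp might suit residual vision better, but that is a design question for trials.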
We believe that, with further work, we can improve the character recognition to the point where it can pick out newspaper headlines in the camera image and read them aloud through the wearer's integrated headphones. Similarly, we can use the barcode recognition already included in the NI Vision Development Module to identify individual items, download their price information, and read it back to the wearer.
Conclusion
We have now started the first full clinical trial of this new technology. Although it is still at an early stage of development, we are confident that this innovation will open up new ways to help the visually impaired.
As mentioned above, we have big plans for this technology. With LabVIEW at the core of our simulation system and a highly maintainable software architecture, extending the existing system to integrate future innovations will be simple and efficient.