The development of wearable electronic systems, whether for biometrics, communications, or virtual reality, extends the concept of embedded systems into new and uncharted territory. Putting sensors and output devices on the operator has given rise to a new term, the cyborg: the fusion of human and embedded system.
Wearable systems open up new vistas for practical applications and demand a new vision of embedded architecture. Sensor clusters in stick-on or ingestible form are completely cut off from traditional power, ground, and I/O connections. Yet to achieve tiny size and near-zero power consumption, these clusters must still provide local signal processing, storage, and wireless connectivity, and often more. This is the dilemma designers must resolve.
Partitioning the System
One way to approach the wearable-system challenge is to start from a traditional embedded design, with sensors, actuators, and displays attached to the user's body, and then partition it, driven by the needs of mobility, comfort, and concealment. Watch how the system architecture changes when sensors, output devices, and computing resources become physically separated from one another.
As an example, consider the design of smart glasses. To avoid a cliché, we will not discuss familiar consumer products, but instead look at glasses designed by industrial equipment supplier XOEye. These glasses are used for activities such as component inspection, inventory handling, and field maintenance. The system features a stereo pair of 720p video cameras, voice input, and LED and voice output, and is designed to interactively help people complete certain predefined tasks.
XOEye CTO Jon Sharp explained that the glasses capture and analyze the stereoscopic images the user sees: enhancing the ability to distinguish components, measuring size and shape without physical contact or measuring tools, and interacting with technicians during repairs ("adjust the screw on the left first") or warning of potential safety hazards through a flashing red LED ("Don't go there!").
The traditional approach to this kind of design would mount a camera and microphone on the glasses, then perform the video processing, object recognition, and wireless communication from a backpack-mounted computer and battery. The traditional user response to such a design is to take one look at the backpack and carefully back away.
Enter the wearable concept. XOEye's approach is to make the glasses fully self-contained. That goal obviously imposes space and power constraints. We can't do magic, and those constraints force some computation to be done remotely, usually in the cloud. But partitioning the computational load brings new design challenges of its own.
Building the Link
Moving heavy computing tasks to the cloud is not a new concept in the Internet of Things (IoT). Chakra Parvathaneni, senior director of business development at Imagination Technologies, noted that the split varies by application. "A home thermostat does a lot of local processing, but Apple's Siri runs almost entirely in the cloud," he noted.
In XOEye's case, moving tasks to the cloud means either having enough bandwidth to send both video streams in raw form, or compressing the video in real time on the glasses. The latter is possible with existing media-processing chips, given a battery of suitable size. But there is another problem.
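A rough bandwidth estimate shows why raw transport is impractical. The frame rate and color depth below are assumptions for illustration, not XOEye's published specifications:

```python
def raw_stream_mbps(width, height, fps, bits_per_pixel, streams=1):
    """Uncompressed video bandwidth in megabits per second."""
    return width * height * fps * bits_per_pixel * streams / 1e6

# Two uncompressed 720p streams at an assumed 30 fps, 12 bits/pixel (YUV 4:2:0)
stereo = raw_stream_mbps(1280, 720, 30, 12, streams=2)
print(f"{stereo:.0f} Mbps")  # roughly 663 Mbps -- far beyond a reliable WiFi uplink
```

H.264-class compression at ratios near 100:1 would bring this down to a few Mbps, which is why real-time encoding in the glasses is attractive despite its power cost.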
“You have to maintain the human interface and certain functions even when there’s no connection,” Sharp cautions. “For example, when you lose WiFi connectivity, you have to be able to identify security issues in real time.” Some functions will require a level of continuous real-time response — something that cloud computing over the far end of the Internet can’t guarantee.
These issues require local processing, which collides with the size, weight, and power constraints of the glasses. XOEye originally hoped to solve the problem with the OMAP architecture, which pairs a CPU with media accelerators. The OMAP SoC can handle conventional media-processing tasks, but, Sharp lamented, "it is impossible to achieve real-time stereo ranging." So XOEye turned to a CPU-plus-FPGA approach, which let them build energy-efficient local accelerators for whatever tasks the application required.
Smart Hub
Even if operating conditions allow for local connectivity to a wireless hub, the round-trip link from the hub through the Internet to the cloud still introduces unacceptable uncertainty. This is one of the architectural challenges facing the IoT. Given these circumstances, if some computing tasks are to be done outside of the wearable device, then they can be placed on the local wireless hub rather than in the cloud (Figure 1). Of course, this cannot be done using only commercial WiFi hubs.
Figure 1. The wearable embedded system becomes a wireless network with a smart hub.
Integrating compute nodes in WiFi hubs greatly increases system design flexibility. Hubs are generally not constrained in space and power consumption, so you can put some compute and storage resources there. Short-range WiFi links can provide reliable broadband, predictable latency connections, allowing hubs to participate in critical control or human-machine interface loops where unexpected latency can be problematic. In addition, hubs have multitasking CPUs and corresponding accelerators to complete many of the processing tasks of remote wearable devices.
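Whether the hub may sit inside a critical control or human-machine interface loop comes down to a worst-case timing budget. A minimal sketch, with illustrative numbers rather than measured ones:

```python
def loop_meets_deadline(link_rtt_ms, jitter_ms, compute_ms, deadline_ms):
    """Worst-case loop time = round trip + worst-case jitter + processing."""
    return link_rtt_ms + jitter_ms + compute_ms <= deadline_ms

# Short-range WiFi hop to the hub: low, bounded jitter fits a 33 ms frame budget.
print(loop_meets_deadline(link_rtt_ms=4, jitter_ms=6, compute_ms=15, deadline_ms=33))

# Internet round trip to the cloud: jitter alone can blow the same budget.
print(loop_meets_deadline(link_rtt_ms=80, jitter_ms=120, compute_ms=15, deadline_ms=33))
```

The decisive term is the jitter bound, not the average latency: a loop that usually closes in time but occasionally does not is exactly the "unacceptable uncertainty" the article attributes to the cloud path.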
Smart RF
What happens when wearables are much smaller than glasses, such as wristbands, shoe inserts, or even ingestible pills? There is no room for a battery large enough to keep WiFi always on. The wireless link moves to Bluetooth or a very low-power short-range technology. The hub is now a wearable itself, worn on a belt or carried in a pocket, within a meter of the sensor, or even closer if it supports only near-field links. And the task-partitioning problem changes in interesting ways.
These constraints dictate what goes into the wearable device. At a minimum there must be sensors, a controller to query those sensors, and a wireless interface. With careful tuning of the duty cycle, tiny batteries can support these loads in low-power devices. But where does the computation go now?
The bandwidth between the sensor and the first level of sensor processing becomes a big question. Can the wireless link carry the raw data stream from the sensor in real time? If not, can some of the energy be spent on increasing the link bandwidth, or local processing at the sensor? If the system user model changes, will the answer be different?
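The energy side of these questions follows from simple duty-cycle arithmetic. The figures below (battery capacity, active and sleep currents, duty cycle) are assumed values chosen to be plausible for a coin-cell sensor node, not data from any product in the article:

```python
def battery_life_hours(capacity_mah, active_ma, sleep_ua, duty_cycle):
    """Average-current battery-life estimate for a duty-cycled sensor node."""
    avg_ma = active_ma * duty_cycle + (sleep_ua / 1000) * (1 - duty_cycle)
    return capacity_mah / avg_ma

# Assumed: 40 mAh coin cell, 8 mA while sensing + transmitting,
# 2 uA asleep, radio and sensor active 0.1% of the time.
life = battery_life_hours(40, 8.0, 2.0, 0.001)
print(f"{life / 24:.0f} days")  # about 167 days
```

Note how the sleep current and the active burst contribute almost equally to the average here; halving either one buys real lifetime, which is why the raw-stream-versus-local-processing question in the paragraph above is really an energy question.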
One way to address this problem is to rethink the RF. Designers tend to treat the baseband processor in the radio interface as a sealed black box. But Parvathaneni of Imagination Technologies recommends looking inside. Imagination, for example, offers a line of radio processing unit (RPU) baseband subsystems that give system designers extra freedom in two ways.
Internally, the Ensigma RPU (Figure 2) includes a general-purpose MIPS CPU core supported by a set of specialized accelerators, Parvathaneni said. Functionality is therefore software-defined, and users can change the RF air interface by changing the code. That is the first freedom: you can tune the power the baseband consumes to match the bandwidth and range requirements of a specific wireless link. Parvathaneni also explained, "In many cases, the air interface leaves room on the MIPS core for application tasks." So system designers can choose an air-interface standard and then load a set of processing tasks into the RPU, ready to respond to changes in the wearable system's operating mode without changing the hardware design. In some cases this flexibility removes the need for a separate MCU or compression engine at the sensor.
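One way to picture a software-defined baseband is as a table of loadable air interfaces, with the system choosing the cheapest one that meets the link's needs at runtime. The interface names and power/bandwidth numbers below are invented for illustration, not Ensigma specifications:

```python
# Hypothetical air-interface catalog: throughput vs. transmit power.
AIR_INTERFACES = {
    "wifi": {"mbps": 54.0, "tx_mw": 300},
    "ble":  {"mbps": 1.0,  "tx_mw": 10},
    "nfc":  {"mbps": 0.4,  "tx_mw": 1},
}

def pick_interface(required_mbps, power_budget_mw):
    """Choose the lowest-power interface that meets the bandwidth need."""
    candidates = [(p["tx_mw"], name) for name, p in AIR_INTERFACES.items()
                  if p["mbps"] >= required_mbps and p["tx_mw"] <= power_budget_mw]
    return min(candidates)[1] if candidates else None

print(pick_interface(0.2, 50))   # nfc: lowest power that still carries 0.2 Mbps
print(pick_interface(5.0, 500))  # wifi: only interface fast enough
```

On fixed-function radio hardware this choice is frozen at design time; the article's point is that a programmable baseband lets it change with the operating mode.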
Figure 2. Imagination Technologies' Ensigma Whisper implements a programmable baseband processor with a MIPS core and a set of low-power accelerators.
Wireless beyond RF
Wearable sensors are getting smaller, lighter, and nearly disposable, so the hardware and energy cost of supporting an air interface matter more and more. In response, IP startup Epic Semiconductor has made an intriguing suggestion: the wireless link need not use radio frequency at all. According to Epic's CTO, Wolf Richter, the answer is electric fields.
Epic has developed a technology that uses external electrodes—small sheets or foils of conductors—for three different purposes. First, the circuit harvests energy from the surrounding electric field. In three to five seconds, the device can harvest enough energy to power a 5 mW load for a period of time. Richter gives examples that include performing tasks on an ARM Cortex-M0 running at 3MHz, rewriting a 15V e-ink nonvolatile display, or briefly activating a 30V printed circuit.
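A simple energy-budget model shows why such a device works in bursts. The harvested-power figure and conversion efficiency below are assumptions for illustration; the article gives only the 3-5 second harvest time and the 5 mW load:

```python
def burst_runtime_s(harvest_uw, harvest_s, load_mw, efficiency=0.8):
    """Seconds a harvested-energy burst can power a load (illustrative model)."""
    energy_uj = harvest_uw * harvest_s * efficiency   # energy banked during harvest
    return energy_uj / (load_mw * 1000)               # load power in uW

# If the electrodes harvest an assumed 500 uW for 4 s at 80% efficiency,
# a 5 mW load runs for a fraction of a second -- enough for a brief
# Cortex-M0 task or one e-ink refresh, matching the burst-mode usage model.
print(f"{burst_runtime_s(500, 4, 5):.2f} s")
```

The arithmetic explains the design pattern: harvest slowly into storage, then spend the banked energy on a short, bounded task.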
Second, the Epic intellectual property (IP) is able to detect any physical phenomenon that modulates an electric field, just like a typical capacitive sensor. For example, Richter said, the device could detect the presence of a person about three meters away. Other more mundane applications include measuring the dielectric constant of nearby surfaces—from which the system can infer the body's temperature, pulse, and muscle activity. Or, in a completely different context, the sensor could infer the degree of spoilage on the surface of packaged meat from changes in the dielectric constant.
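The sensing principle reduces to comparing a field measurement against a calibrated baseline: anything that perturbs the electric field shifts the reading. A minimal sketch with invented capacitance values, not Epic's actual signal chain:

```python
def presence_detected(readings_pf, baseline_pf, threshold_pf=0.5):
    """Flag a field disturbance when any reading departs from the baseline."""
    return any(abs(r - baseline_pf) > threshold_pf for r in readings_pf)

# A person approaching shifts one sample well past the threshold.
print(presence_detected([10.1, 10.0, 11.2], baseline_pf=10.0))  # True
# Ordinary noise around the baseline does not trigger detection.
print(presence_detected([10.1, 9.9, 10.2], baseline_pf=10.0))   # False
```

Real designs would track a drifting baseline and filter noise, but the same comparison underlies both the three-meter presence detector and the dielectric-constant measurements described above.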
Finally, the technology can use the same electrodes for bidirectional signaling, enabling RF-free near-field communication by monitoring and modulating the electric field on the electrodes. In this way, a 0.25 mm² silicon chip can provide power, sensing, and connectivity for a smart surface, patch, or similar medium.