With the advent of the digital and networking era, and in particular the development of broadband wireless networks, it has become practical to carry high-volume services such as audio and video over wireless links. At the same time, because of the unique sensory appeal of audio and video, demand for such applications is increasingly urgent. Wireless multimedia, the product of converging multimedia and mobile communication technologies, has become a hot topic in today's communications field. Given the open-source nature of the Linux kernel, it is used as the operating system so that the whole system achieves good real-time performance and stability. The system uses an ARM11 as the core processor, adopts the new-generation video coding standard H.264 for encoding and decoding, and transmits audio and video over a wireless network. It makes full use of the Multi-Format video Codec (MFC) integrated in the S3C6410 microprocessor, which markedly improves the system's cost-effectiveness. The system provides a sound solution for wireless multimedia audio and video transmission, can be applied widely in fields such as remote monitoring and video calling, and has good practical value and promising prospects for adoption.
1. System overall design
The audio and video acquisition modules of both communicating parties collect the analog signals and pass the captured audio and video data to the audio and video management module. After compression, a packet header is added and the data are sent to the peer over WiFi. On reception, the peer processes the packet header to determine the audio or video frame type and passes the payload to the decompression module, which restores the audio and video data. Both devices contain an embedded audio and video management module and a wireless transceiver module. The WiFi transceiver operates in the 2.4 GHz band and complies with the IEEE 802.11b wireless LAN standard.
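The packet-header step above can be sketched in C. The article does not give the exact header layout, so the 5-byte type-plus-length format and the field names below are illustrative assumptions, not the actual wire format.

```c
#include <stdint.h>

/* Hypothetical packet header: one byte of frame type followed by a
 * big-endian 32-bit payload length.  Layout is an assumption. */
enum { FRAME_AUDIO = 1, FRAME_VIDEO = 2 };

typedef struct {
    uint8_t  frame_type;   /* FRAME_AUDIO or FRAME_VIDEO */
    uint32_t payload_len;  /* compressed payload length in bytes */
} av_header_t;

/* Serialize the header into a 5-byte wire format before sending. */
void header_pack(const av_header_t *h, uint8_t buf[5])
{
    buf[0] = h->frame_type;
    buf[1] = (uint8_t)(h->payload_len >> 24);
    buf[2] = (uint8_t)(h->payload_len >> 16);
    buf[3] = (uint8_t)(h->payload_len >> 8);
    buf[4] = (uint8_t)(h->payload_len);
}

/* Parse the wire format back; the receiver uses frame_type to route the
 * payload to the audio or video decompression thread. */
void header_parse(const uint8_t buf[5], av_header_t *h)
{
    h->frame_type  = buf[0];
    h->payload_len = ((uint32_t)buf[1] << 24) | ((uint32_t)buf[2] << 16) |
                     ((uint32_t)buf[3] << 8)  |  (uint32_t)buf[4];
}
```

The receiver reads the fixed-size header first, then reads exactly `payload_len` bytes, so frame boundaries survive TCP's byte-stream semantics.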
2. System hardware design
The hardware design uses an ARM11 as the core microprocessor, clocked at 532 MHz, which meets the requirements of real-time processing. The board integrates 256 MB of SDRAM, 2 GB of FLASH, an audio recording and playback interface, a Camera video interface, a wireless WiFi interface, an LCD interface, an SD card interface, and so on. The software stack uses the open-source Linux 2.6.28 kernel, a yaffs2 root file system, and Qtopia 4.4.3 as the user interface, providing a good platform for development, debugging and system design.
2.1 Audio and video acquisition module
The audio path uses the IIS (Inter-IC Sound) interface integrated in the processor together with a WM9714 audio codec chip. IIS is a bus standard defined by Philips for transferring audio data between digital audio devices; it specifies both the hardware interface and the format of the audio data. On the basis of this hardware and interface specification, audio output, Line-in input and Mic input are implemented.
Video acquisition uses an OV9650 CMOS camera module with a resolution of up to 1.3 megapixels, which connects directly to the Camera interface of the OK6410 development board. The board targets applications ranging from consumer electronics and industrial control to car navigation, multimedia terminals and embedded education. The camera's structure is relatively simple, and hardware drivers are provided, making it easy to use and debug.
2.2 Wireless transmission module
The wireless transmission module is a WiFi module operating in the public 2.4 GHz band. It complies with the IEEE 802.11b/g standards and can later be used to connect the terminal to the Internet. Its maximum data rate is 54 Mb/s, it supports WinCE and Linux, and its communication range is about 100 m indoors and up to 300 m in open outdoor areas. Only a simple configuration of the ARM-Linux operating system is needed to switch from the infrastructure (Ethernet-style) connection mode to the point-to-point ad-hoc mode. After the system starts, a Qt-based window makes it convenient to switch the connection mode.
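On a Linux system this mode switch is typically done with the standard wireless tools; a minimal sketch, assuming the interface is `wlan0` (the ESSID, channel and IP addresses below are arbitrary examples, not values from the article):

```sh
# Bring the wireless interface down before changing the mode
ifconfig wlan0 down
# Switch from managed (infrastructure) mode to ad-hoc mode
iwconfig wlan0 mode ad-hoc
# Both terminals must agree on the ESSID and channel
iwconfig wlan0 essid av_link channel 6
# Assign a static address; the peer uses another address in the same subnet
ifconfig wlan0 192.168.1.1 netmask 255.255.255.0 up
```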
WiFi also offers good scalability: through a wireless router the terminal can reach the wide area network, which gives the design good application prospects. Moreover, most mobile phones and other terminal devices already have WiFi, and the software could later be ported to Android, making development and porting convenient. This reduces the cost and development cycle of real-time audio and video transmission and offers a new audio and video communication method for modern mobile communications.
Once the WiFi driver is configured, application-layer programming is exactly the same as for an Ethernet interface. Because this design moves a large volume of audio and video data, UDP is unsuitable: when the data volume is too large or the radio signal is poor, UDP suffers serious packet loss. The connection-oriented TCP protocol is therefore selected to guarantee effective transmission of the system's audio and video. Since TCP acknowledges and retransmits data, packet loss need not be handled separately on the local network, which provides a reliable basis for the system's functions.
3. Software design
The software is divided into the user interface design and the modules for data processing, transmission and so on.
3.1 Overall software design based on multithreading
The system software architecture is shown in Figure 1. It is a one-way audio and video stream control flow: acquisition, compression, transmission, reception, decompression, processing and playback. Each module runs in its own thread, and semaphores regulate thread priority so that the threads form a processing loop that handles the audio and video data stream efficiently. The functions are modularized, easy to modify and port, and the code is compact.
Figure 1 Software Architecture
3.2 Echo cancellation
When the system was first brought up, it suffered from echo and delay. The delay is introduced by the acquisition and transmission process, so it can only be shortened as much as possible, not eliminated; this remains one of the system's limitations. The echo, in turn, was caused by the delay. In the end the open-source Speex algorithm was adopted to cancel the echo: the algorithm is cross-compiled into a library file and added to the Linux system, after which the Speex API functions can be called to perform acoustic echo cancellation.
3.3 Embedded audio and video synchronization
The basic idea of this article is to use video stream as the main media stream and audio stream as the secondary media stream. The video playback rate remains unchanged, the actual time is determined according to the local system clock, and audio and video synchronization is achieved by adjusting the audio playback speed.
First, a local system clock reference (LSCR) is selected and supplied to both the video and audio decoders. Each decoder compares the presentation time stamp (PTS) of every frame with the local system clock to derive that frame's exact display or playback time. In other words, when the output data stream is generated, each data block is stamped with the time on the local reference clock (generally a start time and an end time). During playback, the timestamp on each block is read and playback is scheduled against the local system clock reference.
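The PTS-versus-clock comparison above reduces to a simple scheduling decision per frame. The function name, millisecond units and 40 ms tolerance in this sketch are illustrative assumptions, not values from the article; for audio, the playback rate would be adjusted rather than dropping frames.

```c
#include <stdint.h>

typedef enum { PLAY_NOW, WAIT, DROP } sync_action_t;

/* pts_ms and clock_ms are times on the local system clock reference.
 * A frame ahead of the clock waits; a frame too far behind is dropped. */
sync_action_t schedule_frame(int64_t pts_ms, int64_t clock_ms, int64_t *wait_ms)
{
    const int64_t tolerance_ms = 40;   /* roughly one video frame period */
    int64_t diff = pts_ms - clock_ms;

    if (diff > tolerance_ms) {         /* early: hold until its time comes */
        *wait_ms = diff;
        return WAIT;
    }
    if (diff < -tolerance_ms)          /* late beyond tolerance: discard */
        return DROP;
    *wait_ms = 0;                      /* within tolerance: present now */
    return PLAY_NOW;
}
```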
The audio and video synchronization data flow of the entire system is shown in Figure 2.
Figure 2 Audio and video synchronization data flow
4. Audio and video channel management
In order to save memory resources and facilitate channel management, this design adopts channel-based thread pool management, and audio and video tasks are completed by their own channels respectively.
Audio and video acquisition are handled by a single thread using the select system call. Each time the thread runs, it checks whether the audio or video device is ready; if so, the data are read into the audio/video buffer and handed to the compression thread, and finally to the sending thread, which packages the data and sends them over TCP. Note that semaphores are used between threads to synchronize this TCP-based audio and video pipeline. After sending, the receiving thread waits for the peer's audio and video data; when data arrive, it examines the packet header, hands the payload to the decompression thread, plays the audio and video, and then waits for the next data from the peer.
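The select-based readiness check described above might look as follows; the function name and parameters are illustrative assumptions, with the device descriptors supplied by the caller.

```c
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

/* Poll the audio and video device descriptors and report which (if any)
 * has data ready to be collected.  Returns the number of ready
 * descriptors, or -1 on error. */
int wait_for_av(int audio_fd, int video_fd, int timeout_ms,
                int *audio_ready, int *video_ready)
{
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(audio_fd, &rfds);
    FD_SET(video_fd, &rfds);

    struct timeval tv = { timeout_ms / 1000, (timeout_ms % 1000) * 1000 };
    int maxfd = audio_fd > video_fd ? audio_fd : video_fd;

    int n = select(maxfd + 1, &rfds, NULL, NULL, &tv);
    if (n < 0)
        return -1;                                   /* select error */
    *audio_ready = FD_ISSET(audio_fd, &rfds) ? 1 : 0;
    *video_ready = FD_ISSET(video_fd, &rfds) ? 1 : 0;
    return n;
}
```

The acquisition thread would call this in its loop and dispatch each ready device's data to the corresponding compression thread.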
Due to the high-speed processing of the processor and the high-efficiency video hardware H.264 decompression, the real-time performance of the entire system basically meets the requirements. The embedded audio and video management module realizes the overall control and real-time processing of the entire system, providing a reliable guarantee for audio and video data management.
5. Conclusion
At present, video surveillance products based on embedded wireless terminals are highly favored for advantages such as no wiring, long transmission distance, strong environmental adaptability, stable performance and convenient communication, and they play an irreplaceable role in safety monitoring, patrol communication, construction-site communication, personnel deployment and similar settings. This system is a wireless audio and video communication handheld terminal based on embedded Linux. It is small and easy to carry, and it is powered by a lithium battery stepped down through a switching power supply chip, which is far more efficient than traditional linear DC regulation. It can be used for outdoor visual entertainment, construction-site monitoring, large-scale security communication and other scenarios, and has broad application prospects.