Design of embedded media player system based on Qt

Publisher: BlissfulAura | Last update: 2012-03-20 | Source: 微计算机信息 (Microcomputer Information) | Keywords: Embedded

1 Introduction

As user requirements continue to grow, more and more embedded devices adopt the powerful, low-cost embedded Linux as their operating system and use increasingly sophisticated graphical user interfaces. With its rich functionality and good portability, Qt has become a widely used GUI system. Alongside the development of embedded operating systems and their GUIs, embedded application software has grown in importance; in particular, the embedded media player has become an indispensable part of many systems because it satisfies users' audio-visual needs. Developing media players on embedded systems has therefore become a technical hotspot, and many current embedded products include one. Implementing a media player on a Qt-based embedded system thus has both practical significance and application value.

2 Embedded Media Player System Design

2.1 Architecture Design

The architecture of the embedded media player is shown in Figure 1. The player is written in pure C++ and runs on embedded Linux; it adopts Qt/Embedded as the GUI to provide a powerful user interface, implements an open plug-in interface to enhance extensibility, and uses the kernel frame buffer for output, eliminating dependence on any specific architecture and ensuring portability. The media player is an upper-level application located in Linux user space; the whole design is aimed at portability.

The graphical user interface is developed on Qt/Embedded. By calling the class library provided by Qt/Embedded, the basic windows for managing multimedia files can be designed as needed, including windows for opening and deleting files and for displaying file length and play time, as well as windows for managing playlists and controlling playback. These all interact directly with the user. Because Qt/Embedded is used as the GUI, portability is guaranteed.

File input mainly reads and parses the files specified by the user, and displays the obtained file length, playback time, encoding format, audio and video frame rate, file title and other contents in a pre-designed window in combination with MIME processing.

The plugin interface call mainly integrates all operations on the decoder into a unified open interface. According to the file information parsed in the previous part, it searches for the corresponding decoder plugin and calls it. If no available decoder is found, it can return information to remind the user to add the corresponding plugin. By implementing such an interface, the scalability of the player can be greatly improved, so this part is the core of the media player.

File decoding and output is responsible for decoding the audio and video streams by calling the decoder, then using Qt/Embedded's ability to manipulate the kernel frame buffer (FrameBuffer) directly, sending the decoded data straight to the output device through the FrameBuffer. This avoids dependence on architecture-specific layers such as DirectShow or OpenGL and further enhances portability.

Figure 1 Architecture of embedded media player

3 Plug-in interface module and decoding library module

3.1 Plug-in interface module design

The plug-in interface module is the core part of the entire player. It encapsulates the operation of specific decoders, thus building a bridge between the input and output modules to ensure the normal flow of data. The plug-in interface module mainly provides the following methods to control the decoder:

1) File support function: bool isFileSupported(const QString& filename);

This function determines whether a file can be played by checking its extension; it returns true if supported, false otherwise. Recognized extensions include asf, avi, dat, mp2, mp3, mpeg, mpg, ogg and wav. When a new decoder plug-in is added to support a new format, its extension simply needs to be added to this function's support list.
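The extension check described above can be sketched as follows. The real player takes a QString; std::string is used here only to keep the example self-contained, and the helper is a sketch rather than the paper's actual implementation.

```cpp
#include <algorithm>
#include <cctype>
#include <set>
#include <string>

// Sketch of the extension-based support check. The recognized
// extensions mirror the list given in the text.
bool isFileSupported(const std::string& filename) {
    static const std::set<std::string> supported = {
        "asf", "avi", "dat", "mp2", "mp3", "mpeg", "mpg", "ogg", "wav"
    };
    std::string::size_type dot = filename.rfind('.');
    if (dot == std::string::npos) return false;      // no extension at all
    std::string ext = filename.substr(dot + 1);
    std::transform(ext.begin(), ext.end(), ext.begin(),
                   [](unsigned char c) { return std::tolower(c); });
    // Supporting a new decoder plug-in only means adding its
    // extension to the set above.
    return supported.count(ext) != 0;
}
```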

2) Get file information function const QString& fileInfo();

It is used to obtain various information of the file and save the result in a constant string for easy calling of other functions. This information includes: play time, audio format, audio bit rate, audio channel, audio frequency, video format, video bit rate, video height, video width, etc.

3) Read audio sampling function

bool audioReadSamples(short* output, int channels, long samples, long& samplesRead, int);

This function calls the decoder to read audio sample data and is the core of audio processing. output points to the buffer that receives the decoded samples, channels is the number of channels, samples is the number of samples requested, and samplesRead returns the number of samples actually read.
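How this interface might be driven can be sketched with a hypothetical stand-in decoder; the stub below (its internal names and the 4096-sample stream length are invented for illustration) fills the caller's buffer with silence so the calling pattern stands alone.

```cpp
#include <vector>

// Hypothetical stand-in for a decoder plug-in. The parameters mirror
// the prototype in the text: 'output' receives the decoded interleaved
// samples, 'samplesRead' reports how many were produced.
bool audioReadSamples(short* output, int channels, long samples,
                      long& samplesRead, int /*stream*/) {
    static long remaining = 4096;     // pretend the file holds 4096 samples
    long n = samples < remaining ? samples : remaining;
    for (long i = 0; i < n * channels; ++i)
        output[i] = 0;                // a real plug-in writes decoded PCM here
    remaining -= n;
    samplesRead = n;
    return n > 0;                     // false once the stream is drained
}

// Typical driving loop on the player side: keep requesting blocks of
// samples until the decoder reports that nothing more was read.
long drainAudio(int channels, long blockSize) {
    std::vector<short> buf(blockSize * channels);
    long total = 0, got = 0;
    while (audioReadSamples(buf.data(), channels, blockSize, got, 0) && got > 0)
        total += got;
    return total;
}
```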

4) Read video frame function

bool videoReadScaledFrame(unsigned char** output_rows, int, int, int in_w, int in_h, int out_w, int out_h, ColorFormat fmt, int);

This function calls the decoder to read a video frame and is the core of video processing. output_rows is an array of pointers to the output rows; in_w, in_h, out_w and out_h give the width and height of the input and output frames respectively; fmt specifies the color format; the return value indicates whether the call succeeded.
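What the in_w/in_h to out_w/out_h scaling and the row-pointer output mean can be illustrated with a minimal nearest-neighbour sketch for a single 8-bit plane; a real plug-in would delegate scaling to the decoder library and honour the requested ColorFormat.

```cpp
// Minimal nearest-neighbour scaling sketch for one 8-bit plane,
// illustrating the role of output_rows in videoReadScaledFrame:
// each entry of output_rows points at one row of the output frame.
void scalePlane(const unsigned char* in, int in_w, int in_h,
                unsigned char** output_rows, int out_w, int out_h) {
    for (int y = 0; y < out_h; ++y) {
        int sy = y * in_h / out_h;               // nearest source row
        for (int x = 0; x < out_w; ++x) {
            int sx = x * in_w / out_w;           // nearest source column
            output_rows[y][x] = in[sy * in_w + sx];
        }
    }
}
```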

5) Audio and video synchronization function definition: int Sync(File* fp, int auIndex, struct timeval* vtime);

fp is a pointer to the opened multimedia file, vtime is the time extracted from the frame header of the video frame currently being played, and auIndex is the current audio frame count, i.e. the frame currently playing. From these parameters the difference between the target frame and the current frame can be calculated, and the audio stream can then be jumped forward (if it lags) or backward (if it leads) by that difference. The Sync function also feeds the difference back to the audio decoder so that it can correct the timestamps of the data stream, yielding better audio-video synchronization. The overall idea is to start a second thread while the video stream is playing, open and play the corresponding audio stream in it, and synchronize the audio against the video thread.
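The drift calculation at the heart of Sync() can be sketched as below. The function and the audioFramesPerSec parameter are assumptions for illustration; the real player would derive the audio frame rate from the stream header.

```cpp
#include <sys/time.h>

// Sketch of the frame-difference calculation inside Sync(). Given the
// video time taken from the current frame header (vtime) and the index
// of the audio frame actually being played (auIndex), it returns how
// many audio frames the audio stream must jump: positive means the
// audio lags (jump forward), negative means it is ahead (jump back).
int audioDriftFrames(const struct timeval* vtime, int auIndex,
                     int audioFramesPerSec) {
    // Audio frame that *should* be playing at the video's timestamp.
    long target = vtime->tv_sec * audioFramesPerSec
                + (vtime->tv_usec * (long)audioFramesPerSec) / 1000000;
    return (int)(target - auIndex);
}
```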

In addition there are the plug-in initialization and registration function void pluginInit(), the file initialization function void fileInit(), the seek function bool seek(long pos), the video and audio queue flushing functions flushVideoPackets() and flushAudioPackets(), and the next-packet function MediaPacket* getAnotherPacket(int stream); these are not described in detail here.

3.2 Decoding library module

The main task of the decoding library module is to provide decoders to the plug-in interface module. Considering the portability and extensibility of the player, this system uses the FFmpeg decoding library, an open-source collection of codecs under Linux. It supports a wide range of audio and video coding standards as well as file conversion and AVI creation, and is very powerful. Both the ffdshow filter available under Windows and the MPlayer player under Linux are built on the FFmpeg libraries.

The decoding library contains decoders and separators (demuxers). A decoder decodes the audio or video data stream, while a separator splits the data in the file stream into an audio stream and a video stream. Audio and video data are decoded separately, so both components are indispensable.

4 Implementation of Embedded Media Player System

4.1 Overall design of data flow

Figure 2 shows the data flow of the system. First, the input module reads data from the data source (the multimedia file); at this stage it reads the file header and performs some basic processing, such as obtaining the file length, the encoding type and the bit rate, and judging whether the file can be played. The plug-in interface module then calls the separator plug-in to split the multimedia data into a video stream and an audio stream, which are queued through the video FIFO and audio FIFO; finally they are handed to the corresponding decoders. The audio data is resampled, while the video data is read and decoded frame by frame; the resampled audio and the rendered video are then synchronized and output through the video and audio output modules respectively. Reading, separation, decoding and output all run concurrently in multiple threads using the class library provided by Qt: while decoding, the program keeps reading data into the buffer and queuing it for processing, which improves efficiency.
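The reader/decoder pipeline with its FIFOs can be sketched as follows, using std::thread and std::queue in place of Qt's thread and container classes so the example stands alone; the structure names are invented for illustration.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// One thread plays the role of the input module feeding packets into a
// FIFO; another plays the decoder draining it, as in the text above.
struct PacketFifo {
    std::queue<int> q;                 // a real FIFO would hold MediaPacket*
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    void push(int pkt) {
        { std::lock_guard<std::mutex> lk(m); q.push(pkt); }
        cv.notify_one();
    }
    void finish() {
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_all();
    }
    bool pop(int& pkt) {               // blocks until data or end of stream
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return !q.empty() || done; });
        if (q.empty()) return false;
        pkt = q.front(); q.pop();
        return true;
    }
};

// Run a tiny pipeline: the "reader" pushes n packets while the
// "decoder" counts what it receives; returns the number decoded.
int runPipeline(int n) {
    PacketFifo fifo;
    int decoded = 0;
    std::thread reader([&] {
        for (int i = 0; i < n; ++i) fifo.push(i);
        fifo.finish();
    });
    std::thread decoder([&] {
        int pkt;
        while (fifo.pop(pkt)) ++decoded;
    });
    reader.join();
    decoder.join();
    return decoded;
}
```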

The main function of the input module is to read the multimedia file specified by the user. Since multimedia files of different formats need to call different decoders to open normally, considering the modularization of the program, the actual file opening work is handed over to the plug-in interface module to call the corresponding decoder. The input module only performs some basic processing on the file and caches the file content, and then transmits the original data stream to the plug-in interface module. The user first selects the file to be played through the graphical user interface and issues an open command, which will cause the input module to receive a signal and obtain the file path and file name of the file to be played through the information returned by the user interface. Next, the input module will check whether the file path is legal and whether the file is empty, and then send a signal to the plug-in interface module to notify the plug-in interface module to find an available decoder and prepare for file decoding. The next step is to call the playback initialization function init(), the specific process of which will be described in detail below, and finally hand over the work to the plug-in interface module to let it call the open() function of the decoder of the corresponding file format.

The main task of the output module is to send the audio and video data produced by the decoder to the output devices (such as the LCD display and speaker). According to the content, the output module is divided into two sub-parts, audio output and video output. The two parts output largely independently of each other and are kept in step by the synchronization control of the plug-in interface module. Video output differs slightly from audio output: it uses Qt/Embedded's ability to write to the FrameBuffer directly. The frame buffer is display memory; using it improves drawing speed and overall performance. The device associated with the frame buffer is /dev/fb0 (major device number 29, minor device number 0).
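Accessing /dev/fb0 as the video output path might look like the sketch below: it opens the device, queries the screen geometry with the standard frame buffer ioctls, and maps the display memory. The error handling is deliberately minimal, and the pixel-offset helper is an illustrative addition, not code from the paper.

```cpp
#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

// Byte offset of pixel (x, y) inside the mapped frame buffer; a blit
// loop writing decoded frames is built on this calculation.
static inline long pixelOffset(int x, int y, int line_length,
                               int bits_per_pixel) {
    return (long)y * line_length + (long)x * bits_per_pixel / 8;
}

// Sketch of opening /dev/fb0 and mapping it, as the video output module
// does. Error paths simply give up; a real player reports them to the GUI.
unsigned char* mapFrameBuffer(struct fb_var_screeninfo* vinfo,
                              struct fb_fix_screeninfo* finfo) {
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) return nullptr;                    // no frame buffer device
    if (ioctl(fd, FBIOGET_VSCREENINFO, vinfo) < 0 ||
        ioctl(fd, FBIOGET_FSCREENINFO, finfo) < 0) {
        close(fd);
        return nullptr;
    }
    size_t len = (size_t)finfo->line_length * vinfo->yres;
    void* fb = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);
    close(fd);                                     // the mapping survives the close
    return fb == MAP_FAILED ? nullptr : (unsigned char*)fb;
}
```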

Figure 2 System data flow

4.2 Embedded Audio and Video Synchronization Design

The basic idea of this method is to treat the video stream as the master media stream and the audio stream as the slave stream. The video playback rate is kept unchanged; according to the actual display time determined by the local system clock, synchronization is achieved by adjusting the audio playback speed. The audio-video synchronization data flow of the whole system is shown in Figure 3. First a local system clock reference (LSCR) is selected, whose time is required to increase linearly. The LSCR is distributed to the video and audio decoders, which generate the exact display or playback time of each frame from the frame's PTS value and the clock reference. In other words, when the output stream is generated, each data block is timestamped against the local reference clock (generally with a start time and an end time); during playback the timestamp on each block is read and playback is scheduled against the local system clock reference.

Figure 3 Audio and video synchronization data flow

In timestamp-based playback, simply waiting for late data blocks or rushing early ones is often not enough. To regulate playback more actively and effectively, a feedback mechanism is introduced: by comparing the audio and video timestamps, the playback status of the current stream is fed back to the upstream "source". If the audio stream lags, the audio decoder is immediately told to speed up its output; if it lags too much, the current data is discarded and the next frame is skipped directly. If the video stream lags, the audio decoder is told to slow its output and wait for the video; if the lag is too large, frames are likewise skipped. The data stream is first split by the separator into video and audio streams, each of which passes through its decoder while the local system clock provides timestamp control; once the exact display or playback time is obtained, the timestamps are compared. If they are in sync the frame is output directly; if not, audio frames are skipped or delayed until synchronization is restored.
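The timestamp comparison and feedback decision described above can be sketched as a small decision function. The tolerance and skip thresholds here are illustrative values chosen for the sketch, not figures taken from the paper.

```cpp
#include <cstdlib>

// Possible reactions of the feedback mechanism described in the text.
enum SyncAction { PLAY, SPEED_UP_AUDIO, SLOW_DOWN_AUDIO, SKIP_FRAME };

// 'driftMs' is the audio timestamp minus the video timestamp in
// milliseconds. Within the tolerance the streams count as in sync;
// beyond the skip limit data is dropped outright, as the text says.
SyncAction decideSync(long driftMs) {
    const long tolerance = 40;      // roughly one frame period at 25 fps
    const long skipLimit = 500;     // lagging too far: skip frames directly
    if (std::labs(driftMs) <= tolerance) return PLAY;
    if (std::labs(driftMs) > skipLimit)  return SKIP_FRAME;
    // Audio behind video -> speed the audio up; audio ahead -> slow it down.
    return driftMs < 0 ? SPEED_UP_AUDIO : SLOW_DOWN_AUDIO;
}
```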

5 Conclusion

The innovation of this paper is that the system has good portability: its implementation and core code can be reused in similar applications and ported to different operating systems and platforms with only minor modifications, so it can be widely applied in embedded systems such as PDAs and smart phones, with high economic value, and can also serve as a reference for the development of other embedded software. Secondly, based on users' basic needs, this paper proposes a media player design built on the embedded Linux operating system and the Qt/Embedded graphical user interface. The design has low coupling, high cohesion, extensibility and portability, and has been implemented on that basis. The player supports multimedia files encoded in MPEG-1, MPEG-2 and MPEG-4, occupies little storage space, responds quickly, and supports playback control and playlists; the interface can switch freely between Chinese and English, and users can open files from any location. The economic benefit of the project is 500,000 yuan.

