At present, more and more embedded devices such as smartphones, PDAs and tablets support high-definition video capture and playback. These functions are widely used in embedded systems such as game devices, monitoring equipment, video conferencing equipment and digital network TV, and their realization rests on high-performance video hardware codec technology. This article describes how an FFmpeg-based H.264 video hardware codec is implemented on the S3C6410 processor, providing a reference for implementing high-definition video hardware codecs in digital entertainment, video surveillance and video communication systems.
FFmpeg [1] is a free, open-source, cross-platform audio and video streaming solution. It contains the highly capable audio/video codec library libavcodec and provides a complete solution for recording, converting and streaming audio and video. FFmpeg supports more than 40 encoding formats such as MPEG-4 and FLV, and more than 90 decoding formats such as AVI and ASF. Popular players, such as Baofeng Player in China and MPlayer abroad, use FFmpeg for audio/video encoding and decoding.
S3C6410 [2] is an application processor launched by Samsung. It is based on the ARM11 architecture, runs at up to 800 MHz, and provides multimedia hardware acceleration, including hardware codecs for MPEG-4 SP, H.264/H.263 BP and VC-1 (WMV9) video at more than 30 fps. It is used in mobile devices such as phones, tablets and game consoles, as well as other high-performance embedded devices; the Meizu M8 smartphone, for example, is built around the S3C6410.
Although FFmpeg provides a simple application programming interface (API) with which software codecs for various video formats can easily be implemented [3], software codecs for complex formats such as H.264 are impractical in embedded environments with limited processing speed and memory. To use FFmpeg for complex video coding and decoding in such resource-limited environments, this article analyzes the FFmpeg video codec process and the video codec method of the S3C6410 processor, and describes the implementation of an FFmpeg-based H.264 hardware codec on the S3C6410 under embedded Linux.
1 FFmpeg video encoding and decoding process
FFmpeg consists of three main modules: encode/decode, muxer/demuxer, and memory operations. The encode/decode module encodes and decodes audio and video and lives in the libavcodec subdirectory; the muxer/demuxer module merges and separates audio and video streams and lives in the libavformat directory; common utilities such as memory handling live in the libavutil directory. The following takes decoding as an example to analyze the FFmpeg video encoding and decoding process.
The basic decoding process is divided into 4 steps:
① Register all available codecs and muxers/demuxers. The av_register_all(void) function executes the REGISTER_MUXDEMUX(X,x) and REGISTER_ENCDEC(X,x) macros to store information about every muxer/demuxer and codec supported by FFmpeg in memory as linked lists.
② Open the video file. The av_open_input_file(AVFormatContext **ic_ptr, const char *filename, AVInputFormat *fmt, int buf_size, AVFormatParameters *ap) function detects the file format, finds the matching demuxer in the linked list of demuxers, and separates out the video stream information.
③ Get the video information. The av_find_stream_info(AVFormatContext *ic) function determines the video format; the corresponding decoder is then located in the linked list of decoders and opened with avcodec_open(AVCodecContext *avctx, AVCodec *codec) in preparation for decoding.
④ Decode the video frame by frame through the avcodec_decode_video(AVCodecContext *avctx, AVFrame *picture, int *got_picture_ptr, const uint8_t *buf, int buf_size) function.
The encoding process of FFmpeg is similar. The difference is that in step ③, the corresponding video encoder is located in the linked list of encoders according to the required output format, and encoding is then performed.
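As an illustration, a minimal decode loop built from the functions named above might look like the following sketch. It uses the legacy FFmpeg API of this period (av_open_input_file, av_find_stream_info, avcodec_open, avcodec_decode_video); error handling, cleanup and the use of the decoded frame are simplified.
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

/* Minimal sketch of the four-step decoding process (legacy FFmpeg API). */
int decode_file(const char *filename)
{
    AVFormatContext *fmt_ctx = NULL;
    AVCodecContext *dec_ctx;
    AVCodec *decoder;
    AVFrame *frame;
    AVPacket pkt;
    int i, video_stream = -1, got_picture;

    av_register_all();                                               /* step ① */
    if (av_open_input_file(&fmt_ctx, filename, NULL, 0, NULL) != 0)  /* step ② */
        return -1;
    if (av_find_stream_info(fmt_ctx) < 0)                            /* step ③ */
        return -1;

    /* locate the video stream and its decoder in the codec list */
    for (i = 0; i < fmt_ctx->nb_streams; i++)
        if (fmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            video_stream = i;
            break;
        }
    if (video_stream < 0)
        return -1;
    dec_ctx = fmt_ctx->streams[video_stream]->codec;
    decoder = avcodec_find_decoder(dec_ctx->codec_id);
    if (!decoder || avcodec_open(dec_ctx, decoder) < 0)
        return -1;

    frame = avcodec_alloc_frame();
    while (av_read_frame(fmt_ctx, &pkt) >= 0) {                      /* step ④ */
        if (pkt.stream_index == video_stream)
            avcodec_decode_video(dec_ctx, frame, &got_picture, pkt.data, pkt.size);
        /* when got_picture is non-zero, frame holds one decoded picture */
        av_free_packet(&pkt);
    }
    av_close_input_file(fmt_ctx);
    return 0;
}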
The above analysis of the FFmpeg video encoding and decoding process shows that, in order to add a custom video codec to FFmpeg and have it used at run time, two points are key:
① Implement a custom codec based on FFmpeg's description of the codec.
② Add the custom video codec to the codec list with the REGISTER_ENCDEC(X,x) macro, so that when the video information is obtained, the custom codec can be found in the codec list for the video to be encoded or decoded.
2 S3C6410 processor video encoding and decoding method
The S3C6410 video codec software architecture [4] is shown in Figure 1. The lower layer is operating system space and the upper layer is user space. Through its driver, the video codec is exposed by the operating system as a device file and is used in the same way as an ordinary file: opening and closing, reading and writing, and input/output control (ioctl).
Figure 1 S3C6410 video codec software architecture
The specific operation method is as follows:
① Open the codec device file with the open function;
② Map the input/output buffers between user space and driver space with mmap, so that data can be exchanged quickly;
③ Initialize the codec by setting its parameters through ioctl;
④ Input data, execute the encoding or decoding process through ioctl, and read back the output data;
⑤ Close the codec device file with the close function.
It is worth noting that, whether encoding or decoding, data is processed frame by frame, so step ④ is a loop that repeats until all data has been processed. In addition, although the codec is presented as a device file, it cannot be accessed with standard file read and write operations: inspecting the codec's device driver shows that its read and write functions are empty, a point not explained in Samsung's development documentation. A sketch of the whole procedure follows.
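The five steps can be summarized in the following sketch. The device name, buffer size and ioctl commands (MFC_DEV_NAME, BUF_SIZE, S3C_MFC_IOCTL_MFC_H264_ENC_INIT and S3C_MFC_IOCTL_MFC_H264_ENC_EXE) are those used in the code excerpts in Section 3; the driver header name, the have_more_frames() helper and the buffer copies are placeholders standing in for application code.
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include "s3c_mfc.h"   /* placeholder: the BSP header providing the MFC ioctl commands and argument types */

extern int have_more_frames(void);   /* placeholder supplied by the application */

int mfc_encode_session(void)
{
    int fd;
    uint8_t *addr;
    s3c_mfc_enc_init_arg_t enc_init = {0};   /* encoding parameters, set per the MFC API document [4] */
    s3c_mfc_enc_exe_arg_t enc_exe = {0};

    /* ① open the codec device file */
    fd = open(MFC_DEV_NAME, O_RDWR | O_NDELAY);
    if (fd < 0)
        return -1;

    /* ② map the driver's input/output buffers into user space */
    addr = (uint8_t *)mmap(0, BUF_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    /* ③ initialize the codec with the chosen parameters */
    ioctl(fd, S3C_MFC_IOCTL_MFC_H264_ENC_INIT, &enc_init);

    /* ④ frame loop: fill the input buffer, run the codec, read the output */
    while (have_more_frames()) {
        /* copy one raw frame into the mapped input buffer ... */
        ioctl(fd, S3C_MFC_IOCTL_MFC_H264_ENC_EXE, &enc_exe);
        /* ... and copy the encoded bitstream out of the mapped output buffer */
    }

    /* ⑤ release the mapping and close the device file */
    munmap(addr, BUF_SIZE);
    close(fd);
    return 0;
}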
3 H.264 Hardware Codec Implementation
The H.264 hardware codec for FFmpeg [5] is implemented by defining a custom video codec and adding it to the FFmpeg library. This codec uses the S3C6410's video hardware codec to carry out H.264 encoding and decoding, so multimedia programs that use the FFmpeg library can use it in exactly the same way as any other FFmpeg codec. The key to adding a custom codec is to define it according to FFmpeg's codec description and to implement the functions referenced in that definition.
The AVCodec structure in libavcodec/avcodec.h is the key structure that defines the FFmpeg codec, including the codec name, type (sound/video), codec identification number (CodecID), supported formats, and some function pointers for initialization, encoding, decoding, and closing.
typedef struct AVCodec {
    const char *name;                  // codec name
    enum CodecType type;               // media type (audio/video)
    enum CodecID id;                   // codec identifier
    int priv_data_size;                // size of the private context
    int (*init)(AVCodecContext *);     // initialization
    int (*encode)(AVCodecContext *, uint8_t *buf, int buf_size, void *data);   // encoding
    int (*close)(AVCodecContext *);    // shutdown
    int (*decode)(AVCodecContext *, void *outdata, int *outdata_size,
                  uint8_t *buf, int buf_size);                                 // decoding
    int capabilities;
    struct AVCodec *next;              // next codec in the linked list
    void (*flush)(AVCodecContext *);
    const AVRational *supported_framerates;   // supported frame rates
    const enum PixelFormat *pix_fmts;         // supported pixel formats
} AVCodec;
The H.264 hardware codec is defined as follows:
AVCodec s3cx264_encoder = {
.name="s3cx264",
.type=AVMEDIA_TYPE_VIDEO,
.id=CODEC_ID_H264,
.init=X264_init,
.encode=X264_frame,
.decode=X264_decode,
.close=X264_close,
…
};
The codec name is s3cx264 and the type is video. The CodecID is CODEC_ID_H264, meaning this codec is used for H.264 video encoding and decoding. The initialization, encoding, decoding and closing function pointers point to the X264_init, X264_frame, X264_decode and X264_close functions respectively.
The key to adding the s3cx264 codec to the codec list is to modify the libavcodec/allcodecs.c file as follows:
REGISTER_ENCDEC (ASV1, asv1);
REGISTER_ENCDEC (S3CX264, s3cx264);   // add the s3cx264 codec
REGISTER_ENCDEC (ASV2, asv2);
In this way, once av_register_all(void) has been called at run time, the custom codec s3cx264 is added to FFmpeg's in-memory codec list. It is worth mentioning that FFmpeg may contain several codecs for the same video format; for H.264, FFmpeg already has a software decoder, and a hardware decoder has now been added. To remove any uncertainty about which decoder is used, the custom hardware codec can be registered first, so that it sits at the front of the codec list and is found before the software decoder during the search.
After registering the hardware codec s3cx264, the X264_init, X264_frame, X264_decode and X264_close functions must be implemented before the codec can work. The following, building on the earlier analysis of the S3C6410 video encoding and decoding method, takes encoding as an example and explains the implementation in detail.
The X264Context structure holds the information that must be passed between FFmpeg modules, such as the device file descriptor, encoding parameters and input/output addresses:
typedef struct X264Context {
    int dev_fd;                               // codec device file descriptor
    uint8_t *addr;                            // mmap'ed buffer address
    s3c_mfc_enc_init_arg_t enc_init;          // encoder initialization parameters
    s3c_mfc_enc_exe_arg_t enc_exe;            // encoder execution parameters
    s3c_mfc_get_buf_addr_arg_t get_buf_addr;  // buffer address query argument
    uint8_t *in_buf, *out_buf;                // codec input/output buffers
    AVFrame out_pic;
} X264Context;
X264_init implements the encoder initialization process, which is used to open the encoder device file, map the memory space, set the encoding parameters, and obtain the input/output address of the codec data.
static av_cold int X264_init(AVCodecContext *avctx){
    X264Context *x4 = avctx->priv_data;
    //Open the encoder device file
    x4->dev_fd = open(MFC_DEV_NAME, O_RDWR | O_NDELAY);
    //Memory space mapping
    x4->addr = (uint8_t *)mmap(0, BUF_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, x4->dev_fd, 0);
    //Encoding parameter settings
    ioctl(x4->dev_fd, S3C_MFC_IOCTL_MFC_H264_ENC_INIT, &x4->enc_init);
    //Get input/output address
    x4->get_buf_addr.in_usr_data = (int)x4->addr;
    ioctl(x4->dev_fd, S3C_MFC_IOCTL_MFC_GET_YUV_BUF_ADDR, &x4->get_buf_addr);
    x4->in_buf = (uint8_t *)x4->get_buf_addr.out_buf_addr;
    x4->get_buf_addr.in_usr_data = (int)x4->addr;
    ioctl(x4->dev_fd, S3C_MFC_IOCTL_MFC_GET_LINE_BUF_ADDR, &x4->get_buf_addr);
    x4->out_buf = (uint8_t *)x4->get_buf_addr.out_buf_addr;
    return 0;
}
The ioctl command S3C_MFC_IOCTL_MFC_H264_ENC_INIT specifies that H.264 encoding is to be used.
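Before this ioctl is issued (inside X264_init, ahead of the S3C_MFC_IOCTL_MFC_H264_ENC_INIT call), the encoder parameters carried in the AVCodecContext have to be copied into the enc_init argument. The exact fields of s3c_mfc_enc_init_arg_t are defined in Samsung's MFC API document [4] and are not shown in the excerpt above; the field names in the sketch below are illustrative placeholders only.
/* Illustrative only: map FFmpeg encoder settings onto the MFC init argument.
 * The s3c_mfc_enc_init_arg_t field names used here are placeholders; see the
 * S3C6400/6410 Multi-Format Codec API document [4] for the real ones. */
x4->enc_init.in_width      = avctx->width;        /* frame width  */
x4->enc_init.in_height     = avctx->height;       /* frame height */
x4->enc_init.in_bitrate    = avctx->bit_rate;     /* target bit rate (unit per the API document) */
x4->enc_init.in_gop_num    = avctx->gop_size;     /* I-frame interval */
x4->enc_init.in_frame_rate = avctx->time_base.den / avctx->time_base.num;

ioctl(x4->dev_fd, S3C_MFC_IOCTL_MFC_H264_ENC_INIT, &x4->enc_init);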
The X264_frame function performs the encoding. Note that the data parameter carries the frame to be encoded as an AVFrame whose data field holds separate pointers to the Y, U and V planes; these planes must be copied into the single contiguous input buffer expected by the S3C6410 encoder. In addition, the incoming frame may be empty (a NULL frame); this case must be handled by returning 0 to indicate that there is no output data, otherwise a segmentation fault occurs at run time.
static int X264_frame(AVCodecContext *ctx, uint8_t *buf, int buf_size, void *data){
    ……
    //Space conversion
    if (frame) {
        memcpy(x4->in_buf, frame->data[0], ctx->width * ctx->height);
        memcpy(x4->in_buf + ctx->width * ctx->height, frame->data[1],
               ctx->width * ctx->height / 4);
        memcpy(x4->in_buf + ctx->width * ctx->height + ctx->width * ctx->height / 4,
               frame->data[2], ctx->width * ctx->height / 4);
    } else {
        return 0; // empty frame, return
    }
    //Execute the encoding process
    ioctl(x4->dev_fd, S3C_MFC_IOCTL_MFC_H264_ENC_EXE, &x4->enc_exe);
    //Encoded data output
    buf_size = x4->enc_exe.out_encoded_size;
    memcpy(buf, x4->out_buf, buf_size);
    ……
    return buf_size;
}
The X264_close function is used to release resources after encoding, including canceling space mapping and closing device files.
static av_cold int X264_close(AVCodecContext *avctx){
    …
    //Cancel space mapping
    munmap(x4->addr, BUF_SIZE);
    //Close the device file
    close(x4->dev_fd);
    return 0;
}
The decoding function is implemented in much the same way as the encoding function, with space conversion, decode execution and output of the decoded data. Initialization uses the S3C_MFC_IOCTL_MFC_H264_DEC_INIT command and execution uses the S3C_MFC_IOCTL_MFC_H264_DEC_EXE command, as sketched below.
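By analogy with X264_frame, the decoder might look like the following sketch. The s3c_mfc_dec_exe_arg_t type and its in_strm_size field are hypothetical placeholders for the real driver structures documented in [4]; which of the mapped buffers carries the compressed stream and which returns the decoded picture, as well as the planar YUV 4:2:0 layout of the output, are also assumptions. Only the function signature and the ioctl command names come from the text above.
/* Sketch of the decoding counterpart of X264_frame; the driver argument
 * structure and the output buffer layout are assumptions, see above. */
static int X264_decode(AVCodecContext *avctx, void *outdata, int *outdata_size,
                       uint8_t *buf, int buf_size)
{
    X264Context *x4 = avctx->priv_data;
    AVFrame *picture = outdata;
    s3c_mfc_dec_exe_arg_t dec_exe;          /* placeholder type name */

    //Space conversion: copy the compressed bitstream into the mapped input buffer
    memcpy(x4->in_buf, buf, buf_size);
    dec_exe.in_strm_size = buf_size;        /* placeholder field name */

    //Execute the decoding process
    ioctl(x4->dev_fd, S3C_MFC_IOCTL_MFC_H264_DEC_EXE, &dec_exe);

    //Decoded data output: hand the YUV planes in the output buffer to FFmpeg
    picture->data[0] = x4->out_buf;
    picture->data[1] = x4->out_buf + avctx->width * avctx->height;
    picture->data[2] = picture->data[1] + avctx->width * avctx->height / 4;
    picture->linesize[0] = avctx->width;
    picture->linesize[1] = picture->linesize[2] = avctx->width / 2;

    *outdata_size = sizeof(AVFrame);        /* one picture produced */
    return buf_size;                        /* bytes of input consumed */
}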
4 Running the test
Once the s3cx264 codec has been added to FFmpeg, it can be tested as follows:
① Compile FFmpeg using the following command.
./configure --enable-cross-compile --arch=armv6 --cpu=armv6 \
    --target-os=linux --cross-prefix=/usr/local/arm/4.3.2/bin/arm-linux-
② Run ./ffmpeg -codecs and confirm that the s3cx264 codec appears in the list, as shown in Figure 2.
Figure 2 FFmpeg displays s3cx264 codec information
③ Test s3cx264 encoding with a USB camera. Run ./ffmpeg -s 320x240 -r 50 -f video4linux2 -i /dev/video2 -vcodec s3cx264 test.mp4. FFmpeg uses the s3cx264 encoder to compress the data captured from the USB camera into the file test.mp4, which can then be played back and displayed normally.
These tests show that the s3cx264 hardware video encoder has been successfully added to FFmpeg: it encodes video data and can be used by any other multimedia program built on the FFmpeg library.
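Once registered, the codec can also be selected by name from application code. The sketch below uses the legacy FFmpeg API of this period (avcodec_find_encoder_by_name, avcodec_alloc_context, avcodec_open, avcodec_encode_video); the resolution and frame rate are example values only.
#include <libavcodec/avcodec.h>
#include <libavutil/mem.h>

/* Select the s3cx264 hardware encoder by name and encode one frame (legacy API). */
int encode_one_frame(AVFrame *frame, uint8_t *outbuf, int outbuf_size)
{
    AVCodec *codec;
    AVCodecContext *ctx;
    int size;

    avcodec_register_all();                           /* register all codecs, including s3cx264 */

    codec = avcodec_find_encoder_by_name("s3cx264");  /* pick the hardware encoder explicitly */
    if (!codec)
        return -1;

    ctx = avcodec_alloc_context();
    ctx->width     = 320;                             /* example resolution */
    ctx->height    = 240;
    ctx->pix_fmt   = PIX_FMT_YUV420P;
    ctx->time_base = (AVRational){1, 25};             /* example frame rate */

    if (avcodec_open(ctx, codec) < 0)                 /* calls X264_init */
        return -1;

    size = avcodec_encode_video(ctx, outbuf, outbuf_size, frame);   /* calls X264_frame */

    avcodec_close(ctx);                               /* calls X264_close */
    av_free(ctx);
    return size;   /* number of encoded bytes; 0 for an empty frame */
}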
Conclusion
For multimedia development, the FFmpeg library is a good choice for encoding and decoding: it supports a wide range of audio and video formats, and its programming interface is simple and easy to use. Understanding the FFmpeg encoding and decoding process and knowing how to add hardware codecs to FFmpeg are very helpful for multimedia development, especially resource-constrained embedded multimedia development. This article analyzed the FFmpeg video encoding and decoding process and the video hardware codec method of the Samsung S3C6410 processor, and added the S3C6410 hardware codec to the FFmpeg library, giving FFmpeg hardware encoding and decoding capability for H.264 video that can be used in embedded systems such as game devices, monitoring equipment, video conferencing equipment and digital network TV. It also serves as a reference for adding codecs for other video formats to the FFmpeg multimedia library on other embedded devices.
References
[1] FFmpeg [EB/OL]. http://www.ffmpeg.org/.
[2] Samsung. S3C6410 Datasheet, 2010.
[3] Li Shaochun. Embedded video surveillance system based on FFMPEG [J]. Electronic Technology, 2007(3): 34-37.
[4] S3C6400/6410 Multi-Format Codec API Document, 2008.
[5] FFmpeg codec HOWTO [EB/OL]. 2010 [2011-01]. http://wiki.multimedia.cx/index.php?title=FFmpeg_codec_HOWTO/.
Liu Jianmin (master's student) and Yang Bin (professor); their main research interests are microcontrollers, embedded systems, and their applications.