Design of monitoring transmission subsystem based on P2P and CDN

1 Introduction

Peer-to-peer (P2P) technology is a hot topic in computer networking. Its prototypes date back to the 1970s, with Usenet and FidoNet as typical representatives. A content distribution network (CDN) publishes a website's content or media to the network "edge" closest to the user; when a user visits, the system automatically and seamlessly redirects the request to an edge server, which reduces the load on the central server and the backbone network and improves the performance of streaming media services and websites.

With the rapid development of network technology, streaming media content is widely disseminated on the Internet, and the demand for high-quality streaming media distribution has become increasingly clear. Providing fast, high-quality streaming media distribution services to a large number of users has therefore become a focus, and a difficulty, of recent research.

A multimedia monitoring system must transmit a large volume of data, mainly control information, feedback information, video, audio, and other information such as text. In a traditional multimedia monitoring system based on the C/S or B/S model, server performance drops sharply when this large amount of streaming media data is transmitted between the monitoring points and the monitoring center. This paper therefore introduces P2P technology into the design of the multimedia monitoring system and makes the following improvements:

(1) A monitoring transmission subsystem based on P2P and CDN was designed.

(2) Clients obtain services from the edge servers over P2P, and content publishing between the origin server and the edge servers is also carried out over P2P. This approach makes effective use of the network bandwidth and host resources in the system, relieves the pressure on the origin and edge servers, cuts backbone network traffic, lowers operator costs, and improves the quality of service delivered to clients.

(3) To ease the contention between network I/O and disk I/O, the transmission subsystem adopts a semi-synchronous/semi-asynchronous design that separates network I/O from disk I/O and buffers between them with a task pool.

(4) A thread pool dynamic management algorithm was designed to effectively reduce the CPU load pressure and improve network throughput and overall system performance.

(5) These changes address the shortcomings of the traditional approaches. The system framework is built on the semi-synchronous/semi-asynchronous pattern: the task pool encapsulates data read and write requests, and the thread pool processes the tasks in the task pool efficiently and asynchronously. By tracking task waiting statistics together with the system's current resource utilization, the task pool and thread pool are managed dynamically, which lowers the CPU load and raises system throughput.

2 System Framework

The overall layout of the system is shown in Figure 1. Each edge server forms a P2P network with a number of client nodes, which provides efficient service while reducing the load on the server.

When a client requests a resource that the edge server does not hold, the edge server turns to the origin server. According to the specific request, the origin server stores the required media resource locally through the efficient transmission subsystem implemented in this paper, and then publishes the content to multiple edge servers over P2P.

This approach greatly reduces the pressure on the origin server during content publishing: in theory it only needs to send out one complete copy of the media, and the edge servers obtain complete copies from one another over P2P. Likewise, when an edge server serves clients, in theory it only needs to transmit one copy for multiple clients to receive complete service. The origin server and the media resource server are usually on the same subnet, where the network is faster than the disks, so disk I/O becomes the system bottleneck. To ease this contention between network I/O and disk I/O, the transmission subsystem adopts a semi-synchronous/semi-asynchronous design that separates network I/O from disk I/O and buffers between them with a task pool.

The upper-layer main thread handles asynchronous epoll events and protocol interaction: the framework packages received data into fixed-size tasks and places them in the task pool. The lower-layer thread pool takes tasks from the task pool and performs the actual disk reads and writes. When an operation completes, the thread and the task return to the thread pool and the task pool, respectively, to wait for the next round of scheduling.
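The following is a minimal C++ sketch of this semi-synchronous/semi-asynchronous split, under the assumption of a Linux epoll event loop; the names (Task, TaskPool, disk_worker, network_loop), the fixed task size, and the heap allocation of tasks are illustrative, and protocol handling, error handling, and task recycling are omitted.

```cpp
#include <sys/epoll.h>
#include <unistd.h>
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>

constexpr size_t kTaskSize = 64 * 1024;       // illustrative fixed task payload size

struct Task {                                 // one buffered disk-write request
    int    file_fd;                           // destination file on the media server
    size_t len;                               // bytes actually filled
    char   data[kTaskSize];
};

class TaskPool {                              // buffers tasks between network and disk I/O
public:
    void push(Task* t) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(t); }
        cv_.notify_one();
    }
    Task* pop() {                             // blocks until a task is available
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        Task* t = q_.front();
        q_.pop();
        return t;
    }
private:
    std::queue<Task*>       q_;
    std::mutex              m_;
    std::condition_variable cv_;
};

// Lower layer: worker threads perform the synchronous disk writes.
void disk_worker(TaskPool& pool) {
    for (;;) {
        Task* t = pool.pop();
        ::write(t->file_fd, t->data, t->len); // blocking disk I/O
        delete t;                             // simplified: a real pool would recycle tasks
    }
}

// Upper layer: the main thread multiplexes sockets with epoll and only moves
// received data into tasks, never touching the disk itself.
void network_loop(int conn_fd, int file_fd, TaskPool& pool) {
    int ep = epoll_create1(0);
    epoll_event ev{};
    ev.events  = EPOLLIN;
    ev.data.fd = conn_fd;
    epoll_ctl(ep, EPOLL_CTL_ADD, conn_fd, &ev);

    epoll_event events[64];
    for (;;) {
        int n = epoll_wait(ep, events, 64, -1);
        for (int i = 0; i < n; ++i) {
            Task* t = new Task{file_fd, 0, {}};
            ssize_t r = ::read(events[i].data.fd, t->data, kTaskSize);
            if (r > 0) { t->len = static_cast<size_t>(r); pool.push(t); }
            else        delete t;             // connection closed or nothing to read
        }
    }
}
```

In a full implementation, several disk_worker threads would be launched from the thread pool described in Section 3, and finished Task objects would be returned to a free list rather than deleted.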

3 Algorithm Implementation

To manage the thread pool dynamically and effectively, various performance parameters must be collected and analyzed comprehensively before the pool is adjusted. The algorithm relies on the two most critical parameters: the average task waiting time and the CPU usage. The trend of the average waiting time indicates the direction in which the thread pool should be adjusted, while the CPU usage determines whether threads may actually be added or removed.
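The paper does not say how the CPU usage is sampled. As one possible approach (an assumption, not the authors' method), on Linux it can be derived from two consecutive readings of /proc/stat:

```cpp
#include <fstream>
#include <string>

// Assumption: CPU usage is estimated from two consecutive snapshots of /proc/stat;
// the paper does not specify its sampling method.
struct CpuTimes { unsigned long long idle = 0, total = 0; };

CpuTimes read_cpu_times() {
    std::ifstream stat("/proc/stat");
    std::string label;                         // the leading "cpu" token
    unsigned long long user, nice, system, idle, iowait, irq, softirq, steal;
    stat >> label >> user >> nice >> system >> idle >> iowait >> irq >> softirq >> steal;
    CpuTimes t;
    t.idle  = idle + iowait;                   // time the CPU spent doing nothing
    t.total = user + nice + system + idle + iowait + irq + softirq + steal;
    return t;
}

// Usage fraction in [0, 1] over the interval between two snapshots.
double cpu_usage(const CpuTimes& prev, const CpuTimes& curr) {
    unsigned long long dt = curr.total - prev.total;
    if (dt == 0) return 0.0;
    return 1.0 - static_cast<double>(curr.idle - prev.idle) / static_cast<double>(dt);
}
```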

In Figure 2, c (current) denotes the current average waiting time of the thread pool, written currTime below; p (previous) denotes the previous average waiting time (preTime); pp denotes the average waiting time before that (prepreTime); ps (pool size) denotes the current thread pool size; and pps denotes the previous thread pool size. The algorithm does not compare absolute waiting times; instead it compares currTime with preTime. If they differ by more than 1%, the thread pool may need to be adjusted, and the direction of the adjustment is determined by the relative order of currTime and preTime. If currTime is greater than preTime, preTime is further compared with prepreTime: if preTime is less than prepreTime and the CPU usage is above 90%, the thread pool is shrunk with a stride of 2; if preTime is greater than prepreTime and the CPU usage is below 80%, the thread pool is enlarged, also with a stride of 2. If currTime is less than preTime and preTime is less than prepreTime, the thread pool is enlarged.

In short, the algorithm decides whether the thread pool needs to be adjusted by comparing currTime, preTime, and prepreTime.

Whenever the pool is to be shrunk, the CPU usage is checked as well, and the reduction is carried out only when the CPU usage is above its threshold, because an overly low CPU load is itself a waste of resources. Likewise, whenever the pool is to be enlarged, threads are added only when the CPU usage is below its threshold, so that the CPU load does not become excessive.
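Putting the two paragraphs above together, the decision logic can be condensed into a small routine. The sketch below is an illustration of the described algorithm rather than the authors' exact implementation; the function name is hypothetical, while the 1% tolerance, the 90%/80% CPU thresholds, and the stride of 2 are the values quoted in the text.

```cpp
#include <cmath>

constexpr int    kStride    = 2;     // adjustment step quoted in the text
constexpr double kChangeEps = 0.01;  // 1% tolerance on the waiting-time change
constexpr double kCpuHigh   = 0.90;  // shrink only above this CPU usage
constexpr double kCpuLow    = 0.80;  // grow only below this CPU usage

// Returns the new thread pool size given the three most recent average task
// waiting times (currTime, preTime, prepreTime) and the sampled CPU usage.
int adjust_pool_size(double currTime, double preTime, double prepreTime,
                     double cpu, int poolSize) {
    // A change of 1% or less is treated as noise: leave the pool alone.
    if (preTime <= 0.0 || std::fabs(currTime - preTime) <= kChangeEps * preTime)
        return poolSize;

    if (currTime > preTime) {                      // waiting time is rising
        if (preTime < prepreTime && cpu > kCpuHigh)
            return poolSize - kStride;             // CPU saturated: shrink the pool
        if (preTime > prepreTime && cpu < kCpuLow)
            return poolSize + kStride;             // CPU has headroom: grow the pool
    } else if (preTime < prepreTime && cpu < kCpuLow) {
        return poolSize + kStride;                 // waiting time keeps falling: keep growing
    }
    return poolSize;                               // otherwise leave the size unchanged
}
```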

4 Experimental analysis

Because the media resource server and the origin server are usually on the same subnet, the experimental environment was also set up on a local area network. The basic server configuration was two dual-core Intel Xeon 3 GHz processors, 2,048 KB of cache, 4 GB of memory, and a 1,000 Mb/s network card.

4.1 Experimental data of three models

The experiment simulated a large number of data requests by having a load generator download data through the transmission subsystem, and collected experimental data for the following three models:

(1) The traditional multi-threaded blocking model, in which a dedicated blocking thread handles each single request; it is labeled A in Figure 3 and referred to as model A.

(2) A thread pool with a fixed number of threads: the initial thread count is twice the number of CPU cores plus 2, i.e. 10 threads on the four-core test server (see the snippet after this list); it is labeled B in Figure 3 and referred to as model B.

(3) The thread pool dynamic management algorithm proposed in this paper, also starting with 10 threads; it is labeled C in Figure 3 and referred to as model C.
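As a small illustration of model B's sizing rule (the "twice the cores plus 2" formula quoted above; the function name is hypothetical):

```cpp
#include <thread>

// Model B's fixed pool size: twice the number of CPU cores plus two.
// On the four-core test server of Section 4 this gives 2 * 4 + 2 = 10 threads.
unsigned fixed_pool_size() {
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 4;   // fall back if the core count cannot be detected
    return 2 * cores + 2;
}
```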

4.2 Averaged experimental data

The following data are the average values obtained by sampling with nmon and analyzing the samples with nmon analyser.

(1) Comparison of CPU usage. As Figure 3 shows, model A occupies almost all CPU resources: because each thread serves one request, a flood of requests produces a correspondingly large number of threads. In model B, the number of threads is fixed and created in advance, so when the number of requests grows too large the task queue acts as an effective buffer. Model C performs best, because the number of threads is continually adjusted toward the optimum and the task pool avoids much of the frequent memory allocation and release in the system.

(2) Comparison of free memory. Figure 4 shows that, for the same total number of requests, models A and B occupy very similar amounts of memory. In model C, the task pool and thread pool are scaled dynamically, which raises the system's processing capacity and naturally uses more memory.

(3) Comparison of network I/O traffic. Figure 5 shows the network I/O of the three models. Model A uses blocking I/O: when a socket has no data to read, its thread blocks waiting for data to arrive, while other sockets that already have data may go unserved, so the network throughput of model A is relatively low. Model B adopts non-blocking I/O with a thread pool: as soon as one socket would block, the thread switches to another socket whose data is ready, which speeds up data reception and thus raises the network transmission rate. Model C lowers the load on the memory, CPU, and other components, and its dynamic task pool gives it better buffering than model B, so it is natural that model C achieves higher network throughput than model B; with the 1,000 Mb/s network card used in the system, its throughput essentially reaches the limit of the card.

5 Conclusion

The size of the thread pool is adjusted dynamically according to statistics on the average waiting time in the thread pool and the current CPU usage. This dynamic management algorithm adapts well to sudden changes in client requests on the Internet: when a large burst of requests arrives, an appropriate number of threads is added to handle the extra load, and when the request rate falls, threads are removed, relieving pressure on the system. The experimental comparison shows that the dynamic thread pool management algorithm effectively reduces the CPU load and improves network throughput and overall system performance. There is still room for optimization in the management of the thread pool, however. For example, the pool size is adjusted in steps of 2, a value chosen from experience rather than derived from theory, and more statistical information could be fed into the algorithm's decisions to improve its accuracy.

In summary, this work combines P2P and CDN in a multimedia monitoring transmission system, introduces a semi-synchronous/semi-asynchronous mode, designs the system framework around task pool and thread pool techniques, resolves the bottleneck of the efficient transmission subsystem between the media resource server and the origin server, and provides an effective thread pool dynamic management algorithm.
