Modern computer systems are growing in scale and complexity, which makes their reliability increasingly difficult to ensure; as a result, the reliability of computer systems has attracted widespread attention [1]. Facing this situation, the international community has attached growing importance to research and development in software reliability engineering theory and has gradually put software quality management on a standardized, scientific track [2]. Shaped by information technology, reliability engineering, and user needs, software reliability engineering has developed into an interdisciplinary field that draws on the results of many disciplines, taking the software reliability problem as its starting point.
The main objects of study in software reliability engineering are the causes of failure in software products or systems and the measures for eliminating and preventing them, with the goals of ensuring the reliability and availability of software products, reducing maintenance costs, and improving their efficiency. Software reliability has become a focus of attention, research, and practice for both the software industry and the reliability engineering community.
1 Software Reliability Data
Different software errors, defects, and failures can vary greatly in manifestation, nature, and even number, and giving a comprehensive, detailed description of all of them is neither practical nor realistic. For simplicity, software reliability models therefore usually assume that all failures are of the same severity or belong to the same category; that is, no distinction is made between software errors, defects, and failures. Distinguishing failure severities and types would raise many problems: it is unclear whether the same model applies to different types of failure data, and after classification each type generally contains very few samples, which degrades the accuracy of the model's results. Therefore, failure data is generally not classified [3].
The classic software reliability models are: (1) in 1972, Jelinski and Moranda first proposed the concept of a software reliability model and established a concrete one, the JM model [4-5]; (2) in 1973, Littlewood and Verrall applied Bayesian methods to software reliability assessment [6]; (3) in 1979, Goel and Okumoto proposed a non-homogeneous Poisson process model that improves on the JM model, namely the GO model; (4) in 1983, Yamada and Osaki observed that the cumulative number of detected errors grows slowly at first, then rapidly, and finally tends to saturation, giving the delayed S-shaped growth model, called the YO model [7].
A set of MUSA JM software reliability data, shown in Table 1, is selected at random, and the four models above are verified using the software reliability prediction system developed by the author. The resulting fitting curves are shown in Figure 1. The figure shows that because the failure time intervals in the original data are non-stationary, the final predictions carry large errors, especially at the peaks and troughs.
Research and analysis of a large amount of software reliability data shows that the time intervals between successive software failures fluctuate strongly, and this volatility is the main source of the large prediction errors. Describing the volatility trend and building a volatility model for software reliability data are therefore the key to solving the problem.
2 Preprocessing of Software Reliability Data
To solve this problem, this study decomposes software reliability data into two independent parts: one describes the overall trend of the data, and the other describes how the data fluctuates over time. The final reliability result is obtained by predicting the two parts separately and then combining them.
Suppose the software failure intervals are x(1), x(2), …, x(n) and the failure times are t(1), t(2), …, t(n), where t(i) is the time from the start of software operation to the occurrence of the i-th failure, and x(i) is the time interval from the (i-1)-th failure to the i-th failure, that is, x(i) = t(i) - t(i-1).
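As a minimal sketch of this bookkeeping (plain Python; the numbers in the example are illustrative, not the Table 1 data), with t(0) taken as 0, the start of operation:

```python
def failure_intervals(failure_times):
    """Given cumulative failure times t(1)..t(n), return the intervals
    x(i) = t(i) - t(i-1), with t(0) = 0 (start of software operation)."""
    intervals = []
    previous = 0.0
    for t in failure_times:
        intervals.append(t - previous)
        previous = t
    return intervals

# Illustrative example: failures observed at 3, 33, and 146 time units.
print(failure_intervals([3.0, 33.0, 146.0]))  # [3.0, 30.0, 113.0]
```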
Assume that the software reliability data at time t is M(t) = P(t) + Q(t), where P(t) describes the overall trend of the data and Q(t) describes the fluctuation of the data as defects occur.
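The paper does not spell out how P(t) is extracted, so the sketch below assumes a simple moving-average smoother for the trend and takes Q(t) as the residual; the function name and window size are hypothetical, and any smoother that captures the overall trend would fit the same decomposition:

```python
def decompose(intervals, window=5):
    """Split observed intervals M(t) into a trend P(t) and a residual
    fluctuation Q(t) such that M(t) = P(t) + Q(t).
    The trailing moving average used here is an assumption."""
    trend = []
    for i in range(len(intervals)):
        lo = max(0, i - window + 1)
        trend.append(sum(intervals[lo:i + 1]) / (i + 1 - lo))
    fluctuation = [m - p for m, p in zip(intervals, trend)]
    return trend, fluctuation

# M(t) is recovered exactly as trend[i] + fluctuation[i].
```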
According to the above algorithm, the data listed in Table 1 are processed to obtain the results in Table 2.
The time-interval curve of the predicted data P(t) estimated in Table 2 is shown in Figure 2. Compared with the original data, its overall trend is relatively stable, while its general shape remains similar to the original curve.
From the difference curve between the original data and the predicted data P(t), the pattern of Q(t) can be identified and predicted. As Figure 3 shows, predicting the fluctuation requires considering both its sign (positive or negative) and its amplitude.
Based on the principle that early data has little effect on predicting future behavior, and that recent failure-interval data predicts the future better than intervals observed long ago, this study uses the five failure data points immediately preceding time t to predict the fluctuation value Q(t).
First, from the four sign transitions among the previous five points, estimate how likely the fluctuation at time t is to have the same or the opposite sign as that at time t-1.
The amplitude of the fluctuation is obtained from the average of the absolute amplitudes of the five points. At the same time, the amplitude at point t-1 is closely related to Q(t), so the predicted amplitude is assumed to be |Q′(t)| = a × Q̄ + b × |Q(t-1)|, where Q̄ is the mean of the absolute amplitudes of the five points, a = 0.7, and b = 0.3. A sketch of this predictor follows.
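In the sketch below, the 0.7/0.3 amplitude weights come from the text, but the sign rule (flip the sign of Q(t-1) when sign changes dominated the last four transitions) is only one reading of the paper's likelihood criterion and is an assumption:

```python
def predict_fluctuation(last_five, a=0.7, b=0.3):
    """Predict Q'(t) from the residuals Q(t-5), ..., Q(t-1), oldest first."""
    assert len(last_five) == 5
    # Count sign changes across the four consecutive pairs.
    flips = sum(1 for u, v in zip(last_five, last_five[1:]) if u * v < 0)
    # Majority vote (assumption): more flips than holds -> expect a flip.
    sign = -1.0 if flips > 2 else 1.0
    prev = last_five[-1]
    sign *= 1.0 if prev >= 0 else -1.0
    # Amplitude: weighted mix of the five-point mean absolute residual
    # and the most recent amplitude, per |Q'(t)| = a*Qbar + b*|Q(t-1)|.
    mean_amp = sum(abs(q) for q in last_five) / 5.0
    return sign * (a * mean_amp + b * abs(prev))
```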
Following the above algorithm, the estimated values P(1), …, P(t-1) are substituted into the software reliability model to obtain P′(t), and the predicted time at t is finally obtained as P′(t) + Q′(t).
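Putting the pieces together, one prediction step might look like the following; `fit_trend_model` is a hypothetical wrapper around whichever reliability model (JM, GO, Littlewood-Verrall, YO) is fitted to the trend values, and `predict_fluctuation` is the sketch above:

```python
def predict_next(trend_history, fluctuation_history, fit_trend_model):
    """One-step prediction M'(t) = P'(t) + Q'(t).

    fit_trend_model: hypothetical callable that fits the chosen
    reliability model to P(1)..P(t-1) and returns P'(t).
    """
    p_next = fit_trend_model(trend_history)                 # P'(t)
    q_next = predict_fluctuation(fluctuation_history[-5:])  # Q'(t)
    return p_next + q_next                                  # M'(t)
```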
3 Algorithm Verification
(1) Use the Littlewood-Verrall model to calculate the trend: based on P(1), …, P(t-1), obtain the prediction P′(t). The results are shown in Table 3.
The fit of each model is evaluated using the criterion RE, computed over the predicted failure intervals.
Excluding failure data points 1, 2, and 3, the RE over the remaining 14 failure data points is 0.349351, whereas the RE of the unsmoothed failure intervals is 1.595. Smoothing the failure data points therefore yields a markedly better fit.
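The precise formula for RE did not survive in this copy of the article. Purely for orientation, the sketch below computes one common reading of a relative-error criterion (mean absolute relative error over the evaluated points); this is an assumption, not the paper's definition:

```python
# Hypothetical stand-in for the lost RE definition: mean absolute
# relative error between predicted and observed failure intervals.
# The paper's exact formula may differ.
def relative_error(predicted, observed):
    assert len(predicted) == len(observed)
    terms = [abs(p - o) / o for p, o in zip(predicted, observed)]
    return sum(terms) / len(terms)
```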
(2) Estimate Q′(t) using the algorithm above; the values obtained are shown in Table 4.
Since predicting Q′(t) for the first five failure data points lacks sufficient history, these points are removed when calculating the ESS. With them removed, the RE value is 1.23, roughly 20% lower than the RE of 1.595 obtained from the unprocessed data. The main source of the remaining error is failure data point 11. The reliability data and final predictions for the MUSA JM dataset are shown in Table 5.
Software reliability evaluation is receiving increasing attention, and research on software reliability model theory, its core, must be deepened. This paper opens one door to software reliability theory research: beyond further processing of reliability data, future work will also further improve the software reliability models themselves.
Going beyond the traditional approach, which focuses only on the software reliability model itself, this paper extends the work to the preprocessing of reliability data and proposes a new method for processing software reliability data, which addresses the strong volatility that arises when reliability data is collected. The algorithm is simple and robust and can be applied in a variety of engineering settings. Many problems still deserve further study, however, such as how to make the Q(t) coefficients in the new algorithm self-adaptive.