Research and implementation of data-driven automated testing

Publisher: 琴弦悠扬 | Last updated: 2010-07-26

0 Introduction

With the continuing development of society and the spread of informatization, software is multiplying and playing an ever larger role in daily life. Given the complexity of real-world systems, no matter how experienced the developers are or which development model is used, technical reviews at each stage cannot detect and correct every error without omission. How, then, can new software be made more stable and less error-prone? Testing. Statistics show that in a typical software development project, testing often accounts for more than 40% of the total development workload.

Testing is the last and most important step before software enters the market. The traditional approach, still used by most companies, is manual testing. It is simple, but it has many problems. Manual testing can introduce human input errors, especially when the volume of data is large; large amounts of repetitive manual testing are costly, and the cost grows further when the software changes and the tests must be repeated; and there is no practical way to test components in isolation, which makes finding and fixing problems expensive. In many projects every testing task is handled manually, even though a large part of this repetitive work could be automated.

Automated testing arose in response to the shortcomings of manual testing. Compared with manual testing, it has many advantages: it standardizes the test process and improves test efficiency and test coverage, among others. Many people misunderstand automated testing, believing it simply means picking an automated testing tool and applying it to a software project, with the tool seen as nothing more than a record-and-playback utility. In fact, automated testing is far more than that; record and playback is only its lowest level. Automated testing is commonly divided into five levels, as shown in Figure 1.

Figure 1: The five levels of automated testing

The most commonly used approach today is data-driven testing, in which data controls the flow and actions of the automated tests. The data is kept separate from the test scripts, usually in text files, Excel files, XML files, and so on.
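As a minimal illustrative sketch (the file name, the login flow, and the `perform_login` callback are hypothetical, not taken from a specific project), the test data can live in an XML file while a single script loops over the rows, so adding data never requires changing the script:

```python
# login_data.xml (hypothetical layout):
# <testdata>
#   <case username="alice" password="secret1" expected="success"/>
#   <case username="bob"   password="wrong"   expected="failure"/>
# </testdata>

import xml.etree.ElementTree as ET

def load_cases(path):
    """Read test rows from an XML file kept outside the test script."""
    root = ET.parse(path).getroot()
    return [case.attrib for case in root.findall("case")]

def run_login_test(perform_login, data_file="login_data.xml"):
    """Drive the same test logic once per data row."""
    results = []
    for row in load_cases(data_file):
        outcome = perform_login(row["username"], row["password"])
        results.append((row, outcome == row["expected"]))
    return results
```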

1 Implementation of data-driven automated testing

1.1 Feasibility Analysis

Having examined the advantages of automated testing, consider another common misunderstanding: that all software is suitable for automated testing, and that simply introducing automation will improve test efficiency and reduce cost. This is not the case. Automated testing also requires developing and building a test framework and creating test cases, which means a cost investment. For a project with a tight schedule, manually testing according to the test plan may well be far more efficient than recording scripts and then running them with an automated testing tool. So where does the value of automated testing lie?

For software developed once, with no subsequent version updates, automated testing is pointless. Many software products, however, continuously release new versions. When a new version is released, not only must the newly added or modified modules be tested; the associated old modules must also be retested to ensure product quality, which involves a great deal of repetitive work. Here automated testing can build reusable test modules and cover most of the functional tests, freeing testers from regression testing so they can concentrate on the new modules. It is therefore fair to say that the greatest value of automated testing lies in regression testing.

Therefore, to decide whether a piece of software, or some of its modules, is suitable for automated testing, a feasibility analysis must first be carried out to justify the chosen testing approach. Software that is a good candidate for automation usually meets the following conditions:

(1) Manual testing is complex;
(2) The selected test cases are easy to automate;
(3) The interfaces of the modules targeted by automation change relatively little;
(4) The software life cycle is long, and new versions are released frequently;
(5) Development of the software is largely complete, and testing is mainly aimed at upgraded versions;
(6) The chosen automated testing framework provides effective support for the application interface under test and has low maintenance and management costs.

In addition, automated testing requires time and a certain up-front cost, so do not expect a high return from the beginning; its benefits emerge through continuous improvement and accumulation. Nor should you expect automated testing to find most of the errors in each version: it is mainly used for regression testing, and most of the bugs in each new version appear in the new modules. Automated testing is about long-term effect, ensuring the stability of product quality from version to version.

1.2 Requirements Analysis

Just as software development requires requirements analysis, data-driven automated testing is essentially a development effort, so test requirements must be gathered before a test plan is drawn up in order to ensure the success of the automation.

With the development of IT technology, the traditional model in which developers also act as testers can no longer meet the need. Most established software companies now employ independent testers, forming a model of developers, development managers, testers, and test managers, as shown in Figure 2.

Figure 2: Requirements analysis

A standardized testing process requires the cooperation of all of the above roles, so before automated testing begins there should be a standardized document describing the test content, staffing, test process, defect management, and so on. The development manager and the test manager act as the interface between the development team and the test team, coordinating the work of the two. In general, developers need to provide detailed functional documentation after each software update, along with the data and other resources required for automated testing. Testers then create test cases suitable for automation based on the functional documentation and build the data-driven automated testing project.

1.3 Data-driven automated testing framework structure and implementation

Data-driven automated testing is not simple record and playback; each test case is implemented through programming, with the data files kept independent of the test cases, so that data updates require minimal maintenance of the overall test project. Building an automated testing framework therefore requires a certain level of programming skill.

The automated testing in this article adopts a three-layer framework structure, as shown in Figure 3.

Figure 3: Three-layer framework structure

The bottom layer is the UI Driver layer, which defines the library of basic common elements, such as buttons, drop-down boxes, text boxes, and other elements that appear in almost every application, together with basic operations on these elements and common utility operations (such as functions that wait for a certain period of time). This layer is independent of the software under test, so it is highly reusable; it can be developed in-house or built on an existing low-level automation driver.
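A minimal sketch of what such a UI Driver layer might look like in Python, assuming a hypothetical low-level `driver` object that can click, type into, and select from controls located by ID (that driver interface is an assumption, not a particular tool's API):

```python
import time

class Control:
    """Generic wrapper for an on-screen element, independent of the application under test."""
    def __init__(self, driver, element_id):
        self.driver = driver          # hypothetical low-level automation driver
        self.element_id = element_id

class Button(Control):
    def click(self):
        self.driver.click(self.element_id)

class TextBox(Control):
    def set_text(self, text):
        self.driver.clear(self.element_id)
        self.driver.type(self.element_id, text)

class DropDown(Control):
    def select(self, item):
        self.driver.select(self.element_id, item)

def wait_for(condition, timeout=10.0, interval=0.5):
    """Common helper: poll a condition until it holds or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False
```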

The middle layer is the proxy layer, which is built around the software under test. For each interface (UI) of the software under test, corresponding classes and objects are created for the top layer to call. This layer must be updated as the software evolves.
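Continuing the sketch, a proxy-layer class for one hypothetical screen of the application under test can be built on the element wrappers above; the screen and its element IDs are illustrative assumptions:

```python
class LoginPage:
    """Proxy object for a hypothetical login screen of the application under test."""
    def __init__(self, driver):
        self.username = TextBox(driver, "txtUserName")
        self.password = TextBox(driver, "txtPassword")
        self.login_btn = Button(driver, "btnLogin")

    def login(self, user, pwd):
        """Expose one business-level action for the test case layer to call."""
        self.username.set_text(user)
        self.password.set_text(pwd)
        self.login_btn.click()
```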

The top layer is the test case layer (Test Cases), built on the proxy layer. Once the proxy layer is in place, it exposes the interface elements the test cases need, so a test case completes the automated testing process by operating on those elements. This is the layer where test cases are implemented; if the bottom layer and proxy layer are reasonably complete and well structured, this layer is very simple to implement.

The test data and the IDs of the elements in the software are stored in independent XML files. When the test case layer or the proxy layer needs data, it reads it through a unified interface. This not only keeps the structure of the whole test project clear but, most importantly, reduces the maintenance cost of the entire test system, so that the return on the investment in automation keeps increasing.
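A sketch of such a unified data interface and a test case built on it, continuing the hypothetical `LoginPage` proxy above (the XML layout and case names are assumptions):

```python
import xml.etree.ElementTree as ET

class TestData:
    """Unified read interface for test data kept in an external XML file."""
    def __init__(self, path):
        self.root = ET.parse(path).getroot()

    def rows(self, case_name):
        """Return all data rows recorded under a given test case name."""
        return [r.attrib for r in self.root.findall(f"./{case_name}/row")]

def test_login(driver, data_file="testdata.xml"):
    """Test case layer: combines proxy-layer actions with externally stored data."""
    page = LoginPage(driver)
    for row in TestData(data_file).rows("login"):
        page.login(row["user"], row["pwd"])
        # verification of the expected outcome for this row would go here
```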

1.4 Maintenance and expansion of automated testing

An automated testing project must be maintained and extended as the software keeps growing. Maintenance means keeping old test cases running when an upgrade to a new version causes them to fail. Extension means that, as versions advance, some functions become stable enough to be suitable for automation, and new test cases must be added to cover them.

Extension and maintenance is a long-term process. Particular attention should be paid to keeping a detailed result log each time the test cases are run automatically, recording whether each case passed. For cases that fail, the reason for the failure should be recorded, which helps testers judge from the results whether the product has bugs. Note that some test cases appear to pass but in fact fail to execute, and the result log still records them as passed; if this happens without the tester noticing, the automation has failed. It is therefore best to establish a verification mechanism for the results of each automated run to ensure that the results are reliable.
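An illustrative sketch of such logging plus an independent verification step (the `verify_func` callback is a placeholder assumption, standing in for whatever check confirms the test really did what it claimed):

```python
import logging

logging.basicConfig(filename="run_results.log",
                    format="%(asctime)s %(levelname)s %(message)s",
                    level=logging.INFO)

def run_case(name, test_func, verify_func):
    """Run one test case, then independently verify its outcome before logging PASS."""
    try:
        test_func()
    except Exception as exc:
        logging.error("FAIL %s: %s", name, exc)
        return False
    if not verify_func():   # double-check the result, not just the absence of exceptions
        logging.error("FAIL %s: result verification did not pass", name)
        return False
    logging.info("PASS %s", name)
    return True
```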

2 Conclusion

Automated testing is a relatively new field and a much-debated topic, and opinions on the pros and cons of introducing it differ widely. It has nevertheless shown strong vitality amid the controversy, and its advantages, such as high testing efficiency and good reusability, are now widely recognized. The automated testing framework described in this article has been applied to a number of large software systems with good results.
