Understanding the Autonomous Driving Test System in One Article

Publisher: SereneMeadow | Last updated: 2022-01-27 | Source: 九章智驾

Autonomous driving testing is a very complex system. In this article, we will sort it out together, from the small pieces up to the whole.


Before we sort it out, let us first raise a question: What scale does autonomous driving testing need to reach?


According to commonly cited international figures, the probability that a human driver causes a fatality in one hour of driving is about 1/10^6, and road traffic accidents kill about 1.25 million people worldwide each year. If self-driving cars are to succeed, their fatality rate must be far below this baseline. Surveys suggest that the hourly fatality rate society is currently willing to accept from an autonomous vehicle is no higher than 1/10^9. To demonstrate a rate of 1/10^9 by road testing alone, however, on the order of 10^9 hours would have to be driven after every software update to confirm the function's reliability. Clearly, validating purely through real-vehicle testing is not viable.
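As a rough sanity check on these orders of magnitude, here is a minimal sketch of the arithmetic; the 1/10^6 and 1/10^9 figures are the ones quoted above, while the fleet size and utilization are invented for illustration:

```python
# Order-of-magnitude check of the road-testing argument above.
human_fatality_rate_per_hour = 1e-6    # quoted figure for human drivers
target_fatality_rate_per_hour = 1e-9   # socially acceptable target quoted above
print(f"target is {human_fatality_rate_per_hour / target_fatality_rate_per_hour:.0f}x "
      f"safer than the human baseline")

# Observing a failure rate of ~1e-9/hour directly requires on the
# order of 1/rate hours of driving without a fatal event.
required_hours = 1 / target_fatality_rate_per_hour   # 1e9 hours

fleet_size = 1_000                     # hypothetical test fleet
hours_per_car_per_day = 8              # hypothetical utilization
days = required_hours / (fleet_size * hours_per_car_per_day)
print(f"{required_hours:.0e} hours is ~{days / 365:.0f} years for this fleet")
# -> roughly 342 years, and that is per software update: real-vehicle
#    testing alone cannot close the gap.
```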


A practical test system therefore takes a layered approach, combining multiple test methods with different costs and coverage angles, so that something close to the effect of exhaustive real-vehicle testing can be approximated within controllable time and cost. Different methods have different costs, and different reasonable iteration counts. In a well-designed test system, module-level logic testing should catch more than 60% of potential problems, simulation-based function and performance testing should resolve the next 30%, leaving at most 10% for real-vehicle robustness testing. Within each method, find as many potential problems as possible and limit how many leak into the subsequent, more expensive methods. With a reasonable combination, very high coverage can be achieved while keeping cost within a controllable range. For example, with a complete simulation test system, the development of planning requires almost no on-vehicle verification, which saves a large amount of peripheral support resources.
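To make the 60/30/10 allocation concrete, here is a minimal sketch; the defect count and per-problem discovery costs are invented, with only the relative ordering (real-vehicle testing being by far the most expensive) taken from the text:

```python
# Hypothetical: 1,000 latent problems, and the cost of finding one
# problem at each stage (units are arbitrary; only the ratios matter).
total_problems = 1000
stages = [
    ("module logic testing",     0.60,   1),   # catches >= 60%, cheapest
    ("simulation function/perf", 0.30,  20),   # resolves the next 30%
    ("real-vehicle robustness",  0.10, 500),   # at most the final 10%
]

layered_cost = 0
for name, share, unit_cost in stages:
    found = int(total_problems * share)
    layered_cost += found * unit_cost
    print(f"{name}: finds {found} problems")

print(f"layered cost: {layered_cost}")                 # 56,600 units
print(f"all-on-vehicle cost: {total_problems * 500}")  # 500,000 units
```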


[Figure: Comparison of different testing methods]


Combining multiple levels of testing is not free, however. Building a dedicated test system often means a long construction period and high upfront cost. Whether it is CAE, DV, and PV testing on the component side, or static, integration, and simulation testing on the software side, some test stages are often bypassed in order to keep up with the schedule.


In fact, this is an economic calculation. When we omit a front-end test stage, if the resources that the expensive back-end stages must spend on the problems leaking through from that stage exceed what the front-end stage would have cost to set up, the test system as a whole loses money; and vice versa.
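The break-even condition can be written down directly; this is a schematic model with invented names and numbers, not something from the article:

```python
def frontend_stage_pays_off(setup_cost: float,
                            problems_caught: int,
                            backend_cost_per_problem: float) -> bool:
    """A front-end stage is worth keeping when the downstream cost of the
    problems it would have caught exceeds the cost of running the stage."""
    leaked_repair_cost = problems_caught * backend_cost_per_problem
    return leaked_repair_cost > setup_cost

# e.g. a static-analysis stage costing 50 units that catches 30 problems,
# each of which would otherwise cost 5 units to chase down in HIL testing
print(frontend_stage_pays_off(50, 30, 5))   # True: 150 > 50, keep the stage
```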


Choosing a reasonable set of test levels is therefore also a balancing act.


But generally speaking, in an R&D organization with strong continuity and maturity, a more layered set of mutually orthogonal test methods, combined with efficient flow between them, tends to achieve higher overall efficiency.


Truly effective testing uses specific tools and specific test cases to examine specific dimensions of the object under test. Any test is a good test as long as it targets a given class of potential problems at lower cost than other means and with wider coverage than other means. Which category it formally belongs to does not matter; those categories are labels drawn after the fact. In test design, pragmatism is what counts.


[Figure: Testing process in engineering practice]


In addition, the testing pipeline is often also part of the training pipeline. Historically, the job of a test system was mainly to eliminate potential product risks caused by human error.


The best-known example is Test-Driven Development (TDD): before implementing a feature, you first write its test code, and then write only the functional code needed to make that test pass, using testing to drive development.
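A minimal illustration of the TDD rhythm in Python; the `safe_following_distance` function and its two-second-rule spec are invented for this example. The test is written first, then just enough functional code to make it pass:

```python
# Step 1: the test is written FIRST and pins down the desired behaviour.
def test_safe_following_distance():
    # two-second rule: at 20 m/s we expect at least 40 m of headway
    assert safe_following_distance(speed_mps=20.0) == 40.0
    # never demand a negative gap, even for nonsense input
    assert safe_following_distance(speed_mps=-5.0) == 0.0

# Step 2: only now is the functional code written, and only enough of it
# to make the test above pass.
def safe_following_distance(speed_mps: float, time_gap_s: float = 2.0) -> float:
    return max(0.0, speed_mps * time_gap_s)

if __name__ == "__main__":
    test_safe_following_distance()
    print("tests pass")
```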


Now that autonomous driving is moving toward self-supervised processes, we see more interaction between machines, including test feedback and development adjustment between machines; this is the deep learning we are all familiar with. For humans, testing ensures that the product stays consistent with the goal; for machines, training serves much the same purpose.


The above are some basic ideas about testing. Next, let us take a closer look at the typical testing processes of intelligent driving. As shown in the figure below, I think we can sort them out systematically along three axes: different cooperation models, different specialist domains, and different technical sections.


[Figure: Common testing methods for autonomous driving]


From the perspective of cooperation models, testing can be divided into black-box testing, white-box testing, and gray-box testing.


White-box testing checks whether every path of the internal structure works as designed, and is generally used for the supplier's internal management. Black-box testing ignores the internal structure and only checks whether the product's functions meet the technical requirements of the contract, and is generally used by the receiving party. Gray-box testing sits between the two: on top of testing external functions, it also confirms the key internal links, and is generally used for the supplier's release testing or the receiving party's acceptance testing. The exact depth depends on the specific cooperation.


From the perspective of specialist domains, each domain has its own characteristic problems and corresponding testing dimensions.


Starting from the software code, there is static testing, dynamic testing, and so on. Static testing analyzes whether the program's statement structure, coding conventions, and the like contain errors or poor practice; common tools include QAC and Coverity. It accounts for a small share of the whole test system and is generally the first step of software testing. Closely related is code review, in which relevant experts are organized to evaluate the static design of the code. Dynamic testing, by contrast, runs the program, compares the results with expectations, and analyzes operating efficiency and robustness. Most software testing topics in autonomous driving today belong to dynamic testing, such as performance testing and the various in-the-loop tests.
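A small illustration of this division of labor, using Python stand-ins (QAC and Coverity target C/C++, but the idea carries over): the first defect is visible to a static checker such as mypy without running anything, while the second is statically well-formed and only surfaces when a dynamic test compares output against expectation:

```python
from typing import Optional

def lookup_speed_limit(sign: Optional[dict]) -> int:
    # A static checker flags this without execution: `sign` may be None,
    # so indexing it can raise TypeError at runtime.
    return sign["limit"]

def brake_distance(v_mps: float) -> float:
    # Statically flawless; only running it against an expected value reveals
    # the wrong formula (should be v**2 / (2 * a), a ~ 7 m/s^2 on dry asphalt).
    return v_mps / 2

# dynamic test: execute and compare with the expected result
expected = 20.0 ** 2 / (2 * 7.0)
try:
    assert abs(brake_distance(20.0) - expected) < 1e-6
except AssertionError:
    print("dynamic test caught the wrong braking formula")
```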


Dividing by technical sections is the most complex and the most important of all these classification schemes.


First, let me explain the point of setting sections. When we face a complex system problem with many interacting factors, sections let us isolate the influencing variables and reduce the complexity to a testable level. At the same time, they turn what would otherwise be serial troubleshooting into parallel tasks, shortening the project schedule.


As shown in the figure below, at the bottom are unit testing, module testing, and module integration testing. On the R&D platform (x86), the inputs and outputs of software functions and of single or multiple modules serve as the sections. The core goal is to verify the correctness of the code logic: through tools such as VectorCAST and GTest, a large number of erroneous inputs and a small number of correct inputs are injected into the object under test to confirm that its responses meet expectations. This process is generally open-loop.
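VectorCAST and GTest are C/C++ tools; the same injection pattern in a Python sketch with pytest looks like this, where `clamp_steering_angle` is a hypothetical module under test fed mostly erroneous inputs and a few correct ones, open-loop:

```python
import math
import pytest

MAX_ANGLE = 0.61  # rad, hypothetical steering actuator limit

def clamp_steering_angle(angle: float) -> float:
    """Module under test: saturate a steering command to actuator limits."""
    if math.isnan(angle):
        raise ValueError("NaN steering command")
    return max(-MAX_ANGLE, min(MAX_ANGLE, angle))

# a small number of correct inputs ...
@pytest.mark.parametrize("angle", [0.0, 0.3, -0.3])
def test_valid_inputs_pass_through(angle):
    assert clamp_steering_angle(angle) == angle

# ... and a large number of erroneous ones
@pytest.mark.parametrize("angle", [1.0, -1.0, 1e9, -1e9, math.inf, -math.inf])
def test_out_of_range_is_saturated(angle):
    assert abs(clamp_steering_angle(angle)) <= MAX_ANGLE

def test_nan_is_rejected():
    with pytest.raises(ValueError):
        clamp_steering_angle(math.nan)
```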


Module-level testing is also commonly called Model-in-the-Loop (MIL) testing. Besides local correctness, it also covers model performance indicators, such as the recognition accuracy of the perception module.
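For the model-performance side of MIL, an indicator like recognition accuracy is computed by replaying labeled samples through the model; a minimal sketch with a stubbed detector, everything here invented for illustration:

```python
def evaluate_recognition(model, labeled_frames):
    """Replay labeled frames through a perception model (open loop)
    and report simple recognition precision and recall."""
    tp = fp = fn = 0
    for frame, truth_labels in labeled_frames:
        predicted = set(model(frame))
        truth = set(truth_labels)
        tp += len(predicted & truth)
        fp += len(predicted - truth)
        fn += len(truth - predicted)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# stubbed "detector" and two labeled frames, purely illustrative
stub_model = lambda frame: ["car"] if "car" in frame else []
frames = [("car ahead", ["car"]), ("empty road", ["pedestrian"])]
print(evaluate_recognition(stub_model, frames))   # (1.0, 0.5)
```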


[Figure: Testing methods at the software logic level]


Software that runs stably on x86 may hit a series of problems in an embedded environment, such as stack overflow, scheduling disorder, unstable timestamps, incomplete system-call support, memory-read exceptions, and blocked operations. To expose these differences, as shown in the figure below, a target-hardware dimension can be introduced above the software logic level: Processor-in-the-Loop (PIL) testing, which runs part of the code on the target processor to verify the functional correctness of the code and confirm whether its performance meets requirements, for example the software's worst-case execution time and the reliability of system calls. Broadly speaking, software-in-the-loop testing evaluates correctness, while hardware-in-the-loop testing evaluates stability.
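On a PIL bench the measurement runs on the target processor, but the shape of such a timing check is the same as this host-side sketch; the 10 ms budget and the stand-in workload are invented:

```python
import time

BUDGET_S = 0.010   # hypothetical 10 ms deadline for one planner step

def planner_step():
    # stand-in workload for the code under test
    return sum(i * i for i in range(10_000))

# Measure the worst observed execution time over many runs; on a real PIL
# rig this would be measured on the target processor, not the x86 host.
worst = 0.0
for _ in range(1_000):
    t0 = time.perf_counter()
    planner_step()
    worst = max(worst, time.perf_counter() - t0)

print(f"worst observed: {worst * 1e3:.2f} ms, budget: {BUDGET_S * 1e3:.0f} ms")
assert worst < BUDGET_S, "timing requirement violated"
```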


[Figure: PIL testing methods]


As shown in the figure below, all of the tests above are generally open-loop and do not verify interaction with the environment. When interaction with a virtual or real environment is added on top of the software and hardware dimensions, we arrive at Software-in-the-Loop (SIL) and Hardware-in-the-Loop (HIL) testing.
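What 'closing the loop' adds is that the software's output drives an environment model whose next state becomes the software's next input. A minimal sketch with a one-dimensional vehicle and a toy speed controller; all constants are invented:

```python
DT = 0.05  # s, simulation step

def controller(speed: float, target: float = 15.0) -> float:
    """Software under test: a crude proportional speed controller."""
    return max(-3.0, min(2.0, 0.5 * (target - speed)))   # accel cmd, m/s^2

def plant(speed: float, accel: float) -> float:
    """Environment/vehicle model: integrate the commanded acceleration."""
    return speed + accel * DT

speed = 0.0
for _ in range(600):               # 30 s of closed-loop simulation
    accel = controller(speed)      # software output changes the environment,
    speed = plant(speed, accel)    # and the new state feeds back as input
print(f"final speed: {speed:.2f} m/s (target 15)")
```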


Once environmental factors are introduced, a scenario library is also introduced as the source of test cases. Besides verifying basic logic, the test process now also evaluates some of the operational service indicators of intelligent driving.
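A scenario-library entry is essentially a parameterized test case. A minimal sketch of what one logical scenario might look like, with all fields and values invented:

```python
from dataclasses import dataclass
from typing import Iterator

@dataclass
class CutInScenario:
    """One record from a hypothetical scenario library."""
    ego_speed_mps: float      # ego vehicle's initial speed
    cut_in_gap_m: float       # gap at which the other car cuts in
    cut_in_speed_mps: float   # intruding vehicle's speed
    road_friction: float      # 1.0 = dry asphalt

    def expand(self) -> Iterator["CutInScenario"]:
        """Sweep one parameter to generate concrete test cases."""
        for gap in (10.0, 20.0, 30.0):
            yield CutInScenario(self.ego_speed_mps, gap,
                                self.cut_in_speed_mps, self.road_friction)

base = CutInScenario(ego_speed_mps=20.0, cut_in_gap_m=20.0,
                     cut_in_speed_mps=15.0, road_friction=1.0)
print(f"{len(list(base.expand()))} concrete cases from one logical scenario")
```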


SIL testing does not involve the target hardware and can be deployed in large volume on servers at low cost. Its core role is to verify the correctness of the closed-loop operation of intelligent driving functions. It can be divided into partial closed-loop testing using a semantic-level simulation system, and full-function closed-loop testing of the software using a rendering-level simulation system.


SIL is one of the most promising testing methods today, so it is worth a brief expansion. Unit and module testing have a high automation rate but cannot directly uncover function-level problems of the intelligent driving system; hardware-in-the-loop and real-vehicle testing find problems more intuitively but are costly. SIL strikes a good balance between these methods and is very cost-effective.

Viewed from the inside, the core requirement of a SIL system is repeatability. If a test cannot reproduce its past experimental results, subsequent evaluation suffers greatly. Where full repeatability cannot be maintained, for example because of multithreading, the variance and stability of results over repeated experiments must at least be confirmed, as sketched below. Across the whole test system, the closer to the inside (such as unit testing), the easier repeatability is to control; the closer to the outside (such as real-vehicle testing), the harder it becomes.

Viewed from the outside, the core requirements are the automation rate and the capacity for large-scale parallel deployment. SIL is the largest test method in the entire test system, so reducing manual effort and improving concurrent deployment capacity directly cut testing cost and raise testing efficiency.

Finally, within the closed-loop system of intelligent driving, SIL has begun to serve not only testing but also the iterative training of planning. The indicators and use cases used in simulation for safety assessments, functional assessments, regulatory-requirement assessments, comfort assessments, and so on are in effect a kind of "loss function" for that training process.
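Repeatability in practice means pinning down every source of nondeterminism that can be pinned (seeds, stepping order) and quantifying the spread of the rest over repeated runs. A minimal sketch; the 'episode' and its metric are stand-ins:

```python
import random
import statistics

def run_sil_episode(seed=None) -> float:
    """Stand-in for one SIL run; seed=None mimics uncontrolled
    nondeterminism such as thread scheduling."""
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, 1.0) for _ in range(1000))   # fake metric

# deterministic mode: the same seed must give bit-identical results
assert run_sil_episode(seed=42) == run_sil_episode(seed=42)

# nondeterministic mode: characterize variance over repeated runs instead
scores = [run_sil_episode() for _ in range(20)]
print(f"mean={statistics.mean(scores):.2f}  stdev={statistics.stdev(scores):.2f}")
```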
