How can autonomous driving be made safer? NXP experts weigh in

Publisher: EEWorld News | Last updated: 2020-07-23 | Source: EEWORLD

This article was written by Ali Osman Ors, Head of AI Strategy and Partnerships, NXP Automotive, and Brian Carlson, Director of Global Product and Solutions Marketing, NXP.


Modern connected vehicles are often described as “servers on wheels”. Even without full autonomy, Advanced Driver Assistance Systems (ADAS) such as traffic-sign detection and adaptive cruise control can generate megabytes of data per second. Global Navigation Satellite System (GNSS)-based routing, turn-by-turn guidance, and traffic alerts are now ubiquitous, and together with ever-richer infotainment, vehicle-to-vehicle (V2V), and vehicle-to-infrastructure (V2I) applications, today’s cars rely heavily on their connectivity.


As more and more technology is used to assist the driver, or to drive the car itself, it is of utmost importance that it is safe and secure. Systems must operate reliably throughout the life of the vehicle, and most must comply with functional safety standards such as IEC 61508 and ISO 26262, which address failures whether they stem from hardware faults or from hacker attacks.


As many articles have revealed, vehicle hacking can take many forms. Hackers have become adept at attacking cars through unexpected entry points, from probing network links via the blind-spot detection sensors in the mirrors to accessing the vehicle’s network through its in-car Wi-Fi router.


Top safety considerations


Figure 1 highlights some of the core safety principles that apply to automotive system design; they apply equally to other electronic systems, such as those used in industrial applications. The key is to maintain strict separation between the car’s external interfaces and its internal domains. Inside the vehicle, electronic control units (ECUs) handle independent functions such as anti-lock braking, comfort features, and the powertrain.


Figure 1: Basic safety principles in automotive design. (Source: NXP)


Another consideration is protecting each individual ECU: a vehicle may contain as many as 100 ECUs running a combined total of around 100 million lines of code, which presents a significant challenge for software testing and reliability. In-vehicle gateways are a prudent way to securely and reliably interconnect all systems across a heterogeneous vehicle network, providing physical, process, and protocol isolation between all functional domains within the vehicle.


The gateway ensures that only applications that genuinely require external communication are allowed to communicate, and this can be enforced on a case-by-case basis. This is especially important when systems are updated via over-the-air (OTA) firmware updates. It is not only the vehicle’s secure operation that is at risk from hacking; the intellectual property contained in the ECUs, such as code, passwords, and machine learning (ML) algorithms, also has significant value.
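To make the case-by-case idea concrete, the sketch below shows a default-deny allowlist filter: only routes that are explicitly permitted are forwarded across the gateway. The domain names, message IDs, and helper functions are invented for illustration and do not represent any NXP gateway API.

```python
# Minimal sketch of allowlist-based gateway filtering between vehicle domains.
# Domain names, message IDs, and the Message type are hypothetical examples.

from dataclasses import dataclass

@dataclass(frozen=True)
class Message:
    source_domain: str   # e.g. "telematics", "infotainment", "powertrain"
    target_domain: str
    msg_id: int          # e.g. a CAN identifier
    payload: bytes

# Only these (source domain, message ID) pairs may cross into internal domains,
# e.g. an OTA update request that is explicitly permitted to reach the firmware ECU.
ALLOWED_ROUTES = {
    ("telematics", 0x7E0),    # hypothetical: OTA update request
    ("infotainment", 0x3A1),  # hypothetical: navigation data to cluster display
}

def forward(message: Message) -> bool:
    """Forward a message across the gateway only if it is explicitly allowed."""
    if (message.source_domain, message.msg_id) in ALLOWED_ROUTES:
        # A real gateway would also enforce protocol translation,
        # rate limiting, and authentication before delivery.
        deliver_to_domain(message.target_domain, message)
        return True
    # Default-deny: anything not explicitly allowed is dropped and logged.
    log_blocked(message)
    return False

def deliver_to_domain(domain: str, message: Message) -> None:
    print(f"delivering msg 0x{message.msg_id:X} to {domain}")

def log_blocked(message: Message) -> None:
    print(f"blocked msg 0x{message.msg_id:X} from {message.source_domain}")

forward(Message("telematics", "body", 0x7E0, b"\x01"))       # allowed
forward(Message("infotainment", "powertrain", 0x123, b""))   # blocked
```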


Personally Identifiable Information (PII) is another cause for concern, with many countries imposing strict penalties on companies that fail to protect user data. Personal information includes not only the usernames and passwords we routinely use to access online services, but can also extend to the biometric information vehicle systems use to identify drivers or owners. As the complexity of in-vehicle systems increases, ML techniques will very likely be used to detect and prevent usage anomalies, penetration attempts, and potential adversarial attacks.


Autonomous driving system safety


Automakers have been very clear that safety is a top-level concern, not least because of legal liability. As we move beyond SAE Autonomy Level 2 (Figure 2), responsibility for operating the vehicle and assessing the driving environment in real time shifts from the driver to the car’s automated driving system.


Figure 2: The evolution of SAE safety concepts. (Source: NXP)
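As a reminder of the SAE J3016 terminology referenced in Figure 2 and throughout this article, the level names can be summarized in a plain mapping; the short descriptions below are paraphrased for illustration.

```python
# SAE J3016 driving automation levels (descriptions paraphrased).
SAE_LEVELS = {
    0: "No Driving Automation - the driver performs the entire driving task",
    1: "Driver Assistance - steering or acceleration/braking support",
    2: "Partial Driving Automation - combined steering and speed control; driver supervises",
    3: "Conditional Driving Automation - system drives, driver must take over on request",
    4: "High Driving Automation - no driver takeover needed within the operational domain",
    5: "Full Driving Automation - system drives everywhere, under all conditions",
}

for level, description in SAE_LEVELS.items():
    print(f"Level {level}: {description}")
```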


Automotive Safety Integrity Levels (ASILs) are a core part of ISO 26262, the functional safety standard as it applies to automotive systems. They classify the hazards a system may encounter by severity, exposure, and controllability. The standard’s “V” model requires that the behavior of each software component be fully specified, verified, and traceable; it also governs how enhancements are made and requires that they continue to meet the initial specification. This is difficult for ML-based systems that use inference to decide whether a detected object is a car, a person, or a road sign: their behavior is dynamic and not easy to model.
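The risk-graph idea behind ASIL classification can be illustrated with a short sketch. It relies on the commonly cited observation that, in the ISO 26262-3 risk graph, the resulting ASIL tracks the sum of the severity (S1-S3), exposure (E1-E4), and controllability (C1-C3) classes; it is an illustration of the principle, not a substitute for the standard’s tables.

```python
# Illustrative ASIL determination following the ISO 26262-3 risk graph.
# Simplified sketch based on the class-sum pattern, not a normative implementation.

def asil(severity: int, exposure: int, controllability: int) -> str:
    if not (1 <= severity <= 3 and 1 <= exposure <= 4 and 1 <= controllability <= 3):
        raise ValueError("expected S1-S3, E1-E4, C1-C3")
    total = severity + exposure + controllability
    # Sums of 6 or less map to QM (quality management, no ASIL requirement);
    # each additional point raises the integrity level by one step.
    return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(total, "QM")

# A highly severe, frequently encountered, hard-to-control hazard (S3, E4, C3)
# lands at ASIL D, the most demanding level.
print(asil(3, 4, 3))  # -> ASIL D
print(asil(2, 2, 2))  # -> QM
```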


There is a view that autonomous systems need to move beyond traditional static functional safety features and embrace the concept of behavioral safety. Systems need to be able to learn how to interact with non-automated vehicles and pedestrians, whose behavior is not necessarily predictable. Being able to predict the behavior of other road users, pedestrians, and other road hazards is essential to providing a truly autonomous driving experience.


To bridge the gap where ISO 26262 falls short of this more behavioral approach to safety, automotive safety experts collaborated to develop ISO/PAS 21448, Safety of the Intended Functionality (SOTIF). To enable safe and reliable autonomous mobility, any automotive system needs to demonstrate certain key characteristics: it must be built from automotive-grade components, deliver low failure rates, and maintain high levels of reliability and fail-safe behavior.


Being able to detect potential failures in accordance with ISO 26262 ASIL D requirements is also a priority: the system must always be available and able to prioritize safety tasks over non-safety tasks. It also needs to be fault-tolerant, so that it can continue to operate even when a failure occurs. Finally, the system must be dependable enough to predict potential failures before they happen.


At lower autonomy levels (Figure 3), most systems are required to be fail-safe: the driver must remain alert at all times, ready to take over when the system detects a fault and safely ceases operation. As we progress to Level 2 and 3 systems, the expectation is that if an error is detected, the system has enough capability to continue operating, albeit in a degraded state, while the driver is alerted.


Figure 3: Evolution of safety levels. (Source: NXP)


Level 4 and 5 systems incorporate redundancy, and the emphasis shifts to fail-operational behavior: when a fault occurs, the driver is alerted to the condition and the vehicle can be quickly brought back to a safe state. As in Level 0 to Level 3 systems, this process may involve handing driving authority back to the driver.
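The contrast between fail-safe and fail-operational handling can be sketched as a simple decision policy. The states and thresholds below are invented for illustration and are not a production architecture.

```python
# Illustrative contrast between fail-safe and fail-operational fault handling.

from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()      # continue operating (e.g. on a redundant channel)
    DEGRADED = auto()    # reduced functionality, driver alerted to take over
    SAFE_STOP = auto()   # bring the vehicle to a controlled stop

def handle_fault(autonomy_level: int, redundant_channel_ok: bool) -> Mode:
    """Decide the response to a detected fault, by autonomy level."""
    if autonomy_level <= 1:
        # Fail-safe: the function stops and the attentive driver resumes control.
        return Mode.SAFE_STOP
    if autonomy_level in (2, 3):
        # Continue in a degraded state while the driver is alerted to take over.
        return Mode.DEGRADED
    # Level 4/5: fail-operational - switch to the redundant channel if it is
    # healthy, otherwise autonomously bring the vehicle to a safe stop.
    return Mode.NORMAL if redundant_channel_ok else Mode.SAFE_STOP

print(handle_fault(2, redundant_channel_ok=False))  # Mode.DEGRADED
print(handle_fault(4, redundant_channel_ok=True))   # Mode.NORMAL
```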


When it comes to handing control of a vehicle back to the driver, a great deal of research has examined how long it takes a human to regain control. Eriksson and Stanton found that such a handover can take anywhere from 2 to 26 seconds.


This means that during a handover at highway speed, the vehicle could travel half the length of a football field; in the worst case it could be closer to six football fields, and in both cases the likelihood of an accident is high. It is for this reason that NXP firmly believes that, for Level 2 and above, the requirement should be that if the vehicle fails during operation it can at least be brought to a safe stop (Figure 4).


Figure 4: NXP's view on safety level specifications. (Source: NXP)
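The football-field comparison above follows directly from distance = speed × takeover time. The sketch below reproduces the arithmetic at an assumed highway speed of about 90 km/h; the speed and the 100 m field length are assumptions for illustration, not figures from the article.

```python
# Distance travelled during a driver takeover: distance = speed * time.
# The 90 km/h highway speed and 100 m "football field" are assumed values.

HIGHWAY_SPEED_KMH = 90.0
FOOTBALL_FIELD_M = 100.0

def handover_distance_m(takeover_time_s: float, speed_kmh: float = HIGHWAY_SPEED_KMH) -> float:
    return speed_kmh / 3.6 * takeover_time_s

for t in (2.0, 26.0):  # best and worst case from Eriksson and Stanton
    d = handover_distance_m(t)
    print(f"{t:>4.0f} s takeover -> {d:6.1f} m (~{d / FOOTBALL_FIELD_M:.1f} football fields)")
```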


The notion that a safe autonomous car simply follows the rules is a tricky one. As drivers, we know driving norms that sometimes run counter to the system’s rules: we have all crossed into the oncoming lane to pass a stopped or broken-down vehicle. These are learned deviations from the rules that prevent further accidents, and from a system perspective, ISO/PAS 21448 SOTIF is highly relevant to exactly these kinds of driving scenarios.


Safety and security are fundamental to any autonomous system. NXP believes that ISO 26262 and ISO/PAS 21448 SOTIF will together advance the design, testing and deployment of safe autonomous systems.

