Vehicle OTA updates fall into two main categories: FOTA (Firmware Over-the-Air) and SOTA (Software Over-the-Air). Both are areas that OEMs are actively investing in and gradually rolling out, and each suits a different class of update scenarios.
FOTA performs a complete update of a controller's system functions by downloading and installing a full firmware image on the vehicle controller. Examples include upgrading the intelligent-driving system so drivers gain more and more driver-assistance features, upgrading the cockpit system to improve the accuracy of driver-fatigue detection, or upgrading the braking system to improve braking performance. After Tesla launched the Model 3, it found there was room to optimize the braking logic; after upgrading the braking system via FOTA, the braking distance was shortened from 46 m to 40 m, a significant improvement in driving safety.
FOTA is a full, systematic update of a controller's core functions (control strategies) and has a large impact on overall vehicle behavior, so the upgrade process has extremely high requirements on timing, stability, and safety. The upgrade also has strict preconditions on gear position, battery level, vehicle speed, and so on, and generally cannot be performed while the vehicle is powered on. A NIO owner once gave car owners across the country a vivid free lesson on Chang'an Street in Beijing, when starting an OTA update in traffic left the car immobilized in the middle of the road.
SOTA delivers an "incremental" update of controller functions by installing an incremental package on the vehicle controller, and is generally used for the infotainment system and the intelligent-driving system. Changing the multimedia system's user interface, refining the instrument-cluster display style, or updating the map application in the head unit are all typically done via SOTA.
SOTA involves small-scale, partial updates of functions at the controller's application layer, has little impact on vehicle performance, and imposes only light preconditions on the upgrade. Its incremental update strategy significantly reduces the size of the upgrade package, saving network traffic and storage space. It is easy to imagine that, as SOTA's scope expands and the technology matures, the vehicle will one day complete function updates and iterations automatically while you drive along, "eating hot pot and singing songs."
There is currently no clear boundary between the functional definitions of FOTA and SOTA. When colleagues ask me about the difference, I always irresponsibly fall back on the mobile-phone analogy: upgrading from iOS 14 to iOS 15 is FOTA, while upgrading WeChat from 7.9 to 9.0 is SOTA.
This article gives an overview of the current technical approaches to SOTA for vehicle functions, covering three paths: first, Android-based SOTA for head-unit and instrument-cluster APP applications; second, deploying new functions online on AUTOSAR AP controllers through the UCM service; and finally, the current research hotspot, the cloud-edge collaboration strategy.
Implementation based on Android applications
Android's SOTA technology is already quite mature: you simply click the update button next to an application in the app store, something readers have done hundreds of times on their phones. Many in-vehicle infotainment and instrument systems today are based on Android. On the Android platform, the delivery path for APP applications, themes, and skins is similar to a mobile app store: a version repository is maintained in the cloud; after the user taps Install in the vehicle's software store, the vehicle downloads the installation package (APK) from the TSP, and the head unit or instrument cluster performs the installation or uninstallation.
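To make that flow concrete, here is a minimal sketch of the vehicle-side check-and-download step, written in Python for illustration. The TSP endpoint, response fields, and file path are hypothetical placeholders rather than any real OEM API, and on an actual head unit the verified APK would then be handed to the Android package installer.

```python
import hashlib
import json
import urllib.request

TSP_BASE = "https://tsp.example.com/api/v1"   # hypothetical TSP endpoint

def check_for_update(app_id, installed_version):
    """Ask the cloud version repository whether a newer APK is available."""
    with urllib.request.urlopen(f"{TSP_BASE}/apps/{app_id}/latest") as resp:
        meta = json.load(resp)   # e.g. {"version": ..., "url": ..., "sha256": ...}
    return meta if meta["version"] != installed_version else None

def download_and_verify(meta, dest="/data/ota/update.apk"):
    """Download the APK from the TSP and verify its integrity before installing."""
    urllib.request.urlretrieve(meta["url"], dest)
    digest = hashlib.sha256(open(dest, "rb").read()).hexdigest()
    if digest != meta["sha256"]:
        raise RuntimeError("APK integrity check failed, refusing to install")
    return dest  # hand the verified package to the platform installer

if __name__ == "__main__":
    meta = check_for_update("com.oem.navi", installed_version="7.9.0")
    if meta:
        print("downloaded", download_and_verify(meta), "ready for installation")
```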
SOTA implementation based on AP AUTOSAR
Under the AUTOSAR CP architecture, all applications are statically configured: once the software is compiled it cannot be changed, and its invocation cycle is fixed. Under the AUTOSAR AP architecture, everything is a process in the OS and applications run dynamically; when they are invoked, their life cycle, their resource usage, and their termination are all managed dynamically, just as the resources an app on your phone uses after you open it, and the moment it gets closed, are determined dynamically.
Applications communicate through ARA (AUTOSAR Runtime for Adaptive Applications), which can support or be extended with SOA communication technologies such as SOME/IP, TSN, and DDS.
Among the AP platform services, UCM (Update and Configuration Management) is responsible for securely updating, installing, and removing software and for keeping a record of the software on the platform. Much like package managers such as dpkg or YUM on Linux, it provides the ability to update or modify the software running on the Adaptive Platform.
The business process by which UCM implements SOTA is shown in the figure below and involves the following key modules.
UCM Master: provides the UCM service interface to its clients. It receives, verifies, and parses software packages from the cloud or from diagnostic tools, and transfers them to UCM (or to diagnostic applications) for subsequent activation, rollback, and other processing.
UCM: a UCM service instance located in the same vehicle network as the UCM Master.
OTA Client: establishes communication between the cloud and the UCM Master and transfers the incremental-package information.
Vehicle State Manager: collects state from multiple vehicle-side ECUs, computes the corresponding safety state, and, following the safety policy in the vehicle package, notifies the UCM Master when that state changes. If the safety policy is not met, the UCM Master can take appropriate action, such as informing the user that the preconditions are currently not satisfied, and postponing, pausing, or canceling the update.
The complete SOTA process consists of the following steps (an end-to-end sketch of the sequence follows step 5):
1. Packaging and assembly of the upgrade package
UCM's unit of installation is the Software Package. A package contains one or more executable files and, besides application and configuration data, a Manifest, an arxml-type file whose metadata includes the package name, version, dependencies, and so on.
2. Upgrade package transmission
A client using the UCM service interface may be located on the same AUTOSAR AP platform or may be a remote client. One or more software packages are transferred into UCM's internal buffer, and once the transfer is complete, the contents of each package are authenticated.
3. Upgrade package installation
Installation and uninstallation are performed through UCM's ProcessSwPackage interface. It is worth mentioning that UCM supports A/B partition upgrades: the old version keeps running in partition A while the inactive partition B is updated; after processing finishes and the system restarts, the new version in partition B takes over. The A/B strategy makes it possible to deploy applications even while the vehicle is being driven, and even if the upgrade fails, other vehicle functions are unaffected.
4. Upgrade package activation
For some functions, UCM has to update several pieces of software in one upgrade, so activation must follow the dependencies declared by the software packages. If the upgrade goes through A/B partitions, a swap takes place between the partitions, and the newly inactive partition becomes a copy of the newly active one.
5. Upgrade package rollback
When a misbehaving application needs to be rolled back to a stable version, UCM decides which version to roll back to, and the rollback operation is handled with the help of the AUTOSAR AP Persistency service.
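Putting steps 1 to 5 together, the sketch below walks through the sequence an update client would drive against UCM. The operation names (TransferStart, TransferData, TransferExit, ProcessSwPackage, Activate, Rollback) follow the AUTOSAR AP UCM service interface, but the real interface is an ara::com service used from C++; the `ucm` proxy object and its Python-style calls here are purely illustrative.

```python
# Illustrative walk-through of the UCM update sequence described in steps 1-5.
# "ucm" stands for a proxy to the UCM (Master) service; in a real AUTOSAR AP
# system this is an ara::com service proxy used from C++, not a Python object.
import os

CHUNK = 64 * 1024  # transfer the software package in blocks

def sota_update(ucm, package_path):
    transfer_id = ucm.TransferStart(os.path.getsize(package_path))  # step 2: open a transfer
    with open(package_path, "rb") as pkg:
        block = 1
        while data := pkg.read(CHUNK):
            ucm.TransferData(transfer_id, data, block)               # stream the package
            block += 1
    ucm.TransferExit(transfer_id)         # transfer done; UCM authenticates the package

    ucm.ProcessSwPackage(transfer_id)     # step 3: install into the inactive (B) partition
    try:
        ucm.Activate()                    # step 4: swap A/B partitions, run the new version
    except RuntimeError:
        ucm.Rollback()                    # step 5: fall back to the last known-good version
```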
Implementation based on cloud-edge collaboration
Before the introduction, a few words about virtualization, virtual machines, and containers. Every programmer's development environment is configured a little differently, which may mean your code fails to run when dropped straight onto a colleague's machine. An enterprise's development, test, and production environments likewise differ in configuration and dependencies. How to make sure your bug runs reliably on different hardware and in different environments is a hot topic in today's software industry, and cloud native is the best practice of this idea: through technologies such as microservices, containerization, and continuous delivery, it integrates development and operations so that services are born in the cloud and grow in the cloud.
The virtual machine (VM) is the pioneer of virtualization technology: through software it simulates a complete computer system with full hardware functionality, running in a completely isolated environment. It can significantly improve how efficiently hardware is used and reduce an enterprise's dependence on physical hardware resources.
What the automotive industry has implemented so far is the hypervisor virtualization solution, which runs multiple operating systems of different types on a single multi-core heterogeneous chip. The systems share the hardware resources yet remain independent of one another while still being able to exchange information. The hypervisor not only meets the different business needs of increasingly complex scenarios, it also improves hardware utilization and greatly reduces cost, and the isolation it provides between operating systems greatly improves system reliability and security.
This solution is often used in cockpits with joined displays, serving the different systems needed by the driver and the front passenger. However, every VM has to install its own operating system to run applications; if users only need to run or migrate simple applications, a VM is not only cumbersome but also wastes a lot of resources, so enterprises urgently need a lighter-weight virtualization technology.
Container technology originated on Linux and is a lightweight, kernel-level virtualization technology. By isolating processes and resources while sharing the same operating-system kernel, it guarantees portability and consistency across different environments and hardware configurations such as development, test, and production. A typical functional architecture diagram of virtual-machine technology versus container technology is shown below.
A comparison of the key features of container technology and virtual machine technology is as follows.
Container technology only became popular among the "plaid shirt" crowd with the emergence of Docker. Docker applies the container idea to software packaging, providing a standardized, container-based shipping system for code; its product logo, a whale carrying shipping containers, is the most vivid illustration of the idea.
Docker is an open-source application container engine. Developers can use Docker to package any application together with its dependencies into a lightweight, portable, self-contained image and then publish it to machines running any popular operating system such as Linux or Windows. Docker is built on three core concepts:
(1) Image: a layered, read-only file system. Besides the programs, libraries, resources, and configuration files an application needs at runtime, it also contains configuration parameters prepared for runtime. An image holds no dynamic data, and its contents are never changed after it is built. Images are used to create Docker containers, and the same image file can spawn multiple containers running at the same time;
(2) Container: a running instance created from an image. A container can be created, started, stopped, deleted, paused, and so on, and containers are isolated from one another;
(3) Registry: the place where image files are stored. After building an image, a user can push it to a registry; when the image is needed on another host, it only has to be pulled from the registry.
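These three concepts map directly onto everyday Docker operations. Below is a minimal sketch using the Docker SDK for Python (the `docker` package) against a local daemon; the image names and registry address are only examples.

```python
import docker

client = docker.from_env()                   # connect to the local Docker daemon

# Registry -> Image: pull a read-only image from a registry
client.images.pull("alpine:3.19")

# Image -> Container: create and run an isolated container from that image
output = client.containers.run("alpine:3.19", "echo hello from a container",
                               remove=True)  # clean the container up when it exits
print(output.decode())

# Image -> Registry: after building your own image, push it back to a registry
# (repository name below is a placeholder)
# client.images.push("registry.example.com/team/app", tag="1.0")
```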
Docker uses a client/server architecture; its runtime logic is shown in the figure below. The Docker daemon acts as the server, responsible for building images and for pulling and starting Docker containers. The daemon normally runs in the background on the Docker host, and users interact with it directly through the Docker client.
Docker client: sends build, pull, and run requests to the Docker daemon, which performs the operation and returns the result. The client can reach either a local Docker daemon or a remote one. In the figure, the blue dashed line shows the build workflow, in which an image is built and stored on the local Docker host; the purple dashed line shows the pull/push workflow, in which an image is pulled from the registry to the Docker host or pushed from the host to the registry; and the red dashed line shows the run workflow, in which an image is instantiated and a container is started.
Docker host: a physical or virtual machine on which the Docker daemon and containers run.
Docker daemon: receives and processes requests sent by the Docker client, listens for Docker API requests, and manages Docker objects such as images, containers, networks, and data volumes.
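A short sketch of this client/daemon split, again using the Docker SDK for Python. The remote address is a placeholder and assumes that daemon exposes its TCP socket (ideally protected with TLS).

```python
import docker

# Talk to the Docker daemon on this machine (default Unix socket)
local = docker.from_env()
print(local.ping())                                  # the daemon answers API requests

# Talk to a Docker daemon on another host (placeholder address)
remote = docker.DockerClient(base_url="tcp://192.0.2.10:2375")
for container in remote.containers.list(all=True):  # objects managed by that daemon
    print(container.name, container.status)
```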
Docker is now widely used across online Internet services, realizing the goal of "Write Once, Run Anywhere". Common application scenarios are as follows.
(1) For development, services and applications become easier to port across platforms. This avoids bugs caused by platform differences and also avoids arguments with testing and operations; no more complaining "Why does the bug always show up in production but never in the test environment?" The boss no longer needs to worry about problems caused by environment migration and can "share the responsibility" with subordinates more gracefully.
(2) For operations, container orchestration tools (such as Kubernetes) can automatically manage application instances according to business demand, which makes dynamic scale-out and scale-in straightforward. A typical scenario: the Double Eleven shopping festival causes a sharp spike in the load on the order service. Without container technology, operations would have to "volunteer" to work overtime all night, manually adding order-service instances and watching the status of every instance in real time. Today, operations can hold a coffee in the left hand and Honor of Kings in the right, and just keep half an eye on things.
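As a hedged illustration of that scenario, the sketch below uses the official Kubernetes Python client to attach a HorizontalPodAutoscaler to a hypothetical order-service Deployment, so the platform adds and removes instances on its own when load spikes.

```python
from kubernetes import client, config

config.load_kube_config()                      # use the current kubectl context

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="order-service-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="order-service"),
        min_replicas=2,                        # normal days
        max_replicas=50,                       # Double Eleven
        target_cpu_utilization_percentage=70,  # scale out when CPU stays above 70%
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```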
Speaking of container orchestration: you need to manage the containers that run your applications and make sure there is no downtime; for example, if one container fails, another one has to be started. Wouldn't it be easier if the system handled that behavior for you? This is where Kubernetes and KubeEdge come in.
Kubernetes provides a framework for running distributed systems resiliently, taking care of scaling, failover, deployment patterns, and more. It manages containers autonomously to ensure that containers on the cloud platform run the way users expect. In Kubernetes, all containers run inside Pods. A Pod hosts one or more related containers; containers in the same Pod are deployed on the same physical machine and can share resources. A Pod can also contain zero or more volumes, which are exposed to a single container as directories or shared by all containers in the Pod. However, Kubernetes' application scenarios are limited to cloud-side online services; it does not support the device side.
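A minimal sketch with the Kubernetes Python client of what "containers in the same Pod share resources" means in practice: two containers in one Pod mounting the same volume. All names here are examples.

```python
from kubernetes import client, config

config.load_kube_config()

shared = client.V1Volume(name="shared-data",
                         empty_dir=client.V1EmptyDirVolumeSource())
mount = client.V1VolumeMount(name="shared-data", mount_path="/data")

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-pod"),
    spec=client.V1PodSpec(
        volumes=[shared],
        containers=[
            # Both containers land on the same node and see the same /data directory
            client.V1Container(name="producer", image="busybox",
                               command=["sh", "-c", "date > /data/ts; sleep 3600"],
                               volume_mounts=[mount]),
            client.V1Container(name="consumer", image="busybox",
                               command=["sh", "-c", "sleep 3600"],
                               volume_mounts=[mount]),
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```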
Built on Kubernetes, KubeEdge extends native containerized application orchestration and management to edge devices. It provides core infrastructure support for networking and applications, deploys applications to both cloud and edge, and keeps metadata synchronized between them. Its main advantages are as follows:
(1) Edge computing. With business logic running at the edge, large volumes of data can be processed and protected locally, right where they are generated. This reduces the network bandwidth needed between edge and cloud, improves response times, lowers costs, and protects customer data privacy;
(2) Simplified development. Developers can write ordinary HTTP- or MQTT-based applications, containerize them, and run them wherever it makes the most sense, at the edge or in the cloud;
(3) Kubernetes-native support. With KubeEdge, users can orchestrate applications on edge nodes, manage devices, and monitor the status of applications and devices exactly as they would in a traditional Kubernetes cluster in the cloud (see the sketch after this list);
(4) A rich set of applications. Existing complex workloads such as machine learning, image recognition, and event processing can easily be deployed to the edge.
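Because KubeEdge registers each edge device as an ordinary node, deploying a containerized function to it looks like a normal Kubernetes Deployment pinned to that node. The sketch below uses the Kubernetes Python client; the edge-node label follows KubeEdge's usual convention (node-role.kubernetes.io/edge), and the image name is a placeholder.

```python
from kubernetes import client, config

config.load_kube_config()                     # cloud-side kube-apiserver context

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="cabin-ai"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "cabin-ai"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "cabin-ai"}),
            spec=client.V1PodSpec(
                # schedule only onto KubeEdge edge nodes (e.g. the vehicle)
                node_selector={"node-role.kubernetes.io/edge": ""},
                containers=[client.V1Container(
                    name="cabin-ai",
                    image="registry.example.com/oem/cabin-ai:1.0",  # placeholder image
                )],
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```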
Having said all this, what does it have to do with SOTA?
Recently, at a cloud-native conference hosted by the CNCF, a vehicle-cloud collaboration case built by an OEM was presented. Through KubeEdge's lightweight framework, the vehicle joins the cloud as a node, and the cloud dynamically manages the services running on that node.
This architecture introduces containerization to decouple functions completely from the underlying software. The containers host functions such as the smart cockpit, remote collaboration, machine learning, and autonomous driving, all of which demand large amounts of storage and compute. Once implemented, it lays the foundation for a future third-party development platform, a rich ecosystem of vehicle functions, and a software-sales business built on top of them. More importantly, it can change the status quo in which functional iteration is delivered only through FOTA; with this framework, it may become possible to upgrade without taking the vehicle out of service, saving users time and cost.
The traditional distributed electronic architecture has run into many development bottlenecks in the wave of software-defined vehicles: for iterating new functions, the system is complex and lacks flexibility and scalability, and the tight coupling of software and hardware makes it difficult to implement complex functions that span multiple ECUs and sensors, which in turn places higher demands on FOTA.
Therefore, centralizing compute and building a general-purpose computing platform is the research direction of every OEM. Cloud-edge collaboration, as another channel for functional iteration, addresses the pain points of long FOTA upgrade times and demanding vehicle-state prerequisites. It has already drawn great attention from all sides and, once implemented, may well become the preferred technical path for SOTA.
At present, the SOTA ecosystem and operations for third-party head-unit apps are already mature, but for SOTA of vehicle-controller functions the technology is still being put into practice. Once an OEM has built the basic SOTA capability, internal or third-party application developers will only need to follow the corresponding development specifications and call the engine's interfaces to deliver updates over the air. I believe that when this lands, it will be another major step forward for software-defined vehicles.
Author | Line 18 does not reach Anyan Road
Original intention | To record the traces of having lived as a human being, to share the original work of an ordinary working person, and to focus on intelligent connected vehicles and the warmth and coldness of human life.
Statement | Some of the text and images in this article are taken from the Internet; in case of infringement, please contact the platform for modification or deletion. The article belongs to its author and represents only his personal views, not the position of the platform. If anything is inappropriate, please contact the platform to modify or delete it. This article is not for any commercial use.