【Lane Change Racing】Extended - Autonomous Driving

Preface

As everyone knows, visual recognition means AI processing of the images captured by the camera, but external light sources and ultraviolet rays affect the camera, so the captured data is sometimes not accurate enough. Adding millimeter-wave radar can make up for this shortcoming of visual sensing. At present, many car companies use the two kinds of sensors together, with software performing real-time analysis to improve the reliability of intelligent assisted driving.
At present, L2 assisted driving is available on Chinese domestic cars starting at around 100,000 yuan, such as the Haval Big Dog, Tank, Changan, Baojun, Geely and others.
  1. Tesla Smart Driving
    Tesla's automatic assisted driving is called Autopilot in English, or AP for short. It is currently divided into three levels: basic automatic assisted driving (BAP), enhanced automatic assisted driving (EAP), and fully automatic driving (FSD).
    The basic automatic assisted driving is standard on all models; it is at the L2 level and provides functions such as adaptive cruise control and lane keeping.
    The enhanced assisted driving adds automatic assisted navigation driving, automatic assisted lane change, automatic parking, smart summoning and other functions.
    The fully automatic driving function (FSD) includes all the functions of the basic and enhanced automatic assisted driving; in the future, functions such as recognizing traffic lights and responding to stop signs may be launched.
  2. Huawei Intelligent Driving
    Huawei's autonomous driving is mainly built on its high-level full-stack solution, the Autonomous Driving Solution (ADS). Designed for China's roads and traffic environment, it is a full-stack autonomous driving system aimed at the user's driving experience. It adopts an end-to-end design: built on an L4 autonomous driving architecture, it provides a full-stack solution covering L4 down to L2+ autonomous driving, giving private cars a continuous experience of daily commuting in all weather conditions and all scenarios.
    In Huawei's autonomous driving theoretical system, algorithms and perception systems are the two cores. Huawei has a deep technical accumulation in algorithms. The algorithm part of its autonomous driving solution can realize the generation and update of high-precision maps, as well as autonomous navigation and decision-making of vehicles. At the same time, Huawei also uses its perception system, including sensors such as cameras, lidar, millimeter-wave radar and ultrasonic radar, to achieve accurate perception and recognition of the surrounding environment.
    Huawei's autonomous driving theory also emphasizes self-learning and adaptive capabilities. For example, its autonomous driving map data system consists of two parts, Roadcode HD and Roadcode RT. Roadcode RT is a self-learning map for smart cars, which requires continuous self-learning data iteration to optimize the performance and experience of autonomous driving.
  3. BYD Intelligent Driving
    BYD's research and development theory in the field of intelligent driving is mainly based on its strong technical strength and continuous innovative spirit. BYD has long adhered to the vertical integration strategy, self-developed core technologies, and manufactured all key parts by itself. This has enabled it to break the supply bottleneck of industries such as chips in the process of electrification. At the same time, it can also break the barriers between various component systems, so that all perceptions are aggregated into one "brain" for thinking and decision-making, and the same "brain" quickly adjusts the status of various components of the vehicle's "body", greatly improving driving safety and comfort.
    BYD's intelligent driving system adopts a "distributed layout" of the control algorithm to create the "God's Eye" high-level intelligent driving assistance system. When the sensors on the vehicle detect a hazard, the distributed domain controllers can intervene early and take emergency measures to protect the occupants, even before the central computing platform has finished integrating computation and perception. This design significantly improves the safety performance of the vehicle.
  4. NXP Intelligent Driving
    NXP has significant technical strength and deep industry experience in the field of intelligent driving. Its intelligent driving technology mainly relies on advanced sensors, algorithms and computing platforms to achieve high-precision environmental perception, decision-making and execution.
    In terms of sensors, NXP's imaging radar technology has high-resolution target and feature detection capabilities and can accurately image the environment around the car, allowing automakers to provide better ADAS (Advanced Driver Assistance System) and autonomous driving functions. In addition, its latest 4D imaging radar solution has technical advantages far beyond traditional radars and can identify and classify other vehicles, road users or objects in various complex scenarios, bringing higher road safety and driving comfort to car users.
    In terms of algorithms, NXP has strong R&D capabilities and technical accumulation, and its algorithms can achieve the generation and updating of high-precision maps, as well as autonomous navigation and decision-making of vehicles. At the same time, its self-learning and adaptive capabilities enable the autonomous driving system to be continuously optimized and improved to adapt to various complex scenarios and challenges.
    In terms of computing platforms, NXP adopts advanced distributed domain control design, which enables information between various component systems to be shared and coordinated in real time, thereby achieving more efficient and safe intelligent driving. This design not only improves driving safety and comfort, but also enables the vehicle to respond quickly in various emergency situations to protect the safety of passengers.
  5. NVIDIA Intelligent Driving
    NVIDIA has significant technical strength and deep industry experience in the field of autonomous driving. Its autonomous driving technology mainly relies on advanced hardware platforms and algorithms to achieve high-precision environmental perception, decision-making and execution.
    In terms of hardware, NVIDIA's DRIVE embedded supercomputing platform can process data from cameras, radar and lidar sensors to perceive the surrounding environment, determine the vehicle's position on the map, and plan and execute a safe driving route. The platform is compact and energy-efficient, and supports multiple functions such as autonomous driving, cockpit features and driver monitoring. In addition, the NVIDIA DRIVE Hyperion platform is a reference architecture for mass-produced autonomous vehicles: by integrating DRIVE Orin-based AI computing with a complete sensor suite, it accelerates the development, testing and validation of autonomous vehicles.
    In terms of algorithms, NVIDIA has strong R&D capabilities and deep technical accumulation. Its algorithms can achieve accurate perception and recognition of the surrounding environment, as well as autonomous navigation and decision-making of vehicles. At the same time, NVIDIA also focuses on improving self-learning and adaptive capabilities, and through continuous learning and optimization of algorithms, enables autonomous driving systems to adapt to various complex scenarios and challenges.
    In addition, NVIDIA is also actively cooperating with automakers, technology providers, and scientific research institutions to jointly promote the development and application of autonomous driving technology. Through cooperation, NVIDIA can keep abreast of market demand and technological trends, so as to continuously adjust and optimize its autonomous driving solutions.
  6. AMD Intelligent Driving
    AMD has also demonstrated strong technical strength and deep industry accumulation in the field of autonomous driving. AMD's autonomous driving technology is mainly based on its high-performance processors and graphics processing units (GPUs), as well as advanced algorithms and software platforms.
    AMD's processors have excellent computing performance and energy efficiency, which can meet the needs of autonomous driving systems for high-performance computing. Its GPU can provide powerful graphics processing capabilities, support high-definition, real-time environmental perception and image recognition, and provide accurate environmental information for autonomous driving systems.
    In terms of algorithms, AMD has a strong R&D team and technical accumulation, and its algorithms can achieve the generation and update of high-precision maps, as well as autonomous navigation and decision-making of vehicles. At the same time, AMD also focuses on the self-learning and adaptive capabilities of algorithms. By continuously learning and optimizing algorithms, autonomous driving systems can better adapt to various complex scenarios and challenges.
    In addition, AMD also provides a unified software platform, such as AMD Vitis, for the development of accelerated applications. This helps developers develop autonomous driving applications more efficiently and improve the overall performance of the system.
  7. Baidu Intelligent Driving
    Baidu Intelligent Driving is an important layout of Baidu in the field of autonomous driving. Its core lies in the use of advanced technologies and algorithms to achieve autonomous navigation, decision-making and execution of vehicles, providing users with a safer, more efficient and comfortable travel experience.
    Baidu Intelligent Driving has invested heavily in hardware and sensor research and development, using a variety of sensors such as lidar, high-definition cameras and millimeter-wave radar to achieve accurate perception and recognition of the surrounding environment. At the same time, Baidu has also independently developed a high-performance computing platform and algorithms that can process and analyze sensor data in real time to make accurate decisions and control the vehicle.
    In terms of algorithms, Baidu Intelligent Driving focuses on safety and reliability. Its algorithm not only takes into account factors such as road conditions and traffic signals, but also combines the dynamic performance and driving status of the vehicle to achieve more accurate and stable autonomous driving. In addition, Baidu also ensures the reliability and robustness of the algorithm through a large number of tests and verifications.
  8. Intelligent driving algorithm
Car autonomous driving algorithms are the core of autonomous driving technology. They enable cars to perceive the surrounding environment, make decisions and perform corresponding operations. These algorithms can be divided into several main parts: perception, localization, planning and control.
1. Perception algorithm: Detect and identify the surrounding environment, including roads, vehicles, pedestrians, obstacles, etc., by using various sensors (such as radar, lidar, cameras, etc.). For example, the autonomous driving camera perception algorithm uses machine learning and deep learning technology to achieve target detection; while the lidar perception algorithm can detect the ground and obstacles more accurately. In addition, the autonomous driving multi-sensor fusion algorithm fuses data from different sensors to improve the accuracy and robustness of perception.
2. Positioning algorithm: Uses sensor data, maps, GPS and other information to determine the vehicle's position and pose. The positioning algorithm is the key to accurate navigation for self-driving cars.
3. Planning algorithms: responsible for generating driving paths and decisions. They consider the vehicle's target location, the surrounding environment and traffic rules to generate a safe and efficient driving path. This includes path planning algorithms (such as the A* algorithm and Dijkstra's algorithm) and decision algorithms, which together ensure that the vehicle can make reasonable decisions in complex traffic environments.
4. Control algorithm: Converts the planned path into specific vehicle operations, such as acceleration, braking and steering. The goal of the control algorithm is to enable the vehicle to drive smoothly and accurately along the planned path. A minimal sketch of how these four modules could be chained together is shown below.
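The following Python sketch is purely illustrative: the Obstacle and WorldModel containers, the 30 m lane-change trigger and the actuator values are invented assumptions, not any vendor's real interface; it only shows how data flows through the four stages.

from dataclasses import dataclass, field

# Hypothetical data containers; real stacks carry far richer state.
@dataclass
class Obstacle:
    x: float          # longitudinal position relative to the ego vehicle (m)
    y: float          # lateral position (m)
    speed: float      # speed along the lane (m/s)

@dataclass
class WorldModel:
    obstacles: list = field(default_factory=list)
    ego_x: float = 0.0
    ego_y: float = 0.0

def perceive(camera_frame, radar_targets) -> list:
    """Perception: turn raw sensor data into a list of obstacles.
    Here we simply trust the (already clustered) radar targets."""
    return [Obstacle(x=t["range"], y=t["lateral"], speed=t["speed"])
            for t in radar_targets]

def localize(gps_fix, wheel_odometry) -> tuple:
    """Localization: estimate the ego pose. A real system fuses GPS, IMU,
    odometry and map matching; here we just pass the GPS fix through."""
    return gps_fix["x"], gps_fix["y"]

def plan(world: WorldModel) -> str:
    """Planning: pick a high-level maneuver from the world model."""
    for obs in world.obstacles:
        gap = obs.x - world.ego_x
        if 0 < gap < 30 and abs(obs.y) < 1.5:   # slow car ahead in our lane
            return "change_lane"
    return "keep_lane"

def control(maneuver: str) -> dict:
    """Control: translate the maneuver into actuator commands."""
    if maneuver == "change_lane":
        return {"steer": 0.05, "throttle": 0.2, "brake": 0.0}
    return {"steer": 0.0, "throttle": 0.3, "brake": 0.0}

# One tick of the loop with made-up sensor readings.
radar = [{"range": 20.0, "lateral": 0.3, "speed": 8.0}]
world = WorldModel(obstacles=perceive(None, radar))
world.ego_x, world.ego_y = localize({"x": 0.0, "y": 0.0}, None)
print(control(plan(world)))   # -> {'steer': 0.05, 'throttle': 0.2, 'brake': 0.0}

In a real stack each of these functions is a large subsystem running on dedicated hardware; the point of the sketch is only the hand-off between the four stages.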
Autonomous driving involves multiple key algorithms that work together to enable autonomous navigation, decision-making, and execution of the vehicle. Here are some of the main autonomous driving car algorithms:
1. Perception algorithm:
Object detection and classification algorithms: such as YOLO, SSD, Faster R-CNN, etc., are used to identify objects and obstacles around the vehicle, such as vehicles, pedestrians, road signs, etc.
Lane detection algorithms: such as the Canny edge detector, the Hough transform and the Sobel operator, are used to identify lane boundaries (a minimal OpenCV sketch follows below).
SLAM algorithms: such as ORB-SLAM, LSD-SLAM, DVO-SLAM, etc., are used for vehicle positioning and mapping to help the vehicle determine its position in the environment.
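As a small illustration of the classical lane-detection pipeline mentioned above, the following sketch chains Canny edge detection and the probabilistic Hough transform using OpenCV (cv2) and NumPy. The thresholds, the lower-half region of interest and the synthetic test frame are arbitrary assumptions for demonstration, not tuned values.

import cv2
import numpy as np

def detect_lane_lines(bgr_frame):
    """Toy lane detection: grayscale -> Canny edges -> probabilistic Hough."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)          # thresholds chosen by hand

    # Keep only the lower half of the image, where lane markings usually are.
    mask = np.zeros_like(edges)
    h, w = edges.shape
    mask[h // 2:, :] = 255
    roi = cv2.bitwise_and(edges, mask)

    # Each returned segment is [x1, y1, x2, y2] in pixel coordinates.
    lines = cv2.HoughLinesP(roi, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [l[0] for l in lines]

# Example usage on a synthetic frame containing one white "lane marking".
frame = np.zeros((240, 320, 3), dtype=np.uint8)
cv2.line(frame, (60, 230), (160, 130), (255, 255, 255), 5)
for x1, y1, x2, y2 in detect_lane_lines(frame):
    print("segment:", x1, y1, x2, y2)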
2. Decision Planning Algorithm:
Path planning algorithms: such as the A* algorithm and Dijkstra's algorithm, which are used to formulate the vehicle's driving route, taking into account factors such as road structure, traffic rules and the destination (a toy A* example follows below).
Decision-making algorithm: Based on perception information and map data, the decision-making algorithm determines the action the vehicle should take, such as accelerating, decelerating, turning, or changing lanes.
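The sketch below shows A* search on a tiny occupancy grid, the textbook form of the path planning step; the grid, the 4-connected motion model and the Manhattan heuristic are simplifying assumptions (real planners work on lane graphs or sampled trajectories).

import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid.
    grid[r][c] == 1 means blocked; returns the path as a list of cells."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):                            # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]      # entries are (f = g + h, g, cell)
    came_from, best_g = {}, {start: 0}

    while open_heap:
        _, g, cell = heapq.heappop(open_heap)
        if cell == goal:                    # reconstruct the path back to start
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (cell[0] + dr, cell[1] + dc)
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nb, float("inf")):
                    best_g[nb] = ng
                    came_from[nb] = cell
                    heapq.heappush(open_heap, (ng + h(nb), ng, nb))
    return []                               # no route found

# 0 = free cell, 1 = obstacle (e.g. a parked car blocking the lane)
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 0)))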
3. Control algorithm:
Path tracking algorithms: such as PID, LQR and MPC controllers, are used to keep the vehicle travelling along the planned route smoothly and accurately (a simplified PID example follows below).
Vehicle dynamics control algorithms: Consider the physical characteristics of the vehicle, such as acceleration, steering angle, etc., to achieve smooth and safe driving.
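The following toy example illustrates PID-based path tracking: a PID controller on the cross-track error steers a kinematic bicycle model back onto a straight reference line. The gains, speed, wheelbase and time step are made-up values chosen only so the demo settles within a few seconds.

# Toy path tracking: a PID controller steers a kinematic "bicycle" back
# onto the reference line y = 0. Gains, speeds and geometry are invented.
import math

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Kinematic bicycle model state: position (x, y) and heading yaw.
x, y, yaw = 0.0, 2.0, 0.0          # start 2 m off the lane centre
speed, wheelbase, dt = 10.0, 2.7, 0.05
pid = PID(kp=0.4, ki=0.01, kd=0.15, dt=dt)

for step in range(200):
    cross_track_error = -y                                     # target line is y = 0
    steer = max(-0.5, min(0.5, pid.step(cross_track_error)))   # limit steering angle
    # Propagate the kinematic bicycle model one time step.
    x += speed * math.cos(yaw) * dt
    y += speed * math.sin(yaw) * dt
    yaw += speed / wheelbase * math.tan(steer) * dt
    if step % 50 == 0:
        print(f"t={step * dt:4.1f}s  lateral offset={y:+.2f} m")

An LQR or MPC controller would replace the PID.step call with a model-based optimization, but the interface to the vehicle model stays the same.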
4. Multi-sensor fusion algorithm:
Data-level fusion algorithm: Fusion is performed directly at the raw sensor data level to improve the accuracy and robustness of perception (a toy Kalman-filter example follows after this list).
Feature-level fusion algorithm: Fusion is performed at the extracted feature level to improve the performance of target detection and classification.
Decision-level fusion algorithm: Fusion is performed at the decision-making results level of multiple sensors or algorithms to improve the reliability of the final decision.
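As a toy example of data-level fusion, the sketch below uses a one-dimensional Kalman filter to fuse a simulated radar range and a noisier camera-based range into a single estimate of the gap to a lead vehicle; all noise levels and the constant closing speed are invented for illustration.

# Toy data-level fusion: a 1-D Kalman filter tracks the distance to a lead
# vehicle by fusing a noisy radar range and a noisier camera-based range.
import random

def kalman_update(mean, var, meas, meas_var):
    """Fuse one measurement into the current (mean, variance) estimate."""
    k = var / (var + meas_var)                 # Kalman gain
    return mean + k * (meas - mean), (1 - k) * var

def kalman_predict(mean, var, motion, motion_var):
    """Propagate the estimate by the expected relative motion."""
    return mean + motion, var + motion_var

random.seed(0)
true_gap = 30.0            # true distance to the lead vehicle (m)
closing_speed = -1.0       # gap shrinks by 1 m per time step
est, est_var = 30.0, 25.0  # deliberately uncertain initial estimate

for t in range(10):
    true_gap += closing_speed
    radar_meas = true_gap + random.gauss(0.0, 0.5)    # radar: low range noise
    camera_meas = true_gap + random.gauss(0.0, 2.0)   # camera: higher noise
    est, est_var = kalman_predict(est, est_var, closing_speed, 0.2)
    est, est_var = kalman_update(est, est_var, radar_meas, 0.5 ** 2)
    est, est_var = kalman_update(est, est_var, camera_meas, 2.0 ** 2)
    print(f"t={t}  true={true_gap:5.2f}  fused estimate={est:5.2f} +/- {est_var ** 0.5:.2f}")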
In addition, there are machine learning algorithms related to autonomous driving, such as clustering, pattern recognition and regression algorithms. These play an important role in object recognition, positioning, and predicting the movement trajectories of other vehicles and pedestrians; a small regression-based prediction sketch is shown below.
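As one concrete example of the regression idea, the sketch below fits a constant-velocity (degree-1 polynomial) model to a pedestrian's recent positions with NumPy and extrapolates it one second ahead; the observation history and time step are invented.

# Toy trajectory prediction: fit a constant-velocity (linear) model to a
# pedestrian's recent positions with least squares and extrapolate forward.
import numpy as np

t = np.arange(5) * 0.1                        # 5 observations, 0.1 s apart
xs = np.array([0.00, 0.12, 0.21, 0.33, 0.41]) # observed x positions (m)
ys = np.array([5.00, 4.90, 4.82, 4.71, 4.62]) # observed y positions (m)

# A degree-1 polynomial fit is linear regression; the slope is the velocity.
vx, x0 = np.polyfit(t, xs, 1)
vy, y0 = np.polyfit(t, ys, 1)

horizon = np.arange(1, 11) * 0.1              # predict the next 1.0 s
pred_x = x0 + vx * (t[-1] + horizon)
pred_y = y0 + vy * (t[-1] + horizon)
print(f"estimated velocity: ({vx:.2f}, {vy:.2f}) m/s")
print("predicted position in 1.0 s:", round(pred_x[-1], 2), round(pred_y[-1], 2))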
It is important to note that autonomous vehicle algorithms are a complex and evolving field. As technology advances, new algorithms and methods will continue to be introduced and optimized to improve the performance, safety, and reliability of autonomous driving systems.
This post is from Automotive Electronics

Latest reply

Published on 2024-3-14 09:51

Just tell me which one is better, haha, I’m used to reading reviews and I want to know the result.

Personal signature

Keep it up! Quietly contributing my strength to the electronics industry! :)

 
 
