The erosion operation eliminates meaningless boundary points smaller than the structuring element, shrinking the boundary of the target object inward; the dilation operation fills gaps in the target object, expanding its boundary outward. The composite operations of erosion and dilation are called opening and closing. Opening erodes the image first and then dilates it, which eliminates edge burrs and isolated spots; closing is the reverse process, dilating first and then eroding, which fills holes and cracks in the image. Both can smooth the image and detect singular points. Given the binarization results, we need to remove holes and burrs while keeping the original image features unchanged, so an opening operation is applied to make the black edges clear for edge detection.
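The opening operation described above can be sketched in pure Python on a small binary image with a 3x3 square structuring element. This is only an illustrative sketch (function names are my own; a real pipeline would use a library such as OpenCV):

```python
# Binary erosion, dilation, and opening with a 3x3 square structuring element.
# Pixels are 0/1; out-of-bounds neighbours count as background.

def erode(img):
    """A pixel survives only if every neighbour under the 3x3 element is 1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = 1 if all(
                0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            ) else 0
    return out

def dilate(img):
    """A pixel becomes 1 if any neighbour under the 3x3 element is 1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = 1 if any(
                0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            ) else 0
    return out

def opening(img):
    """Opening = erosion then dilation; removes burrs and isolated spots."""
    return dilate(erode(img))
```

Running `opening` on a 3x3 blob with a stray isolated pixel removes the stray pixel while restoring the blob, which is exactly the burr-removal behaviour described above.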
(4) Navigation marking line edge detection algorithm
Edge refers to the part of the image where the local brightness changes significantly: a set of points where the pixel grayscale is discontinuous or changes dramatically. The purpose of edge detection is to identify the points in a digital image where brightness changes significantly. Although not every processing method relies on edge detection as a preprocessing step, edges remain an important feature for image segmentation and an important basis for image analysis. Commonly used edge detection operators are:
(4.1) Gradient operators: Sobel operator, Prewitt operator.
(4.2) Operators based on the zero crossings of the second derivative of the image function: LOG operator, Canny operator.
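The Sobel operator listed above can be sketched in pure Python: the image is convolved with two 3x3 gradient kernels and the responses are combined into a gradient magnitude (names and the small test image are illustrative; real code would use a vision library):

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # responds to vertical edges
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # responds to horizontal edges

def sobel_magnitude(img):
    """Gradient magnitude |G| = sqrt(Gx^2 + Gy^2) at interior pixels."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.sqrt(gx * gx + gy * gy)
    return out
```

On an image whose left half is 0 and right half is 255, the magnitude is large only at the columns straddling the brightness jump and zero in the flat regions, which is the "significant local brightness change" the text defines as an edge.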
Obstacle recognition research:
The choice of obstacle recognition method depends on the surrounding environment and the definition of obstacles. Obstacles can be defined as objects of a certain volume on the road ahead of the vehicle. Common obstacles on the road include vehicles, cargo, and debris.
The most critical technologies in obstacle recognition are detection, tracking and positioning. Detection refers to confirming whether there are obstacles on the front visual path, tracking refers to describing the trajectory of the selected target, and positioning refers to calculating the actual distance between the obstacle and the automated guided vehicle. Among them, detection is the basis, tracking is the process, and positioning is the ultimate goal.
Tracking a spatial target is the process of building a template from the target's effective features and finding, in the image sequence, the candidate region most similar to that template, that is, determining the target's trajectory across the sequence. Research on tracking spatial obstacles with monocular vision generally follows two approaches:
(1) Without relying on any prior knowledge, obstacles are detected directly from the image sequence and then the target of interest is tracked.
(2) Relying on prior knowledge of obstacles, the possible targets are first modeled, and then the targets matching the model are detected in real time in the image sequence and then tracked.
The second approach is the most commonly used, because obstacles exist in a specific operating environment and can be represented by a finite set of models. For this tracking method, the first step is target detection, that is, extracting the region of interest from the background in the sequence images.
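The detection step, locating the region that best matches a prebuilt target template, can be sketched as an exhaustive template match that minimizes the sum of squared differences (SSD). This is a minimal illustration with hypothetical names, not a production matcher:

```python
def match_template(img, tmpl):
    """Return (y, x) of the window with the minimal sum of squared
    differences between the image window and the target template."""
    H, W = len(img), len(img[0])
    h, w = len(tmpl), len(tmpl[0])
    best, best_pos = float('inf'), (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            ssd = sum((img[y + j][x + i] - tmpl[j][i]) ** 2
                      for j in range(h) for i in range(w))
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos
```

The search algorithms discussed next exist precisely because this exhaustive scan over every window is expensive; prediction and mean-shift-style methods narrow the region this matcher has to cover.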
In the process of target tracking, it is often necessary to use a search algorithm to predict the location of a target at a future time in order to narrow the search range. Based on this idea, there are generally two types of algorithms:
(1) Predict the target's possible position in the next frame of the image, then search for the optimum within that local region. Commonly used prediction algorithms include the Kalman filter, the extended Kalman filter, and the particle filter.
(2) Algorithms that reduce the target search range, such as the mean shift algorithm (MeanShift), the continuously adaptive mean shift algorithm (CamShift), and confidence-region algorithms. These optimize the search direction and use estimation methods to speed up the iterative convergence of the distance between the target template and the candidate target, thereby narrowing the search range.
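The predict-then-correct idea behind the Kalman-style trackers in (1) can be sketched as a fixed-gain alpha-beta filter, a simplified relative of the Kalman filter (a full Kalman filter would additionally propagate an error covariance to recompute the gains each frame; the gain values here are illustrative assumptions):

```python
def alpha_beta_track(measurements, dt=1.0, alpha=0.85, beta=0.5):
    """Track a 1D position; return (predicted, corrected) per frame."""
    pos, vel = measurements[0], 0.0
    history = []
    for z in measurements[1:]:
        # Predict where the target will appear in the next frame;
        # this prediction is what narrows the search region.
        pred = pos + vel * dt
        # Correct the prediction with the new measurement.
        residual = z - pred
        pos = pred + alpha * residual
        vel = vel + beta * residual / dt
        history.append((pred, pos))
    return history
```

Fed measurements from a target moving at constant velocity, the filter's per-frame prediction converges onto the true position, so each frame's matching only needs to examine a small window around the predicted point.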
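The core iteration of the mean shift family in (2) can be sketched in one dimension: the search window is repeatedly moved to the centroid of the samples it covers, so it climbs toward a local density peak (a real tracker runs this over a color-histogram back-projection; the names and data here are illustrative):

```python
def mean_shift_1d(samples, start, bandwidth=2.0, iters=20):
    """Shift `start` toward the densest cluster of `samples`."""
    x = start
    for _ in range(iters):
        window = [s for s in samples if abs(s - x) <= bandwidth]
        if not window:
            break
        # Centroid of the samples inside the window becomes the new position.
        x = sum(window) / len(window)
    return x
```

Started near a cluster, the window ignores distant outliers and converges on the cluster center in a few iterations, which is how the search range is narrowed without scanning the whole image.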
Research on spatial target positioning algorithms mainly focuses on obtaining the distance of each point on the target relative to the camera, which is one of the main tasks of machine vision and the ultimate goal of obstacle recognition. From the distance between the target and the camera we can derive the target's speed relative to the vehicle, its size, and other parameters, providing better decision data for vehicle control. Here are several implementation ideas for vision-based obstacle avoidance that I have collected:
Common computer vision solutions include binocular vision, TOF-based depth cameras, and structured-light depth cameras. Depth cameras capture an RGB image and a depth image simultaneously. Whether based on TOF or structured light, they perform poorly outdoors under strong light, because they must actively emit light and are easily disturbed by sunlight. A structured-light depth camera projects a pseudo-random but fixed speckle pattern; when the spots land on objects at different distances, they appear at different positions in the captured image. The offset between each spot in the captured image and its position in a calibrated reference pattern is computed, and the object's distance is then calculated from parameters such as the camera position and sensor size. For an AGV, binocular vision is more suitable:
Binocular distance measurement is essentially triangulation. Because the two cameras are at different positions, like a person's two eyes, they see slightly different views: the same point P images at different pixel positions in the two cameras, and its distance can then be recovered by triangulation. The points computed by the binocular algorithm are generally image features extracted by the algorithm, such as SIFT or SURF features, and a sparse depth map is computed from these features.
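For a parallel-axis stereo rig, the triangulation just described reduces to one formula: with disparity d = x_left - x_right (the horizontal pixel offset of point P between the two images), focal length f in pixels, and baseline B between the cameras, the depth is Z = f * B / d. A minimal sketch (the function name and the sample numbers are illustrative):

```python
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth (metres) of a point from its pixel columns in the two images,
    assuming rectified images with parallel optical axes."""
    disparity = x_left - x_right  # pixels; larger disparity = nearer point
    if disparity <= 0:
        raise ValueError("point must appear further left in the right image")
    return focal_px * baseline_m / disparity
```

For example, with f = 700 px and B = 0.12 m, a feature at column 320 in the left image and 278 in the right image (disparity 42 px) lies at 700 * 0.12 / 42 = 2.0 m. The inverse relation between disparity and depth is why distant points, with disparities near zero, are measured far less accurately.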
The key to obstacle detection based on binocular stereo vision lies in two points: ① Extraction of obstacle targets, that is, identifying the position and size of obstacles in the image; ② Stereo matching points between pairs of obstacle target area images, so as to obtain the depth information of obstacle targets. The former step is the basis of the latter step. There can be multiple targets identified. Only after the disparity is obtained by stereo matching can we mark which targets are obstacle targets.
The realization of binocular stereo can be divided into: image acquisition, camera calibration, feature extraction, image matching, and 3D reconstruction. The optical axes in the figure above are approximately parallel. In a parallel-axis system, binocular ranging transforms the problem of finding the depth of a target in the 3D scene into the problem of finding the disparity in the 2D projected images. The camera model therefore establishes a one-to-one mapping between points of the 3D scene and points on the 2D image.
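The camera model mentioned above is commonly the pinhole model: a 3D point (X, Y, Z) in camera coordinates maps to pixel (u, v) via u = fx * X / Z + cx and v = fy * Y / Z + cy, where fx, fy are focal lengths in pixels and (cx, cy) is the principal point. A minimal sketch with illustrative parameter values:

```python
def project(point3d, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame 3D point to a 2D pixel."""
    X, Y, Z = point3d
    if Z <= 0:
        raise ValueError("point must lie in front of the camera")
    return (fx * X / Z + cx, fy * Y / Z + cy)
```

For example, with fx = fy = 700 and principal point (320, 240), the point (0.5, 0.0, 2.0) projects to pixel (495.0, 240.0). Stereo reconstruction inverts this mapping: given the same point's pixels in both calibrated cameras, the division by Z that this projection performs is undone via the disparity.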
Review editor: Liu Qing