The vehicle and personnel checkpoint monitoring system (hereinafter, the checkpoint monitoring system) is deployed at locations such as intersections, toll booths, and traffic or public-security inspection stations. By photographing, recording, and processing passing traffic, it provides on-site monitoring of motor vehicles and their front-seat occupants.
The system automatically identifies the license plate, color, and other features of passing vehicles, verifies each vehicle's legal identity, checks it against a blacklist database, and raises alarms automatically. The facial features of front-seat occupants are clearly identifiable in the captured pictures, and a facial feature map and facial feature parameters can be extracted, allowing vehicles and persons suspected of traffic violations, hit-and-run, or criminal offenses to be monitored and handled. It is an important off-site law-enforcement and surveillance system.
System Introduction
The checkpoint monitoring system monitors road traffic in real time: it detects, captures, and identifies motor vehicles (and their license plates) and detects and extracts facial feature maps and facial feature parameters of front-seat occupants. Captured images and license plate recognition results are uploaded to the control center, where computers receive the data from the front-end equipment, process it, and store it in the database.
The system performs real-time video detection of motor-vehicle lanes with 5-megapixel high-definition video. When a motor vehicle passes, the integrated HD camera captures one to two pictures, processes them, and uploads them to the control center.
The high-definition vehicle and personnel intelligent monitoring and recording system adopts current video recognition and embedded DSP technology. It accurately detects motor vehicles, outputs high-definition vehicle pictures and related vehicle information, and analyzes and extracts facial feature pictures and facial feature parameters of front-seat occupants. Vehicles are captured from high-definition video, and the influence of interference factors such as ambient lighting is minimized.
The system uses an IP-based network interface, which makes it easy to interconnect with other devices and flexible to configure for various road conditions. The control center management computers and the equipment at each intersection are connected in a star topology: each intersection connects directly to the control center over the network to transmit vehicle/personnel information and high-definition pictures. This greatly improves the vehicle/personnel capture rate and recording efficiency, and has been well received by users.
System Architecture
The high-definition vehicle and personnel intelligent monitoring and recording system consists of the following equipment:
System front-end equipment (HD 5-megapixel integrated camera, detection/capture fill light, network switching/transmission equipment);
Control center equipment (network switching/transmission equipment, servers).
System Features
The high-definition vehicle and personnel intelligent monitoring and recording system comprises the following subsystems: vehicle detection/identification/recording, traffic parameter collection and violation evidence collection, facial feature map and facial feature parameter detection and extraction, fill light, data transmission, and the control center computer system.
Vehicle detection/identification
Vision-based vehicle detection is increasingly used in intelligent traffic management systems, and reliable vehicle detection is an important part of them. It is also very challenging: detected vehicles differ in speed, shape, size, angle, and color, and a vehicle's appearance is further affected by its posture, surrounding objects, occlusion, and lighting conditions. Detection based on vehicle features adapts well to different road traffic environments, effectively raising the detection rate and reducing the false-detection rate.
Vehicle feature recognition locates, identifies, and tracks moving targets in dynamic scenes and analyzes target features and behaviors. Its core is the capture and identification of vehicle features, including vehicle model, license plate, license plate color, and vehicle color recognition, and it can meet requirements for precision, accuracy, and speed.
Vehicle features are the most effective elements for detecting and identifying vehicles, and the license plate is unique among them. By identifying and tracking license plates, vehicles can be effectively detected, identified, and tracked.
Vehicle feature detection targets both the vehicle and its license plate, responding to every vehicle that passes through the video stream and thereby ensuring a high vehicle capture rate and license plate recognition rate.
The high-definition vehicle and personnel intelligent monitoring and recording system adopts advanced computer video detection technology to process and identify images of vehicles in real time frame by frame. In particular, the unique vehicle feature tracking and comparison technology can make full use of effective information, thereby greatly improving the recognition accuracy of the system.
Through video detection of vehicle features, the vehicle's location is recorded automatically, its running trajectory and driving state are analyzed, and the vehicle is captured at the configured location.
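As an illustrative sketch only (not the system's actual implementation), the trajectory-based capture described above can be reduced to detecting when a tracked vehicle centroid crosses a virtual capture line between consecutive frames; the line position, coordinates, and names below are assumptions.

```python
# Illustrative sketch of trajectory-based capture triggering; the capture
# line position and all names are assumptions, not the system's actual API.

CAPTURE_LINE_Y = 400  # assumed pixel row of the virtual capture line

def should_capture(prev_centroid, curr_centroid):
    """Trigger a snapshot when the tracked vehicle centroid crosses the
    capture line between two consecutive frames (y grows downward)."""
    _, y0 = prev_centroid
    _, y1 = curr_centroid
    return y0 < CAPTURE_LINE_Y <= y1

# Example trajectory of one vehicle approaching the camera, frame by frame.
trajectory = [(320, 380), (322, 395), (325, 410)]
captures = [should_capture(a, b) for a, b in zip(trajectory, trajectory[1:])]
# The crossing happens between the second and third frames.
```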
Checkpoint capture function
The high-definition camera over the motor-vehicle lane performs real-time video detection, analyzing the images frame by frame. When a motor vehicle enters the detection area, the camera captures one to two pictures while the capture fill light flashes in sync; the camera then processes the pictures, performing operations such as license plate recognition. In the captured pictures, the vehicle features, license plate number, and the facial features of the front-seat occupants are clearly identifiable.
The HD camera can process HD videos in real time, automatically capture vehicles and identify vehicle license plates, and upload the recognition results, pictures/information, etc.
The captured motor vehicle pictures clearly show the vehicle's features (shape, body color, license plate); window reflections are effectively suppressed both day and night, so the facial features of the front-seat occupants are clearly identifiable.
The high-definition camera sends the vehicle license plate, picture, road-section number, time, and other information to the control center over Ethernet; on receipt, the control center computers process and save the data and pictures.
The system uses an integrated 5-megapixel high-definition camera for image acquisition, video analysis/detection, license plate recognition, and other functions. It complies with GA/T 497-2009, "General Technical Standard for Highway Vehicle Intelligent Monitoring and Recording Systems"; the image information clearly records the location, lane, driving direction, time, license plate number, vehicle type, image sequence number, and other details, and can be adapted to meet varied customer needs.
Traffic parameter collection/violation evidence collection
1. Video speed measurement
The system can quickly measure the speed of passing vehicles in real time. Two methods are used.
Method 1: Calibrate the camera's mounting height and its horizontal position relative to the lane, then calibrate far and near reference distances in the field of view to enable video speed measurement on a straight lane. Frame-by-frame processing yields the vehicle's precise trajectory and the times at which it passes each reference point; the driving speed is then computed from these distances and times.
Method 2: Calibrate the image coordinates and heights of reference objects against their positions in the actual scene, reconstruct the scene in 3D, and measure vehicle speed in three-dimensional space. Frame-by-frame processing yields the vehicle's precise trajectory within the camera's field of view, from which the driving speed is computed.
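Once the reference points are calibrated, Method 1 reduces to a distance-over-time calculation. The sketch below assumes calibrated distances in metres along the lane and the per-point pass times; it is an illustration under those assumptions, not the product's algorithm.

```python
# Minimal sketch of Method 1, assuming camera calibration has already
# mapped reference points to distances (metres) along the lane.

def speed_kmh(distances_m, times_s):
    """Speed from the first and last calibrated reference points a
    vehicle passes, converted from m/s to km/h."""
    dx = distances_m[-1] - distances_m[0]
    dt = times_s[-1] - times_s[0]
    return dx / dt * 3.6

# A vehicle passes calibrated marks at 0 m, 10 m and 20 m over 1.2 s:
v = speed_kmh([0.0, 10.0, 20.0], [0.0, 0.6, 1.2])  # 16.67 m/s -> 60 km/h
```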
2. Traffic parameter collection
Traffic parameters such as vehicle flow, average speed, headway, and road occupancy are collected in real time, summarized at a configured interval, and uploaded to the control center. The vehicle type (large, medium, or small) can be judged and output in the additional information, along with the vehicle's length and width in meters, making the information convenient to view or retrieve.
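The per-interval summary described above can be sketched as follows; the parameter names, units, and interval are assumptions for illustration.

```python
# Sketch of per-interval traffic-parameter aggregation; names and units
# are illustrative assumptions.

def traffic_params(pass_times_s, speeds_kmh, interval_s):
    """Summarize one collection interval: flow (vehicles/hour), average
    speed (km/h) and average headway (seconds between vehicles)."""
    flow = len(pass_times_s) * 3600.0 / interval_s
    avg_speed = sum(speeds_kmh) / len(speeds_kmh)
    headways = [b - a for a, b in zip(pass_times_s, pass_times_s[1:])]
    avg_headway = sum(headways) / len(headways)
    return flow, avg_speed, avg_headway

# Four vehicles observed in a 60 s interval:
flow, avg_speed, avg_headway = traffic_params(
    [0.0, 4.0, 10.0, 12.0], [50.0, 60.0, 55.0, 45.0], 60.0)
```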
3. Vehicle violation evidence collection function
Vehicles are detected, locked onto, and tracked in the high-definition images. Based on the vehicle's trajectory and the lane markings defined by the user, the system determines whether the vehicle has violated traffic rules, such as riding or crossing lane lines or driving in the wrong direction. It distinguishes lane dividers from double yellow lines and photographs violating vehicles to obtain evidence for law enforcement.
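A minimal sketch of the line-crossing decision, assuming the lane marking and the vehicle's per-frame positions are available as image-pixel segments (all coordinates below are invented for the example): the vehicle violates the marking when its movement between two frames intersects the marking segment.

```python
# Sketch of a line-crossing check: does the segment between a vehicle's
# positions in two frames intersect a user-defined lane line?
# Coordinates are illustrative image pixels.

def _ccw(a, b, c):
    """Signed area of triangle abc (orientation test)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def crosses(step, line):
    """True if segment `step` strictly intersects segment `line`."""
    (p1, p2), (q1, q2) = step, line
    return (_ccw(p1, p2, q1) * _ccw(p1, p2, q2) < 0 and
            _ccw(q1, q2, p1) * _ccw(q1, q2, p2) < 0)

divider = ((500, 0), (500, 1000))   # assumed lane divider in the image
step = ((480, 300), (520, 340))     # vehicle moved across the divider
violation = crosses(step, divider)
```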
Face feature map detection and extraction
Captured high-definition images are scanned automatically: an efficient face detection algorithm, combined with advanced target-fusion and decision strategies, locates the coordinates of regions containing face information in real time, achieving intelligent real-time face detection.
A large volume of face and non-face image data is organized, advanced algorithms are applied, and training yields the best face description features; a large number of such features, organized in a particular structure, form a dedicated face feature classifier. This classifier performs a full search over the images received from the front-end system, and any region whose match with the classifier satisfies the screening conditions is determined to be a face target, completing face localization.
The state recognition mechanism reduces the interference of complex backgrounds on the recognition system;
Detects faces rotated less than 15° left/right and less than 10° up/down;
Fast processing: for 5-megapixel high-definition images, detection takes less than 300 ms;
Frontal face detection rate greater than 95%;
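The full-image classifier search described above amounts to sliding a window over the image and keeping windows the classifier accepts. The toy sketch below uses a stand-in mean-score threshold in place of a trained classifier; the score map and threshold are invented for illustration.

```python
# Toy sliding-window search with a stand-in score map; a real system
# would apply the trained face-feature classifier at each window, not
# this illustrative mean-score threshold.

def sliding_window_detect(scores, window, threshold):
    """Return top-left (x, y) corners of windows whose mean score passes
    the classifier threshold."""
    h, w = len(scores), len(scores[0])
    wh, ww = window
    hits = []
    for y in range(h - wh + 1):
        for x in range(w - ww + 1):
            total = sum(scores[y + i][x + j]
                        for i in range(wh) for j in range(ww))
            if total / (wh * ww) >= threshold:
                hits.append((x, y))
    return hits

# 4x4 score map with one face-like 2x2 region in the bottom-right corner.
score_map = [
    [0.1, 0.1, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.1],
    [0.1, 0.1, 0.9, 0.9],
    [0.1, 0.1, 0.9, 0.9],
]
hits = sliding_window_detect(score_map, (2, 2), 0.8)
```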
The detected facial image undergoes image processing, model adjustment and positioning, and image standardization; feature extraction is then performed on the standardized image to obtain a facial description feature map that machines can easily recognize and classify.
Fill light subsystem
Each lane is equipped with one detection fill light and one capture fill light. The detection fill light is turned on automatically by a light-control circuit when ambient brightness is low, such as at night; it illuminates the HD camera's detection area in sync with the CCD exposure signal, so that image quality meets the requirements for vehicle video detection and license plate recognition. It uses a high-power LED with a constant-voltage/constant-current drive circuit and a well-designed heat-dissipation structure, allowing stable and reliable operation. When the checkpoint integrated camera detects a vehicle and issues a capture trigger signal, the fill-light control module sends a trigger pulse to the capture fill light; the capture fill light flashes in sync with the camera's flash trigger signal while the camera captures a picture, ensuring that the motor vehicle's features and details (including the faces of the driver and passengers) are clear and recognizable.
The exposure of both the detection and capture fill lights is synchronized with the exposure of the checkpoint integrated camera, enabling effective shooting by day, at night, and when vehicle headlights are on. The captured pictures clearly show the vehicle's appearance and the facial features of the occupants, and the license plate is effectively identifiable. The approach is efficient and causes little visual interference to drivers.
Data Transmission
Data transmission is divided into front-end equipment data transmission and control center data transmission.
1. Front-end equipment data transmission:
·HD checkpoint integrated camera ←→ LED capture/detection fill lights: digital light-control signal transmission;
·HD camera ←→ front-end storage: Ethernet data transmission;
2. Data transmission from control center:
·High-definition camera ←→ control center, Ethernet data transmission.
Control center computer system
1. Control center system composition and functions
The control center computer system consists of servers, central platform software, network switching equipment, and so on. It receives data and images from the front-end equipment, processes them, and stores them in the database. It supports statistical queries by time, location, and license plate number, comparison queries by time period, accurate or fuzzy retrieval of vehicle history records, and display and printing in graphic and chart form. It also receives facial feature parameters/images sent back by the front-end equipment, extracts facial feature parameters, and stores facial feature images and the corresponding feature vectors in the database. The control center software is designed and developed on a modular, multi-module architecture. Its main features are as follows.
Stability: for a long-running business system, a multi-module architecture provides more reliable stability;
Easy maintenance: because the software is modular, when business rules change only the corresponding module needs to be modified, and other modules are largely unaffected;
Flexible expansion: as business grows, more application servers can be deployed to improve responsiveness to clients, and all changes are transparent to the client.
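The statistical and fuzzy query functions described above could be backed by straightforward database queries. The sketch below uses an in-memory SQLite database with a hypothetical table layout (the article does not specify the actual schema, so all table and column names are assumptions).

```python
import sqlite3

# Hypothetical table layout for passing-vehicle records; names are
# assumptions for illustration, not the system's real schema.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE passing_vehicle (
    plate TEXT, site TEXT, passed_at TEXT)""")
conn.executemany(
    "INSERT INTO passing_vehicle VALUES (?, ?, ?)",
    [("AB1234", "site-01", "2013-05-01 08:12:00"),
     ("AB1299", "site-01", "2013-05-01 08:15:00"),
     ("CD5678", "site-02", "2013-05-01 09:30:00")])

# Fuzzy plate query: any plate starting with "AB12", within a time window.
rows = conn.execute(
    """SELECT plate, site FROM passing_vehicle
       WHERE plate LIKE ? AND passed_at BETWEEN ? AND ?
       ORDER BY passed_at""",
    ("AB12%", "2013-05-01 08:00:00", "2013-05-01 09:00:00")).fetchall()
```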
2. Vehicle Information Management System
The vehicle information data management system is the base data module of the central platform. It supports entering passing-vehicle data, uploading pictures and videos, reviewing passing-vehicle data, printing passing-vehicle documents, and querying, counting, and exporting passing-vehicle data.
3. Audit and control management system
The audit and control management system is the vehicle-control module of the central platform. It supports entering and importing controlled-vehicle data; review, cancellation, and case closing; and customized control settings. When a controlled vehicle is approved, the data exchange bus is notified to push the record to the front-end industrial computer; when the industrial computer captures the controlled vehicle, it uploads an alarm, which the data exchange bus processes and forwards to the real-time alarm system. Control alarms can also be queried after the event.
4. Real-time alarm management system
This is a client/server application installed on clients that need to process control alarms and violation alarms in real time. When violation or control alarm information arrives, it alerts with the configured alarm sound, icon, and related information, providing real-time alerting. Each client can configure its alarm sound, icon, and other settings independently.
5. Traffic event detection system
Traffic statistics are collected at each intersection, and intersections whose traffic exceeds a configured alarm threshold are analyzed. Intersections and driving directions can be combined arbitrarily, and passing-vehicle counts can be produced for each combination.
6. Facial feature parameter processing/portrait comparison
The server receives face thumbnails sent back by multiple front-end devices, extracts facial feature parameters, and stores the thumbnails and corresponding feature vectors in the database. Under a standard configuration, a single server can process 35 faces per second.
The detected face image undergoes a series of operations such as image preprocessing, model adjustment and positioning, and image standardization. Finally, feature extraction is performed on the standardized image to obtain specific face description features that are convenient for machine recognition and classification.
Image processing: face positioning, key point positioning, feature extraction, feature vectorization, etc.;
Feature extraction: extract facial features and generate a 6.4K-byte/person feature file;
Information storage: Save each face image, feature file, and location, time, and channel information.
The face comparison server compares the face data stored in the database against a target face image and produces the final comparison result. Under a standard configuration, the comparison rate is 2 million comparisons per second.
The features of the probe image are compared one by one with each individual's features in the preset background model library, producing a similarity score for each; the scores are sorted in descending order, and the individuals with the highest similarity scores are retained as the system's output.
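The one-by-one scoring and descending-order ranking above can be sketched as a top-K search. The similarity metric (cosine) and the toy 3-D feature vectors below are assumptions for illustration; real feature vectors would be far larger (the 6.4 KB feature file mentioned above).

```python
import math

# Sketch of one-by-one comparison with descending-score ranking; the
# cosine metric and all feature values are illustrative assumptions.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def top_k(query, gallery, k):
    """Score the probe features against every gallery individual and
    keep the k highest-similarity (id, score) pairs."""
    scored = [(pid, cosine(query, feat)) for pid, feat in gallery.items()]
    scored.sort(key=lambda t: t[1], reverse=True)
    return scored[:k]

gallery = {                      # toy 3-D feature vectors
    "p1": [1.0, 0.0, 0.0],
    "p2": [0.9, 0.1, 0.0],
    "p3": [0.0, 1.0, 0.0],
}
best = top_k([1.0, 0.05, 0.0], gallery, 2)
```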
Support face image data input in multiple formats;
The face size should be no less than 80×100 pixels;
Realize batch import of database data and batch output of comparison results;
When the server is in standard configuration, the comparison speed is 2 million times per second;
The matching accuracy can be set as needed;
When the output set is 1.5 orders of magnitude smaller than the input set, the accuracy is 98%.