A city's public spaces often contain tens of thousands of surveillance cameras that monitor and record around the clock. While they improve public security, they also generate an enormous volume of video that the surveillance management platform must process, and manually capturing key images from this massive footage is very difficult.
First, finding key images that may last only a few seconds requires browsing through all of the recordings, which is an enormous workload, like looking for a needle in a haystack.
Second, even when the key images are found, manually capturing them and recording the time points of the corresponding original footage is laborious and inefficient. Because of human physiological limits, watching recordings for long periods quickly causes visual fatigue, and important images and clues may be missed.
Ideally, once an important event occurs, the system should be able to find the relevant clues quickly afterwards. For the post-event analysis of massive surveillance recordings described above, the traditional "human wave" approach to searching for video clues clearly cannot deliver efficient retrieval and faces enormous challenges; a more efficient, automatic, and intelligent system is urgently needed. Video condensation and summarization together with video classification and retrieval can now solve this problem: condensation and summarization shorten the playback time of video events, while target classification and screening make it possible to find event clues quickly and narrow the viewing scope. These capabilities greatly reduce the workload of professional patrol teams and system maintenance personnel, and their implementation and application will substantially improve monitoring efficiency.
This technology turns the traditional, passive, after-the-fact retrieval of image information into the active discovery of suspicious points in surveillance video. Through the verification and entry system, the images and pictures captured by the professional patrol team when suspicious incidents are discovered are stored centrally, and description information for each event is generated, including the time, location (channel), event type (technical defense alarm, motion detection, etc.), and tags, enabling precise queries based on the event description and second-level positioning within the video recordings.
The "Intelligent Massive Video Surveillance Recording Analysis and Verification Recording Solution" described in this solution is the core product solution of Shenzhen Jiuling Software Technology Co., Ltd. Its core technology mainly includes three parts: video summary and image snapshot recording; video target classification screening and retrieval; video enhancement and night video processing. These three blocks can be used separately or organically combined to form an overall solution. The system fills the gap in my country's post-analysis and evidence collection of massive video surveillance and video target screening and retrieval, and the technology is in a leading position internationally.
1. Video Summary and Image Snapshot Recording Subsystem
Because video browsing and retrieval are time-consuming, most recordings are never watched and checked from beginning to end; condensed video summaries have therefore become an effective tool for browsing and retrieving video.
1. Technical Principle
"Video summary" refers to a shorter video clip that is edited by extracting the three-dimensional information of the activity pipeline of the target of interest from the original video and synthesizing it with the background video. It compresses a day's video into a short event summary video of several tens of minutes by playing multiple events at the same time; it contains all the important target activity details and snapshots in the original video. The video summary can use the original video resolution or reduce the resolution according to storage requirements. Managers can play the original video before and after the target appears by clicking on the target in the condensed video or the snapshot on the left side of the video.
An even quicker approach is to extract the active targets by segmenting foreground from background and present them as a snapshot list, which is more intuitive and convenient. In both the video summary and the snapshot list, the system can automatically highlight the bounding box of each person or vehicle target and label its time of appearance and movement trajectory, and the user can click a target snapshot to play the original video around the moment the target appeared, or a clip synthesized from the target's activity and the background.
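A minimal sketch of the foreground/background segmentation and snapshot-list idea is shown below, using OpenCV's MOG2 background subtractor; the input file name, area threshold, and snapshot naming are assumptions for illustration, not the product's actual implementation.

```python
# Sketch: extract moving targets by background subtraction and save one
# snapshot per detection, indexed by the frame (time) it appeared in.
# File name and thresholds are illustrative assumptions.
import cv2

cap = cv2.VideoCapture("camera01.mp4")                 # hypothetical recording
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                      # foreground mask
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadows
    mask = cv2.medianBlur(mask, 5)                      # suppress noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:                    # ignore tiny blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        snapshot = frame[y:y + h, x:x + w]
        # The frame index ties the snapshot back to the original video,
        # so clicking it can seek to the moment the target appeared.
        cv2.imwrite(f"snapshot_{frame_idx}_{x}_{y}.jpg", snapshot)
    frame_idx += 1
cap.release()
```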
Figure: a 24-hour original surveillance recording condensed into a video summary of a few minutes
2. Application
To increase the condensation ratio and improve the result, a "sensitive area" can be specified when condensing the video, so that only active targets entering that area are included in the summary.
The system also supports restricted-area settings: it extracts targets entering and leaving restricted areas and automatically detects and labels tripwire crossings, tailing, abnormal movement, vehicle violations, and person-vehicle interaction events.
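One simple way to realize such an area filter is a point-in-polygon test on each detected target; the sketch below uses OpenCV's pointPolygonTest with an invented polygon and bounding boxes, and is only an assumed illustration of the idea.

```python
# Sketch: keep only targets whose bottom-centre point falls inside a
# user-defined sensitive area. Polygon and boxes are invented examples.
import cv2
import numpy as np

# Sensitive area as a polygon in image coordinates (assumed values).
sensitive_area = np.array([[100, 400], [600, 400], [600, 700], [100, 700]],
                          dtype=np.int32)

def in_sensitive_area(bbox):
    """bbox = (x, y, w, h); test the bottom-centre of the bounding box."""
    x, y, w, h = bbox
    point = (float(x + w / 2), float(y + h))
    # pointPolygonTest returns >0 inside, 0 on the edge, <0 outside.
    return cv2.pointPolygonTest(sensitive_area, point, False) >= 0

# Only targets inside the area would be passed on to the summary.
detections = [(120, 380, 60, 120), (700, 100, 40, 80)]   # assumed boxes
kept = [d for d in detections if in_sensitive_area(d)]
print(kept)   # -> [(120, 380, 60, 120)]
```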
In this system, professional inspectors (managers) browse the condensed video or the snapshot list. When a target of interest is found, clicking it plays the original video from the moment the target appears. The system automatically generates the target's time, location (channel), height, movement direction, primary and secondary colors, and event type; managers can then refine the event type, enter event tags, and add other information for indexing. Events can be queried by the target's appearance time, location (channel), height, movement direction, dominant color, and event type. Compared with finding and annotating event clues through traditional video playback, this system is dozens of times more efficient and effectively reduces the inspection team's workload.
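The automatic recording and querying of target attributes described above can be pictured as a small metadata table; the SQLite schema, field names, and sample record below are illustrative assumptions rather than the product's actual database design.

```python
# Sketch: store per-target event metadata and query it by attributes.
# Table layout, field names, and sample data are assumed for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        appear_time   TEXT,     -- when the target appeared
        channel       TEXT,     -- camera location / channel
        height_px     INTEGER,  -- target height in the image
        direction     TEXT,     -- movement direction
        main_color    TEXT,
        second_color  TEXT,
        event_type    TEXT,     -- e.g. technical defense alarm, motion detection
        tag           TEXT      -- manually entered label
    )
""")
conn.execute("INSERT INTO events VALUES (?,?,?,?,?,?,?,?)",
             ("2013-05-01 14:32:10", "CH-07", 180, "east",
              "red", "black", "motion detection", "red-shirt adult"))

# Precise query by channel, dominant colour, and a time window.
rows = conn.execute("""
    SELECT appear_time, channel, tag FROM events
    WHERE channel = ? AND main_color = ?
      AND appear_time BETWEEN ? AND ?
""", ("CH-07", "red", "2013-05-01 14:00:00", "2013-05-01 15:00:00")).fetchall()
print(rows)
```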
Figure: finding the original video through a 3×3 grid of video snapshots
Figure: finding the original video automatically by clicking a target in the condensed video summary
3. Features
● Watch a full day of video in just a few minutes; no fast-forwarding is needed, and event details still play at real speed
● The condensation density, region of interest, and event detection area can be freely configured
● Select and click a target object in the condensed video to replay it
● Supports automatic recording of target appearance time, location (channel), target height, movement direction, primary color, secondary color and event type
● Supports manual modification of event attributes and input of event tags
● Freely add classification conditions such as trajectory and person/vehicle category to achieve classification-based video condensation
● Display target snapshots by person, vehicle, color, size, speed, direction, etc.
● Easy to use; connects seamlessly to DVRs/DVSs and third-party monitoring management platforms
● Supports querying events based on target appearance time, location (channel), target height, movement direction, color and event type
2. Video Target Classification Screening and Retrieval System
This system is built on the video summary and image snapshot recording subsystem and adds new functions on top of it.
1. Principle and Application
With clues provided by eyewitnesses, patrol personnel (administrators) can often find useful leads after a quick look at the video summary of the incident scene. Once a target of interest is found in one video summary, it is usually necessary to search the recordings of cameras at other locations to check whether the target (a suspect or vehicle) has appeared elsewhere.
The administrator requests a search by providing target features (person or vehicle category, color, height, direction, speed, etc.), a sample snapshot, or a sketch. The video target classification screening and retrieval system first uses an effective motion-segmentation method to extract the moving targets, then extracts their basic features as metadata and stores them in a database. During retrieval, the system extracts features from the query input and compares them against the target features already indexed in the database, without reprocessing the video. Finally, the target snapshots with sufficiently high relevance are displayed as the screening results.
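As a hedged illustration of comparing a query against the indexed metadata without reprocessing the video, the sketch below ranks stored target records by a mix of colour-histogram similarity and simple attributes; the feature choice, weights, and sample data are assumptions, not the system's actual matching method.

```python
# Sketch: rank indexed target records against a query described by a sample
# snapshot plus attributes. Features, weights, and data are illustrative.
import cv2
import numpy as np

def color_hist(image_bgr):
    """Normalised hue histogram used as a simple colour feature."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
    return cv2.normalize(hist, hist).flatten()

def score(query, record):
    """Higher is more relevant; combines colour similarity and attributes."""
    color_sim = cv2.compareHist(query["hist"], record["hist"],
                                cv2.HISTCMP_CORREL)
    cat_match = 1.0 if query["category"] == record["category"] else 0.0
    size_diff = abs(query["height"] - record["height"]) / max(query["height"], 1)
    return 0.6 * color_sim + 0.3 * cat_match - 0.1 * size_diff

# Query built from a sample snapshot plus administrator-supplied attributes.
sample = np.full((120, 60, 3), (0, 0, 255), dtype=np.uint8)     # red patch
query = {"hist": color_hist(sample), "category": "person", "height": 180}

# Metadata records previously extracted from many cameras (assumed data).
records = [
    {"id": 1, "hist": color_hist(sample), "category": "person", "height": 175},
    {"id": 2, "hist": color_hist(np.zeros((120, 60, 3), np.uint8)),
     "category": "vehicle", "height": 150},
]

ranked = sorted(records, key=lambda r: score(query, r), reverse=True)
print([r["id"] for r in ranked])    # most relevant record ids first
```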
The system can retrieve video from multiple cameras simultaneously and connects seamlessly to the central storage system for verification and entry. It supports a variety of search conditions, including precise queries by time, location (channel), event (technical defense alarm, motion detection, etc.), and tag, and can position video recordings to the exact second. It also provides fuzzy queries based on camera attributes (geographic location/GIS coordinates, camera specifications, monitoring range, special application functions, etc.), events, target snapshots, and target sketches. With the corresponding fuzzy matching methods, image resources can be queried across the municipal bureau, branch bureaus, police stations, and street offices through database sharing under a single platform framework.
2. Features
● Supports multiple search conditions, including precise queries by time, location (channel), event (technical defense alarm, motion detection, etc.), and tag
● Supports fuzzy queries based on events, target classification, or sample snapshots
● Retrieves video from multiple cameras simultaneously and connects seamlessly to the central storage system for verification and entry
● Enables cross-branch synchronized retrieval through the central verification and entry storage system
● Takes better advantage of parallel processing on cloud computing platforms
● Generates a snapshot list directly, allowing administrators to quickly locate where the target may have appeared
● Click on the target snapshot to play the original video
● Supports automatic recording of target appearance time, location (channel), target height, movement direction, primary color, secondary color and event type
● Supports manual modification of event attributes and input of event tags
3. Video Enhancement and Night Video Processing Subsystem
This subsystem complements the previous two. It can be used as a pre-processing module to improve video quality before summarization and retrieval.
To address low illumination and heavy noise at night, the blocking artifacts and flicker introduced by video compression, and motion blur, the system pioneered the use of optical-flow estimation and edge-sharpening techniques to deblur video and enhance low-quality footage captured in low-light night conditions. Its deblurring technology, which estimates the motion-blur kernel, can restore blurred images that the human eye cannot make out.
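The source does not disclose the deblurring algorithm beyond blur-kernel estimation; as a rough sketch of the general idea, the example below applies Wiener deconvolution with an assumed horizontal motion-blur kernel. A real system would estimate the kernel (for example from optical flow) rather than assume it.

```python
# Sketch: Wiener deconvolution with an assumed horizontal motion-blur kernel.
# Kernel length, noise level, and file names are illustrative assumptions.
import cv2
import numpy as np

def motion_kernel(length=15):
    """Horizontal linear motion-blur kernel of the given length."""
    k = np.zeros((length, length), dtype=np.float32)
    k[length // 2, :] = 1.0 / length
    return k

def wiener_deblur(gray, kernel, snr=0.01):
    """Deconvolve a grayscale image in the frequency domain."""
    gray = gray.astype(np.float32) / 255.0
    K = np.fft.fft2(kernel, s=gray.shape)        # kernel spectrum
    G = np.fft.fft2(gray)                        # blurred image spectrum
    H = np.conj(K) / (np.abs(K) ** 2 + snr)      # Wiener filter
    restored = np.real(np.fft.ifft2(G * H))
    return np.clip(restored * 255, 0, 255).astype(np.uint8)

blurred = cv2.imread("truck_blurred.jpg", cv2.IMREAD_GRAYSCALE)  # assumed file
if blurred is not None:
    restored = wiener_deblur(blurred, motion_kernel(15))
    cv2.imwrite("truck_restored.jpg", restored)
```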
Figure: a blurred image of a truck in motion, and the image restored after motion-blur removal
In summary, the intelligent massive video surveillance recording analysis system meets users' real needs: it provides real-time viewing and playback of recordings, can capture key images, and records and makes queryable each captured image's time, location, camera number, reason for capture, brief description of the case, and characteristics of the captured target. It fully meets the functional requirements and has clear advantages over traditional methods.
The video enhancement, condensation and summarization, and retrieval product can be summed up as delivering five kinds of reassurance:
Leaders are satisfied (high-quality video can be viewed on a mobile phone)
Users are comfortable (category-based search means no more looking for a needle in a haystack)
Maintenance is easy (only a few minutes of browsing per day)
It is green and reliable (it saves hard disk space)
It wins public support (it improves public order)
A typical application of the massive video surveillance recording analysis and verification recording solution proceeds as follows:
1. Based on case clues, the manager records the case information, for example that an "adult suspect in a red shirt" appeared at the scene, together with the time and location (camera channel number) of the incident;
2. The verification and entry software retrieves the video recordings of the relevant and nearby locations into the storage center;
3. If the video is unclear, it is pre-processed with the video enhancement software;
4. The video from a single camera is processed first: the video summary and image snapshot recording subsystem generates a snapshot list and condensed video files, the suspect is found through the snapshots, and the event information is entered into the verification and entry system;
5. The video target classification screening and retrieval subsystem then searches the recordings of surrounding cameras before and after the incident to reconstruct the target's full trajectory from entering to leaving the scene, and the complete cross-camera event information is entered into the verification and entry system;
6. The automatic search system described above can coexist with the manual verification and entry system.