It was reported on April 24 that cameras using AI to recognize faces and bodies in images and videos are becoming increasingly common. From supermarkets and offices to autonomous driving and smart cities, smart cameras that can quickly detect human bodies and recognize faces are everywhere.
Recently, however, a research team designed a special colorful pattern: hang this 40 cm x 40 cm sticker on your body, and you can evade detection by AI surveillance cameras.
The team, from Katholieke Universiteit Leuven in Belgium, published a paper titled “Fooling automated surveillance cameras: adversarial patches to attack person detection.”
The three researchers named on the paper, Simen Thys, Wiebe Van Ranst, and Toon Goedemé, demonstrated the attack on the popular open-source YOLOv2 object detector, successfully fooling it with a printed adversarial patch.
Paper link: https://arxiv.org/pdf/1904.08653.pdf
They have also published the source code:
https://gitlab.com/EAVISE/adversarial-yolo.
A sticker that acts like an "invisibility cloak"
Let’s first take a look at what this research team has done.
As shown in the picture, the person on the right is wearing a colorful sticker. The sticker successfully deceives the AI system: even while facing the camera, he goes undetected, unlike the person on the left, who is boxed by a pink detection frame.
When the person on the right flips the sticker around, he is immediately detected.
After the person on the right hands the sticker to the person on the left, the AI instantly fails to detect the person on the left.
The researchers note that the technique could be used to "maliciously bypass surveillance systems," allowing intruders to "perform sneaky actions by holding a small piece of cardboard in front of their body towards a surveillance camera."
According to media reports, Van Ranst, one of the paper's authors, said that adapting the attack to existing video surveillance systems should not be too difficult. "At present, we still need to know which detector is in use. What we want to do in the future is to generate a patch that works on multiple detectors at the same time. If this works, then the patch may also work on the detectors used in the surveillance system."
▲The angle and position of the patch have different effects on the AI's ability to detect people
The team is now planning to apply the patches to clothing.
Drawing parallels to the famous William Gibson sci-fi novel Zero History, the researchers wrote: “We believe that if we combine this technique with sophisticated clothing simulation, we could design a T-shirt print that would render a person virtually invisible to automated surveillance cameras.”
Future work will focus on making the patches more robust and transferable, since they currently do not transfer well to different detection architectures such as Faster R-CNN.
How is the adversarial patch made?
The core aim of this research is to create a system that generates printable adversarial patches that can be used to "fool" person detectors.
“We achieve this by optimizing the image to minimize the different probabilities associated with the appearance of people in the detector output,” the researchers wrote. “In our experiments, we compared different methods and found that minimizing object loss created the most effective patches.”
They then printed out the optimized patches and tested them by photographing people holding them.
The researchers found that the patches worked well as long as they were positioned correctly.
“Based on our results, we can see that our system is able to significantly reduce the accuracy of person detectors… In most cases, our patches are able to successfully hide the person from the detector. In cases where this is not the case, the patch is not aligned with the center of the person,” the researchers said.
The optimizer's goal is to minimize the total loss function L, which combines three loss terms: Lnps (the non-printability score), Ltv (the total variation of the image), and Lobj (the maximum object score in the image).
Lnps measures how well the colors in the sticker can be reproduced by an ordinary printer;
Ltv steers the optimizer toward images with smooth color transitions and suppresses image noise;
Lobj is the object or class score output by the detector, which the optimization drives down.
The three losses are summed, weighted by empirically chosen scale factors α and β, to give the total loss function:

L = α·Lnps + β·Ltv + Lobj
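Below is a minimal sketch of that optimization loop, assuming PyTorch. The loss weights, the printable-color set, and the placeholder Lobj term are illustrative assumptions, not the authors' exact implementation (that is in the GitLab repository above).

```python
import torch

patch = torch.rand(3, 300, 300, requires_grad=True)   # patch pixels in [0, 1]
printable = torch.rand(30, 3)                         # stand-in set of printer-reproducible colors

def nps_loss(p):
    # Lnps: distance from each patch pixel to its nearest printable color
    pix = p.permute(1, 2, 0).reshape(-1, 1, 3)        # (H*W, 1, 3)
    dist = ((pix - printable.unsqueeze(0)) ** 2).sum(-1).add(1e-9).sqrt()
    return dist.min(dim=1).values.mean()

def tv_loss(p):
    # Ltv: total variation, favors smooth color transitions
    return ((p[:, 1:, :] - p[:, :-1, :]).abs().mean() +
            (p[:, :, 1:] - p[:, :, :-1]).abs().mean())

def obj_loss(p):
    # Lobj placeholder: a real implementation renders the patch onto
    # training images and takes the detector's maximum object score
    return p.mean()

opt = torch.optim.Adam([patch], lr=0.03)
for _ in range(250):
    loss = 0.01 * nps_loss(patch) + 2.5 * tv_loss(patch) + obj_loss(patch)
    opt.zero_grad()
    loss.backward()
    opt.step()
    patch.data.clamp_(0, 1)   # keep pixel values printable/valid
```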
The YOLOv2 detector outputs a grid of cells; each cell contains a set of anchors, and each anchor predicts a bounding-box location, an object probability, and class scores.
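As a sketch of what that output looks like in practice, the snippet below pulls the maximum object score out of a raw YOLOv2 feature map; the tensor layout (5 box/objectness values plus class scores per anchor) follows the standard YOLOv2 convention, and the concrete shapes are assumptions for illustration.

```python
import torch

def max_object_score(output, num_anchors=5, num_classes=80):
    # output: raw YOLOv2 feature map, assumed shape (B, A*(5+C), H, W)
    b, _, h, w = output.shape
    out = output.view(b, num_anchors, 5 + num_classes, h, w)
    obj = torch.sigmoid(out[:, :, 4])          # channel 4 holds the objectness logit
    return obj.view(b, -1).max(dim=1).values   # highest object score per image

scores = max_object_score(torch.randn(2, 425, 13, 13))   # 425 = 5 * (5 + 80)
```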
To get the detector to ignore people in images, the researchers optimized patches against a YOLOv2 model trained on the MS COCO dataset and tried three different approaches: minimizing the person classification score (Figure 4d), minimizing the object score (Figure 4c), or a combination of the two (Figures 4b and 4a).
The first approach may simply cause the generated patch to be detected as another COCO class. The second avoids this problem, although the resulting patch is less specific to one particular class than the other approaches.
By experimenting with various types of "patches", the researchers found that photos of random objects put through multiple image transformations worked best. During training they applied a variety of random transformations, including rotation, random scaling up and down, random noise, and random changes to brightness and contrast (see the sketch below).
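Here is a sketch of such a random-transformation step, assuming PyTorch and torchvision; the exact ranges are assumptions, and torchvision's functional API stands in for the authors' own transformation code.

```python
import torch
import torchvision.transforms.functional as TF

def random_transform(patch):
    # patch: float tensor of shape (3, H, W), values in [0, 1]
    patch = TF.rotate(patch, angle=float(torch.empty(1).uniform_(-20, 20)))
    scale = float(torch.empty(1).uniform_(0.8, 1.2))               # random up/down scaling
    patch = TF.resize(patch, [max(1, int(s * scale)) for s in patch.shape[-2:]])
    patch = (patch + 0.1 * torch.randn_like(patch)).clamp(0, 1)    # random noise
    patch = TF.adjust_brightness(patch, float(torch.empty(1).uniform_(0.8, 1.2)))
    patch = TF.adjust_contrast(patch, float(torch.empty(1).uniform_(0.8, 1.2)))
    return patch.clamp(0, 1)

transformed = random_transform(torch.rand(3, 300, 300))
```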
Finally, the researchers evaluated the resulting patches, together with NOISE (random noise) and CLEAN (a patch-free baseline), on the INRIA test set, focusing on how many of the surveillance system's alarms the patches could suppress.
The results show that the OBJ patch triggers the lowest number of alerts (25.53%).
The test results of different patches are compared as follows:
The first row shows no patch, the second row a random patch, and the third row the best patch. The difference is obvious: the best patch lets people evade AI detection in most cases.
The patch is not perfect, however; when it fails, it is usually because it is not aligned with the center of the person.
Adversarial attacks are nothing new
With the rapid development of AI, sub-fields such as video surveillance, autonomous driving, drones and robots, speech recognition, and text screening raise more and more security issues.
Deep learning networks, which have become increasingly accurate, have been found to be extremely vulnerable to adversarial attacks.
For example, adding a small, carefully crafted perturbation to an image can cause a classification model to make an incorrect judgment and label the modified image as something completely different.
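One classic way to craft such a perturbation is the fast gradient sign method (FGSM); the snippet below is a minimal sketch, with a stand-in model and image rather than any real system.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()  # stand-in classifier
image = torch.rand(1, 3, 32, 32, requires_grad=True)
label = torch.tensor([3])                                # assumed true class

loss = F.cross_entropy(model(image), label)
loss.backward()
# FGSM: nudge every pixel a small step in the direction that increases the loss
adversarial = (image + 0.03 * image.grad.sign()).clamp(0, 1)
```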
Based on this background, more and more researchers are interested in real-world adversarial attacks.
In November 2016, researchers from Carnegie Mellon University and the University of North Carolina demonstrated how to defeat facial recognition systems using a pair of printed eyeglass frames.
"When an attacker's image of her was presented to a state-of-the-art facial recognition algorithm, the glasses allowed her to evade being recognized or impersonate someone else," the researchers said.
The reported recognition error rate while wearing the frames was as high as 100%. Bad actors could use this loophole to evade software detection, easily disguise themselves as someone else, and unlock devices protected by facial recognition.
▲The above is a real person, and the below is a person identified by the system
In October 2017, Su Jiawei and colleagues from Kyushu University in Japan proposed a black-box attack on deep neural networks: by using differential evolution to perturb a small number of pixels (only 1, 3, or 5 out of 1024), they could make a network produce completely wrong judgments.
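A toy sketch of this idea, using SciPy's differential evolution to tune a single pixel's position and color, is shown below; the `model` here is a hypothetical stub, not a real classifier.

```python
import numpy as np
from scipy.optimize import differential_evolution

image = np.random.rand(32, 32, 3)    # stand-in 32x32 RGB image
true_class = 0

def model(img):                      # hypothetical classifier stub
    p = img.mean()                   # returns two fake class probabilities
    return np.array([p, 1.0 - p])

def fitness(z):                      # z = (row, col, r, g, b) for one pixel
    img = image.copy()
    img[int(z[0]), int(z[1])] = z[2:]
    return model(img)[true_class]    # minimize confidence in the true class

bounds = [(0, 31.999), (0, 31.999), (0, 1), (0, 1), (0, 1)]
result = differential_evolution(fitness, bounds, maxiter=30, popsize=10, seed=0)
```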
In 2018, the Berkeley Artificial Intelligence Research Laboratory (BAIR) demonstrated physical adversarial examples against the YOLO detector, also in the form of sticker perturbations, by pasting some carefully designed stickers on the "STOP" road sign.
As a result, the YOLO network failed to perceive the stop sign in almost every frame. The same physical adversarial examples generated for the YOLO detector were also able to fool the standard Faster R-CNN.
Imagine an autonomous car approaching a road sign carrying such a sticker: if the sticker misleads the car into missing the sign, an irreparable traffic tragedy could follow.
Conclusion: The optimal defense strategy is still under exploration
Adversarial attacks have long been an interesting and important topic in machine learning.
Today, AI is gradually being widely used in daily surveillance cameras and software, appearing in many scenarios such as retail, workspaces, communities, and transportation.
Adversarial examples can exploit loopholes in neural networks, for example letting thieves evade surveillance cameras and steal freely in unmanned stores, or letting intruders slip into a building.
Currently, researchers are still far from finding the optimal defense strategy against these adversarial examples, and we can expect breakthroughs in this exciting research field in the near future.