Magic stickers fool AI! Humans become "invisible": is a crisis for smart surveillance coming?

Publisher: 星辰小鹿 | Updated: 2019-04-24 | Source: 智东西

On April 24, it was reported that cameras using AI to recognize faces and bodies in images and video are becoming increasingly common. From supermarkets and offices to autonomous driving and smart cities, smart cameras that can quickly capture human bodies and recognize faces are everywhere.

Recently, however, a research team designed a special colorful pattern: hang this 40 cm × 40 cm "magic" sticker on your body and you can evade detection by AI surveillance cameras.

The team, from Katholieke Universiteit Leuven in Belgium, published a paper titled “Fooling automated surveillance cameras: adversarial patches to attack person detection.”

The three researchers named on the paper, Simen Thys, Wiebe Van Ranst, and Toon Goedemé, demonstrated the attack on the popular open-source YOLOv2 object detector and successfully fooled it with a printed adversarial patch.

Paper link: https://arxiv.org/pdf/1904.08653.pdf

They have also released the source code for the paper:

https://gitlab.com/EAVISE/adversarial-yolo.

A sticker that works like an "invisibility cloak"

Let’s first take a look at what this research team has done.

As shown in the picture, the person on the right is wearing the colorful sticker. The sticker successfully deceives the AI system: even facing the camera, he is not detected, while the person on the left without a sticker is detected (pink bounding box).

When the person on the right flips the sticker over, he is immediately detected.

When the person on the right hands the sticker to the person on the left, the AI instantly loses track of the person on the left.

The researchers note that the technique could be used to "maliciously bypass surveillance systems," allowing intruders to "perform sneaky actions by holding a small piece of cardboard in front of their body towards a surveillance camera."

According to foreign media reports, Van Ranst, one of the paper's authors, said it should not be too difficult to pull off the same attack against off-the-shelf video surveillance systems: "At the moment we still need to know which detector is in use. What we want to do in the future is generate a patch that works on multiple detectors at the same time. If this works, the patch may also work on the detector used in the surveillance system."

▲The angle and position of the patch have different effects on the AI's ability to detect people

The team is now planning to apply the patches to clothing.

Drawing parallels to the famous William Gibson sci-fi novel Zero History, the researchers wrote: “We believe that if we combine this technique with sophisticated clothing simulation, we could design a T-shirt print that would render a person virtually invisible to automated surveillance cameras.”

Future work will focus on making the patches more robust and transferable, since they currently do not transfer well to different detection architectures such as Faster R-CNN.

How is the "adversarial patch" made?

The core goal of this research is to create a system that can generate printable adversarial patches that can be used to "fool" person detectors.

“We achieve this by optimizing the image to minimize the different probabilities associated with the appearance of people in the detector output,” the researchers wrote. “In our experiments, we compared different methods and found that minimizing object loss created the most effective patches.”
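As a rough illustration of that idea, here is a minimal, self-contained PyTorch sketch, not the authors' code: the patch is treated as a trainable tensor, and gradient descent pushes down a detection-style score produced by a tiny stand-in network (used here instead of YOLOv2, with a random image standing in for real footage), showing only the mechanics of the optimization.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
patch = torch.rand(3, 64, 64, requires_grad=True)            # the learnable "sticker"
detector = nn.Sequential(                                     # tiny stand-in for a person detector
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveMaxPool2d(1), nn.Flatten(), nn.Linear(8, 1))
optimizer = torch.optim.Adam([patch], lr=0.03)

for step in range(200):
    scene = torch.rand(1, 3, 128, 128)                        # stand-in camera frame
    patched = scene.clone()
    patched[0, :, 32:96, 32:96] = patch                       # paste the patch onto the "person"
    score = detector(patched)                                  # "person present" score
    loss = score.mean()                                        # push the detection score down
    optimizer.zero_grad()
    loss.backward()                                            # gradients flow into the patch only
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0, 1)                                     # keep pixel values valid
```

In the real pipeline the frozen detector is YOLOv2, the scenes are images of people, and the loss includes the printability and smoothness terms described below.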

They then printed out the optimized patches and tested them by photographing people holding them.

The researchers found that the patches worked well as long as they were positioned correctly.

“Based on our results, we can see that our system is able to significantly reduce the accuracy of person detectors… In most cases, our patches are able to successfully hide the person from the detector. In cases where this is not the case, the patch is not aligned with the center of the person,” the researchers said.

The goal of the optimizer is to minimize the total loss function L, which combines three loss terms: Lnps (the non-printability score), Ltv (the total variation of the image), and Lobj (the maximum object score in the image).

Lnps represents the extent to which the colors in the sticker can be printed by a normal printer;

Ltv ensures the optimizer favors images with smooth color transitions and penalizes image noise;

Lobj minimizes the object (or class) score output by the detector, as sketched in the code below.
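A minimal PyTorch sketch of these three terms, assuming the patch is a [3, H, W] tensor with values in [0, 1]; the paper's exact formulations may differ slightly (its total-variation term, for example, uses squared neighbour differences):

```python
import torch

def nps_loss(patch, printable_colors):
    # Non-printability score: distance from every patch pixel to the nearest
    # colour in a set an ordinary printer can reproduce.
    # patch: [3, H, W] in [0, 1]; printable_colors: [K, 3] in [0, 1].
    diff = patch.permute(1, 2, 0).unsqueeze(2) - printable_colors   # [H, W, K, 3]
    dist = diff.pow(2).sum(dim=-1).sqrt()                           # [H, W, K]
    return dist.min(dim=-1).values.sum()

def tv_loss(patch):
    # Total variation: penalise abrupt colour changes between neighbouring
    # pixels so the optimizer prefers smooth, printable patterns.
    dh = (patch[:, 1:, :] - patch[:, :-1, :]).abs().sum()
    dw = (patch[:, :, 1:] - patch[:, :, :-1]).abs().sum()
    return dh + dw

def obj_loss(max_objectness):
    # Object loss: the highest "an object is here" score the detector produces;
    # minimising it makes the detector overlook the person wearing the patch.
    return max_objectness.mean()
```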

The three loss terms are combined in a weighted sum to obtain the total loss function:
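In the paper's notation, with α and β as scaling factors (determined empirically, according to the paper):

$$L = \alpha L_{\mathrm{nps}} + \beta L_{\mathrm{tv}} + L_{\mathrm{obj}}$$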

The YOLOv2 detector outputs a grid of cells; each cell predicts a set of anchor boxes, and each anchor box contains a bounding-box location, an objectness score, and class scores.

To get the detector (a YOLOv2 model trained on the MS COCO dataset) to ignore people in images, the researchers tried three different approaches: minimizing the person classification score (Figure 4d), minimizing the object score (Figure 4c), or a combination of the two (Figures 4b and 4a).

The first approach may cause the generated patch to be detected as another class from the COCO dataset. The second approach does not have this problem, but the patches it generates are less specific to a particular class than those from the other methods.
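A rough sketch of how these three objectives could be read out of a YOLOv2-style output tensor; the tensor layout, function name, and default arguments here are assumptions for illustration, not the repository's actual code:

```python
import torch

def patch_losses(yolo_out, num_anchors=5, num_classes=80, person_idx=0):
    # yolo_out: raw YOLOv2 output of shape [B, A*(5+C), H, W], where each
    # anchor predicts (x, y, w, h, objectness, C class logits).
    B, _, H, W = yolo_out.shape
    out = yolo_out.view(B, num_anchors, 5 + num_classes, H, W)
    objectness = torch.sigmoid(out[:, :, 4])                          # [B, A, H, W]
    person_prob = torch.softmax(out[:, :, 5:], dim=2)[:, :, person_idx]

    cls_loss = person_prob.flatten(1).max(1).values.mean()            # (1) person class score only
    obj_loss = objectness.flatten(1).max(1).values.mean()             # (2) objectness only (OBJ patch)
    combined = (objectness * person_prob).flatten(1).max(1).values.mean()  # (3) product of both
    return cls_loss, obj_loss, combined
```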

By experimenting with various kinds of patches, the researchers found that starting from photos of random objects and applying many random image transformations produced the best results. The transformations included rotation, random scaling up and down, added random noise, and random changes to brightness and contrast.
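A rough, non-differentiable illustration of such random transformations using PIL; the parameter ranges are invented for the example, and in the actual optimization the transformations are applied in a way that lets gradients flow back into the patch:

```python
import random
import numpy as np
from PIL import Image, ImageEnhance

def random_transform(patch_img: Image.Image) -> Image.Image:
    # Random rotation, scaling, brightness/contrast changes and pixel noise,
    # applied to the patch before it is pasted onto a training image.
    img = patch_img.convert("RGB")
    img = img.rotate(random.uniform(-20, 20), expand=True)
    scale = random.uniform(0.8, 1.2)
    img = img.resize((int(img.width * scale), int(img.height * scale)))
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.8, 1.2))
    img = ImageEnhance.Contrast(img).enhance(random.uniform(0.8, 1.2))
    arr = np.asarray(img).astype(np.float32)
    arr += np.random.normal(0, 5, arr.shape)          # mild random pixel noise
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
```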

Finally, the researchers evaluated the resulting patches, together with NOISE (a random-noise patch) and CLEAN (a patch-free baseline), on the Inria test set, focusing on how many of the surveillance system's alarms the patches could suppress.

The results show that the OBJ patch triggers the lowest number of alerts (25.53%).

The test results of different patches are compared as follows:

The first row shows no patch, the second row a random patch, and the third row the best patch. The effect is clear: the best patch allows a person to evade AI detection in most cases.

The patch is not perfect, however; when it fails, it is usually because it is not aligned with the center of the person.

Adversarial attacks are nothing new

With the rapid development of AI, sub-fields such as video surveillance, autonomous driving, drones and robotics, speech recognition, and text screening face more and more security issues.

Deep learning networks, which have become increasingly accurate, have been found to be extremely vulnerable to adversarial attacks.

For example, adding a small, carefully designed perturbation to an image can cause a classification model to make an incorrect judgment and label the modified image as something completely different.
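One simple, well-known way to craft such a perturbation is the fast gradient sign method (Goodfellow et al.), which nudges every pixel slightly in the direction that increases the classifier's loss. The sketch below illustrates the idea only; it is a different attack from the patch above, and the untrained ResNet and random image are stand-ins chosen just to keep the example runnable:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None).eval()               # stand-in image classifier
image = torch.rand(1, 3, 224, 224, requires_grad=True)     # stand-in photo
with torch.no_grad():
    label = model(image).argmax(dim=1)                     # the model's original prediction

loss = F.cross_entropy(model(image), label)                # loss w.r.t. that prediction
loss.backward()
perturbed = (image + 0.05 * image.grad.sign()).clamp(0, 1)   # small signed-gradient step
print(label.item(), model(perturbed).argmax(dim=1).item())   # the prediction often flips
```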

Against this background, more and more researchers are turning their attention to real-world adversarial attacks.

In November 2016, researchers from Carnegie Mellon University and the University of North Carolina demonstrated how to defeat facial recognition systems using a pair of printed eyeglass frames.

"When an attacker's image of her was presented to a state-of-the-art facial recognition algorithm, the glasses allowed her to evade being recognized or impersonate someone else," the researchers said.

The misrecognition rate when wearing the frames was reportedly as high as 100%. Attackers could use this loophole to evade detection software, easily disguise themselves as someone else, and unlock devices protected by facial recognition.

▲Top: the real person; bottom: the person the system identified

In October 2017, Jiawei Su and colleagues at Kyushu University in Japan proposed a black-box attack on deep neural networks: using differential evolution to perturb only a handful of pixels (1, 3, or 5 out of 1024), they could make the network produce a completely wrong prediction.

In 2018, the Berkeley Artificial Intelligence Research (BAIR) lab demonstrated physical adversarial examples against the YOLO detector, also in the form of sticker perturbations, by pasting carefully designed stickers on a "STOP" road sign.

As a result, the YOLO network failed to perceive the stop sign in almost every frame. The physical adversarial examples generated for the YOLO detector could also fool a standard Faster R-CNN.

Imagine an autonomous car encountering a road sign with such a sticker on it: if the car is misled and fails to understand the sign, an irreparable traffic tragedy could occur.

Conclusion: The optimal defense strategy is still under exploration

Adversarial attacks have long been an interesting and important topic in machine learning.

Today, AI is increasingly used in everyday surveillance cameras and software, appearing in scenarios such as retail, workplaces, residential communities, and transportation.

Adversarial examples can exploit loopholes in neural networks, for example letting thieves evade surveillance cameras and steal freely in unmanned stores, or letting intruders walk into a building undetected.

Researchers are still far from finding optimal defenses against these adversarial examples, but breakthroughs can be expected in this exciting research field in the near future.

