Facebook has developed a de-identification system that can make you "invisible" to facial recognition in live video
Text | Liu Lin
Recently, researchers at the Norwegian University of Science and Technology published a paper titled "DeepPrivacy: A Generative Adversarial Network for Face Anonymization," claiming to fool face recognition systems with a new and more challenging approach: anonymizing the face without changing the original data distribution. In plainer terms, the system outputs a realistic face while keeping the original person's pose and background intact. With this technology, a face recognition system still operates normally, but it is completely unable to recover the original identity, and an impostor could pass as someone else and move freely in and out of facilities protected by face recognition.
According to the authors' tests, the anonymized faces remain almost as detectable as the originals: the average accuracy of ordinary face detection on anonymized images drops by only 0.7%, while the identity information contained in the generated faces is 100% non-overlapping with the original.
Using AI to deceive AI, a genuinely impressive trick.
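The DeepPrivacy generator described in the paper in-paints a masked-out face region conditioned only on the surrounding background and sparse pose keypoints, so no identity information from the original face ever reaches the generator. As a minimal structural sketch of that idea in PyTorch (not the authors' actual architecture; the layer sizes and the seven-channel pose input here are illustrative assumptions):

```python
import torch
import torch.nn as nn

class AnonymizerGenerator(nn.Module):
    """Toy conditional generator: takes an image whose face region has
    been masked out, plus a pose-keypoint heatmap, and synthesizes a new
    face. The real DeepPrivacy model is a progressively grown U-Net GAN;
    this only sketches the conditioning scheme."""
    def __init__(self, channels=3, pose_channels=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + pose_channels, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, 3, padding=1),
            nn.Tanh(),  # output in [-1, 1], matching normalized images
        )

    def forward(self, masked_image, pose_heatmap):
        # Condition on everything except the original face: background
        # context and sparse pose keypoints, but no identity information.
        x = torch.cat([masked_image, pose_heatmap], dim=1)
        return self.net(x)

def anonymize(image, face_mask, pose_heatmap, generator):
    """Replace the face region with generated pixels; keep the rest."""
    masked = image * (1 - face_mask)        # zero out the face region
    fake = generator(masked, pose_heatmap)  # synthesize a replacement face
    return masked + fake * face_mask        # composite the final frame
```

Because the generator never sees the original face pixels, whatever it paints in cannot leak the original identity, which is why detection accuracy is preserved even as recognition fails.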
Facebook, too, has been working to counter facial recognition, and its efforts have recently paid off.
According to foreign media outlet VentureBeat, Facebook's artificial intelligence laboratory, Facebook AI Research (FAIR), recently developed a "de-identification" system that can deceive facial recognition systems, for example by making them identify you as a female celebrity.
The technology uses machine learning to change key facial features of people in a video in real time, tricking facial recognition systems into misidentifying the subject.
The technology reportedly pairs an adversarial autoencoder with a trained face classifier to subtly distort a person's face, confusing facial recognition systems while preserving a natural look that humans can still recognize. It works on video, including real-time video.
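FAIR has not released code for the system, so the following is only a hedged sketch, in PyTorch, of the kind of trade-off such a de-identification system might optimize: push the face-recognition embedding of the output away from the original identity while keeping the pixels close enough that the face still looks natural. The function name, the embedder interface, and the loss weights are all illustrative assumptions, not FAIR's actual objective.

```python
import torch
import torch.nn.functional as F

def deid_loss(original, perturbed, face_embedder, w_id=1.0, w_sim=10.0):
    """Sketch of a de-identification objective: make the perturbed face
    unrecognizable to a frozen face-recognition network while changing
    the image as little as possible."""
    emb_orig = face_embedder(original).detach()  # fixed source identity
    emb_pert = face_embedder(perturbed)
    # High cosine similarity means the recognizer still sees the same
    # person, so this term is penalized...
    identity_term = F.cosine_similarity(emb_orig, emb_pert, dim=-1).mean()
    # ...while an L1 term keeps the output visually close to the input.
    similarity_term = F.l1_loss(perturbed, original)
    return w_id * identity_term + w_sim * similarity_term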
In fact, this kind of "de-identification" technology is not new. D-ID, an Israeli provider of automatic anti-face-recognition systems, has already developed de-identification technology for still images. There are also so-called adversarial examples, which exploit weaknesses in computer vision systems: by wearing adversarial patterns, people can trick facial recognition systems into seeing things that are not there.
Previous techniques were usually applied to photos, still images captured by surveillance cameras or obtained through other channels, or relied on pre-planned adversarial images to deceive facial recognition systems. FAIR's research, by contrast, targets real-time images and video footage, and FAIR claims the achievement is an industry first and robust enough to resist sophisticated facial recognition systems.
Facebook also published a paper explaining its stance on the technology. It argues that facial recognition can violate privacy and that face-replacement technology can be used to create misleading videos. To rein in the abuse of facial recognition technology, the company introduced a method for de-identifying video, which has achieved good results.
In addition, according to VentureBeat, Facebook does not intend to use this anti-face-recognition technology in any commercial product, but the research may shape future personal privacy protection tools. And, as the paper stresses in its discussion of misleading videos, it could prevent a person's likeness from being used to generate fake videos.
In fact, anti-facial recognition technology has developed rapidly in recent years. As early as last year, a team led by Professor Parham Aarabi and graduate student Avishek Bose of the University of Toronto developed an algorithm that can dynamically disrupt facial recognition systems.
Put simply, their method hinders face recognition by attacking the recognition algorithm itself: it alters a handful of pixels, changes almost imperceptible to the human eye, in ways that flip the detector's output. Tiny as these pixel modifications are, they are fatal to the detector.
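The Toronto system is actually a trained attack generator pitted against a face detector in an adversarial game, but the underlying principle of tiny, gradient-guided pixel changes can be illustrated with a single FGSM-style step. This is a simplified stand-in, not the authors' method; `detector` here is assumed to be any differentiable model that returns face-confidence scores.

```python
import torch

def adversarial_step(image, detector, epsilon=2 / 255):
    """One FGSM-style step: nudge every pixel a tiny amount in the
    direction that lowers the detector's face-confidence score."""
    image = image.clone().requires_grad_(True)
    score = detector(image).sum()  # total face-detection confidence
    score.backward()               # gradient of the score w.r.t. pixels
    # Step against the gradient; epsilon keeps each pixel change below
    # ~1% of the intensity range, imperceptible to the human eye.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```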
The researchers confirmed the feasibility of the method on the 300-W dataset, an industry-standard benchmark containing more than 600 face photos spanning multiple ethnicities, lighting conditions, and background environments. The results show that their system reduced the proportion of detectable faces from nearly 100% to 0.5%.
What is even more frightening is that this anti-face-recognition system can learn autonomously through neural networks, continually adapting itself as face recognition systems evolve.
But what the editors at Leifeng.com find even more terrifying is that in the AI era, we cannot even keep our own "face" to ourselves.