The AI face-swapping app is fun, but its potential dangers and security risks cannot be underestimated

Publisher: 红尘清梦 | Last updated: 2019-09-05 | Source: eefocus | Keywords: ZAO

The potential dangers and security risks posed by ZAO are also challenges that society will inevitably face as artificial intelligence products reach the public.

  

The AI face-swapping app ZAO went viral online, but the good times did not last long: the WeChat sharing link for ZAO, which became popular overnight, has been blocked, while the privacy anxieties and risk concerns it raised continue to spread.

  

In fact, the ZAO app is a repackaged version of face-swapping software that originated in the United States in 2017, known as Deepfakes, a blend of "deep learning" and "fake": in other words, deep-learning-generated fakes.
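The original Deepfakes technique trains one shared encoder together with a separate decoder per identity; a "swap" encodes a face of person A and decodes it with person B's decoder. As a rough illustration of that idea only (the real pipeline uses convolutional networks on aligned face crops, not the toy linear model and random vectors assumed here), a minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT, LR, STEPS = 32, 8, 0.05, 500

# Toy stand-ins for flattened, aligned face crops of two people.
faces_a = rng.normal(size=(64, DIM))
faces_b = rng.normal(size=(64, DIM))

# One SHARED encoder, one decoder PER identity: the core Deepfakes trick.
W_enc = rng.normal(scale=0.1, size=(DIM, LATENT))
W_dec_a = rng.normal(scale=0.1, size=(LATENT, DIM))
W_dec_b = rng.normal(scale=0.1, size=(LATENT, DIM))

def mse(x, y):
    return float(np.mean((x - y) ** 2))

init_loss = mse(faces_a @ W_enc @ W_dec_a, faces_a)

for _ in range(STEPS):
    for faces, W_dec in ((faces_a, W_dec_a), (faces_b, W_dec_b)):
        z = faces @ W_enc          # shared latent face code
        recon = z @ W_dec          # identity-specific reconstruction
        err = recon - faces
        # Plain gradient descent on reconstruction error.
        grad_dec = z.T @ err / len(faces)
        grad_enc = faces.T @ (err @ W_dec.T) / len(faces)
        W_dec -= LR * grad_dec
        W_enc -= LR * grad_enc

final_loss = mse(faces_a @ W_enc @ W_dec_a, faces_a)

# The "face swap": encode person A, decode with person B's decoder.
swapped = (faces_a @ W_enc) @ W_dec_b
```

Because the encoder is shared across both identities, the latent code captures pose and expression in a common space, which is what makes decoding with the other person's decoder produce a plausible swapped face.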

  

Precisely because the output is fake, problems are bound to follow. Such software may not only cause security risks through leaks of personal information; where crime is involved, easy face swapping makes it hard to distinguish real perpetrators from impostors and raises judicial costs. In some countries, such as the United States, face swapping could also become a counterterrorism nightmare, making it easier for terrorists to commit crimes and escape detection by changing their faces, and increasing the burden on security agencies.

  

All of these are potential dangers and security risks that may arise from ZAO and similar software, and they are challenges that society will inevitably face as artificial intelligence products reach the public. How to regulate them is therefore the most pressing and serious question.

  

The ZAO app now faces the difficult choice of being banned or allowed to stand. On face swapping, the United States, the European Union and others take the view that even a merely potential threat is serious enough to warrant preemptive precautions.

  

On January 28 this year, the Carnegie Endowment for International Peace published an article titled "How Should Countries Respond to Deepfakes?", pointing out that face-swapping technologies such as Deepfakes carry a series of potential harms, including inciting political violence, undermining elections, disrupting diplomatic relations, fabricating evidence to interfere with the judiciary, and enabling blackmail. It urged countries to clearly define improper uses of Deepfakes: society urgently needs to establish what is acceptable and what is not. Such definitions would aid social and legal governance, and would also help social media companies regulate their platforms and manage online content.


On June 13 this year, the U.S. House Intelligence Committee held a hearing on Deepfakes. At the hearing, committee chairman Adam Schiff said the spread of doctored videos presents a "nightmare" scenario for the 2020 presidential election, making it "difficult for lawmakers, the news media and the public to distinguish what is real and what is fake." He and Danielle Citron, a law professor at the University of Maryland, therefore suggested that Congress consider amending Section 230 of the Communications Decency Act (under which internet services are not liable for their users' content) to combat Deepfakes and protect users from being misled by fake news.

  

In response to the destructive and dangerous content produced by Deepfakes face-swapping technology, the European Union also issued guidelines in early 2019 to help the public determine where a piece of information comes from, how it was produced, and whether it can be trusted.

  

It can be seen that the United States and the European Union, despite attaching great importance to the potential dangers of face-swapping technology, have not yet passed laws banning it and are still at the stage of discussion. Under public pressure, however, the Reddit discussion board devoted to face-swapping technology was deleted, the technology was banned on US platforms, and its open-source code on GitHub was removed.

  

Although the ZAO app does not appear to have caused any actual harm so far, one potential harm netizens have pointed out is very real: "With a phone number and facial images, criminals can use synthesis technology to place calls to your family while impersonating you." The public's concerns are therefore not unfounded, and clear legal provisions regulating such software are needed as soon as possible.

