The potential dangers and insecurities posed by ZAO are also challenges that society will inevitably face as artificial intelligence products reach the public.
The AI face-swapping app ZAO went viral on the Internet, but the good times did not last long: the WeChat sharing link for "ZAO", popular overnight, has already been blocked, while the privacy anxiety and risk concerns it stirred up continue to spread.
In fact, ZAO is a reworked face-swapping app built on a technology that emerged in the United States in 2017, known as Deepfakes, a blend of "deep (machine) learning" and "fake": in other words, deeply faked photos and videos.
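For context on what "deep fake" actually involves, the widely circulated open-source approach trains one shared encoder together with a separate decoder per identity, so that swapping a face amounts to decoding one person's frame with another person's decoder. The sketch below is a minimal illustration of that idea under assumed layer sizes and a 64x64 input; it is not ZAO's code or any specific project's implementation.

```python
# Minimal sketch (not ZAO's or any project's actual code) of the autoencoder
# idea behind "deepfake" face swapping, for illustration only. A single shared
# encoder learns a common facial representation; each identity gets its own
# decoder. Swapping = encode a frame of person A, decode with B's decoder.
# Layer sizes and the 64x64 input are assumptions made for brevity.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),               # shared latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.deconv(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

# Training would minimize each person's own reconstruction loss (e.g. MSE
# between decoder_a(encoder(face_a)) and face_a). At "swap" time, a frame of
# person A is simply routed through person B's decoder:
frame_of_a = torch.rand(1, 3, 64, 64)         # stand-in for a real video frame
swapped = decoder_b(encoder(frame_of_a))      # tensor of shape (1, 3, 64, 64)
```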
Since it is fake, bans were bound to follow. Beyond the various forms of insecurity that leaked personal information can cause, easy face-swapping, once crime is involved, makes it hard to tell a real perpetrator from a fake one and drives up judicial costs. In some countries, such as the United States, face-swapping could also become a counterterrorism nightmare, allowing terrorists to commit crimes and escape detection simply by changing their faces and making the work of security agencies far harder.
All of these are potential dangers and insecurities that ZAO and similar software may bring, and they are also challenges that society will inevitably face as artificial intelligence products reach the public. How to regulate them is therefore the most immediate and serious question.
ZAO now faces the hard choice between being banned and being allowed. On face-swapping more broadly, the United States, the European Union and others take the view that even a merely potential threat is serious enough to call for precaution first.
On January 28 this year, the Carnegie Endowment for International Peace published an article titled "How Should Countries Respond to Deepfakes?", pointing out that face-swapping technologies typified by Deepfakes carry a series of potential harms, including inciting political violence, undermining elections, disrupting diplomatic relations, fabricating evidence to interfere with the judiciary, and enabling blackmail. It urged countries to clearly define improper uses of Deepfakes, arguing that society urgently needs to decide what is acceptable and what is not; doing so would help not only social and legal governance but also social media platforms in policing their own content.
On June 13 this year, the U.S. House Intelligence Committee held a hearing on Deepfakes. Committee chairman Adam Schiff said at the hearing that the spread of doctored videos presents a "nightmare" scenario for the 2020 presidential election, making it "difficult for lawmakers, the news media and the public to distinguish what is real and what is fake." He and Danielle Citron, a professor at the University of Maryland School of Law, therefore suggested that Congress consider amending Section 230 of the Communications Decency Act (under which Internet services are not liable for the behavior of their users) to combat Deepfakes and protect users from being misled by fake content.
In response to the destructive and dangerous content produced with Deepfakes face-swapping technology, the European Union also issued guidance in early 2019 to help the public judge where a piece of information came from, how it was produced, and whether it can be trusted.
Evidently, although the United States and the European Union take the potential dangers of face-swapping technology very seriously, neither has yet passed a law banning it; the matter is still under discussion. Even so, under public pressure, Reddit deleted its face-swapping discussion board, face-swapped content was likewise banned on other U.S. platforms, and the related open source code was removed from GitHub.
Although ZAO does not appear to have caused actual harm so far, one potential harm that netizens have pointed out is very real: "With your mobile phone number and facial images, criminals can use synthesis technology to call your family while impersonating you." The public's concerns are therefore not unfounded, and clear legal provisions are needed to regulate such software as soon as possible.