2527 views|4 replies

The OP
 

2020 is drawing to a close, but the problems brought by these new technologies are still with us

2020 was the year in which AI steadily penetrated everyday life and pushed industry after industry toward digital transformation. At the same time, the arrival of these new technologies has begun to bring plenty of troubles of its own. How to understand new technologies correctly, and how to solve the new problems that come with them, has become a real test.
In 2020, despite the economic and social uncertainty brought by the epidemic, artificial intelligence technology continued to accelerate. This year, Gartner's AI technology maturity curve (hype cycle) added several new categories, including generative AI, composite AI, responsible AI, embedded AI, and AI-augmented design.
Among them, generative AI appeared on the curve for the first time. It is the technology commonly used to create "deepfake" videos and other synthetic digital content, and people with bad intentions are already trying to use it to produce such fakes. This shows that even brand-new technologies bring corresponding troubles and problems.


01 Information leakage behind the facial recognition black market
Face recognition is the most widely deployed and mature of these technologies, and as it has been rolled out at scale, the problems behind it have followed. Last year, reports that 30,000 face photos were being sold for only 8 yuan circulated widely. What is frightening is that you do not know who is collecting the faces, nor what the photos will be used for.
Last year, Machine Heart went undercover in face recognition chat groups to investigate the industrial chain behind them. Black and gray market practitioners call high-definition frontal headshots and photos of people holding their ID cards "materials". Working from these "materials", they use a specific set of face recognition tricks, usually a combination of facial animation software and hardware, to pass the face verification of payment, social and lifestyle apps.
In recent years, face recognition and identity verification have been widely promoted across the Internet market, and black and gray industries that profit from them have emerged alongside. In this chain, the upstream demand side uses real-name authenticated accounts to cash out ("fleecing" promotions), run promotions, and resell real-name accounts; the midstream consists of technical service providers who make a living from "face verification on behalf of others", including many "teach until you learn" master-apprentice studios and self-employed individuals who supply both services and software; the downstream consists of suppliers of ID cards, face photos and other personal information.
"As long as the technology is good (facial recognition on behalf of others), the investment cost is low and the profit is considerable. When the market is good, it is not a problem to make 30,000 yuan a month." A black and gray industry practitioner once told Machine Heart that he made a living by teaching facial recognition technology. A set of APP facial recognition technology can be sold for as low as 800 yuan, and it is guaranteed to teach. Among them, the cost of the front and back of the ID card and the half-length photo required is extremely low, usually not more than 5 yuan, while it takes at least 40 yuan to pass an APP facial recognition. Excluding the cost price and ignoring the operating fee, one order can earn at least 35 yuan. For some difficult-to-authenticate APPs, there is greater room for profit. 8Facial
recognition technology has been applied in many industries and terminals. Due to the lack of effective supervision at the regulatory and privacy levels, many fraud cases have occurred. This year, according to the Southern Metropolis Daily, more than a dozen home sellers in Nanning, Guangxi were defrauded of more than 10 million yuan because of "selling houses by face recognition". A real estate agent conducted facial recognition authentication on the home seller on the grounds of checking the house file. Subsequently, the home sellers found that their houses had been mortgaged to a third party by the buyer. According to the public information query of the Southern Metropolis reporter, there have been many cases of fraud using facial recognition across the country. -Through
this incident, it is reflected that personal biometric information represented by face, etc., as highly sensitive user personal information, has certain privacy leakage and information security risks. According to the latest research report "Mobile Payment Authentication: Biometrics, Regulation and Forecast 2019-2024" by Juniper Research, facial recognition hardware (such as Face ID on the iPhone) will become the fastest growing component in smartphone biometric hardware, with shipments reaching more than 800 million in 2024, compared with an estimated 96 million in 2019. 0 E2 `6 ~4 y& p- R- L) V, UFace
recognition technology is everywhere. With the integration of deep learning technology and biometric technology, face recognition technology based on deep fake has also brought a lot of controversy. Take AI face-changing as an example. As the name suggests, AI face-changing is to replace one face with another in an image or video.
Various "face-changing" technologies have created a large number of fake videos and formed a corresponding black industry chain. The industry chain is that the upstream provides software and tutorials; the midstream provides video and photo customization; the downstream sells finished videos, which greatly reduces the industry's technical threshold and the cost of counterfeiting.
Alongside deepfake technology there is also detection ("anti-forgery") technology, and the two are locked in a constant contest of attack and defense. Beyond privacy leakage, the bigger problem is that there are still no mature methods or standards for judging whether biometric information has been deep-faked.

02 "Out of control" smart devices 9
With the development of the Internet of Things, the Internet of Vehicles and 5G, the devices around us are all becoming intelligent, but hidden risks come with them. Especially now that remote work is widespread, poorly secured smart home products have become a prime target for hackers.
Recently, researchers at the University of Cambridge found that any smart device able to receive voice commands, such as a smart speaker or a mobile phone, can infer a great deal of what is typed on nearby smartphones, including passwords, simply by listening.
The study shows that the privacy threat posed by audio-capture devices goes beyond eavesdropping on private conversations: input on physical keyboards and on phone touchscreens does not escape their monitoring either. The researchers verified that an attacker can extract PIN codes (personal identification numbers, such as SIM card PINs) and text messages from recordings collected by a voice assistant half a meter away.
Although Internet of Things technology brings convenience to people's lives, it does not come with unlimited security and reliability, and such risks are also surfacing in communication technology. At the GeekPwn2020 International Security Geek Competition, Li Guancheng and Dai Ge, senior researchers at Tencent Security Xuanwu Laboratory, demonstrated a 5G security finding: by exploiting the design of the 5G communication protocol, an attacker can "hijack" the TCP communication of any mobile phone under the coverage of the same base station, including SMS sending and receiving and the traffic between apps and their servers.
This means that hackers can exploit flaws in the 5G communication protocol to mount attacks of various forms, for example sending the victim a link carrying a Trojan; once the link is clicked, the victim's bank card information can be stolen. Alternatively, the attacker could spoof the victim's phone number and text family members asking for money transfers or other favors.
As communication technology is upgraded and iterated, 5G is more secure overall, but that does not mean it is absolutely secure; beyond the immaturity of the technology itself, it may also introduce new, native security issues of its own.
As everything around us develops into an intelligent device, the potential damage to privacy also grows. The frequent outages and information leaks of connected, self-driving cars under the Internet of Vehicles are the most typical cases.
In May this year, a domestic Tesla owner posted on Weibo that the Tesla app had suffered a large-scale outage, leaving owners unable to connect their phones to their cars or retrieve vehicle information.
Ever since Tesla's cars entered production, many people have described them as Android phones on wheels that can be locked and unlocked from a mobile app. In the era of the Internet of Vehicles, it is network connectivity that makes vehicles intelligent, but once key components of a smart connected car, such as the in-vehicle operating system or the cloud platform, come under network attack, the result is data and information leakage at best and loss of vehicle control at worst.
Similar "loss of control" is often used as a "teaching material" for safety drills. At the 2020 GeekPwn conference, there was such a man-made Tesla Model 3 crash. A security researcher made a small radar jammer that can disrupt the detection of the Model 3 millimeter-wave radar, making it impossible for the vehicle system to brake accurately, thus creating a man-made car accident.
Such man-made accidents are rare in real life, but they expose the security vulnerabilities of autonomous vehicles. As the Internet of Vehicles matures and the number of vehicles with autonomous driving grows, the risks posed by these vulnerabilities will only increase.


03 Enterprise data that is easy to attack but difficult to defend
As enterprises accelerate their digital transformation, data, as a key production factor, is regarded as their lifeblood. Even so, as hacking techniques keep advancing, theft of enterprise data is far from rare.
At the beginning of this year, hackers stole the COVID-19 data that a domestic AI medical imaging company had built up during the epidemic. The AI-assisted system the company had developed over the previous two months and its accumulated COVID-19 training data were put up for public sale by the hackers for 4 bitcoins. The company responded at the time that the stolen data involved neither source code nor customer data; the loss was limited, but it still sounded an alarm.
During the COVID-19 epidemic, many hacker groups used "COVID-19" as a lure to launch cyber attacks on the computers of medical institutions and medical staff for the purposes of extortion and information theft.
Data show that in the first half of 2020, the municipal, medical and manufacturing sectors, whose security protection capabilities are weak, were the hardest hit by cyber attacks, and attacks on medical institutions during the COVID-19 epidemic were especially damaging; Fresenius, the largest private hospital operator in Europe, suffered a ransomware attack.
According to CyberMDX research, for various reasons most hospitals leave more than 40% of vulnerable devices unpatched, and 80% of medical device manufacturers and medical institutions say devices are very difficult to protect because of a lack of knowledge and training in secure development and in related product security testing procedures.


04 Conclusion
With the development of Internet technology, applications such as 5G, AI, privacy-preserving computation, intelligent algorithms, the Internet of Things and big data have penetrated people's lives and production activities, and these hotly debated social topics push us to re-examine how new technologies and new problems coexist in the AI era.
Even if new technologies bring new problems, we cannot simply stop using them, let alone stop developing them. Each stage of technological development produces its own problems, and those problems in turn push society forward. Viewed optimistically, the appearance of AI face-swapping and the facial recognition black market has intensified public discussion of security and privacy and spurred legislative exploration, while the frequent failures of smart connected vehicles are forcing changes in autonomous driving technology.
The popularity of AI applications has aggravated risks around privacy protection, data security and ethics, but network security products and solutions of all kinds keep emerging, and security vendors focused on solving these problems are hard at work.
In the "Top Ten Trends in Industrial Internet Security (2021)" recently released by Tencent Security Strategic Research Department and Tencent Security Joint Laboratory, the development characteristics of industrial security and security industry in 2021 were discussed from multiple dimensions such as laws and regulations, industrial development trends, and key security risks.

At this point in the industry's digital transformation, vast amounts of data are being collected, and how to keep them secure has become a central question of digitalization. Security, as the foundation of the industrial Internet, is itself in a stage of rapid change and evolution.
First, the policy and regulatory system is gradually improving. The "Civil Code", "Data Security Law (Draft)", "Personal Information Protection Law (Draft)" and other laws have been issued one after another to clarify the direction of security construction.
Second, security scenarios are multiplying. Supply chain collaborative security, 5G application security, and the growing ecosystem of black and gray market operations have become major challenges in enterprise digitalization.
Third, innovation in technology and concepts is active. Privacy-preserving computation, zero-trust architecture, artificial intelligence and cloud-native security are being applied to security scenarios at an accelerating pace, supporting the construction of enterprise security protection systems.
Especially in the industrial Internet era, migrating digital business to the cloud will become the norm for enterprises. At the same time, however, the scale of security threats on the cloud is expanding rapidly, and black and gray market actors increasingly launch attacks from public cloud platforms.
On the one hand, cloud-native security builds full-lifecycle protection into security services, laying a solid security foundation at the start of business construction and systematizing everything from security tools and products to services so that security accompanies the whole course of business development.
On the other hand, cloud security products are evolving toward modularity, agility and elasticity. They can absorb high-intensity attacks and release surplus computing power during quiet periods, lowering enterprises' costs while raising the overall level of security, an "optimal solution" that balances cost, efficiency and protection.
Driven by industrial digitalization, new applications and scenarios such as smart healthcare, the industrial Internet and the Internet of Vehicles keep emerging. The security of these traditional industries' digital businesses bears directly on people's lives and property and on national information security. Enterprises therefore need to build security and privacy considerations into the early stages of business construction and head off risks early, which lowers the cost of fixing security problems later.
In addition to relying on cloud-native security, enterprises themselves need to establish top-level security thinking. Recognition of security cannot rest on enterprises alone; it must extend to society as a whole, with ordinary users paying more attention to network threats, personal data security and privacy.
The accelerated development of new technologies represents new power, but it also brings people a variety of new problems, and new problems in turn require new technologies to solve them. What looks like a cycle of harmony and confrontation is in fact a major test.

"This article is reproduced from the Internet, the copyright belongs to the original author, if there is any infringement, please contact and delete"




Big data price discrimination, Internet finance... just look at the spirit of the central government's documents from this period, hahaha.



I really don't dare to play with that facial recognition. Even though my phone has it integrated, I don't really use it. I suddenly miss mechanical devices.



Facial recognition technology is good; the problem is how to store the data securely.



I've learned a lesson. Thank you.

 
 
 
