
Hinton, Bengio, and Yao Qizhi jointly publish an article: AI compute will scale 100x in 18 months, and someone needs to oversee it.

Latest update time:2023-10-28
Mengchen, reporting from Aofei Temple
Qubit | Public account QbitAI

Three Turing Award winners, Hinton, Bengio, and Yao Qizhi, have jointly published an article, "Managing AI Risks in an Era of Rapid Development."

Co-authors also include other well-known scholars, such as Dawn Song and Pieter Abbeel of UC Berkeley and Ya-Qin Zhang of Tsinghua.

This is a consensus paper. Besides AI researchers, the authors include scholars of public governance, such as Xue Lan of Tsinghua University's Institute for AI International Governance, as well as Nobel laureate Daniel Kahneman.

Hinton notes that companies are planning to scale up the compute used for AI models 100-fold within 18 months. No one knows how powerful those models will be, and there is no regulation of how they may be used.

(The figure of 100x in 18 months comes from Inflection AI.)

The paper is currently published on the standalone site managing-ai-risks.com; an arXiv version will be uploaded later.

Text translation

Looking back at 2019, GPT-2 could not even count reliably.

In just four years, deep learning systems have learned to write software, create realistic virtual environments, give professional advice, and equip robots with language understanding.

The progress of these systems has repeatedly exceeded expectations, to a jaw-dropping degree.

And the next advances may be more striking still.

Although existing deep learning systems still have many limitations, major companies are racing to develop general AI that rivals human intelligence. They are pouring in ever more resources, new techniques emerge by the day, and AI's growing capacity for self-improvement is accelerating the trend.

There is no sign that progress in AI will stop once it reaches the level of human intelligence.

In fact, AI already surpasses humans in some specific areas: such systems can learn faster, process more information, excel at large-scale computation, and be replicated easily.

The pace of technical progress is astounding. Some technology giants can scale up their AI training several-fold within a short period. Given the continued investment in AI research and the trend toward self-improving systems, the possibility that general AI surpasses humans within the next ten to twenty years cannot be ignored.

If applied correctly and fairly, advanced AI can solve long-standing problems for mankind, such as disease, poverty and environmental problems. But at the same time, powerful AI also brings huge risks, risks that we are far from ready to deal with. We need to balance AI capabilities with safety.

Unfortunately, we are lagging in adjusting our strategy. We need to anticipate and respond to risks in advance, not wait until problems actually arise. Environmental issues are one cautionary example, and the pace of AI development means we do not have much time to wait.

The rapid progress of AI poses huge challenges to society. If left unmanaged, these systems could exacerbate social inequality, threaten social stability, and undermine our shared understanding of the real world. They might even be used as tools for crime, or deepen global inequality and conflict.

More worryingly, the development of autonomous AI could amplify these risks. Current AI is not yet fully autonomous, but that is changing. GPT-4, for example, is powerful; if it controlled a robot or a program that autonomously pursues goals, its behavior could become unpredictable and difficult for us to intervene in.

To this end, we call on the world to work together to regulate the development and application of autonomous AI through technical, policy and legal means. Otherwise, we may face a future dominated and misused by AI.

This is just the beginning. We urgently need more research and discussion to address these issues, and hope for the support and assistance of the community to ensure that AI brings real benefits to humanity, rather than threats.

Reference links:
[1] managing-ai-risks.com
[2] https://twitter.com/geoffreyhinton/status/1717967329202491707

- End -


