GPT-5 research cannot stop! Andrew Ng and LeCun respond to Musk live: the car hasn't been invented yet, so how do we design the seat belt?
Mengchen, from Aofei Temple
Qubits | Public account QbitAI
Research on large models cannot stop!
Andrew Ng and LeCun even held a live broadcast together over the matter. After all, if they didn't act, the situation would only keep escalating:
The call by Musk and others to halt development of GPT-5 has gained further momentum, with more than 13,500 people signing the petition.
The two bluntly called the suspension of AI research anti-rational:
A 6-month pause on AI research would cause the real harm.
It is AI products that should be regulated, not the research behind them.
The earlier call to suspend AI experiments more powerful than GPT-4 has been signed by Yoshua Bengio, one of the three giants of deep learning. Hinton did not sign, but remarked that "it will take longer than 6 months."
This time, besides laying out their respective views live, Andrew Ng and LeCun also answered many of the questions netizens cared about most.
Netizens who watched the live broadcast or the replay said the video offered more context and subtler shades of tone than the tweets did.
Should you worry about AGI escaping the lab?
LeCun believes that people's worries and fears about AI today fall into two categories:
1. Future-oriented: speculation that AI will go out of control, escape the laboratory, and even rule humanity.
2. Reality-based: AI's shortcomings in fairness and bias, and its impact on society and the economy.
Regarding the first category, he believes the AI of the future is unlikely to still be a ChatGPT-style language model, and one cannot write safety regulations for something that does not yet exist.
The car hasn't been invented yet; how should we design the seat belt?
Regarding the second category, both Andrew Ng and LeCun said regulation is necessary, but not at the expense of research and innovation.
Andrew Ng said that AI creates enormous value in education, healthcare, and other areas, and is helping many people.
Suspending AI research would hurt those people and slow the creation of that value.
LeCun believes that doomsday scenarios such as "AI escapes" or "AI rules humanity" also give people unrealistic expectations of AI.
ChatGPT gives people this impression because it is fluent in language, but language is not all of intelligence.
Language models have only a very superficial understanding of the real world. Even though GPT-4 is multimodal, it still has no "experience" of reality, which is why it still talks nonsense.
In fact, LeCun addressed this question four years ago in a Scientific American article co-written with Cold Spring Harbor Laboratory neuroscientist Anthony Zador, titled "Don't Fear the Terminator."
During the live broadcast, LeCun once again reiterated the main points of the article.
The drive to dominate appears only in social species, such as humans and other animals that must survive and evolve under competition.
And we can deliberately design AI as a non-social species: non-dominant, submissive, or bound by specific rules, so that it serves the interests of humanity as a whole.
Andrew Ng compared the situation to the Asilomar Conference, a milestone in the history of the biological sciences.
In 1975, recombinant DNA technology was just emerging and its safety was in question. Biologists, lawyers, and government representatives from around the world met and, after public debate, reached a consensus to suspend or ban certain experiments and proposed guidelines for research conduct.
Andrew Ng believes the situation back then was different from what is happening in AI today. A DNA virus escaping the laboratory was a real concern, but he sees no risk of AI escaping the laboratory today; that may still be decades or even centuries away.
Answering the audience question "Under what circumstances would you agree to suspend AI research?", LeCun said that "potential harm" and "real harm" must be distinguished from "imagined harm," and that regulatory measures should target products once real harm occurs.
The first cars were not safe: there were no seat belts, no good brakes, no traffic lights. Past technologies gradually became safer, and AI is nothing special in this regard.
Asked "What do you think of Yoshua Bengio signing the open letter?", LeCun said he and Bengio have always been friends. He believes Bengio's worry is that "it is inherently bad for for-profit companies to control the technology," a view he himself does not share; the two do agree, however, that AI research should be conducted in the open.
Bengio also recently explained in detail on his personal website why he signed.
With the arrival of ChatGPT, commercial competition has become more than ten times fiercer. The risk is that companies will rush to build huge AI systems and abandon the habits of openness and transparency built up over the past decade.
After the live broadcast, Andrew Ng and LeCun continued the exchange with netizens.
Regarding "Why don't you believe that AI will escape the laboratory?" LeCun said that it is difficult to keep AI running on a specific hardware platform.
The response to "AI reaches a singularity and mutates and becomes uncontrollable" is that in the real world, every process will have friction, and exponential growth will quickly turn into a Sigmoid function.
Some netizens joked that language models are often described as "parrots that randomly spit out words," but real parrots are far more dangerous: they have beaks, claws, and the intent to hurt people.
LeCun chimed in that Australian cockatoos are even more vicious, and that he is "calling for a six-month ban on cockatoos."
One More Thing
More and more people are weighing in on the increasingly influential proposal to pause AI for six months.
Bill Gates told Reuters: "I don't think suspending one particular organization will solve these problems. In a global industry, a suspension is difficult to enforce."
According to Forbes, former Google CEO Eric Schmidt believes that "most people in the regulatory authorities do not understand the technology well enough to properly regulate its development; besides, a six-month suspension imposed in the United States would only benefit other countries."
Meanwhile, another voice in the AI research community is gaining influence.
A petition started by the non-profit organization LAION-AI (provider of the training data behind Stable Diffusion) has gathered more than 1,400 signatures.
The project calls for building a publicly funded international super-AI infrastructure equipped with 100,000 state-of-the-art AI accelerator chips, to safeguard both innovation and safety.
It would be the equivalent, for AI, of CERN (the European Organization for Nuclear Research) in particle physics.
Supporters include well-known researchers such as Jürgen Schmidhuber, the father of LSTM, and Thomas Wolf, co-founder of HuggingFace.
Full video replay:
https://www.youtube.com/watch?v=BY9KV8uCtj4&t=33s
AI-transcribed text:
https://gist.github.com/simonw/b3d48d6fcec247596fa2cca841d3fb7a
Reference links:
[1] https://twitter.com/AndrewYNg/status/1644418518056861696
[2] https://blogs.scientificamerican.com/observations/dont-fear-the-terminator/
-over-
The "artificial intelligence" and "smart car" WeChat community invites you to join!
Friends who are interested in artificial intelligence and smart cars are welcome to join the exchange group to communicate and discuss with AI practitioners, and not miss the latest industry development & technological progress.
PS. When adding friends, please be sure to note your name-company-position~
click here