
Exclusive interview with Geoff Hinton: Brand new ideas will have a greater impact than small improvements

Latest update time: 2018-12-17


Text | Wang Xuepei

Report from Leiphone.com (leiphone-sz)

According to Leiphone.com AI Technology Review, WIRED recently conducted an exclusive interview with Hinton, asking him about the ethical challenges facing artificial intelligence. The following is the content of the conversation, compiled and edited by Leiphone.com AI Technology Review.

“As a Google executive, I didn’t think it was appropriate to complain about (the Pentagon contract) publicly, so I complained privately,” Geoff Hinton said.

In the early 1970s, a British graduate student named Geoff Hinton began building simple mathematical models of how neurons in the human brain visually understand the world. Artificial neural networks, as they are called, remained a far-fetched technology for decades. But in 2012, Hinton and two of his graduate students at the University of Toronto used them to dramatically improve the accuracy with which computers could recognize objects in photographs. Less than six months later, Google acquired a startup founded by the three researchers. The previously obscure artificial neural networks suddenly became the talk of Silicon Valley. Now, all the major tech companies are making the technology that Hinton and a small group of others have painstakingly developed a top priority for their future plans and are integrating it into our lives.

At the first G7 AI conference, representatives from the world's major developed countries discussed how to take advantage of AI while minimizing downsides such as job losses and discriminatory algorithms. WIRED spoke with Hinton at the conference; the following is an edited transcript of the conversation.

WIRED: Canadian Prime Minister Justin Trudeau said at the G7 meeting that humans need to do more work on the ethical challenges posed by artificial intelligence. What do you think of this?

Geoff Hinton: I've always been concerned about the potential for abuse in lethal autonomous weapons, and I think there should be some kind of Geneva Convention-like regulation to guard against that. Even if not every country signs up, it would serve as a moral barometer. You'd notice who didn't sign up.

WIRED: More than 4,500 Google colleagues signed a letter protesting the Pentagon contract to apply machine learning to drone imagery. Google says it's not intended for offensive weapon use. Did you sign the letter?

Geoff Hinton: As a Google executive, I didn't think it was my place to protest publicly, so I protested privately. I didn't sign the letter, but I did talk to Google co-founder Sergey Brin. He said he was somewhat concerned about it too. In the end, they decided to pull back from the contract.

WIRED: Google's leaders decided to finish the contract but not renew it. They also issued some guidelines for the use of artificial intelligence, including a pledge not to use the technology to create weapons.

Geoff Hinton: I think Google made the right decision. There are going to be all kinds of things that require cloud computing, and it's hard to know where the boundaries of technology are, and in some sense, the boundaries are arbitrary. I'm glad Google drew the line here, and these principles make sense to me.

“You should regulate based on how well these systems perform. You can run experiments to see if they’re biased, or if it’s likely to kill fewer people than a person would.”

WIRED: AI also raises ethical questions in everyday life, for example, when software is used to make decisions in social services or health care. What should we be aware of?

Geoff Hinton: I'm an expert in making technology work, but not an expert in social policy. One place where I do have relevant technical expertise is whether regulators should insist that you explain how your AI systems work. I think that would be a complete disaster.

People can't explain most of the things they do. When you decide whether to hire someone, the decision is based on all sorts of things you can quantify, and then all sorts of gut feelings. People don't really know how they did it. If you ask them to explain their decision, you're forcing them to make up a story.

Neural networks have a similar problem. When you train a neural network, it learns billions of numbers that represent the knowledge it has extracted from the training data. If you feed it an image, it can produce the correct result, for example, whether a certain object in the image is a pedestrian. But if you ask, "Why does it think so?" Well, if there were any simple rules for determining whether an image contains a pedestrian, the problem would have been solved a long time ago.
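
Hinton's point can be made concrete with a toy model. The sketch below (pure Python, with a made-up classification task) trains a one-neuron perceptron; afterwards, all there is to inspect is a handful of bare floats, and a real network would have billions of them.

```python
import random

random.seed(0)

# Hypothetical toy task: label 2-D points by whether x + 2y > 0.
examples = []
for _ in range(200):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    examples.append((x, y, 1 if x + 2 * y > 0 else 0))

# A one-neuron "network": weights w1, w2 and bias b, perceptron updates.
w1 = w2 = b = 0.0
for _ in range(20):
    for x, y, label in examples:
        pred = 1 if w1 * x + w2 * y + b > 0 else 0
        err = label - pred  # -1, 0, or +1
        w1 += 0.1 * err * x
        w2 += 0.1 * err * y
        b += 0.1 * err

# The learned "knowledge" is nothing but these numbers.
print([round(v, 3) for v in (w1, w2, b)])
```

Even in this tiny case the weights only loosely echo the rule that generated the data; turning them into a human-readable explanation of each decision is already awkward.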

WIRED: So how do we know when we can trust these systems?

Geoff Hinton: You should regulate based on how well these systems perform. You can run experiments to see whether they're biased, or whether they're likely to kill fewer people than a human would. For autonomous driving, I think people are now somewhat comfortable with it. Even if you don't quite understand how it works, if autonomous driving has far fewer accidents than human driving, that's a good thing. I think we have to treat these systems the way we treat people: you just look at how well they perform, and if they keep running into difficulties, you conclude they're not that good.
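
The kind of experiment Hinton describes can be sketched in a few lines. This is an illustrative outline only, not an established auditing method; the group names, test cases, and the 1.25x disparity threshold are all hypothetical.

```python
# Compare a system's error rate across groups on labelled test cases.
# Each case is a (prediction, ground_truth) pair.

def error_rate(cases):
    return sum(1 for pred, truth in cases if pred != truth) / len(cases)

def disparity_report(cases_by_group, max_ratio=1.25):
    """Flag the system if the worst group's error rate exceeds the
    best group's by more than max_ratio (an arbitrary example threshold)."""
    rates = {g: error_rate(c) for g, c in cases_by_group.items()}
    worst, best = max(rates.values()), min(rates.values())
    flagged = best > 0 and worst / best > max_ratio
    return rates, flagged

cases = {
    "group_a": [(1, 0), (0, 1), (1, 1), (0, 0)],  # 2 errors in 4
    "group_b": [(1, 1), (0, 0), (1, 0), (0, 0)],  # 1 error in 4
}
rates, flagged = disparity_report(cases)
print(rates, flagged)
```

The point matches Hinton's: the test treats the system as a black box and judges only measured behavior, not internal mechanics.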

WIRED: You've said that thinking about how the brain works inspired your research on artificial neural networks. Our brains take in information from our senses through networks of neurons connected by synapses. Artificial neural networks feed it data through mathematical networks of neurons that are connected by weights. In a paper published last week, you and several co-authors argue that we should do more to uncover the learning algorithms in the brain. Why?

Geoff Hinton: The way the brain solves problems is very different from the way most of our neural networks do. Humans have about 100 trillion synapses, while artificial neural networks are usually at least 10,000 times smaller in the number of weights. When learning, the brain uses many, many synapses to extract as much as possible from each experience. Deep learning is good at the opposite: learning with far fewer connections, because it has many events or examples to learn from. I think the brain isn't concerned with squeezing a lot of knowledge into a few connections; it's concerned with using lots of connections to acquire knowledge quickly.
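
For scale, the gap Hinton cites works out as follows (a back-of-the-envelope sketch using his rough figures):

```python
# Hinton's rough numbers: ~100 trillion synapses in the brain, and typical
# artificial networks at least 10,000x smaller in weight count.
brain_synapses = 100e12
scale_gap = 10_000
max_ann_weights = brain_synapses / scale_gap
print(f"implied ANN weight count: about {max_ann_weights:.0e}")
# i.e. on the order of 10 billion weights
```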

WIRED: How can we build more powerful machine learning systems?

Geoff Hinton: I think we need to get a different kind of computer. Fortunately, I have one here.

Hinton reaches into his pocket and pulls out a large, shiny piece of silicon. It's a sample chip from Graphcore, a British startup developing a new kind of processor to power machine learning algorithms.

Almost all the computer systems we use to run neural networks, even Google's special hardware, use RAM to store the program in use. Fetching the weights of a neural network out of RAM so the processor can use them is incredibly expensive. Once the software has fetched the weights, it uses them many times, but that movement of data takes a huge toll.

On the Graphcore chip, the weights are stored in caches on the processor rather than in RAM, so they never need to be moved around. That makes some things easier to explore. We might then end up with a system that has a trillion weights but uses only a billion of them on each example. That's closer to the scale of the brain.
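
A scaled-down sketch of that access pattern, in the spirit of mixture-of-experts models: the sizes are shrunk enormously and the hash-based routing rule is purely hypothetical, but the shape is the same, a large weight store of which each example touches only a small, input-dependent slice.

```python
import random

random.seed(1)

# "Trillion weights, use a billion per example", scaled way down:
# many small "expert" weight blocks, each input routed to only a few.
NUM_EXPERTS, WEIGHTS_PER_EXPERT, ACTIVE_PER_EXAMPLE = 1000, 1000, 2

experts = [[random.gauss(0, 1) for _ in range(WEIGHTS_PER_EXPERT)]
           for _ in range(NUM_EXPERTS)]

def route(example_id):
    # Hypothetical routing rule: hash the input id to pick which experts fire.
    return [(example_id * 7919 + k) % NUM_EXPERTS
            for k in range(ACTIVE_PER_EXAMPLE)]

def forward(example_id, x):
    chosen = route(example_id)
    # Only the chosen experts' weights are ever read for this example.
    return sum(w * x for e in chosen for w in experts[e])

total = NUM_EXPERTS * WEIGHTS_PER_EXPERT
touched = ACTIVE_PER_EXAMPLE * WEIGHTS_PER_EXPERT
print(f"weights touched per example: {touched} of {total}")
```

On hardware where weights live next to the processor, this sparse pattern is cheap; on a RAM-bound machine, even the untouched weights cost you through the memory system.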

"In the long run, a completely new idea will have more impact than a minor improvement."

WIRED: The recent surge in interest and investment in artificial intelligence and machine learning means more money will be available for research. Does this rapid development in the field also bring new challenges?

Geoff Hinton: A big challenge facing the community is that if you want to publish a machine learning paper now, there has to be a table with all the different datasets at the top and all the different methods on the side, and your method has to look like the best method. If it's not like that, it's hard to get published. I don't think that encourages people to think about completely new ideas.

Now, if you publish a paper with a completely new idea, it's unlikely to be accepted, because the junior reviewers won't understand it. And a senior reviewer, trying to get through as many papers as possible, will decide on a first reading that anything he doesn't understand must be nonsense. Anything people can't understand doesn't get accepted. I think that's really bad.

What we should be pursuing is completely new ideas, especially at basic science meetings. In the long run, a completely new idea will have a greater impact than a minor improvement. That, I think, is the main challenge we face now: the field has a small number of senior people and countless young people.

WIRED: Will this problem hinder the development of this field?

Geoff Hinton: Just wait a few years and the imbalance will correct itself. It's only temporary. Companies are busy educating people, universities are educating people, and universities will eventually hire more professors in this field, and it will correct itself.

WIRED: Some academics warn that the current hype could lead to an “AI winter,” like the one in the 1980s, when interest and funding dried up because progress didn’t live up to expectations.

Geoff Hinton: No, there won't be an AI winter because AI technology is already in mobile phones. In the old AI winter, AI wasn't actually part of your daily life, but now it is.

Source: https://www.wired.com/story/googles-ai-guru-computers-think-more-like-brains/

- END -



 