Exclusive interview with Geoff Hinton: Brand new ideas will have a greater impact than small improvements
Text | Wang Xuepei
Report from Leiphone.com (leiphone-sz)
According to Leiphone.com AI Technology Review, WIRED recently conducted an exclusive interview with Hinton, asking him about the ethical challenges raised by artificial intelligence and the challenges facing the field. The following is the conversation, compiled and edited by Leiphone.com AI Technology Review.
“As a Google executive, I didn’t think it was appropriate to complain about (the Pentagon contract) publicly, so I complained privately,” Geoff Hinton said.
In the early 1970s, a British graduate student named Geoff Hinton began building simple mathematical models of how neurons in the human brain visually understand the world. Artificial neural networks, as they are called, remained a far-fetched technology for decades. But in 2012, Hinton and two of his graduate students at the University of Toronto used them to dramatically improve the accuracy with which computers could recognize objects in photographs. Less than six months later, Google acquired a startup founded by the three researchers. The previously obscure artificial neural networks suddenly became the talk of Silicon Valley. Now, all the major tech companies are making the technology that Hinton and a small group of others have painstakingly developed a top priority for their future plans and are integrating it into our lives.
At the first G7 AI conference, representatives from the world's major developed countries discussed how to take advantage of AI while minimizing disadvantages such as job losses and discriminatory algorithms. WIRED met with Hinton at the conference; what follows is an edited transcript of the conversation.
WIRED: Canadian Prime Minister Justin Trudeau said at the G7 meeting that humans need to do more work on the ethical challenges posed by artificial intelligence. What do you think of this?
Geoff Hinton: I've always been concerned about the potential for abuse in lethal autonomous weapons, and I think there should be some kind of Geneva Convention-like regulation to guard against that. Even if not every country signs up, it would serve as a moral barometer. You'd notice who didn't sign up.
WIRED: More than 4,500 Google colleagues signed a letter protesting the Pentagon contract to apply machine learning to drone imagery. Google says it's not intended for offensive weapon use. Did you sign the letter?
Geoff Hinton: As a Google executive, I didn't think it was my place to complain publicly, so I complained privately. I didn't sign the letter; instead I talked to Google co-founder Sergey Brin. He said he was a bit troubled by it, too. And so they aren't pursuing it.
WIRED: Google's leaders decided to finish the contract but not renew it. They also issued some guidelines for the use of artificial intelligence, including a pledge not to use the technology to create weapons.
Geoff Hinton: I think Google made the right decision. There are going to be all kinds of things that need some sort of cloud service, and it's very hard to know where to draw the line; in some sense, any line will be arbitrary. I'm glad Google drew the line where it did, and the principles make sense to me.
“You should regulate based on how well these systems perform. You can run experiments to see if they’re biased, or if it’s likely to kill fewer people than a person would.”
WIRED: AI also raises ethical questions in everyday life, for example, when software is used to make decisions in social services or health care. What should we be aware of?
Geoff Hinton: I'm an expert in making technology work, but not an expert in social policy. One place where I do have relevant technical expertise is whether regulators should insist that you explain how your AI systems work. I think that would be a complete disaster.
People can't explain most of the things they do. When you decide whether to hire someone, the decision is based on all sorts of things you can quantify, and then all sorts of gut feelings. People don't really know how they did it. If you ask them to explain their decision, you're forcing them to make up a story.
Neural networks have a similar problem. When you train a neural network, it learns billions of numbers that represent the knowledge it has extracted from the training data. Feed it an image and it comes out with the right answer, say, whether or not the thing in the image is a pedestrian. But ask "Why did it decide that?" and there is no simple answer: if there were simple rules for deciding whether an image contains a pedestrian, the problem would have been solved a long time ago.
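To make that point concrete, here is a minimal sketch (my own illustration, not something from the interview) of what "the knowledge is just numbers" looks like: a tiny network trained on synthetic data classifies accurately, but its learned parameters are simply arrays of floating-point values, not anything resembling a human-readable rule. The task, data, and network size are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic binary task: 2 features -> "pedestrian" / "not pedestrian".
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# One hidden layer of 16 tanh units, trained with plain gradient descent.
W1 = rng.normal(scale=0.5, size=(2, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))

def forward(x):
    h = np.tanh(x @ W1)                    # hidden activations
    return 1.0 / (1.0 + np.exp(-h @ W2))   # predicted probability

for _ in range(500):
    h = np.tanh(X @ W1)
    p = 1.0 / (1.0 + np.exp(-h @ W2))
    d_out = (p - y[:, None]) / len(X)      # gradient of cross-entropy loss
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)  # backprop through the tanh layer
    W2 -= 0.5 * (h.T @ d_out)
    W1 -= 0.5 * (X.T @ d_h)

# The model classifies well, but its "knowledge" is just these numbers.
acc = ((forward(X) > 0.5).ravel() == y).mean()
print("accuracy:", acc)
print("learned weights (no human-readable rule in sight):")
print(W1.round(2))
```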
WIRED: So how do we know when we can trust these systems?
Geoff Hinton: You should regulate based on how well these systems perform. You run experiments to see whether they're biased, or whether they're likely to kill fewer people than a human would. With autonomous driving, I think people are now somewhat comfortable with that idea: even if you don't quite understand how a self-driving car works, if it has far fewer accidents than a human driver, that's a good thing. We have to treat these systems the way we treat people: you just look at how well they perform, and if they repeatedly screw up, you conclude they're not that good.
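As a rough illustration of the performance-based evaluation Hinton describes, the sketch below (with entirely made-up error rates and group labels) compares a system's error rate against a human baseline and checks whether its errors fall evenly across subgroups. This is the kind of experiment one could run without ever opening the model up.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 10_000
group = rng.choice(["A", "B"], size=n)   # hypothetical subgroups
truth = rng.integers(0, 2, size=n)       # ground-truth outcomes

# Simulated decisions: the system errs 4% of the time, the human baseline 7%.
system = np.where(rng.random(n) < 0.04, 1 - truth, truth)
human = np.where(rng.random(n) < 0.07, 1 - truth, truth)

def error_rate(pred, mask=None):
    mask = np.ones(n, dtype=bool) if mask is None else mask
    return (pred[mask] != truth[mask]).mean()

print("system error:", error_rate(system))
print("human  error:", error_rate(human))

# Bias check: does the system's error rate differ between groups?
for g in ("A", "B"):
    print(f"system error, group {g}:", error_rate(system, group == g))
```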
WIRED: You've said that thinking about how the brain works inspired your research on artificial neural networks. Our brains take in information from our senses through networks of neurons connected by synapses. Artificial neural networks are fed data through mathematical networks of neurons connected by weights. In a paper published last week, you and several co-authors argue that we should do more to uncover the learning algorithms at work in the brain. Why?
Geoff Hinton: The brain is solving a very different problem from most of our neural networks. Humans have around 100 trillion synapses; artificial neural networks are typically at least 10,000 times smaller in terms of the number of weights. The brain uses lots and lots of synapses to learn as much as it can from just a few examples. Deep learning is good at learning with far fewer connections between neurons, when it has many events or examples to learn from. I think the brain isn't concerned with squeezing a lot of knowledge into a few connections; it's concerned with using lots of connections to extract knowledge quickly.
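For a sense of the scale gap Hinton cites, a quick back-of-the-envelope calculation (my own arithmetic; the network size below is an assumed, representative order of magnitude, not any specific model):

```python
# ~100 trillion synapses vs. a billion-weight network (assumed size for illustration).
brain_synapses = 100e12
ann_weights = 1e9
print(brain_synapses / ann_weights)  # ~100,000x, consistent with "at least 10,000 times smaller"
```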
WIRED: How can we build more powerful machine learning systems?
Geoff Hinton: I think we need to get a different kind of computer. Fortunately, I have one here.
Hinton reaches into his wallet and pulls out a large, shiny piece of silicon. It’s a sample from Graphcore, a British startup developing a new type of processor to power machine/deep learning algorithms.
Almost all the computer systems we use to run neural networks, even Google's special hardware, use RAM to store the program in use. Fetching the weights of a neural network out of RAM so the processor can use them is incredibly expensive, so once the software has fetched the weights, it reuses them many, many times. That constraint carries a huge cost.
On the Graphcore chip, the weights are stored in caches on the processor rather than in RAM, so they never have to be moved around. Because of that, some things will become easier to explore. Then we might end up with systems that have, say, a trillion weights but use only a billion of them on each example. That's closer to the scale of the brain.
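The sketch below illustrates the general idea of a model that holds many weight blocks but touches only a small fraction of them on each example, in the spirit of conditional computation. It is a toy of my own construction and says nothing about how the Graphcore chip actually works; all sizes, names, and the gating scheme are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

num_experts = 1000        # total weight blocks held by the model
active_per_example = 1    # blocks actually used for any one input
d = 32

# The full parameter set: num_experts blocks of (d x d) weights, plus a router.
experts = rng.normal(scale=0.1, size=(num_experts, d, d))
gate = rng.normal(scale=0.1, size=(d, num_experts))

def forward(x):
    # Pick the top-scoring block(s) for this input ...
    scores = x @ gate
    chosen = np.argsort(scores)[-active_per_example:]
    # ... and multiply only by those blocks; the rest of the weights stay idle.
    return sum(x @ experts[i] for i in chosen)

x = rng.normal(size=d)
y = forward(x)

used = active_per_example * d * d
total = num_experts * d * d + d * num_experts
print(f"weights touched for this example: {used} of {total} ({used / total:.2%})")
```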
"In the long run, a completely new idea will have more impact than a minor improvement."
WIRED: The recent surge in interest and investment in artificial intelligence and machine learning means more money will be available for research. Does this rapid development in the field also bring new challenges?
Geoff Hinton: A big challenge facing the community is that if you want to publish a machine learning paper now, there has to be a table with all the different datasets at the top and all the different methods on the side, and your method has to look like the best method. If it's not like that, it's hard to get published. I don't think that encourages people to think about completely new ideas.
Now, if you send in a paper with a completely new idea, it's unlikely to be accepted, because the junior reviewers won't understand it. And if you get a senior reviewer, he's trying to review too many papers, so if he doesn't understand it the first time, he assumes the paper must be nonsense. Anything the reviewers can't understand gets rejected. I think that's really bad.
What we should be going for, particularly at the basic-science conferences, is completely new ideas. In the long run, a completely new idea will have a greater impact than a minor improvement. That, I think, is the main challenge we face now: there are only a few senior people in this field and an enormous number of young people.
WIRED: Will this problem hinder the development of this field?
Geoff Hinton: Just wait a few years and the imbalance will correct itself. It's only temporary. Companies are busy educating people, universities are educating people, and universities will eventually hire more professors in this field, and it will correct itself.
WIRED: Some academics warn that the current hype could lead to an “AI winter,” like the one in the 1980s, when interest and funding dried up because progress didn’t live up to expectations.
Geoff Hinton: No, there won't be an AI winter because AI technology is already in mobile phones. In the old AI winter, AI wasn't actually part of your daily life, but now it is.
Source: https://www.wired.com/story/googles-ai-guru-computers-think-more-like-brains/