Why do we honor Marvin Minsky?
[Editor's Note] This article was translated by New Wisdom. Source: World Science. It is published in memory of Minsky, and was selected to help readers understand his great ideas in artificial intelligence.
Marvin Minsky, the MIT professor who straddled the boundary between science and science fiction and is known as the father of artificial intelligence, influenced everyone from Isaac Asimov to the chess-playing computer Deep Blue to HAL, the computer star of 2001: A Space Odyssey.
Although he was known on campus as "Old Man Minsky," he remained as active in artificial intelligence research as he was in the 1950s, when he helped pioneer the field. Throughout his career, Minsky wrote about artificial intelligence in a distinctly philosophical voice. In The Society of Mind, published in 1985, he summarized a family of theories about how the brain works, conjecturing that the complex phenomenon we call "thinking" can be decomposed into many simple, specialized processes, like independent individuals cooperating in a society. His latest book, The Emotion Machine, carries forward ideas from The Society of Mind and reflects nearly twenty years of new research. It is Minsky's attempt at a blueprint for a future thinking robot, a self-reflective artificial intelligence, and it brings that foreseeable future a step closer. For this reason, Discover magazine reporter Susan Kruglinski recently interviewed Minsky about the artificial-intelligence research discussed in The Emotion Machine.
Super Robot Project: An Interview with Marvin Minsky, the Father of Artificial Intelligence
Susan: What is the latest understanding of human thinking that you describe in The Emotion Machine?
Minsky: The core idea of the book is that humans are uniquely resourceful animals because they can approach anything in multiple ways. For example, when you think about something, you might think about it in words, or in logical terms, or in diagrams, images, or even in some kind of structure. If one way doesn't work, you can quickly switch to another way, which is why we are so good at dealing with all kinds of situations. Other animals can't imagine what the room would look like if the bed in the room turned from black to red. But humans can form this imaginary picture, or describe the scene with words, or a little logic.
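Minsky's point about switching representations can be sketched in code. The following is a toy illustration, not anything from Minsky or The Emotion Machine: the strategies and problem labels are hypothetical, and the point is only the control structure, which tries one way of thinking after another until one succeeds.

```python
# Toy sketch of "multiple ways to think about anything": a solver that
# falls back from one representation to the next. The strategy functions
# and the "kind" labels are invented for illustration.

def solve_with_words(problem):
    # Hypothetical strategy: only handles verbally posed problems.
    return "verbal answer" if problem.get("kind") == "verbal" else None

def solve_with_diagram(problem):
    return "diagram answer" if problem.get("kind") == "spatial" else None

def solve_with_logic(problem):
    return "logical answer" if problem.get("kind") == "logical" else None

STRATEGIES = [solve_with_words, solve_with_diagram, solve_with_logic]

def resourceful_solve(problem):
    """If one way doesn't work, quickly switch to another."""
    for strategy in STRATEGIES:
        answer = strategy(problem)
        if answer is not None:
            return answer
    return "stuck"

print(resourceful_solve({"kind": "spatial"}))  # the diagram strategy answers
```

A single-strategy animal, in this caricature, would simply return `None` on any problem outside its one representation; the loop is what makes the solver "resourceful."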
Susan: Nowadays, neuroscientists are seeking to understand consciousness, and it is a hot research area. You, however, often explain these phenomena with psychological methods and theories, which some regard as less rigorous. Is your research outside the mainstream?
Minsky: I never thought neuroscience was a serious enterprise. They have these ridiculous little theories and do complicated experiments to prove them, and once a theory turns out to be wrong, they don't know what to do next. The Emotion Machine lays out a fairly complete theory of consciousness: consciousness may be a combination of about 16 different processes. Most neuroscientists think everything is either conscious or it isn't; even Freud recognized that there are different levels of consciousness. When you talk to neuroscientists, you find they are surprisingly naive. They are mainly biologists: they know about potassium channels and calcium channels, but they have no professional knowledge of psychology. What a neuroscientist should be asking is: What phenomenon should I try to explain? Can I find a theory that explains it? Can I design an experiment to test whether one theory is better than another? If they don't have two theories, they can't do the experiment, and usually they don't even have one.
Susan: As you see it, artificial intelligence is like a lens through which we can view the mind and unlock the secrets of how it works. Is that right?
Minsky: Yes, and we have to go a step further and actually build the models; that is the lens artificial intelligence gives us. If a theory is very simple, you can use mathematics to predict how it will behave. If it is complex, you have to run a simulation. In my opinion, the only way to test a theory of something as complex as the brain is to simulate it and observe its behavior. But there is a problem: researchers are often reluctant to tell us what their simulation models cannot do. They will say, "Oh, the machine I designed recognizes handwritten text with 79% accuracy," but they don't tell us what goes wrong in the cases where it fails.
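The reporting practice Minsky is asking for can be made concrete. The sketch below is hypothetical (the recognizer, the test cases, and the category labels are all invented): an evaluation harness that returns not just an accuracy number like "79%" but also a breakdown of which kinds of cases failed.

```python
# Sketch of an evaluation that reports *what* the model cannot do,
# not just its overall accuracy. All names and data are illustrative.
from collections import Counter

def evaluate(model, test_cases):
    """Return (accuracy, per-category failure counts)."""
    failures = Counter()
    correct = 0
    for inputs, expected, category in test_cases:
        if model(inputs) == expected:
            correct += 1
        else:
            failures[category] += 1  # record what kind of case failed
    return correct / len(test_cases), failures

# Hypothetical handwriting recognizer that always guesses "a".
model = lambda glyph: "a"
cases = [("glyph1", "a", "print"), ("glyph2", "b", "cursive"),
         ("glyph3", "a", "print"), ("glyph4", "c", "cursive")]

acc, fails = evaluate(model, cases)
print(acc)    # 0.5
print(fails)  # failures concentrated in the "cursive" category
```

The failure counter is the interesting output: here it shows the unsuccessful part is entirely cursive input, which is exactly the information Minsky says researchers tend to leave out.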
Susan: Neuroscientists like Oliver Sacks and V. S. Ramachandran, who specialize in studying brain-damaged patients, say that what fails to happen in a damaged brain can be more revealing than what does happen. Is that the same thing you're talking about?
Minsky: Yes, the two you mention are probably the best thinkers in neuroscience. Antonio Damasio is good, but Ramachandran and Sacks are more comprehensive than most of their peers. They are able to consider alternative theories rather than trying to prove a particular theory.
Susan: What other problems in neuroscience or AI interest you?
Minsky: Very few. There are perhaps 20,000 or 30,000 people working on neural networks, 40,000 or 50,000 working on statistical prediction, and thousands working on logical systems that can do common-sense thinking. But almost no one I know of is working on reasoning by analogy. It matters because the way humans solve problems is to start with an enormous store of common-sense knowledge, perhaps 50 million anecdotes or little items, and then some still-unknown mechanism finds the 5 to 10 of those 50 million old stories that are relevant to the problem at hand. That is reasoning by analogy. I know of only 3 or 4 people aiming in this direction, and they are not famous, because they don't claim to have found a universal theory.
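The retrieval step Minsky describes, finding the few relevant stories among millions, can be caricatured in a few lines. This is a toy sketch with invented anecdotes: real analogical reasoning needs structural matching between situations, and simple feature overlap merely stands in for that unknown mechanism.

```python
# Toy analogy retrieval: rank stored "anecdotes" by feature overlap with
# a new problem and keep the top few. Anecdotes and features are invented.

ANECDOTES = {
    "string too short to tie":   {"string", "tie", "too-short", "extend"},
    "ladder too short to reach": {"ladder", "reach", "too-short", "extend"},
    "rope bridge over gap":      {"rope", "gap", "cross"},
    "knot slipped loose":        {"knot", "tie", "failure"},
}

def retrieve_analogies(problem_features, k=2):
    """Return the k stored anecdotes sharing the most features."""
    ranked = sorted(
        ANECDOTES,
        key=lambda name: len(ANECDOTES[name] & problem_features),
        reverse=True,
    )
    return ranked[:k]

# New problem: a hose that is too short to reach the garden.
print(retrieve_analogies({"hose", "reach", "too-short"}))
```

The ladder story wins because it shares both "reach" and "too-short" with the new problem; scaling this idea from four anecdotes to 50 million, while matching structure rather than surface features, is the open problem.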
Susan: Is it possible for artificial intelligence to have common sense like humans?
Minsky: There are several big projects going on right now. One, in Texas, is run by Douglas Lenat, who started working on it in 1984. He has two million pieces of common sense, like "people live in houses" and "you get wet when it rains," all very carefully categorized. But we don't yet have answers to the questions that fill the mind of a three-year-old, and we are collecting those answers now. Ask a childish question like "Why don't people want to get wet when it rains?" and the computer gets confused, because people don't want to get wet when it rains, yet they do want to get wet when they take a shower.
Susan: What is the value of developing AI that thinks like a 3-year-old?
Minsky: The history of artificial intelligence is very interesting. The first truly intelligent machine was a wonderful thing that could do logical proofs and was a master of arithmetic. Then we tried to make machines that could answer questions about things like first-grade books. No machine can do that yet, because AI researchers have focused on solving some very advanced problems (like chess) but have not made much progress on problems that are considered simple. It's a kind of "backward progress." I expect that we will make progress on this problem soon with the development of common-sense reasoning machines. Of course, the premise is that we can get enough funding, and there is also the problem that people are generally skeptical of this kind of research.
Susan: Artificial intelligence usually refers to exploring the practical functions of the brain, such as language understanding or problem solving. But there are many behaviors that people do that don't seem to have very clear practical uses, such as watching TV, fantasizing, and joking. Why are these behaviors necessary?
Minsky: Pleasure is as simple, absolute, innate, and fundamental as pain. As far as I can tell, pleasure is a mechanism for shutting down different parts of the brain, much like sleep. I suspect its main function is to switch off parts of the brain in order to keep fresh the memory of something new that was learned with great effort; it acts as a buffer for short-term memory. That is one theory of pleasure. It has a flaw, though: if you can control the pleasure center directly, you will do so over and over, shutting down parts of the brain each time. That is a very serious problem, because it leads to addiction. I think football fans, pop-music fans, TV fans, and so on are doing exactly this: they suppress their normal goals and do something else instead. You can see it in young people who play computer games until they become obese.
Susan: Many people feel that AI has been on a downward spiral since the 1980s, when it failed to live up to its early promises. Do you agree?
Minsky: Of course not. It's just that the field developed in ways its deeper thinkers did not expect. Everyone today is pursuing some kind of logical reasoning system, genetic computing system, statistical inference system, or neural network, and none of them have made major breakthroughs, because all of them are too simple. Whatever new theory you build, at best it solves some problems and not others. We have to admit that a neural network cannot do logical reasoning: if it computes probabilities, it cannot understand what those numbers really mean. And we cannot get funding to study something completely different, because government agencies want you to say exactly what progress you will make in every month of the contract period. The days of the old National Science Foundation grants, which were not tied to a specific project, are gone.
Susan: Why has the tide of funding for scientific research changed?
Minsky: Funders want to see practical applications and have little respect for basic science. In the 1960s, Bell Labs was a legend. I worked there one summer, and the saying was that they would not fund anything that could pay off in fewer than 40 years. CBS Labs, Stanford's labs: there were many great laboratories in this country, and now almost none are left.
Susan: The Emotion Machine reads like a book about understanding the human mind, but that wasn't your original intention in writing it, is that right?
Minsky: The book is essentially a plan for how to build intelligent machines. I would very much like to hire a bunch of programmers to implement the architecture described in the book, a machine that can switch among the various modes of thinking I discuss. So far, no one has built a system that has, or can acquire, self-reflective knowledge, and such a system would become more and more capable of solving problems over time. With five good programmers, I could achieve this goal in three to five years.
Susan: You're going to build a very smart robot, which is good. But your ultimate goal is to build a robot that is almost a replica of a human, right?
Minsky: Or a robot that is better than a human. We humans are not the end of evolution, so if we can make a robot that is as smart as a human, then we can also make a robot that is smarter than a human. There is not much point in making a robot that is exactly the same as a human, and you would also like to make a robot that can do things that we humans cannot do.
Susan: What's the purpose of that?
Minsky: There are practical ones. As the birth rate continues to decline while the population keeps growing, there will be more and more elderly people. We need smart robots to help them with housework, look after things, or grow vegetables. And there are problems we cannot solve ourselves. What do we do if the sun stops shining on the earth, or the earth is destroyed? We might as well "make" more and better physicists, engineers, and mathematicians. We must plan for our own future; if we can't, our civilization will disappear.
Susan: What was your main role as a consultant on 2001: A Space Odyssey?
Minsky: I didn't discuss the plot, but I consulted on what the HAL 9000 computer should look like. They had a computer decorated with colorful labels. Stanley Kubrick asked me, "What do you think of this?" I said, "It's very beautiful." He asked, "Is this really what you think?" I said, "I think this computer should really just be a lot of little black boxes, because from the outside there is no way to see what is going on inside a computer." So he took the decorations off and designed a simpler, prettier-looking HAL 9000. Kubrick wanted all the technical details to be reasonable, but he didn't tell me what HAL would do.
Susan: If we invent a perfect artificial brain, how would it differ from a real human brain?
Minsky: At least the artificial brain will not die. Some people think that the human brain should die naturally, but others think that death is a nuisance. I belong to the latter group, so I think death should be eliminated.