In the previous articles, we discussed mathematics, logic, digital circuits, mechanical computers, and the electronic computers that are popular today. These efforts, which condensed human wisdom, finally blossomed in the 1950s: humans began trying to understand intelligence through computation.
Although the concept of "artificial intelligence" can be seen everywhere in our lives today, it is still synonymous with high technology. The great power contained in this technology has only just begun to be released. So, who exactly proposed the concept of artificial intelligence? What is the difference between the original artificial intelligence and today's artificial intelligence?
The Beginning of Artificial Intelligence
In the 1950s, shortly after the end of World War II, many technologies developed for military use during the war began to flourish. In the post-war United States, scientists and engineers continued to advance these technologies and even founded new disciplines, such as Norbert Wiener's cybernetics and Claude Elwood Shannon's information theory.
Against this backdrop of budding information technology, many scientists began to consider how to use automatic decision-making systems, or mechanical methods, to explain human decision-making. In 1956, John McCarthy, a young assistant professor at Dartmouth College, invited a group of scientists interested in "thinking machines", including Shannon, to a workshop at his home institution. Among them were Marvin Minsky from MIT and Herbert Simon from the Carnegie Institute of Technology (the predecessor of today's Carnegie Mellon University).
At this conference, McCarthy and the assembled experts debated intensely and eventually settled on "Artificial Intelligence" as the name of the new discipline. Over the weeks of discussion, these experts in mathematics, logic, and informatics also explored topics such as artificial intelligence and neural networks. After the conference, the participants returned to their universities to absorb and build on the new ideas, which not only made those universities important centers of artificial intelligence research but also laid the foundation for the discipline's subsequent development.
Among the participants was Allen Newell, a student of Herbert Simon. Although Simon was Newell's teacher, their lifelong collaboration was one of equals. They shared the 1975 Turing Award, and three years later Simon won the Nobel Prize in Economics. Newell and Simon represented another route in artificial intelligence, the "physical symbol system hypothesis": simply put, intelligence is the manipulation of symbols. This school of thought was later called "symbolism".
Together with Alan Perlis, then head of the mathematics department and the first Turing Award winner, Newell and Simon founded the Computer Science Department at Carnegie Mellon University. CMU has been a major center of computer science ever since, and the original department has grown into one of the most comprehensive schools of computer science in the United States and the world. The building of the CMU Robotics Institute, where the author once visited and studied, is named after the two pioneers: Newell-Simon Hall.
After returning to MIT, Minsky founded the Artificial Intelligence Laboratory. He and Seymour Papert published the book "Perceptrons", which demonstrated that the earliest neural network models could not solve the XOR problem. They pointed out that although neural networks were considered full of potential, they could not in fact achieve the functions people expected. Neural network research quickly fell into a trough, and artificial intelligence entered a "dark" period.
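The XOR limitation that Minsky and Papert highlighted can be demonstrated in a few lines. The sketch below is an illustrative reconstruction, not code from the book: it trains a classic single-layer perceptron on the AND and XOR truth tables. AND is linearly separable and is learned perfectly, while XOR has no linear separator, so the perceptron never reaches full accuracy.

```python
# Single-layer perceptron on 2-input boolean problems.
# Samples are (x1, x2, label) triples.

def train_perceptron(samples, epochs=100, lr=0.1):
    """Classic perceptron learning rule: nudge weights by the prediction error."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x1, x2, y in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = y - pred
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def accuracy(params, samples):
    w1, w2, b = params
    hits = sum((1 if w1 * x1 + w2 * x2 + b > 0 else 0) == y
               for x1, x2, y in samples)
    return hits / len(samples)

AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
XOR = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

print(accuracy(train_perceptron(AND), AND))  # 1.0 — AND is linearly separable
print(accuracy(train_perceptron(XOR), XOR))  # below 1.0 — no line separates XOR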
In the 1960s, Minsky also first proposed the concept of "telepresence". Using miniature cameras, motion sensors, and other equipment, he let people experience scenes far removed from their actual surroundings, such as flying an airplane, fighting on a battlefield, or swimming underwater. This established his important position as an advocate of "virtual reality".
John Holland, a computer scientist at the University of Michigan, took a different approach: he studied stochastic optimization problems and proposed the "genetic algorithm". Many artificial intelligence problems can ultimately be transformed into optimization problems, and a genetic algorithm can be applied to almost any of them; one only needs to define the "chromosome" encoding and a fitness function. It is thus a very convenient "off-the-shelf" algorithm.
Holland guided his students to complete many papers on genetic algorithms. In 1971, Hollstien used genetic algorithms for function optimization for the first time in his doctoral thesis. In 1975, Holland published Adaptation in Natural and Artificial Systems, the first monograph to systematically discuss genetic algorithms. In it, Holland expounded their basic theory and methods and proposed the schema theorem, which is extremely important for their theoretical study and development. On this basis, theoretical and applied research continued to emerge, journals and conferences were founded, and "evolutionary computation" gradually took shape as an important branch of artificial intelligence.
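Holland's "plug-and-play" idea can be illustrated with a minimal genetic algorithm. The sketch below is a toy version, not Holland's original formulation: only the chromosome encoding and the fitness function are problem-specific. Here the problem is the classic OneMax task of evolving a bit string toward all ones.

```python
import random

random.seed(42)

CHROMOSOME_LEN = 20

def fitness(chrom):
    """Problem-specific part: count the 1-bits; the maximum is CHROMOSOME_LEN."""
    return sum(chrom)

def select(pop):
    """Tournament selection: pick the fitter of two random individuals."""
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    """Single-point crossover of two parent chromosomes."""
    point = random.randrange(1, CHROMOSOME_LEN)
    return p1[:point] + p2[point:]

def mutate(chrom, rate=0.01):
    """Flip each bit with a small probability."""
    return [bit ^ 1 if random.random() < rate else bit for bit in chrom]

def evolve(pop_size=50, generations=100):
    pop = [[random.randint(0, 1) for _ in range(CHROMOSOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(select(pop), select(pop)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # close to, and often exactly, CHROMOSOME_LEN
```

To reuse the same engine for a different problem, only `fitness` and the chromosome encoding need to change, which is exactly the off-the-shelf quality described above.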
Ambitions and difficulties
After the Dartmouth Conference, the first generation of AI scientists was ambitious. Herbert Simon even claimed, "Before 1968, computers will defeat human chess masters," and "Before 1985, computers will be able to do all the work that humans can do." Marvin Minsky likewise predicted, "By 1973 to 1978, we will be able to build a computer with average human intelligence." Such confident words greatly interested the government and military of the time, which invested heavily in the field of artificial intelligence.
However, these experts had misjudged the difficulty of the discipline, and almost none of their confident predictions came true. Not until 1997 did IBM's computer Deep Blue defeat the human world chess champion, and not until 2016 did AlphaGo defeat a human Go champion; to this day, no artificial intelligence can do all the work of humans. In the 1970s, therefore, governments grew disappointed with the experts who could not fulfill their predictions and cut funding, and research in artificial intelligence fell into a trough.
Although building a computer that can do all human work is extremely difficult, it is not hard to exploit a computer's powerful computing and storage capacity to surpass ordinary people in a single field. Expert systems thus came into being: at design time they are loaded with a large body of professional knowledge, and they then perform calculation, analysis, prediction, and other functions according to fixed procedures.
For example, the earliest expert system, Dendral, was designed by Edward Feigenbaum starting in 1965. Dendral was applied in the field of chemistry: it could infer the possible structure of a compound from its mass spectrum. In an era when human experts were relatively scarce, such a system enabled more scientific research to proceed smoothly.
In addition, there are expert systems designed to diagnose diseases, which can compensate for possible oversights by human doctors. Predictive expert systems integrate various kinds of professional knowledge to forecast trends, such as the migration and spread of pollutants in a river, so that effective measures can be taken in advance.
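The rule-based style of such expert systems can be sketched with a toy forward-chaining inference engine. The rules below are invented for illustration and belong to no real system (neither Dendral nor any medical product): domain knowledge is stored as if-then rules, and the engine applies them to observed facts until no new conclusions emerge.

```python
# Knowledge base: each rule maps a set of required facts to a conclusion.
# These rules are purely illustrative.
RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"possible flu", "short of breath"}, "recommend chest exam"),
    ({"rash", "fever"}, "possible measles"),
]

def forward_chain(facts):
    """Repeatedly fire any rule whose conditions are satisfied,
    adding its conclusion as a new fact, until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"fever", "cough", "short of breath"})))
```

Note how the second rule fires only after the first has added "possible flu" as a fact; this chaining of intermediate conclusions is what lets a collection of simple rules mimic a specialist's multi-step reasoning.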
How far are we from “strong artificial intelligence”?
At the beginning of the 21st century, with the spread of the information industry and the Internet, machine learning, which extracts the regularities hidden behind big data, became the mainstream of artificial intelligence research. The subsequent development of deep learning in particular has given people hope for artificial intelligence, but it has of course also aroused concerns, followed by various technical, philosophical, and ethical discussions.
A key point in these discussions is that today's intelligence, based on logic and computation, is classified as "weak artificial intelligence".
"Weak AI" is able to show intelligence or look like intelligence in a certain aspect, but does not want to develop the same intelligence and thinking as humans. For example, AI in image recognition and speech recognition can only be intelligent in specific areas (image recognition and speech recognition). Although image recognition and speech recognition AI also have the ability to self-learn, they will only learn in their own fields, and will not generate their own curiosity like humans to explore new areas.
Although weak AI has "weak" in its name, its strength should not be underestimated. Mainstream research currently focuses on this type of AI and has produced huge breakthroughs. The Go program AlphaGo, which defeated top human players, is a "not so weak" AI; so is face recognition software that can pick a target out of millions of faces at a glance, the logistics robots that shuttle through Amazon's warehouses and find charging piles on their own when the battery runs low, and the self-driving cars that read road conditions and deliver people safely to their destinations.
Weak artificial intelligence has brought great convenience to our lives, and its research results can be applied most directly to production and daily life. Countries have therefore invested heavily in weak AI research.
The opposite of "weak artificial intelligence" is "strong artificial intelligence". Although scientists hope to create an artificial intelligence that, like a human, can think independently and has its own personality, there has been no breakthrough in this area. So far, strong artificial intelligence exists only in science fiction and literature, such as Ava in "Ex Machina" and the Matrix in "The Matrix".
Strong artificial intelligence emphasizes that a computer must have its own mind. Whether, after acquiring one, it would still think according to human reasoning and moral systems is something scientists cannot yet determine. Strong AI can therefore be divided into artificial intelligence with human-like thinking and artificial intelligence whose thinking differs from ours. Baymax in "Big Hero 6" belongs to the former: although its appearance is not human, its thinking is consistent with a human's. The Matrix ("The Matrix") and Skynet ("Terminator"), systems that acquired independent thought, belong to the latter: they developed values different from humans' and carry out the task of "protecting humans" in their own way.
From another perspective, after all, creating a strong artificial intelligence means creating a life form that can think independently, and the difficulty is easy to imagine. Many religious scholars and philosophers therefore oppose research into strong artificial intelligence. If strong artificial intelligence is a skyscraper in a modern city, then the progress humans have made so far amounts only to the caves of primitive people. There is still a long way to go from today's weak artificial intelligence to strong artificial intelligence.