AI is winning at capture the flag, but can its success translate into the real world?

Publisher: 开国古泉 | Last updated: 2019-06-03 | Source: eefocus

For AI, chess and Go have become child's play. Now it is winning at capture the flag. Will these skills eventually translate to the real world?

 

Capture the flag is the game kids play in open fields at summer camp, and it is also a mode in popular video games like Quake III and Overwatch.

 

In either case, it's a team sport. Each side guards a flag while planning how to capture the other side's flag and bring it back to their home base. Winning the game requires good old-fashioned teamwork and a coordinated balance between defense and attack.

 

In other words, capture the flag demands a skill set that might seem achievable only by humans. But researchers at an artificial intelligence lab in London have shown that machines can master the game too, at least in its virtual form.

 

In a paper published May 30 in the journal Science, the researchers report that they have designed automated "agents" that play capture the flag in Quake III much the way humans do. The agents can team up to fight against human players or play alongside them, adjusting their behavior accordingly.

 

“These agents can adapt to teammates with arbitrary skills,” said Wojciech Czarnecki, a researcher at Alphabet’s DeepMind lab.

 

Through thousands of hours of play, the agents learned very specific skills, such as sprinting toward the opponent's base camp when a teammate was about to capture the flag. As human players know, once a flag has been captured and carried back to home base, a new flag appears at the opponent's base, ready to be taken.

 

DeepMind's project is part of an effort to build artificial intelligence for complex 3D games, including Quake III, Dota 2 and StarCraft II. Many researchers believe that success in the virtual realm will eventually lead to better applications of artificial intelligence in the real world.

 

For example, these skills could benefit warehouse robots working in teams to move goods from one place to another, or help self-driving cars navigate through heavy traffic. "Games have always been the benchmark for AI," said Greg Brockman, who leads the project at OpenAI, a San Francisco-based lab with similar research under way. "If you can't solve games, you can't expect AI to solve other problems."

 

Until recently, building a system that could match human players at games like Quake III seemed impossible. But in the past few years, DeepMind, OpenAI, and other labs have made significant progress, thanks to a mathematical technique called reinforcement learning, which enables machines to learn through extreme trial and error.

 

By playing the game over and over, these automated agents learn which strategies succeed and which do not. If an agent consistently wins more points by moving toward the opponent's base camp when a teammate is about to capture the flag, it adds that behavior to its repertoire.
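That trial-and-error loop can be made concrete with a toy example. The sketch below is a minimal form of tabular value learning, one simple flavor of reinforcement learning; the state names, actions, reward model, and hyperparameters are all invented for illustration, and DeepMind's actual agents instead train deep neural networks on raw game observations at a vastly larger scale.

```python
import random

# A toy illustration of the trial-and-error loop behind reinforcement
# learning. This is not DeepMind's system; it is a minimal tabular
# value-learning sketch on an invented two-state, two-action game.

ACTIONS = ["guard_own_flag", "raid_enemy_base"]
STATES = ["teammate_near_enemy_flag", "teammate_far_away"]
ALPHA, EPSILON = 0.1, 0.2  # learning rate, exploration rate

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    """Invented reward model: raiding pays off only when a teammate
    is about to capture the flag."""
    if state == "teammate_near_enemy_flag" and action == "raid_enemy_base":
        return 1.0
    return 0.2 if action == "guard_own_flag" else -0.5

for _ in range(10_000):
    state = random.choice(STATES)
    # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    # Nudge the stored value estimate toward the observed outcome.
    q[(state, action)] += ALPHA * (reward(state, action) - q[(state, action)])

# The learned policy mirrors the strategy described in the article:
for s in STATES:
    print(s, "->", max(ACTIONS, key=lambda a: q[(s, a)]))
```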

 

In 2016, DeepMind researchers used the same basic technique to build AlphaGo, the system that beat the world's top players at the ancient game of Go. Given the game's enormous complexity, many experts had thought such a breakthrough was at least a decade away.

 

First-person video games are vastly more complex, especially when teammates have to coordinate. DeepMind's agents learned to capture the flag over roughly 450,000 games, amassing about four years of playing experience in a few weeks of training. At first, the agents failed miserably. But by learning to follow teammates when raiding the opponent's base camp, they gradually got the hang of the game.
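In outline, that kind of training regime works by having a population of agents improve through matches against other members of the same population. The sketch below is only a caricature of the idea: the Agent class, its scalar "skill," the update rule, and the match model are all invented placeholders, and the real system trains neural-network policies rather than a single number.

```python
import random

class Agent:
    """Invented placeholder: a real agent would hold network weights."""
    def __init__(self):
        self.skill = random.uniform(0.5, 1.5)

    def update(self, won):
        # Stand-in for a learning step driven by the match outcome.
        self.skill = max(0.1, self.skill + (0.01 if won else -0.005))

def team_a_wins(team_a, team_b):
    """Invented match model: the stronger team wins more often."""
    a = sum(p.skill for p in team_a)
    b = sum(p.skill for p in team_b)
    return random.random() < a / (a + b)

population = [Agent() for _ in range(30)]
for _ in range(450_000):  # roughly the number of games the article cites
    random.shuffle(population)
    team_a, team_b = population[:2], population[2:4]
    a_won = team_a_wins(team_a, team_b)
    for p in team_a:
        p.update(a_won)
    for p in team_b:
        p.update(not a_won)
```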

 

Since that project, DeepMind researchers have also designed a system that can beat professional players at StarCraft II. At OpenAI, researchers built a system that mastered Dota 2, a game that is something like a souped-up version of capture the flag. In April, a team of five agents beat a team of five of the world's best human players.

 

Last year, William Lee, a professional Dota 2 player and commentator who goes by the handle Blitz, played one-on-one against an agent in a version of the game that didn't allow team play; at the time, he was lukewarm about its abilities. But as the agent continued to learn and moved on to full team matches, he was astounded by its skill.

 

“I thought it was impossible for a machine to play five-on-five, let alone win,” he said. “I was absolutely blown away.”

 

The technology's success in games is impressive, but many AI experts question whether it will ultimately translate to solving real-world problems. Mark Riedl, a computer science professor at the Georgia Institute of Technology who specializes in artificial intelligence, argued that DeepMind's agents are not actually cooperating. They merely respond to what is happening in the game, rather than exchanging information with one another the way human players do. (Even ants cooperate by exchanging chemical signals.)

 

While the result looks like collaboration, it arises because each agent, on its own, so completely understands what is happening in the game.

 

Max Jaderberg, another DeepMind researcher who worked on the project, said: "How to define teamwork is not the problem I want to solve. But an agent sitting in the opponent's base camp, waiting for the flag to appear, is only possible if it relies on its teammates."

 

Games like these are also far less complex than the real world. "The 3D environment is designed to be easy to navigate," Dr. Riedl said. "The strategy and coordination in Quake are simple."

 

Reinforcement learning is well suited to these kinds of games. In video games, it’s easy to identify indicators of success: getting more points. But in the real world, no one keeps score. Researchers have to define success in other ways.
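The difference is easy to state in code. In the sketch below, the game reward simply reads off events the engine already scores, while the real-world task needs a hand-written definition of success; both functions and all their thresholds are invented for illustration.

```python
import math

# In a video game, the learning signal comes for free: the score.
def game_reward(events):
    """Invented per-step event list from a capture-the-flag match."""
    score = 0.0
    if "captured_flag" in events:
        score += 1.0
    if "tagged_by_opponent" in events:
        score -= 0.1
    return score

# In the real world, no one keeps score, so researchers must define
# success themselves. An invented reward for a robot tossing an object
# toward a bin, with a small dense "shaping" term to guide learning:
def tossing_reward(object_xy, bin_xy, has_landed):
    distance = math.dist(object_xy, bin_xy)
    if has_landed:
        return 10.0 if distance < 0.1 else -1.0  # in the bin vs. missed
    return -0.01 * distance  # closer trajectories earn slightly more
```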

 

It’s achievable, at least for simple tasks. Researchers at OpenAI trained a robotic hand to manipulate letter blocks much like a child would—tell it to show you the letter A, and it will show you the letter A.

 

At Google's robotics lab, researchers have shown that machines can learn to pick up random objects, such as ping-pong balls and plastic bananas, and toss them into a bin a few feet away. Such technology could one day take on work in the massive warehouses and distribution centers operated by Amazon, FedEx and other companies, handling tasks currently performed by human workers.

 

If labs like DeepMind and OpenAI want to tackle bigger problems, they will need enormous amounts of computing power. OpenAI's system learned Dota 2 in a few months by playing more than 450,000 rounds, amassing years' worth of game experience, and it relied on thousands of computer chips. Brockman said that buying those chips alone cost the lab millions of dollars.

 

DeepMind and OpenAI, which are backed by Silicon Valley money, including funding from Khosla Ventures and the tech billionaire Reid Hoffman, can afford that computing power, said Devendra Chaplot, an AI researcher at Carnegie Mellon University. But academic labs and smaller companies cannot. For some, the worry is that those well-funded labs will dominate the future of artificial intelligence.

 

But even the big labs may not have the computing power needed to carry these techniques into the complexity of the real world, which would likely require more powerful forms of AI, ones that can learn far faster. Machines can now win capture-the-flag games in virtual worlds, but getting them to win on an open field at summer camp will remain out of reach for some time.

