DeepMind's AI “AlphaStar” beats professional gamers and is promoted to “Grandmaster”, the top 0.2% of humanity in “StarCraft 2”


DeepMind, the artificial intelligence (AI) company, has announced that “AlphaStar”, its AI for the real-time strategy game StarCraft 2, now ranks in the top 0.2% of human players. DeepMind calls this a “big achievement for machine learning” and says that applying similar technology may help solve a variety of other problems.

Grandmaster level in StarCraft II using multi-agent reinforcement learning | Nature
https://www.nature.com/articles/s41586-019-1724-z


AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning | DeepMind
https://DeepMind.com/blog/article/AlphaStar-Grandmaster-level-in-StarCraft-II-using-multi-agent-reinforcement-learning



DeepMind, Google's AI company, has been collaborating with Blizzard Entertainment since 2016 to develop “AlphaStar,” an AI that can play StarCraft 2. In January 2019, AlphaStar achieved a decisive 10 consecutive wins against top human players.

DeepMind's AI “AlphaStar” wins 10-1 against the world's top “StarCraft 2” players - GIGAZINE



DeepMind highlights the following four points about the new AlphaStar.

◆ 1: Let AlphaStar play under “more human conditions”
Until now, the AI read the map and game data directly from the system. Now, under the supervision of Dario “TLO” Wünsch, a professional StarCraft 2 player, AlphaStar is subject to human-equivalent restrictions: it observes the game by viewing the play screen through a camera, and the frequency of its actions is strictly limited.
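The article does not spell out how the action-frequency restriction is enforced, but conceptually it behaves like a sliding-window rate limiter. The Python sketch below is a minimal illustration under that assumption; the class name, the cap of 22 actions per 5 seconds, and the random stand-in for the agent's policy are all hypothetical values chosen for demonstration, not DeepMind's implementation.

```python
import collections
import random

class ActionRateLimiter:
    """Illustrative cap on action frequency, similar in spirit to the
    human-equivalent limits described for AlphaStar. The window length
    and cap below are placeholder values, not DeepMind's numbers."""

    def __init__(self, max_actions=22, window_seconds=5.0):
        self.max_actions = max_actions          # cap within the sliding window
        self.window_seconds = window_seconds
        self.timestamps = collections.deque()   # times of recent actions

    def allow(self, now):
        # Drop timestamps that have fallen out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window_seconds:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now)
            return True
        return False   # over the cap: the agent must issue a no-op this step


# Hypothetical game loop: the agent may only act when the limiter allows it.
limiter = ActionRateLimiter()
game_time = 0.0
actions_issued = 0
for step in range(500):
    game_time += 1 / 22.4                      # StarCraft 2 runs at 22.4 game steps per second
    wants_to_act = random.random() < 0.5       # stand-in for the agent's policy decision
    if wants_to_act and limiter.allow(game_time):
        actions_issued += 1                    # here the chosen action would go to the game
print(f"Issued {actions_issued} actions in {game_time:.1f} seconds of game time")
```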

◆ 2: Automated agent training
Until now, some of the agents' action sequences had to be programmed by hand. The new AlphaStar announced by DeepMind automates agent training through multi-agent reinforcement learning.



Training starts only from agents initialized with supervised learning, not from agents carried over from past experiments. DeepMind says, “We chose to learn directly from game data using general-purpose machine learning techniques such as neural networks, self-play with reinforcement learning, multi-agent learning, and imitation learning.”
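As a rough mental model (not DeepMind's actual training code), league-style multi-agent reinforcement learning can be pictured like this: agents are first initialized by imitating human replays, then improve by playing against a growing pool of current and past opponents. The Python sketch below illustrates that loop; every class, function, and number in it is a simplified placeholder.

```python
import random

class Agent:
    """Toy agent whose 'skill' stands in for neural-network parameters."""
    def __init__(self, name):
        self.name = name
        self.skill = 0.0

    def update_from_game(self, won):
        # Stand-in for a reinforcement learning update after one game.
        self.skill += 0.1 if won else -0.02


def supervised_init(name):
    # Stand-in for imitation (supervised) learning on human replays.
    agent = Agent(name)
    agent.skill = 1.0
    return agent


def play_match(a, b):
    # Toy outcome model: the higher-skill agent wins more often.
    p_a_wins = 1 / (1 + 10 ** (b.skill - a.skill))
    return random.random() < p_a_wins


# One main agent per race, each seeded by supervised learning.
main_agents = [supervised_init(race) for race in ("Protoss", "Terran", "Zerg")]
league = [supervised_init(f"exploiter_{i}") for i in range(3)]

for iteration in range(1000):
    learner = random.choice(main_agents)
    opponent = random.choice(league + main_agents)
    won = play_match(learner, opponent)
    learner.update_from_game(won)
    # Periodically snapshot the learner into the league, so future
    # training must also beat its past selves.
    if iteration % 100 == 99:
        snapshot = Agent(f"{learner.name}_snapshot_{iteration}")
        snapshot.skill = learner.skill
        league.append(snapshot)

for agent in main_agents:
    print(agent.name, round(agent.skill, 2))
```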

◆ 3: Played online with the same map and conditions as humans
AlphaStar played on Battle.net, the official game server. In other words, it played on the same maps and under the same conditions as human players. All of its Battle.net game replays are available at:

AlphaStar Resources | DeepMind



◆ 4: All three races enter the Grandmaster League
StarCraft 2 has three races, Protoss, Terran, and Zerg, and AlphaStar can play all of them. AlphaStar has an agent trained for each race, each linked to its own neural network. In StarCraft 2, players enter the Grandmaster League by ranking in the top 200 in one of the four regions: the Americas, Europe, Asia, and China. All three AlphaStar agents were promoted to the Grandmaster League, which places AlphaStar in the top 0.2% of StarCraft 2 players.

Dario “TLO” Wünsch, who actually played against the new AlphaStar, said: “AlphaStar's gameplay was incredibly impressive. The system is very skilled at judging its strategic position and deciding when to engage or disengage with its opponent. AlphaStar has precise control, but it doesn't feel superhuman, and it's certainly not at a level that a human couldn't theoretically achieve. It felt like watching a real person play StarCraft.”

DeepMind says, “We are interested in understanding the potential and limitations of open-ended learning, which enables us to develop robust and flexible agents that can cope with complex, real-world domains. Games like StarCraft 2 are an excellent training ground to advance these approaches, as players must use limited information to make dynamic and difficult decisions that have ramifications on multiple levels and timescales.”

in Software, Game, Posted by log1i_yk