Google’s DeepMind artificial intelligence project is to take on StarCraft II in a bid to hone its learning skills. It’s the next step in a process that began with Atari 2600 titles.
The DeepMind project has already made the news with a program that beat leading players at Go. While that was hugely impressive given the sheer range of possible moves in the game, one advantage was that the program was custom-built with the rules of Go in mind.
In a separate project last year, a DeepMind computer was set up to play 49 different Atari 2600 titles with no information other than that the goal was to maximize its score. Before it could settle on a strategy, the computer first had to track the pixels on screen, try out different commands, and gradually deduce the rules of each game. On 29 of the games it scored at least 75 percent of what an expert human player achieved.
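To give a flavor of that "learn from pixels, maximize the score" loop, here is a minimal sketch in Python. It is not DeepMind's actual system (which used deep neural networks); it uses simple tabular Q-learning, and the ToyScreenGame environment is a made-up stand-in for an Atari emulator that exposes only pixels, a handful of actions, and a score.

```python
import random
from collections import defaultdict

class ToyScreenGame:
    """Stand-in for an Atari emulator: the agent sees only pixels and a score."""
    def __init__(self):
        self.position = 0          # hidden state the agent never sees directly
        self.actions = [0, 1]      # e.g. "move left" / "move right"

    def reset(self):
        self.position = 0
        return self.render()

    def render(self):
        # A 1x8 "screen" of pixels; this is all the agent observes.
        return tuple(255 if i == self.position else 0 for i in range(8))

    def step(self, action):
        self.position = max(0, min(7, self.position + (1 if action else -1)))
        reward = 1 if self.position == 7 else 0   # score only rises at the right edge
        return self.render(), reward

# Tabular Q-learning: learn which action maximises future score for each screen.
q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.1
env = ToyScreenGame()

for episode in range(200):
    screen = env.reset()
    for _ in range(20):
        # Explore occasionally, otherwise pick the action with the best learned value.
        if random.random() < epsilon:
            action = random.choice(env.actions)
        else:
            action = max(env.actions, key=lambda a: q[(screen, a)])
        next_screen, reward = env.step(action)
        best_next = max(q[(next_screen, a)] for a in env.actions)
        q[(screen, action)] += alpha * (reward + gamma * best_next - q[(screen, action)])
        screen = next_screen
```

Nothing in the loop knows what the game "is"; the agent only learns, by trial and error, which action tends to raise the score from each screen it has seen.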
Now it will take on the far more complicated strategy game StarCraft II. That means dealing not only with the need for speedy responses (which will likely mean finding a good option quickly rather than exhausting every possibility in search of the optimum), but also with the problem of imperfect information. Unlike chess or Go, StarCraft players start with only a partial view of the overall game map and only learn more as and when they choose to send out scouts.
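The "good option quickly" idea can be shown with a short sketch: an anytime-style search that scores candidate moves until a time budget runs out and then acts on the best one found so far. The evaluate() function and the list of candidate moves here are placeholders, not anything from DeepMind's or Blizzard's software.

```python
import random
import time

def evaluate(move):
    # Placeholder for an expensive game-state evaluation (e.g. simulating outcomes).
    time.sleep(0.001)
    return random.random()

def best_move_within(candidates, budget_seconds=0.05):
    """Return the best move found before the time budget expires,
    rather than exhaustively scoring every possibility."""
    deadline = time.monotonic() + budget_seconds
    best, best_score = None, float("-inf")
    for move in candidates:
        if time.monotonic() >= deadline:
            break                      # out of time: act on the best option so far
        score = evaluate(move)
        if score > best_score:
            best, best_score = move, score
    return best

chosen = best_move_within(range(10_000))
```

In a real-time game the deadline is effectively set by the opponent: a merely decent move delivered now usually beats a perfect move delivered too late.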
To make the process even harder, the computer won’t get a magical feed of all the game data. Instead it will have to work from the images on the screen and “visually” translate them into information in the same way as a human player does.
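A rough illustration of that constraint: the agent's input is a raw screen capture, which has to be reduced to a compact observation before any learning can happen. The snippet below simply converts an RGB frame to grayscale and downsamples it; the actual interface DeepMind and Blizzard build may expose the screen quite differently.

```python
import numpy as np

def preprocess(frame, size=(84, 84)):
    """Turn a raw RGB screen capture into a small grayscale grid,
    the kind of compact observation a learning agent works from."""
    gray = frame.mean(axis=2)                       # collapse colour channels
    h, w = gray.shape
    rows = np.linspace(0, h - 1, size[0]).astype(int)
    cols = np.linspace(0, w - 1, size[1]).astype(int)
    return gray[np.ix_(rows, cols)] / 255.0         # downsample and normalise

# Example: a fake 1080p screen capture reduced to an 84x84 observation.
screen = np.random.randint(0, 256, size=(1080, 1920, 3), dtype=np.uint8)
observation = preprocess(screen)
print(observation.shape)   # (84, 84)
```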
There’s no word yet on whether or when the computer will be put up against leading human players. While in theory it should have some advantages when it comes to speed and accurate memory, it will be interesting to see whether human challengers adapt their strategies to deal with a computer opponent and, in turn, whether the computer can respond accordingly.