
AI learns to recreate video game after two minutes of watching someone play it

For the first time, a system learns a game from its pixels rather than its code


By Heather Hamilton, contributing writer

A group of researchers from the Georgia Institute of Technology is training AI to learn how video games work rather than asking it to play them. The team, which describes the AI system in a paper, said that the system recreates a game engine by watching the game being played, learning only from its pixels and not its code.

While this hasn’t been done before in AI and video games, Hot Hardware reports that the research has limitations. The system doesn’t learn a game entirely on its own: it is first given a visual dictionary containing every sprite in the game, along with a set of concepts that include object position and velocity. From there, the AI examines the game one frame at a time, labeling what it sees and identifying rules that explain the changes it observes.
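To make that frame-labeling step concrete, here is a minimal sketch in Python, assuming a toy sprite dictionary and 1-bit pixel grids; the function names (find_sprites, velocities) and the exact-match template scan are illustrative placeholders, not the authors’ implementation.

    # Toy sketch of the labeling step: match known sprite templates against a
    # frame, record (name, x, y) facts, then compare two frames to get velocities.
    # The sprite dictionary and 1-bit pixel grids here are simplified placeholders.

    def find_sprites(frame, sprite_dict):
        """Return facts (sprite_name, x, y) for every exact template match."""
        facts = []
        fh, fw = len(frame), len(frame[0])
        for name, sprite in sprite_dict.items():
            sh, sw = len(sprite), len(sprite[0])
            for y in range(fh - sh + 1):
                for x in range(fw - sw + 1):
                    if all(frame[y + dy][x + dx] == sprite[dy][dx]
                           for dy in range(sh) for dx in range(sw)):
                        facts.append((name, x, y))
        return facts

    def velocities(facts_prev, facts_curr):
        """Pair sprites by name across two frames and report per-sprite (dx, dy)."""
        prev = {name: (x, y) for name, x, y in facts_prev}
        return [(name, x - prev[name][0], y - prev[name][1])
                for name, x, y in facts_curr if name in prev]

    # A 2x2 "mario" block that moves one pixel right and one pixel down.
    mario = [[1, 1], [1, 1]]
    frame_t0 = [[0] * 6 for _ in range(6)]
    frame_t1 = [[0] * 6 for _ in range(6)]
    for dy in range(2):
        for dx in range(2):
            frame_t0[1 + dy][1 + dx] = 1
            frame_t1[2 + dy][2 + dx] = 1

    f0 = find_sprites(frame_t0, {"mario": mario})
    f1 = find_sprites(frame_t1, {"mario": mario})
    print(f0)                  # [('mario', 1, 1)]
    print(velocities(f0, f1))  # [('mario', 1, 1)]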

“For each frame of the video, we have a parser that goes through and collects the facts — what animation state Mario is in, for example, or what velocities things are moving at. So imagine the case where Mario is just above a Goomba in one frame, and then in the next frame, the Goomba is gone. From that, it comes up with the rule that when Mario is just above the Goomba and his velocity is negative, the Goomba disappears,” lead author Matthew Guzdial told The Verge.
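A hypothetical illustration of that idea, not the paper’s actual rule-learning engine: given labeled facts for two consecutive frames, one simple strategy is to notice which sprite vanished and propose a rule conditioned on the facts that held around it. The fact format and the propose_disappearance_rules helper below are assumptions made for the sketch.

    # Hypothetical rule-induction sketch: when a sprite present in frame t is
    # missing in frame t+1, propose a rule whose condition is the set of facts
    # that held about the other sprites around it. Coordinates use y increasing
    # upward, so a negative vy means the sprite is moving down, as in the quote.

    def propose_disappearance_rules(facts_t, facts_t1):
        """facts_* map sprite name -> {"x": ..., "y": ..., "vy": ...}."""
        rules = []
        for name, props in facts_t.items():
            if name in facts_t1:
                continue  # sprite survived; nothing to explain
            condition = []
            for other, oprops in facts_t.items():
                if other == name:
                    continue
                if oprops["x"] == props["x"] and oprops["y"] > props["y"]:
                    condition.append(f"{other} directly above {name}")
                if oprops.get("vy", 0) < 0:
                    condition.append(f"{other} moving downward")
            rules.append({"effect": f"{name} disappears", "if": condition})
        return rules

    frame_t = {
        "mario":  {"x": 10, "y": 6, "vy": -1},  # falling onto the goomba
        "goomba": {"x": 10, "y": 4, "vy": 0},
    }
    frame_t1 = {
        "mario": {"x": 10, "y": 5, "vy": 0},    # the goomba is gone
    }
    print(propose_disappearance_rules(frame_t, frame_t1))
    # [{'effect': 'goomba disappears',
    #   'if': ['mario directly above goomba', 'mario moving downward']}]

In the full system, candidate rules like this one would be checked against many more frame pairs and refined; the sketch shows only the single-example case.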

The team presented their research at a recent AI conference, explaining that after just two minutes of gameplay, the AI could build its own model of how a game operates by studying frames and predicting future events, such as the path a character will take or how other characters may react. The AI was trained by watching a human speed-run through “Super Mario Bros.,” and the researchers found that their learned game engine predicted video frames closer to the original game than a neural network given the same test, producing an accurate general model from video footage alone, reported Silicon Republic.

Once the engine was built, the team used a second AI to play through unique levels that the system had created. The cloned engine was nearly indistinguishable from the real one, they reported, save for a few missing frames and temporarily disappearing protagonists.

While the researchers first focused on “Super Mario Bros.,” they’ve begun training the system on other 2D titles, including “Mega Man” and “Sonic the Hedgehog,” reports New Atlas. Games like “Clash of Clans,” in which action takes place off-screen, may be beyond its capabilities. Still, the team believes that their research offers value.

“Intelligent agents need to be able to make predictions about their environment if they are to deliver on the promise of advancing different technology applications,” said Guzdial. “Our model can be used for a variety of tasks in training or education scenarios, and we think that it will scale to many types of games as we move forward.”

Sources: Georgia Tech, Hot Hardware, The Verge, Silicon Republic, New Atlas
Image Source: Pixabay
