Using AI to produce footage of video games with a consistent world and rules could prove useful to game designers
By Alex Wilkins
19 February 2025
The Muse AI was trained on the video game Bleeding Edge (Image: Microsoft)
An artificial intelligence model from Microsoft can recreate realistic video game footage that the company says could help designers make games, but experts are unconvinced that the tool will be useful for most game developers.
Neural networks that can produce coherent and accurate footage from video games are not new. A recent Google-created AI generated a fully playable version of the classic computer game Doom without access to the underlying game engine. The original Doom, however, was released in 1993; more modern games are far more complex, with sophisticated physics and computationally intensive graphics, which have proved trickier for AIs to faithfully recreate.
Now, Katja Hofmann at Microsoft Research and her colleagues have developed an AI model called Muse, which can recreate full sequences of the multiplayer online battle game Bleeding Edge. These sequences appear to obey the game’s underlying physics and keep players and in-game objects consistent over time, which suggests the model has developed a deep understanding of the game, says Hofmann.
Muse was trained on seven years of human gameplay data, including both controller inputs and video footage, provided by Bleeding Edge’s Microsoft-owned developer, Ninja Theory. It works similarly to large language models like ChatGPT: given an input in the form of a video game frame and its associated controller actions, it is tasked with predicting the gameplay that might come next. “It’s really quite mind-boggling, even to me now, that purely from training models to predict what’s going to appear next… it learns a sophisticated, deep understanding of this complex 3D environment,” says Hofmann.
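The core idea, predicting the next frame from the current frame and the controller input, then feeding that prediction back in, can be pictured with a toy sketch. The code below is purely illustrative and is not Muse’s actual architecture or data format: the frame and action encodings, their sizes and the single linear map standing in for the model are all hypothetical, chosen only to show how an autoregressive rollout of gameplay works in principle.

# Illustrative sketch only: not Muse's real model or data format.
# A toy linear map stands in for the trained network; frames and
# controller actions are represented as small numeric vectors.
import numpy as np

rng = np.random.default_rng(0)

FRAME_DIM = 64    # stand-in for an encoded game frame
ACTION_DIM = 16   # stand-in for an encoded controller state

# Hypothetical "model": one linear map from (frame, action) to the next frame.
W = rng.normal(scale=0.01, size=(FRAME_DIM + ACTION_DIM, FRAME_DIM))

def predict_next_frame(frame, action):
    """Predict the next encoded frame from the current frame and controller action."""
    return np.concatenate([frame, action]) @ W

def generate_sequence(start_frame, actions):
    """Roll the model forward autoregressively, feeding each prediction back in."""
    frames = [start_frame]
    for action in actions:
        frames.append(predict_next_frame(frames[-1], action))
    return frames

# Example: roll out ten steps of "gameplay" from a starting frame
# and a sequence of controller inputs.
start = rng.normal(size=FRAME_DIM)
controller_inputs = rng.normal(size=(10, ACTION_DIM))
rollout = generate_sequence(start, controller_inputs)
print(len(rollout), "frames generated")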
To understand how people might use an AI tool like Muse, the team also surveyed game developers about which features they would find useful. In response, the researchers gave the model the ability to adjust its generated footage to changes made on the fly, such as a player’s character being swapped or new objects entering a scene. This could help developers come up with new ideas and try out what-if scenarios, says Hofmann.