Microsoft published a post that proposes ways in which machine learning, implemented with deep neural networks, can make games better. From improving NPC AI to lowering game development times, the company envisions uses for machine learning across all aspects of gaming.
Not long after announcing Windows Machine Learning (WinML)--a set of APIs for Windows 10 that allows inference to be done locally (see the coverage at our sister site Anandtech)--Microsoft published a post describing how it envisions the technology being used in gaming. To provide some background for this topic, we'll detail the software side of Microsoft's announcement first. WinML uses the best hardware available on a machine to carry out inference operations. We've detailed the difference between training and inference before, but in a nutshell, training is the process of creating an AI model and inference is the process of using it.
WinML And DirectML
For example, the inference portion of an object recognition AI would be what allows it to recognize objects it already knows, while the training portion is what teaches it to recognize new ones. Having inference run locally means your PC is able to use an AI without a connection to a cloud computing service. In the past, inference was too computationally intensive and had to be done on a cloud-computing farm, but with new GPUs and even SoCs integrating specialized hardware, local inference is becoming a possibility.
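To make the train/inference split concrete, here's a minimal sketch (this is not WinML code) using a tiny perceptron classifier written in pure Python. Training adjusts the model from labeled examples; inference just applies the frozen model, which is the part WinML runs locally:

```python
# Minimal sketch: training vs. inference with a tiny perceptron.

def train(samples, labels, epochs=20, lr=0.1):
    """Training: iteratively adjust weights from labeled examples."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def infer(model, x):
    """Inference: apply the frozen model to a new input --
    the step that can now run locally, no cloud round-trip."""
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Train on an AND-like dataset, then run inference on a new input.
model = train([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 0, 0, 1])
print(infer(model, [1, 1]))  # -> 1
```

Training is the expensive, data-hungry half; once the weights are fixed, inference is comparatively cheap, which is why it's the half that can move onto consumer hardware.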
It is specifically because GPUs, both new and old, are well suited to processing inference tasks that Microsoft created DirectML, an extension to the Direct3D graphics platform used by most Windows games. DirectML implements inference tasks as compute instructions that GPUs can understand, and is currently the faster option for WinML workloads. For machines without GPUs, WinML can also send inference tasks to the CPU. To make even more efficient use of GPUs, however, Microsoft is working with GPU designers to implement inference instructions directly in their low-level programming interfaces.
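The dispatch pattern described above can be sketched as follows. This is a conceptual illustration only, not the real WinML/DirectML API; the names `pick_backend` and the device dictionaries are hypothetical stand-ins:

```python
# Conceptual sketch of WinML-style backend selection: prefer a GPU
# compute path for inference, fall back to the CPU when no GPU exists.
# All names here are hypothetical, not real WinML/DirectML identifiers.

def pick_backend(available_devices):
    """Return the preferred inference backend for a machine."""
    # DirectML-style path: a Direct3D-capable GPU runs the
    # inference workload as compute instructions.
    for dev in available_devices:
        if dev["type"] == "gpu":
            return ("directml", dev["name"])
    # No GPU present: evaluate the model on the CPU instead.
    return ("cpu", "host")

print(pick_backend([{"type": "gpu", "name": "dGPU0"}]))  # GPU path
print(pick_backend([{"type": "cpu", "name": "host"}]))   # CPU fallback
```

The real APIs let an application express the same preference (GPU if available, CPU otherwise) when it creates an inference session.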
Machine Learning For Gameplay
Naturally, gamers are the biggest market for GPUs (if you don't count cryptocurrency mining). That means gaming machines are already well equipped to leverage DirectML. Microsoft is a gaming company, too, so it's already envisioning directions that new AI could push games in. The first application listed is… well, AI. Almost everything in a game that isn't graphics or audio relates to its AI. In single-player games there are NPCs; in multiplayer games there are bots; in almost all games there is a dynamic environment. Today, with the advent of neural network-based AI, we no longer call these old, pre-programmed behavior models AI, but they still form the basis of the actual gameplay portion of games.
Microsoft envisions that neural network-based AIs can replace these older forms to make for more dynamic gameplay. Examples include making games that better adapt to a player's individual skill, making open-world games that dish out content according to individual preference, and, of course, making smarter NPCs and bots. Those among us who wear the hardcore-gamer label with honor will probably only be interested in the last point. To this end, Microsoft posted an interesting video of experimental development happening at EA.
The video shows how an AI player, trained with input from a real player playing the same game, fares in a multiplayer game against bots with classic behavior-based AI. The AI player's inputs are the rendered view of the game, a short-range radar that stands in for sound input, and knowledge of its own health and ammo. The video shows how the AI player prioritizes finding ammo when it runs out, succeeds in preserving its own life, and generally dominates the lesser enemy bots. This new paradigm for NPC AI is interesting. Depending on what kinds of players are used for the training process, games of the future could have bots ranging from noob to Major-League-Gaming level. You might not even be able to tell if your server is underpopulated and filled with bots instead of real players.
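The agent loop implied by the demo looks roughly like this sketch: each tick, the bot builds an observation (screen pixels, a radar standing in for sound, its health and ammo) and a policy maps it to an action. In the real system that policy is a trained neural network; a hand-written stub stands in for it here, encoding the priorities the video shows:

```python
# Sketch of the observation -> policy -> action loop from the EA demo.
# The policy function is a hand-written stub standing in for the
# trained neural network; the priority ordering mirrors the behavior
# shown in the video (ammo first, then self-preservation, then combat).

def make_observation(frame, radar_contacts, health, ammo):
    """Bundle the inputs the AI player is described as receiving."""
    return {"frame": frame, "radar": radar_contacts,
            "health": health, "ammo": ammo}

def policy(obs):
    """Stand-in for the trained network's action output."""
    if obs["ammo"] == 0:
        return "seek_ammo"    # the demo bot prioritizes finding ammo
    if obs["health"] < 25:
        return "retreat"      # self-preservation
    if obs["radar"]:          # an enemy is "audible" nearby
        return "engage"
    return "patrol"

obs = make_observation(frame=None, radar_contacts=["enemy"], health=80, ammo=0)
print(policy(obs))  # -> seek_ammo: resupply outranks engaging
```

Swapping the stub for a network trained on different players is what would produce the noob-to-pro difficulty range described above.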
Machine Learning For Improved Visual Quality
Nvidia’s AI-based upsampling demo was listed by Microsoft as a way that machine learning could improve game graphics. The idea is that a game could be rendered at a lower resolution and upsampled to a higher one. Methods like this already exist in many current games, but Microsoft thinks that higher-quality upsampling will make the method more viable. We’re not entirely convinced, however: the upsampling demo is far from instantaneous, and we have no idea how much GPU horsepower is running behind it. Something like this would definitely be better used for video content. It could be used to upsample a 1080p game stream from a cloud gaming provider to 4K, for example, but only if the method doesn’t require a GPU that’s already powerful enough to run the game in the first place.
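The render-low/display-high pipeline is easy to illustrate. The sketch below uses naive nearest-neighbor scaling on a tiny grayscale "framebuffer"; an ML upsampler would replace this step with a learned filter that synthesizes plausible detail instead of just duplicating pixels:

```python
# Minimal sketch of the upsampling step: render at low resolution,
# then scale the frame up for display. Nearest-neighbor duplication
# stands in for the learned model an AI upsampler would use.

def upsample_nearest(frame, factor):
    """Scale a 2D list of pixel values by an integer factor."""
    out = []
    for row in frame:
        # Duplicate each pixel horizontally...
        scaled_row = [px for px in row for _ in range(factor)]
        # ...then duplicate the whole row vertically.
        out.extend([scaled_row[:] for _ in range(factor)])
    return out

low_res = [[10, 20],
           [30, 40]]                      # the cheap, low-res render
high_res = upsample_nearest(low_res, 2)   # the displayed output
for row in high_res:
    print(row)
```

The GPU-cost question raised above is exactly the trade-off: the learned filter must cost less than rendering the extra pixels natively, or the method gains nothing.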
Machine Learning For Game Development
Microsoft also envisions that machine learning will make the process of game creation easier. We’ve heard of some of these examples, such as using AI for dynamically generated environments, before, but Microsoft listed an interesting example of using AI for animation. The company reports how the developers of Quantum Break used an AI that was trained to associate facial motions and speech to generate 80% of the game’s facial animation. This is a natural application for such a technology. We can already imagine how much better this would make games that have to be localized into different languages, as it would avoid unsynchronized facial animations on “dubbed” dialogue.
However, as in every other industry, AI could paint a bleak future for jobs in game development. Microsoft happily points out how AI will free up developers from “arduous” tasks “to focus on doing their best work.” This becomes somewhat contradictory when Microsoft highlights that game developers are artists. What is an artist’s best work if not the thing they put the most effort into? If that’s a game world, then the game world is art. If that’s animation, then the animation is art. What makes some of the best open-world games great is not their absolute size, but the amount of handcrafted detail that exists in them relative to their size. Games like The Elder Scrolls V: Skyrim and The Witcher 3 may not have the biggest worlds, but they’re certainly more explorable than the quasi-infinite cosmos present in No Man’s Sky.
It really was a combination of bigger budgets, cinema-inspired storytelling and motion capture, and revolutions in graphics rendering that turned game experiences into what they are today. AI has great potential to improve gaming, but it could also bring about a paradigm shift that will turn the medium into something less recognizable. Targeted content could make all games into carbon copies of archetypes, and smarter NPCs could turn previously interactive experiences into solitary ones.
-
photonboy: "Targeted content could make all games into carbon copies of archetypes, and smarter NPCs could turn previously interactive experiences into solitary ones."
First, so "smarter NPCs" is bad?
How does stupid AI that pops up and down behind a low wall make a game better?
I think I get the point but...
Having new tools is good. People will vote with their wallets. In fact, "Hellblade: Senua's Sacrifice" did well because of the loving care put into it, yet there are so many similar carbon-copy games that nobody's heard of, precisely because of their lack of originality.