
The menagerie of ways AI is transforming video game creation

Jan 12, 2023

Lewis Packwood

Screenshot of a space-themed shoot-'em-up game, Shoon, created in Midjourney. At left, one small ship fires at several larger ones on the right, with a large explosion in the middle. The battle occurs over a grey cityscape.

The shoot-'em-up game Shoon, created on AI system Midjourney. Screenshot by L'Atelier.

In 2022, social media feeds flooded with astonishing creations from artificial intelligence (AI), chiefly the tools DALL-E 2, Midjourney and Stable Diffusion. Users could generate seemingly any imaginable image with incredible fidelity by typing a few simple words. 

Artist Martin Nebelong, who landed a job with Media Molecule for his impressive creations in Dreams, posted a thread of eye-opening images he generated using Midjourney last August. They range from an “elderly Norwegian queen” to “futuristic traditional oriental buildings.” In each, the quality of the AI-created artwork is incredible, practically indistinguishable from something made by a talented human hand.

Seemingly overnight, AI tools went from being a novelty—a way to generate silly Pokémon names or psychedelic Mona Lisas—to a technology that could revolutionise how we work, notably in video games. Indeed, someone has already created a shoot-'em-up made entirely with art generated from Midjourney.


Raph D’Amico thinks we shouldn’t be surprised at how quickly AI art has blossomed. A UX designer by trade, he previously worked at Google, and has used AI and followed the technology's exponential progress for years.

“We've had this lesson, I think, from COVID, on what an exponential looks like, and we're just really bad at knowing that we're on an exponential,” he says. “When you're on an exponential curve, you feel like things are going at a rapid clip—but it's nothing compared to what it will be in the next iteration.”

What is AI, anyway?

Before we get going, we should define terms. Tools like Midjourney are based on machine learning, but what is that, and is it the same as AI? 

“AI is the big circle,” explains Matthew Guzdial, who researches AI at the University of Alberta in Edmonton, Canada. “Machine learning is a smaller circle inside of AI. So all machine-learning approaches are AI approaches, but not vice versa.”

Simply put, machine learning involves training an artificial intelligence to recognise patterns by using a vast set of data, such as text or images. 

Outside the machine learning circle, but still within the big circle of AI, are several techniques that are already in widespread use in video games, such as procedural generation—which is how games like Minecraft and No Man’s Sky can generate massive, unique worlds.

Procedural generation and machine learning rely on a mathematical algorithm, explains Guzdial, but the difference is “whether what that algorithm is doing is based on looking at examples, or whether what that algorithm is doing is hand authored, written by a person.”
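Guzdial's distinction can be made concrete with a toy example. The sketch below is purely illustrative—it is not taken from any shipped game—but it shows what "hand authored" means in practice: a tiny cave map carved by a "drunkard's walk", where every rule was written by a person and no training data is involved. That is what places it in the procedural-generation circle rather than the machine-learning one.

```python
import random

def generate_cave(width, height, seed, floor_ratio=0.45):
    """Carve a tiny 2D cave map with a hand-authored 'drunkard's walk'.

    Every rule here (start in the centre, step randomly, stop once
    enough floor is carved) was written by a person. No examples, no
    training data -- procedural generation, not machine learning.
    """
    rng = random.Random(seed)            # same seed -> same world, as in Minecraft
    grid = [["#"] * width for _ in range(height)]
    x, y = width // 2, height // 2
    target = int(width * height * floor_ratio)
    carved = 0
    while carved < target:
        if grid[y][x] == "#":
            grid[y][x] = "."             # carve a floor tile
            carved += 1
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 1), width - 2)   # stay inside the border walls
        y = min(max(y + dy, 1), height - 2)
    return ["".join(row) for row in grid]

level = generate_cave(20, 10, seed=42)
```

Because the only randomness flows through the seed, the same seed always reproduces the same map—the trick that lets games like Minecraft and No Man's Sky regenerate identical worlds on demand rather than storing them.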

The big leap forward

For astonished social media users, AI art might seem to have emerged from nowhere. That’s far from the case, says Guzdial. “Things like this have been around for decades,” he says. “At this point, it's just the quality of the output and the fact that it is so public facing that's newer.”

D’Amico adds that the building blocks of AI art were gradually laid over many years by various labs and companies. If you looked at the state of those blocks just a couple of years ago, he says, “you'd have these small, 256 x 256 pixel images that looked kind of janky [and] definitely weren't usable.”

What changed was the development of large language models, such as GPT-3 from OpenAI, which can “write almost like a human can,” he says. GPT-3, released in 2020, was trained using unimaginably vast amounts of data scraped from the internet, encompassing hundreds of billions of words. 

In January 2021, OpenAI released DALL-E, which uses a version of GPT-3 to generate images rather than text. “That's what really lit the fire,” says D’Amico.

DALL-E and its accompanying program CLIP were trained using millions of images, along with text describing those images. Put simply, DALL-E creates an image from a prompt, then CLIP tells it whether it’s any good.

“Essentially, you’ve got two computers sitting next to each other,” explains D’Amico. “One of them is able to paint, and the other one is able to see and talk, and describe what it sees. And so the one that's painting is going, ‘Does this look more like the thing that someone was asking for?’, and CLIP looks at that and goes, ‘It looks a little bit more [like it], you're on the right track’. It's kind of like, ‘Are you hot or cold’?”
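D’Amico’s “hot or cold” loop can be sketched in miniature. The toy below is not real DALL-E or CLIP code: the “painter” proposes random tweaks to a string, and a stand-in scoring function—real CLIP compares image and text embeddings, whereas here we just count matching characters—tells it whether it is getting warmer. Every name in it is invented for illustration.

```python
import random

TARGET = "a boat at sunset"          # stands in for the text prompt

def clip_score(candidate):
    """Toy stand-in for CLIP: how well does the 'image' match the
    prompt? Real CLIP embeds image and text and compares the vectors;
    here we simply count characters that already match."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def paint_variation(rng, current):
    """Toy stand-in for the generator: propose a small random tweak."""
    chars = list(current)
    i = rng.randrange(len(chars))
    chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz ")
    return "".join(chars)

def generate(seed, steps=5000):
    rng = random.Random(seed)
    # Start from pure noise, like a diffusion model does.
    current = "".join(rng.choice("abcdefghijklmnopqrstuvwxyz ")
                      for _ in TARGET)
    best = clip_score(current)
    for _ in range(steps):
        candidate = paint_variation(rng, current)
        score = clip_score(candidate)
        if score >= best:             # "warmer" -- keep going this way
            current, best = candidate, score
    return current

result = generate(seed=0)
```

After a few thousand hot-or-cold rounds the noise drifts toward something that scores well against the prompt—the same feedback structure, if nothing like the same scale, as the painter-and-critic pairing D’Amico describes.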

A vastly improved follow-up, DALL-E 2, was launched in beta by OpenAI in July 2022. It soon had competition from the eponymous program released by research lab Midjourney in the same month, and later from Stability AI, which released Stable Diffusion in August.

The three AIs have slightly different strengths and weaknesses, but all are capable of producing highly nuanced, incredibly detailed art from a few descriptive terms.

How does AI art work?

D’Amico likens these AI art generators to the character creation screen in the famous role-playing game Skyrim, which features sliders for determining details like how far apart a character’s eyes are, or how big their nose is. The AI program has billions of these sliders, controlling every conceivable aspect of the image.

“What has happened is that it has started with all those sliders being random and meaning nothing,” says D’Amico. “Then every image that it saw—when it saw a boat, for example—it learned to understand: Okay, every time I see a boat, there's these things that seem to be associated with it.”

All three of the main AI art programs rely on users entering a detailed description of the picture they want to generate, although Stable Diffusion also lets you upload a reference image. Even with the same prompt, the program will return a slightly different result every time; getting the image you want might involve refining a prompt again and again. 

Typically, your first few tries are free, but after that you have to pay. Guzdial reckons it’s like gambling: “You're paying for tokens to roll over and over again, to see if you can generate something good and hit the jackpot.”

Implications for artists

D’Amico has been experimenting with AI art for The Zone, a tabletop RPG he has been working on. He has already created a digital version of the game, but for the upcoming physical release, he initially used AI to generate artwork for the cards. 

“I generated something like 15,000 images,” he says. “You would quickly realise that half of them would be good [and] half of them would be bad. They'd be inconsistent with each other, and so you'd have to figure out ways to get the art direction to be consistent.” This involved endless prompt-wrangling, trying to type in just the right thing to create images of different subjects that shared some artistic similarity. 

In the end, D’Amico had second thoughts about using AI art for the cards, for reasons we’ll get to shortly. Nevertheless, he was impressed with the AI tools. “I think it is possible to create very powerful art,” he concludes.


Difficulty generating images with a consistent art style is the major reason AI programs are unlikely to be used to generate all artwork for video games in the near future. But they are likely to have a major impact on another area: the creation of concept art. “Concept art is not something that you ship, it's something that helps you think, and it helps groups of people get aligned,” says D’Amico. 

As a way to generate ideas, AI is an unparalleled tool. “It's like a search engine, but instead of just giving you an image, it gives you the thing that you're thinking about. And so I think it can accelerate the artistic process, and it can inspire—and I think that's great.”

The implications for the games industry are huge, says Guzdial, who thinks AI art programs could replace many entry-level artist roles. “[In] the cases where you have an art director or producer, and then many artists underneath them, you might end up getting rid of junior artists there—because a lot of what the junior artists would do was just give a whole bunch of stuff to the art director to look through, or for the producer to look through.” 

In short, AI art could be a boon to small indie developers, allowing them to generate ideas quickly and on a large scale ... but it could also mean that many artist roles at larger studios are wiped out overnight.

Ethics and copyright

There remain all sorts of problems with AI art, not least the ethical implications, which prompted D’Amico to rethink his decision to use AI-generated art in The Zone. “I'm currently leaning on not using the output in the commercial version,” he says. “It's been a bit of an emotional struggle.”

“I started looking more deeply into the ethics of it,” he continues. All these models are trained on millions of images created by artists, likely unaware that their art has been used in this way. An August 2022 analysis of 12 million images, from the 2.3 billion that compose Stable Diffusion’s dataset, found that a large portion were scraped from Pinterest, stock-image websites, and blogs. 

“Pinterest hasn’t agreed to this,” points out D’Amico. “The artists whose images are in those pictures haven't agreed to it. And so then you enter this fascinating discussion of should you not use it at all, because it is trained in this way.”

He is quick to point out that there is nothing illegal about what AI companies have been doing, and data scraping is used widely in many contexts. “But there's really no legal precedent for … using AI to have people train their own replacements,” he points out. “And even if it's not technically illegal, it's pretty much a breach of a social contract, right? All the artists who are putting their work out into the public commons, there was no expectation when they did that, that it would be used in this way.”

Then there’s the question of copyright: Who owns an image generated by an AI? The AI company? The user who wrote the prompts to generate the image? Or the potentially millions of artists whose work was used to train the AI?

It’s a grey area, and the law has yet to catch up. “My take on it is it's gonna be really interesting when we finally get some court decisions,” says Guzdial. As it currently stands, AI art is something of a nightmare on both ethical and copyright grounds.

It’s not just art

We’ve seen that AI-generated art is likely to have a significant effect on both small and large video game studios. Over time, it will almost certainly become regarded as an indispensable tool. But what other areas of game development will be affected by advances in this field? One possibility is that entire games could be designed by an AI. Matthew Guzdial has already shown this can be done.

Back in 2018, Guzdial and Mark Riedl at the Georgia Institute of Technology’s School of Interactive Computing trained an AI with videos of people playing Mega Man, Mario and Kirby games. From that, the AI could extract concepts of level design and the rules and mechanics of the games. It eventually came up with a game of its own called Killer Bounce. The game looks basic, but has an interesting and unique mechanic that’s somewhat like a combination of Mario and Breakout, whereby blocks disappear when the main character leaps on them.

So could games be entirely written by AIs in the near future? Guzdial thinks it’s unlikely, since getting an AI to write workable game code is harder than getting it to generate pictures. “There might be a whole bunch of individual mechanics for one dynamic: A whole bunch of different lines of code … to represent Mario jumping,” he says. “The jump in Mario might look similar to … the jump in Kirby or Mega Man, but they're actually very different in terms of different phases to them, different accelerations to them, different points when the acceleration tapers off, different amounts of control…” 

Getting an AI to nail the nuances of that one mechanic, let alone a whole game, would be a Herculean task. In addition, applying machine learning to game code just wouldn’t be feasible, since there isn’t enough publicly available material to train the AI with. “Game companies are not in the habit of putting the source code of their games online,” notes Guzdial.

That said, there have been breakthroughs in teaching an AI to code. Raph D’Amico points to GitHub Copilot, an AI program that can automatically complete lines of code as you begin typing on the GitHub software development platform. It’s based on a modified version of OpenAI’s GPT-3 large language model, and has been trained on millions of lines of open-source code available through GitHub. “It's amazing,” says D’Amico. “It works astoundingly well.” 

However, it has also raised a certain amount of controversy, with familiar concerns about whether using code for machine learning violates certain licences, and the ongoing question of who owns the copyright to code produced by an AI.

Make me a model

What about other areas of game design? An AI can make 2D art, so what about 3D models? Could you ask an AI to create, say, a 3D chair that you could simply drop into a game level?

Technically yes, says Guzdial, pointing out that there have been several research papers proposing how it could be done. But as with game code, he doesn't believe there is enough available material to train an AI effectively. Programs like DALL-E 2 can produce incredibly detailed art because they were trained on billions of images, but 3D models aren’t available in anything like these numbers.

Guzdial reckons that a useful AI tool for generating 3D models would probably have to be a "classical AI" (i.e., created without using machine-learning methods), and would take enormous effort to develop and refine. 

Then again, towards the end of 2022, a slew of AI tools for generating 3D models were released, including Point-E from OpenAI and DreamFusion from Google. The output of these tools is currently fairly basic compared with the impressive output of 2D art generators, but this could be an area to watch.

Tracking bugs

One area of game development that is likely to be affected by AI in the future is quality assurance (QA) testing—in other words, playing the game to check for bugs. Matt Booty, head of Xbox Game Studios, said he dreams of using an AI to quickly test a game after a new feature is added. There has already been development in this area, with various firms, like Electronic Arts (EA), Ubisoft and Google, looking at ways to train an AI to do QA testing. 

“EA Sports is using this, for example … to test if a golf course has any exploits,” says Guzdial. “Like can you get the hole in one more easily than anticipated or more consistently than anticipated?”

But development in this area is not easy. “It's really hard to make an AI agent play a game like a human.” He points to a video of an AI developed by Ubisoft that leaps around erratically when directed to reach checkpoints in a 3D level.


Parts of QA testing may be automated in future, but ultimately video games must be play-tested by humans at some point, since they are the only ones who can decide whether a game is fun. Still, AI programs could be used to quickly highlight major bugs without human involvement.
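The "flag major bugs without human involvement" idea can be sketched with a toy example—purely illustrative, and nothing like any studio's actual QA tooling. An automated check can verify that a level's goal is reachable at all: the kind of stuck-level bug an agent can catch long before a human playtester sits down.

```python
from collections import deque

def reachable(level, start, goal):
    """Breadth-first search over a grid level: can the player reach
    the goal at all? A goal that is walled off is exactly the kind of
    shippable bug an automated agent can flag on every build."""
    rows, cols = len(level), len(level[0])
    seen, queue = {start}, deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) == goal:
            return True
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if (0 <= nx < cols and 0 <= ny < rows
                    and level[ny][nx] != "#" and (nx, ny) not in seen):
                seen.add((nx, ny))
                queue.append((nx, ny))
    return False

ok_level  = ["S..",
             "...",
             "..G"]
bug_level = ["S.#",
             "###",
             "#.G"]   # goal walled off: the check should fail this build
```

A check like this says nothing about whether the level is fun—that still takes a human—but it can run automatically after every change, which is precisely the fast smoke-testing role described above.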

The move to automate QA testing is likely to be met with dismay by many in the industry. As with junior artists, any shift towards AI methods could eliminate numerous entry-level roles.

The future of AI 

These are just a few potential areas of game development that AI could affect. It is already being used to do things like upscale textures and predict customer churn in free-to-play games.

“In the short term, people who play games are not going to see any real impact,” says Guzdial. Behind the scenes, AI will increasingly be used to generate content and streamline processes, potentially speeding development and possibly eliminating some job roles as time wears on. Big gaming firms like EA and Ubisoft possess teams (known as SEED and La Forge, respectively) dedicated to researching AI and other cutting-edge processes.

This is part of the rise of AI programs across the board, as people “try to push them into basically every domain they can,” says Guzdial.

In many ways, it’s similar to the rise of the internet. What began with basic, ugly web pages and novelty dancing hamsters rapidly evolved, extending tendrils into every facet of everyday life. What once was novelty quickly became necessity, and soon you couldn’t imagine existing in a world without it. Welcome to the AI future.


Lewis Packwood

Gaming writer

Lewis Packwood is a freelance video-game writer with bylines in The Guardian, Eurogamer, PC Gamer, EDGE, Wireframe, Retro Gamer, Kotaku, and more. He lives in Darlington in the UK, and can be contacted at lewispackwood.com.

