
The Art of “Feigning” Intelligence – AI and Video Game NPCs

“Being impaled every morning is not a job!” yells an angry Spartan NPC from Assassin’s Creed Odyssey in this parody video. Life is not cool as an NPC – characters created to be watched, avoided, shoved or sent flying with a kick. The gamer always comes out on top and everyone else is destined to fail. But with recent developments in AI technology, the behavior of our virtual opponents is getting more and more cunning. That doesn’t necessarily make them harder to defeat, but it does make us feel smarter when we defeat them – and here’s how.

Let’s first talk about the history of NPC behavior. In the 1990s, the very first types of enemy AI were based on a model called the Finite State Machine (FSM). It scripts an NPC’s reactions based on the state the NPC is currently in. The NPC is defined by its path of movement and its reaction to a change of state triggered by the player. For example, coming within the line of sight of an enemy NPC will change its state from ‘patrol’ to ‘attack’.
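The patrol/attack example above can be sketched as a tiny state machine. This is a minimal illustration, not any particular game’s code; the class and state names are made up.

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    ATTACK = auto()

class GuardFSM:
    """Minimal finite state machine: the guard patrols until the player
    enters its line of sight, then switches to attacking."""

    def __init__(self):
        self.state = State.PATROL

    def update(self, player_in_sight: bool) -> State:
        if self.state is State.PATROL and player_in_sight:
            self.state = State.ATTACK   # player spotted: engage
        elif self.state is State.ATTACK and not player_in_sight:
            self.state = State.PATROL   # player lost: resume patrol
        return self.state

guard = GuardFSM()
assert guard.update(player_in_sight=False) is State.PATROL
assert guard.update(player_in_sight=True) is State.ATTACK
assert guard.update(player_in_sight=False) is State.PATROL
```

The whole behavior fits in one `update` method, which is exactly why FSMs dominated early enemy AI – and why they become unwieldy once the number of states and transitions grows.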


Today, a common method of defining NPC behavior is using variants of behavior trees, first introduced in the Halo series in 2004. Behavior trees are similar to the FSM model, but the nodes are arranged in a ‘tree’ shape and don’t ‘loop’ back the way an FSM’s states do, making them less flexible. There is also utility-based AI, where the NPC assigns points to different options based on pre-defined criteria and chooses the option best suited to the situation. And in games of pure strategy like chess, an algorithm called Monte Carlo Tree Search (MCTS) is used. MCTS lets the opponent simulate many possible sequences of the player’s next moves and its own before proposing the move most likely to succeed.
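A behavior tree can be illustrated with just two composite node types: a Sequence fails on the first child that fails, and a Selector succeeds on the first child that succeeds. The sketch below is a bare-bones illustration of the idea – the node and action names are invented, not taken from any shipped engine.

```python
SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Ticks children in order; fails as soon as one child fails."""
    def __init__(self, *children): self.children = children
    def tick(self, ctx):
        for child in self.children:
            if child.tick(ctx) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Ticks children in order; succeeds as soon as one child succeeds."""
    def __init__(self, *children): self.children = children
    def tick(self, ctx):
        for child in self.children:
            if child.tick(ctx) == SUCCESS:
                return SUCCESS
        return FAILURE

class Condition:
    def __init__(self, key): self.key = key
    def tick(self, ctx): return SUCCESS if ctx.get(self.key) else FAILURE

class Action:
    def __init__(self, name): self.name = name
    def tick(self, ctx):
        ctx["last_action"] = self.name   # stand-in for real game logic
        return SUCCESS

# "Attack if the player is visible, otherwise patrol."
tree = Selector(
    Sequence(Condition("player_visible"), Action("attack")),
    Action("patrol"),
)

ctx = {"player_visible": True}
tree.tick(ctx)
assert ctx["last_action"] == "attack"

ctx = {"player_visible": False}
tree.tick(ctx)
assert ctx["last_action"] == "patrol"
```

Because each subtree is self-contained, designers can graft new branches (flee, take cover, call reinforcements) onto the tree without rewiring existing transitions – the main practical advantage over an FSM.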

But all forms of AI have their own strengths and weaknesses, and developers can take parts of each model and combine them. For example, in The Division 2, the FSM model is used to determine when NPCs enter an ‘alert’ state, but it is not well suited to defining the action sequences that follow. A behavior tree is used for those action sequences, and utility-based AI helps the NPCs choose their surveillance or firing positions.
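The utility-based part of that mix can be sketched as a weighted scoring function: each candidate position gets a score and the NPC picks the highest. The criteria, weights, and position names below are entirely illustrative assumptions, not The Division 2’s actual tuning.

```python
def score(position: dict, weights: dict) -> float:
    """Weighted sum of criteria: higher is a better position."""
    return sum(weights[k] * position[k] for k in weights)

# Hypothetical criteria, each rated 0.0 to 1.0.
weights = {"cover": 0.5, "sightline": 0.3, "safety": 0.2}

positions = {
    "doorway":  {"cover": 0.2, "sightline": 0.9, "safety": 0.1},
    "sandbags": {"cover": 0.9, "sightline": 0.6, "safety": 0.7},
    "rooftop":  {"cover": 0.4, "sightline": 1.0, "safety": 0.9},
}

best = max(positions, key=lambda name: score(positions[name], weights))
assert best == "sandbags"   # cover dominates with these weights
```

The appeal of utility AI is that designers tune behavior by adjusting weights, not by rewriting branching logic – raising the `sightline` weight here would push the NPC toward the rooftop instead.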


“AI is built to enhance the gaming experience of players, not to discourage the players.”

It would be rather cool if the AI could indeed defeat players sometimes, but there are two primary conditions: it must still be fun, and it must be achievable within technical constraints.

As Harbing Lou wrote for Harvard University, “AI is built to enhance the gaming experience of players, not to discourage the players.” As such, most designers avoid unexpected NPC behavior that could adversely affect a player’s experience rather than exploit the AI’s full potential. Developers could definitely spend time and money making enemies a lot smarter - like making NPCs able to target the player with the lowest HP in the group - but players are likely to think that enemies are cheating.

So what do developers do? You got us – we sort of ‘trick’ players into thinking that AI is truly intelligent by reinforcing the impression of realism. Games like FEAR and Halo are famous for giving players the impression that opponents are capable of strategizing by making the opponents announce the tactic they’re about to execute out loud. For example, enemy NPCs may seem pretty smart to be able to throw a grenade to distract a player, but by declaring that they’re going to do so and giving the player enough time to react, the player feels like the smart one instead. And in The Division 2, while enemies are smart enough to encircle the player, they are stupid enough to announce when they’re going to do it. This would be rather ridiculous in real battle, but in a game, players get the impression that an NPC is intelligent when they see its announced threat actually carried out.

But players are intelligent too, and not everything can be this obvious. There are some subtle NPC behavioral rules that a player most probably won’t notice at first. In the Far Cry series, for example, only a certain number of enemies are allowed to shoot at the player at the same time. In the Batman Arkham series, opponents can’t turn around when the player sneaks behind them for a stealth attack. And in Uncharted, enemies’ first shot always misses when the player emerges from cover. It’s a fine balance between a bit of leniency and too much – the optimal experience is one where a player regards the enemies as plausible threats, but is still able to come up with a strategy to defeat them after observing their behavior.
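The Far Cry rule above – capping how many enemies may fire at once – is often implemented as a token budget. Here is a minimal sketch of that pattern; the class and the cap of two shooters are assumptions for illustration, not Far Cry’s actual code.

```python
class ShootingTokens:
    """Only `max_shooters` NPCs may hold a firing token at once;
    everyone else waits until a token is released."""

    def __init__(self, max_shooters: int = 2):
        self.max_shooters = max_shooters
        self.active = set()

    def request(self, npc_id: str) -> bool:
        """Return True if this NPC is allowed to open fire."""
        if npc_id in self.active:
            return True
        if len(self.active) < self.max_shooters:
            self.active.add(npc_id)
            return True
        return False   # cap reached: hold position, flank, or reload

    def release(self, npc_id: str):
        self.active.discard(npc_id)

tokens = ShootingTokens(max_shooters=2)
assert tokens.request("enemy_a")
assert tokens.request("enemy_b")
assert not tokens.request("enemy_c")   # third enemy must hold fire
tokens.release("enemy_a")
assert tokens.request("enemy_c")       # freed token is reassigned
```

From the player’s side this reads as enemies taking turns and maneuvering intelligently, when it’s really a hard cap keeping incoming fire survivable.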


Game makers strive to do the least possible to create believable behavior. It’s better to modify and combine simple tools than to use an overly complex tool.

It might seem cool to be able to command an AI to do this and that, but game makers are not all-powerful. The complexity of AI algorithms is a huge technical barrier because it’s something few can understand well enough to develop games with. Because it’s so easy to create ‘spaghetti logic’ – meaning to create something so complex that you get entangled in your own logic – game makers strive to do the least possible to create believable behavior. It’s better to modify and combine simple tools than to use an overly complex tool.

There are processing constraints too. Each AI entity must complete a set of tasks every time the AI loop runs. Sometimes simple mathematical calculations are enough, but sometimes big chunks of data must be gathered from different sources in the game. This can take up a lot of processing capability, leaving the machine less capacity for other in-game tasks. And the more AI entities there are, the more difficult this becomes. Game makers must continuously find methods to reduce the fight for machine resources, making sure that AI entities don’t compromise the smoothness of the gaming experience.
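One common way to tame that cost is time-slicing: rather than running every NPC’s expensive update each frame, the engine ticks only a fixed budget of NPCs per frame, round-robin. This is a generic sketch of the pattern under assumed names, not any specific engine’s scheduler.

```python
from collections import deque

class AIScheduler:
    """Round-robin AI updates: at most `budget_per_frame` NPCs run
    their full (expensive) AI update in any single frame."""

    def __init__(self, npcs, budget_per_frame: int = 2):
        self.queue = deque(npcs)
        self.budget = budget_per_frame

    def tick_frame(self):
        updated = []
        for _ in range(min(self.budget, len(self.queue))):
            npc = self.queue.popleft()
            updated.append(npc)      # run the NPC's costly AI update here
            self.queue.append(npc)   # back of the line until its next turn
        return updated

sched = AIScheduler(["a", "b", "c", "d", "e"], budget_per_frame=2)
assert sched.tick_frame() == ["a", "b"]
assert sched.tick_frame() == ["c", "d"]
assert sched.tick_frame() == ["e", "a"]
```

Each NPC reacts a few frames later than it would with per-frame updates, but the per-frame cost stays constant no matter how many entities are alive – a trade players rarely notice.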

The AI landscape is changing as well, with the development of more enhanced learning mechanisms such as machine learning and deep learning. This brings us to the exciting question – can NPCs learn from their mistakes? The answer is yes, but it’s a relatively recent development. Designers have started to create NPCs that learn how to maximize positive rewards and minimize negative ones after a few rounds of success and failure. For example, in Metal Gear Solid V, enemies are able to adapt to a player’s tactics. If you shoot too many enemies in the head, more soldiers will start wearing helmets. If the player attacks too often at night, enemies will start using more flashlights.
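The helmet example can be caricatured as a simple feedback rule: track the player’s headshot ratio and scale the chance that newly spawned soldiers wear helmets. This is an illustrative toy, emphatically not Metal Gear Solid V’s actual system; the base rate, cap, and formula are all invented.

```python
class EnemyAdaptation:
    """Toy adaptation loop: the more headshot kills the player scores,
    the likelier new soldiers are to spawn wearing helmets."""

    def __init__(self):
        self.headshots = 0
        self.kills = 0

    def record_kill(self, headshot: bool):
        self.kills += 1
        if headshot:
            self.headshots += 1

    def helmet_chance(self) -> float:
        """10% base rate, scaling with headshot ratio, capped at 90%."""
        if self.kills == 0:
            return 0.1
        return min(0.9, 0.1 + 0.8 * self.headshots / self.kills)

adapt = EnemyAdaptation()
for _ in range(8):
    adapt.record_kill(headshot=True)    # player favors headshots...
for _ in range(2):
    adapt.record_kill(headshot=False)
assert abs(adapt.helmet_chance() - 0.74) < 1e-9   # ...so helmets spread
```

The point is that even a rule this crude produces the feel of enemies “learning”: the world visibly responds to the player’s favorite tactic and nudges them to vary it.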


While machine learning can improve the intelligence of NPCs, its real value lies in the game development process. For example, it can help to build game levels or manage vehicle traffic in games by learning driving rules and automotive physics. It can also help developers automatically process repetitive data – something which must be done but isn’t the best use of a creative game maker’s time. This frees up developers to focus on the innovations that really matter.

Data analysts also collect data from player behavior and attempt to replicate this “human-like” behavior in NPCs. In The Division, an algorithm was developed from studying the motivations of players. It was able to predict player behavior from the player’s profile and could even adjust gameplay to improve the quality of the player’s experience. In the future, NPCs used in one game can even be used in another game if they’re able to copy human behavior convincingly enough. It’s super cool, and it’s just the beginning of many mindblowing possibilities. And as we like to say, AI is the modern game-maker’s best ally. It'll only get more and more important in the years to come.

This article is adapted and translated from Ubisoft Stories.

