
AI Theory

I've been working on the AI for the game the last couple of days.

[Don't worry, every thing is OK...]
In my smaller games the AI was pretty simple. The enemies had a single game mechanic: try to walk into the player and kill them. So basically they had a single type of behavior.

[I'll eat your brains!]
That's fine for simple enemies like zombies, but what about more complex characters?
Well, I've written a couple of different behaviors, we could call them:
  1. roaming
  2. seeking
  3. fleeing
  4. fighting
  5. following
  6. waiting
[while: guarding]

The behaviors are encompassed in a finite state machine. Each state can transition into others, depending on the behavior. So when a waiting agent sees an enemy, they become alerted and try to attack: they transition to the seeking state.
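The shape of that state machine can be sketched in a few lines. This is a minimal illustration of the idea, not the project's actual code; all class and method names here are my own stand-ins.

```python
class State:
    """Base class for FSM states. Each state inspects the agent and
    returns the next state class, or None to stay where it is."""

    def __init__(self, agent):
        self.agent = agent

    def update(self):
        return None


class Waiting(State):
    def update(self):
        # A waiting agent that spots an enemy becomes alerted
        # and transitions to the seeking state.
        if self.agent.can_see_enemy():
            return Seeking
        return None


class Seeking(State):
    def update(self):
        # Losing sight of the enemy sends the agent back to waiting.
        if not self.agent.can_see_enemy():
            return Waiting
        self.agent.move_toward_enemy()
        return None


class Agent:
    def __init__(self):
        self.enemy_visible = False
        self.state = Waiting(self)

    def can_see_enemy(self):
        return self.enemy_visible

    def move_toward_enemy(self):
        pass  # pathfinding would go here

    def update(self):
        next_state = self.state.update()
        if next_state is not None:
            self.state = next_state(self)
```

The key property is that each state decides its own transitions, so the agent's update loop never needs to know which behaviors exist.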

After testing I'm confident I've got the bugs out of each of them.
But there's a problem. How to choose which behavior to use at any one time? How to know which state to enter next?

[trolls ahoy!]

If an agent sees the enemy, obviously they will do something different depending on whether they are an archer, a wizard, a dragon or a troll or whatever. When they lose sight of the enemy, which state should they return to? Clearly no single set of 6 states can fit all those scenarios.

Right away I'm back to the previous situation of wanting a state machine to manage my state machine. But again, I don't want to go down that road. Once I start looking for the complex solution, I'll get bogged down and never finish.

One simple possibility would be to have character attributes like self.default_state or self.alerted_state, but then behavior becomes split between the states and the character, so adding new characters means partly writing new behavior. I want all behavior to be managed from within the state machine. I don't want behavior entangled with specific agent types; it should be modular and easy to assign, even during gameplay.

So I'm going to use inheritance to solve the problem.

[Do I need to draw you a picture?]
Firstly, some of the above states can be merged. Seeking, Fleeing and Roaming are all the same behavior; only the tile-choice rule differs: one finds the closest tile to the target, one finds the tile furthest from the target, and one chooses a random unvisited tile. So I now have a single state, Navigation.
[Navigation; You are here!]
Then I go back and create new states like Roaming(Navigation).
This inherits all the behavior from Navigation but uses a different rule for deciding the next square. Fleeing(Navigation) and Seeking(Navigation) do likewise. *
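A sketch of how that merge might look: one Navigation base state, with each subclass overriding only the tile-choice rule. The method names and grid representation are assumptions for illustration, not taken from the project.

```python
import random


class Navigation:
    """Base navigation state; subclasses only change how the
    next tile is chosen."""

    def __init__(self, agent):
        self.agent = agent
        self.visited = set()

    def choose_next_tile(self, tiles, target):
        raise NotImplementedError

    def distance(self, tile, target):
        # Manhattan distance on a grid of (x, y) tiles.
        return abs(tile[0] - target[0]) + abs(tile[1] - target[1])


class Seeking(Navigation):
    def choose_next_tile(self, tiles, target):
        # Closest tile to the target.
        return min(tiles, key=lambda t: self.distance(t, target))


class Fleeing(Navigation):
    def choose_next_tile(self, tiles, target):
        # Furthest tile from the target.
        return max(tiles, key=lambda t: self.distance(t, target))


class Roaming(Navigation):
    def choose_next_tile(self, tiles, target):
        # Random unvisited tile; fall back to any tile if all visited.
        unvisited = [t for t in tiles if t not in self.visited]
        choice = random.choice(unvisited or tiles)
        self.visited.add(choice)
        return choice
```

Everything else (pathfinding, transitions, exit checks) lives once in the base class, so each new movement style is only a few lines.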

The next step is to create specific versions of those states for a particular AI type. For example, ArcherRoaming(Roaming), with some tweaks to the __init__ function to tailor how it transitions to other states. There's also a special custom exit check, to see if there's some special reason not to be roaming around. This reduces it down to just a few lines of code, rather than rewriting a specific behavior for archers.

After that I have to plan how those states will interact and which states are needed for each AI archetype. I'm using flowcharts for that:
[Inside the mind of a dungeon guard]

A little more complex now than:
try to walk into the player and kill them.
Of course it's not difficult to go further: if I have archers who patrol instead of guarding, I can reuse most of the archer states, swapping in ArcherRoaming instead of ArcherWaiting.

Where this gets really useful is being able to have switchable behaviors. By having a custom exit check, we can give some AI archetypes the ability to switch to a different archetype. If I have a party of heroes and want to give them orders, I can do so through dialog choices or hotkeys. Each order just asks the agent to switch to a different behavior archetype.
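One way to sketch that switching mechanism: treat an archetype as a table mapping situations to states, and let the exit check honor a pending order by swapping the whole table. The archetype names and structure here are hypothetical, chosen to match the examples in this post.

```python
# An "archetype" is a mapping from situations to state names.
# A dialog choice or hotkey queues an order; the exit check applies it.
ARCHETYPES = {
    "guard":  {"idle": "Waiting",   "enemy_sighted": "Seeking"},
    "scout":  {"idle": "Roaming",   "enemy_sighted": "Fleeing"},
    "escort": {"idle": "Following", "enemy_sighted": "Fighting"},
}


class Agent:
    def __init__(self, archetype="guard"):
        self.archetype = ARCHETYPES[archetype]
        self.requested = None  # set by a dialog choice or hotkey

    def order(self, archetype):
        # The player asks the agent to switch behavior archetypes.
        self.requested = archetype

    def exit_check(self):
        # Honor a pending order by swapping the behavior table,
        # then resume from that archetype's idle state.
        if self.requested is not None:
            self.archetype = ARCHETYPES[self.requested]
            self.requested = None
            return self.archetype["idle"]
        return None
```

Because the order only swaps a table, behaviors stay modular and can be reassigned during gameplay, exactly the property argued for earlier.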

Become an archer, cast magic, follow me!
Scout ahead, serve drinks, sacrifice yourself for the good of the party!

* At this point I also diverged into two different kinds of navigation: single-tile navigation for close in to the target, which avoids obstacles, and tile-chunk navigation, which gives smoother movement (because it uses a shorter route to the target) but is worse at avoiding smaller obstacles. That gives me Attacking(TileNavigation) and Hunting(ChunkNavigation), two different behaviors for use at different ranges from the target. However, I don't want to add to the confusion here.


