
Notes from the Dawn of Time #25:

Why Plan?

by Richard Bartle
August 14, 2002

The mechanism I’ve described so far for mobile AI isn’t actually how most computer game AI works. The trend has been to use the expert system approach, whereby you have ready-written, hard-coded responses for each situation. Every time it’s the AI’s turn, it assesses the current situation and decides what to do next there and then.
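
In outline, that style of AI boils down to a table of condition/response pairs that the mobile re-scans every time it gets to act. Here is a minimal sketch in Python; the rule contents and the mobile/world methods are invented purely for illustration, not taken from any real game:

    # Each rule pairs a condition with a response; both see the mobile and its world.
    RULES = [
        (lambda m, w: w.being_attacked(m),        lambda m, w: m.fight_back()),
        (lambda m, w: m.stamina < m.max_stamina,  lambda m, w: m.rest()),
        (lambda m, w: True,                       lambda m, w: m.wander()),   # default
    ]

    def take_turn(mobile, world):
        # Assess the current situation afresh and act on the first rule that matches.
        for condition, response in RULES:
            if condition(mobile, world):
                response(mobile, world)
                return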

This isn’t actually a bad idea. Indeed, I do it myself in MUD2 for most generic mobile behaviour. If a situation changes rapidly (as it does in a fight, for example), time spent planning ahead is time wasted because by the time you get to execute a planned action your local part of the world has altered too much for it to make sense.

When a situation doesn’t change rapidly, though, planning is a boon. Let’s say someone invited me to give a talk in Vienna. I wouldn’t sit in my chair, figure out everything I needed to do, then stand up and figure it all out again. Once I have my plan, I don’t need to recreate it every time the world changes (although I would have to if the change interfered with my plan, e.g. if I stood up and twisted my knee).

The same applies for mobiles. If your mobile has to go from A to B, you only need to figure out the route once and store it as a plan. Then, when the mobile gets to act, it can move one room and not have to work out the route all over again from its new location – a great saving. Sure, if the door it was expecting to be open is closed, then it would have to re-plan; otherwise, there’s no need. I use planning for my mobiles in MUD2 when they have complex behaviours that need to be undertaken over time. It’s especially useful when they’re working together as a group towards a common aim (repelling invading players, mostly!).
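
MUD2’s actual code isn’t reproduced here, but the idea is easy to sketch in Python: work out the route once with a breadth-first search, store it on the mobile, and only throw it away and re-plan when a step fails. The world’s exits and passable methods below are assumptions made for the example:

    from collections import deque

    def plan_route(world, start, goal):
        # Breadth-first search over rooms; returns the list of rooms to walk through.
        frontier = deque([[start]])
        seen = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path[1:]              # drop the room we're already in
            for nxt in world.exits(path[-1]):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None                          # no route exists

    class Mobile:
        def __init__(self, world, location):
            self.world = world
            self.location = location
            self.route = []                  # the stored plan

        def step_towards(self, goal):
            # Called once per turn: follow the stored plan, re-planning only on failure.
            if not self.route:
                self.route = plan_route(self.world, self.location, goal) or []
            if not self.route:
                return                       # nowhere to go, or already there
            nxt = self.route[0]
            if self.world.passable(self.location, nxt):    # e.g. the door is still open
                self.location = nxt
                self.route.pop(0)
            else:
                self.route = []              # the world changed under us: re-plan next turn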

Situation/response of the “if you’re attacked, draw a weapon, drop everything heavy you have, and if your stamina is less than 50% then drink your healing potion” variety is fine as a specific response to a specific circumstance. For anything outside its narrow range of expertise, though, it’s useless.
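
Written out as code, that sort of rule is just a guarded reaction, and it does nothing at all for any situation its author didn’t anticipate. A rough Python sketch, with the mobile’s fields and helper methods invented for the example:

    HEAVY = 10   # arbitrary weight threshold for "everything heavy"

    def on_attacked(mobile):
        # Hard-coded situation/response: fires in this one circumstance and no other.
        mobile.draw_weapon()
        for item in list(mobile.inventory):
            if item.weight > HEAVY:
                mobile.drop(item)
        if mobile.stamina < 0.5 * mobile.max_stamina:
            mobile.drink("healing potion")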

Planning, on the other hand, is very flexible. The programmer provides mobiles with a set of actions, usually described in terms of preconditions and effects. Preconditions are things which have to be true for the action to take place, and effects are things which are true after the action has taken place. The preconditions for opening a door might be that you have a key and that the door is closed; the effect is that the door is open. Thus, if you want a door to be open you can look through all the actions you know that might open it and select one. If its preconditions are satisfied then you can perform the action, otherwise you have to perform some other action or series of actions to satisfy its preconditions first. This backward chaining of goals allows your mobiles to build up working, complex solutions to problems they might never encounter again.
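
The machinery behind that can be sketched in a few lines of Python: an action is nothing more than its preconditions plus its effects, and the planner works backwards from the goal, recursively planning for any precondition that doesn’t yet hold. The door and key actions below are the example from above written out as data; everything else is illustrative rather than how any particular game does it:

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        preconditions: frozenset   # facts that must hold before the action
        effects: frozenset         # facts that hold afterwards

    ACTIONS = [
        Action("fetch key", frozenset(),                            frozenset({"have key"})),
        Action("open door", frozenset({"have key", "door closed"}), frozenset({"door open"})),
    ]

    def plan_for(goal, state, actions, depth=5):
        # Backward chaining: find a sequence of action names that makes `goal` true.
        if goal in state:
            return []                            # already true, nothing to do
        if depth == 0:
            return None                          # give up rather than chain forever
        for action in actions:
            if goal not in action.effects:
                continue
            plan, ok = [], True
            for pre in action.preconditions:
                sub = plan_for(pre, state, actions, depth - 1)
                if sub is None:
                    ok = False
                    break
                plan.extend(sub)
            if ok:
                return plan + [action.name]
        return None                              # no known action achieves the goal

    # A mobile that wants the door open, starting with nothing but a closed door:
    print(plan_for("door open", {"door closed"}, ACTIONS))   # ['fetch key', 'open door']

Note that this toy version never updates the state as it chains, and only the depth cap stops it looping; a real planner has to deal with both.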

I’ve made it sound easier than it actually is, by the way, as we shall see in later articles. Most of the time, though, this is pretty well how things go.

Big deal...

We now have two reasons for wanting to control mobiles using a planning system rather than hard-coding their actions: it’s usually more efficient; it can cover a wider range of activities, including ones that the programmer has not predicted.

Both of these are fair enough, but neither is exactly compelling. Efficiency gains are welcome, but it’s not as if planning is going to happen often in the great scheme of things, and its most frequent use (movement planning) can itself be hard-coded fairly easily anyway. Likewise, it doesn’t really make that much of a difference if a pig can figure out how to assassinate a prince when it spends its entire life penned in its sty with no means of escape.

So what does the kind of AI system I’ve proposed give you that makes it worth investing expensive programmer time implementing?

It makes your mobiles look intelligent.

Imagine you engage a human-presenting mobile in conversation. You ask it questions, it replies (passively). Easy.

Now imagine it engages you in conversation. It asks the questions, you provide the answers. It wants to know something, and based on your replies it changes its questions in order to obtain the answers it seeks.

You can implement a passive mobile using a situation/response approach, but it all goes hideously wrong when the mobile has its own agenda. Simple response triggers are fine when you can rely on the intelligence of a player to guide a conversation, but if the mobile is doing the talking then it will rapidly seem truly bizarre. Either the mobile will harp on about the same things, seemingly ignoring your answers, or it’ll flit like a butterfly from concept to concept at the slightest provocation. What’s needed is an approach that allows the mobile to concentrate on getting an answer or to switch tack as the needs of the conversation dictate. This is what a planning-oriented approach delivers.

Here is a mocked-up conversation to illustrate the point:

Mobile: Have you seen the dagger of Rha?

Player: Yesterday I met a priest.

Mobile (butterfly mind): Tell me more about this priest.

Mobile (paranoid mind): Answer the question! Have you seen the dagger of Rha?

Mobile (planning mind): Are you saying this priest stole the dagger?

Interesting though this is, natural language conversation with mobiles is a topic in its own right and therefore I shan’t be pursuing the subject further right now. Maybe in a later series of articles...

Instead, next time I’ll reveal what this planning-oriented approach really buys you that makes it very exciting indeed.
