
Notes from the Dawn of Time #28:

Problems with Planning

by Richard Bartle
September 25, 2002

Last time, I described the basic approach used in simple AI planning systems: backward chaining of goals. A mobile figured out that to get a loaf of bread from a stall to a cart, it had to pick the bread up, walk to the cart, and put it down.

It’s simple, but it’s not that simple...

For a planning engine as I have described it, there are four main difficulties:

  1. it makes a linearity assumption (i.e. that actions’ preconditions can be solved independently);
  2. it assumes the world is static (i.e. it won’t be changed by others while the plan is being executed);
  3. it assumes no multiple, conflicting goals;
  4. it can be inefficient.

For the small plans we’re going to be using for generating quests, these difficulties are all tractable.

To see where the linearity assumption fails, consider again the example from last time. The order in which the conjunctive goals of carrying bread and being at the cart are solved is important. It’s possible for the mobile to decide it needs to go to the cart before it realizes it has to pick up the bread. If it did that, it would have to walk back to the stall to pick up the bread, leading to a sub-optimal plan:

walk from stall to cart
walk from cart to stall
pick up bread at stall
walk from stall to cart
put down bread at cart
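
This detour falls out of even a toy backward chainer. The sketch below is mine, not code from any system mentioned here: `achieve()` finds an action whose effects include the goal, solves its preconditions left to right against a simulated state, and makes one repair pass for preconditions that a later sub-plan clobbered. Put “I am at the cart” before “I am carrying the bread” and you get the five-step detour above; reverse them and you get the intended three-step plan.

```python
def achieve(goal, state, plan, actions, depth=0):
    """Backward-chain: pick an action whose effects include the goal,
    solve its preconditions left to right, then apply it to the state."""
    if goal in state:
        return True
    if depth > 10:                        # crude guard against runaway chains
        return False
    for name, (pre, add, dele) in actions.items():
        if goal not in add:
            continue
        if all(achieve(p, state, plan, actions, depth + 1) for p in pre):
            # A later sub-plan may have clobbered an earlier precondition;
            # one repair pass re-achieves it -- and produces the detour.
            if all(achieve(p, state, plan, actions, depth + 1) for p in pre):
                for d in dele:
                    state.discard(d)
                state.update(add)
                plan.append(name)
                return True
    return False

def make_actions(put_down_pre):           # precondition order is the variable
    return {
        "walk stall->cart": (["at stall"], ["at cart"], ["at stall"]),
        "walk cart->stall": (["at cart"], ["at stall"], ["at cart"]),
        "pick up bread":    (["at stall", "bread at stall"],
                             ["carrying bread"], ["bread at stall"]),
        "put down bread":   (put_down_pre, ["bread at cart"],
                             ["carrying bread"]),
    }

def plan_for(put_down_pre):
    state, plan = {"at stall", "bread at stall"}, []
    achieve("bread at cart", state, plan, make_actions(put_down_pre))
    return plan

print(plan_for(["at cart", "carrying bread"]))   # 5 steps: the detour
print(plan_for(["carrying bread", "at cart"]))   # 3 steps: the intended plan
```

(Note that this sketch doesn’t undo state changes when a candidate action fails; a real planner would have to.)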

This issue can be addressed using a number of techniques, for example goal re-ordering, precondition protection or plan post-processing. I don’t envisage that in practice it would happen all that often in a quest generation system, though, because the goals and actions are so wide and varied. Besides, people make this kind of mistake all the time anyway. As far as handling the linearity assumption is concerned, therefore, a reasonable solution is probably not to worry too much about it..!

Ah, yes, I did say “I don’t envisage” back there. I guess I should fess up and admit that I haven’t ever actually implemented any of this as a quest-generation system. I’ve written AI planning systems before, and I’ve integrated them into a MUD, but I haven’t ever used one for quest generation. What you’re reading is therefore speculation – informed speculation, but nevertheless speculation. Your actual mileage may vary.

Interference and conflict

If the world changes during execution, then when the time comes for a mobile to perform some action it might discover that it can’t. Maybe someone else locked a door that used to be unlocked, or perhaps a rock fall blocked off a passageway. What happens in this event is that the planner re-plans: it creates another plan to get the mobile to a point where it can pick up its original plan again. Industrial-strength AI planning systems would have contingency plans already in place, but we don’t have to work under the same kind of robustness constraints. All we want are quests, and re-planning won’t prevent our getting them.
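
The execute-and-replan loop can be sketched in a few lines (again, a hypothetical illustration: the door example, `run_plan` and the stand-in `plan_fn` are all invented). Before each step the mobile re-checks that step’s preconditions against the live world; if another player has invalidated them, it plans again from wherever it now is instead of abandoning the quest.

```python
def run_plan(goal, world, actions, plan, plan_fn, max_replans=3):
    while plan:
        step, plan = plan[0], plan[1:]
        pre, add, dele = actions[step]
        if not all(p in world for p in pre):      # someone changed the world
            if max_replans == 0:
                return False                      # too unstable: give up
            max_replans -= 1
            plan = plan_fn(goal, world)           # re-plan from current state
            continue
        world.difference_update(dele)             # perform the action
        world.update(add)
    return goal in world

# Stand-in world: a locked door between the mobile and the treasury.
ACTIONS = {
    "unlock door":  ({"have key"},      {"door unlocked"}, set()),
    "open door":    ({"door unlocked"}, {"door open"},     set()),
    "walk through": ({"door open"},     {"in treasury"},   set()),
}

def plan_fn(goal, world):
    # Stand-in planner: unlock first only if the door is locked.
    steps = ["open door", "walk through"]
    return steps if "door unlocked" in world else ["unlock door"] + steps

stale = ["open door", "walk through"]   # drawn up while the door was unlocked
world = {"have key"}                    # by execution time it has been locked
print(run_plan("in treasury", world, ACTIONS, stale, plan_fn))  # → True
```

The stale plan fails at “open door”, the mobile re-plans from the current state, unlocks the door first, and still reaches the treasury.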

Multiple goals are principally an issue because they can conflict. I want to impress the high priest and I want to impress the prince. Do I wear the blue gown to show my piety, or the red gown to show my honour? For a quest generation system, again, this isn’t too bad: we can have the mobile’s emotional system decide which goal is more important (or simply pick one at random).

Note that some goals are “background” ones that exist to preserve the status quo. For these, the emotional system can be called upon for conflict resolution. “I’m so thirsty, maybe I should drink this seawater? Whoa! That will drive me nuts! Is my desire not to be thirsty greater than my desire not to be nuts?”
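
Resolving such a conflict with the emotional system amounts to little more than comparing weights. A minimal sketch (the drives and numbers are invented for illustration, not taken from any real system):

```python
def resolve(conflicts, drives):
    """Pick whichever conflicting goal the strongest drive backs."""
    return max(conflicts, key=lambda goal: drives.get(goal, 0.0))

# Hypothetical emotional weights for the gown dilemma above.
drives = {"impress the high priest": 0.7, "impress the prince": 0.4}
gown = {"impress the high priest": "blue gown (piety)",
        "impress the prince":      "red gown (honour)"}

choice = resolve(["impress the high priest", "impress the prince"], drives)
print(gown[choice])  # → blue gown (piety)
```

Randomly perturbing the weights before comparing would give the “pick one at random” behaviour instead.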

Inefficiency

Suppose the mobile from last time knew a fourth action:

Action: teleport X from Y to Z
Preconditions: I am a mage
X is at Y
Effects: not (X is at Y)
X is at Z

The mobile might want to try teleport bread from stall to cart as its first (and only) step in the plan. Unfortunately, it knows no actions that can make I am a mage true, so it would have to give up and select the other action in its consideration set, put down bread at cart, as before. Discarding an action and trying another is called backtracking.
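
Here is a minimal sketch of that backtrack (hypothetical code, and simplified: preconditions are tested against the initial facts only, with no effect simulation, which happens to be safe for this tiny example). The planner considers teleport first, hits the dead end at “I am a mage”, discards the attempt, and backtracks to put down bread:

```python
def achievers(goal, actions):
    """All actions whose effects include the goal."""
    return [n for n, (pre, add) in actions.items() if goal in add]

def solve(goal, facts, actions, depth=4, attempts=None):
    if attempts is None:
        attempts = []
    if goal in facts:
        return []
    if depth == 0:
        return None                    # depth limit: stop runaway searches
    for name in achievers(goal, actions):
        attempts.append(name)          # log every candidate, kept or not
        steps = []
        for p in actions[name][0]:     # try to satisfy each precondition
            sub = solve(p, facts, actions, depth - 1, attempts)
            if sub is None:
                break                  # dead end -- backtrack to next action
            steps += sub
        else:
            return steps + [name]
    return None                        # no achiever worked for this goal

ACTIONS = {
    # teleport is listed first, so the planner considers it before put down
    "teleport bread stall->cart": (["I am a mage", "bread at stall"],
                                   ["bread at cart"]),
    "put down bread at cart":     (["carrying bread", "at cart"],
                                   ["bread at cart"]),
    "pick up bread":              (["at stall", "bread at stall"],
                                   ["carrying bread"]),
    "walk stall->cart":           (["at stall"], ["at cart"]),
}

attempts = []
plan = solve("bread at cart", {"at stall", "bread at stall"}, ACTIONS,
             attempts=attempts)
print(plan)      # the three-step plan, reached after abandoning teleport
print(attempts)  # "teleport bread stall->cart" heads the list
```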

For a single-step backtrack, there isn’t anything to worry about. However, what if the plan was 8 steps deep? And what if the root cause of the failure was a bad selection for the second step? The planner could spend ages wasting its time exhausting all possibilities multiple times over before concluding that it had made a mistake early on.

There are two ways to address this: limit how long a plan is allowed to grow, or provide so many actions that a solution can usually be found no matter what circuitous path is taken.

Alternatively, you can just ignore it. If our system is only going to invoke the planner when someone presses a quest button, that’s not going to happen so often that it’ll cause major slowdowns. A little inefficiency here and there isn’t going to be much more than a drop in the ocean. That said, having a wide variety of actions available to a mobile is a good idea anyway, to add to the variety of quests that can be generated. It makes sense, therefore, to provide lots of actions even if you don’t give a hoot about inefficiencies due to backtracking.

And next time, yes, I promise, I finally will show you how all this tackle can be used for creating novel, self-consistent and meaningful plans.
