Last time, I described the basic approach used in simple AI planning systems: backward chaining of goals. A mobile figured out that to get a loaf of bread from a stall to a cart, it had to pick the bread up, walk to the cart, and put it down.
It's simple, but it's not that simple...
For a planning engine as I have described it, there are four main difficulties:
- it makes a linearity assumption (i.e. that an action's preconditions can be solved independently);
- it assumes the world is static (i.e. it won't be changed by others while the plan is being executed);
- it assumes no multiple, conflicting goals;
- it can be inefficient.
For the small plans we're going to be using for generating quests, these difficulties are all tractable.
As an example of where the linearity assumption fails, consider the example presented last time. The order in which the conjunctive
goals of carrying bread and being at the cart are solved is important. It's possible for the mobile to decide it needs to go to the cart before
it realizes it has to pick up the bread. If it did that, it would have to walk back to the stall to pick up the bread, leading to a sub-optimal plan:
walk from stall to cart
walk from cart to stall
pick up bread at stall
walk from stall to cart
put down bread at cart
This issue can be addressed using a number of techniques, for example goal re-ordering, precondition protection or plan post-processing. I don't envisage that in practice it would happen all that often in a quest generation system, though, because the goals and actions are so wide and varied. Besides, people make this kind of mistake all the time anyway. As far as handling the linearity assumption is concerned, therefore, a reasonable solution is probably not to worry too much about it..!
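If you did want to try goal re-ordering, it can be as small as a sort with a heuristic. Here's a sketch, assuming an invented representation where each goal is a (kind, argument) tuple: solve goals that involve acquiring things before goals that involve being somewhere, because travel tends to clobber an already-achieved location goal. The priority table and goal names are all made up for illustration.

```python
# Hypothetical goal re-ordering heuristic: achieve "carrying" goals before
# "at" goals, so the mobile doesn't walk to the cart and then have to walk
# back for the bread. Goals are (kind, argument) tuples; all names invented.

def reorder_goals(goals):
    """Sort conjunctive goals so location ("at") goals are solved last."""
    priority = {"carrying": 0, "at": 1}   # lower number = solve earlier
    return sorted(goals, key=lambda g: priority.get(g[0], 0))

goals = [("at", "cart"), ("carrying", "bread")]
print(reorder_goals(goals))   # [('carrying', 'bread'), ('at', 'cart')]
```

Precondition protection achieves the same end more robustly, by forbidding later steps from undoing facts that earlier steps established, but it needs more machinery than a one-line sort.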
Ah, yes, I did say I don't envisage back there. I guess I should fess up and admit that I haven't ever actually implemented any of this as a quest-generation system. I've written AI planning systems before, and I've integrated them into a MUD, but I haven't ever used one for quest generation. What you're reading is therefore speculation: informed speculation, but nevertheless speculation. Your actual mileage may vary.
Interference and conflict
If the world changes during execution, then when a mobile attempts to perform some action it might discover that it can't. Maybe someone else locked a door that used to be unlocked, or perhaps a rock fall blocked off a passageway. What happens in this event is that the planner re-plans: it creates another plan to get it to a point where it can pick up its original plan again. Industrial-strength AI planning systems would have contingency plans already in place, but we don't have to work under the same kind of robustness constraints. All we want are quests, and re-planning won't prevent our getting them.
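The re-planning loop itself is tiny. Here's a sketch, using an invented toy representation (the world is a set of fact strings; an action is a dict with a name, preconditions, facts added and facts deleted) and a hypothetical plan_for() callback standing in for whatever planner built the original plan:

```python
# A sketch of the re-planning loop. The world is a set of fact strings; an
# action is a dict with a name, preconditions, facts added and facts deleted.
# plan_for() stands in for the planner; none of this is a real API.

def run(world, steps, plan_for):
    """Execute steps one at a time; if a step's preconditions no longer
    hold (someone locked the door), throw the rest away and re-plan."""
    done = []
    while steps:
        action = steps.pop(0)
        if not set(action["pre"]) <= world:
            steps = plan_for(world)       # re-plan from where we are now
            continue
        world = (world - set(action["del"])) | set(action["add"])
        done.append(action["name"])
    return world, done

# Demo: the original plan said "open the door", but the door got locked in
# the meantime, so the mobile re-plans to unlock it first.
unlock = {"name": "unlock door", "pre": ["have key"],
          "add": ["door unlocked"], "del": []}
open_door = {"name": "open door", "pre": ["door unlocked"],
             "add": ["door open"], "del": []}

def plan_for(world):                      # stub planner, just for the demo
    plan = [] if "door unlocked" in world else [unlock]
    return plan + [open_door]

print(run({"have key"}, [open_door], plan_for)[1])
# ['unlock door', 'open door']
```

The point is simply that a failed precondition check triggers a fresh call to the planner from the current state, rather than aborting the whole quest.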
Multiple goals are principally an issue because they can conflict. I want to impress the high priest and I want to impress the prince. Do I wear the blue gown to show my piety, or the red gown to show my honour? For a quest generation system, again, this isn't too bad: we can have the mobile's emotional system decide which goal is more important (or simply pick one at random).
Note that some goals are background ones that exist to preserve the status quo. For these, the emotional system can be called upon for conflict resolution. I'm so thirsty, maybe I should drink this seawater? Whoa! That will drive me nuts! Is my desire not to be thirsty greater than my desire not to be nuts?
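In code, that arbitration can be as small as a weighted choice. A sketch, with invented goal names and desire weights (an actual emotional system would update these weights continuously):

```python
import random

# Hypothetical conflict resolution: the emotional system assigns each goal
# a desire weight, the strongest desire wins, and ties break at random.

def choose_goal(conflicting_goals, desires):
    best = max(desires.get(g, 0) for g in conflicting_goals)
    return random.choice([g for g in conflicting_goals
                          if desires.get(g, 0) == best])

desires = {"not be thirsty": 6, "not be nuts": 9}
print(choose_goal(["not be thirsty", "not be nuts"], desires))
# 'not be nuts' -- sanity wins, for now
```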
Suppose the mobile from last time knew a fourth action (preconditions first, then effects):
teleport X from Y to Z
I am a mage
X is at Y
not (X is at Y)
X is at Z
The mobile might want to try
teleport bread from stall to cart as its first (and only) step in the plan. Unfortunately, it knows no actions that can make
I am a mage true, therefore it would have to give up and select the other action in its consideration set,
put down bread at cart as before. Discarding an action and trying another is called backtracking.
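To make backtracking concrete, here's a toy backward chainer over the bread example, with made-up fact strings. It tries actions in order, so it considers the teleport first, fails to establish I am a mage, and backtracks to the put-down action. (Note that, true to the linearity assumption, this toy doesn't model actions deleting facts, and its depth parameter is the plan-length limit discussed below.)

```python
# Toy backward chaining with backtracking. Facts are strings; an action is a
# dict of preconditions ("pre") and effects ("add"). All names are invented.

ACTIONS = [
    {"name": "teleport bread from stall to cart",
     "pre": ["I am a mage", "bread is at stall"],
     "add": ["bread is at cart"]},
    {"name": "put down bread at cart",
     "pre": ["I am carrying bread", "I am at cart"],
     "add": ["bread is at cart"]},
    {"name": "pick up bread at stall",
     "pre": ["I am at stall", "bread is at stall"],
     "add": ["I am carrying bread"]},
    {"name": "walk from stall to cart",
     "pre": ["I am at stall"],
     "add": ["I am at cart"]},
]

def achieve(goal, world, depth=8):
    """Backward-chain on goal; return a plan (list of names) or None."""
    if goal in world:
        return []
    if depth == 0:
        return None                    # give up: plan would be too deep
    for action in ACTIONS:             # the consideration set
        if goal in action["add"]:
            plan = []
            for pre in action["pre"]:  # chain backward on preconditions
                sub = achieve(pre, world, depth - 1)
                if sub is None:
                    break              # can't establish pre: backtrack
                plan += sub
            else:
                return plan + [action["name"]]
    return None

print(achieve("bread is at cart", {"I am at stall", "bread is at stall"}))
# ['pick up bread at stall', 'walk from stall to cart',
#  'put down bread at cart']
```

With "I am a mage" added to the initial world state, the same call returns the one-step teleport plan instead.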
For a single-step backtrack, there isn't anything to worry about. However, what if the plan was 8 steps deep? And what if the root cause of the failure was a bad selection for the second step? The planner could spend ages wasting its time exhausting all possibilities multiple times over before concluding that it had made a mistake early on.
There are two ways to address this: limit the length a plan can be, or provide so many actions that a solution can usually be found no matter what circuitous path is taken.
Alternatively, you can just ignore it. If our system is only going to invoke the planner when someone presses a quest button, that's not going to happen so often that it'll cause major slowdowns. A little inefficiency here and there isn't going to be much more than a drop in the ocean. That said, having a wide variety of actions available to a mobile is a good idea anyway, to add to the variety of quests that can be generated. It makes sense, therefore, to provide lots of actions even if you don't give a hoot about inefficiencies due to backtracking.
And next time, yes, I promise, I finally will show you how all this tackle can be used for creating novel, self-consistent and meaningful plans.