
Notes from the Dawn of Time #23:

Mobile AI

by Richard Bartle
July 17, 2002

Having suffered my rattling on about parsing for 14 articles, doubtless you were relieved to learn last time that today I’m beginning a new theme: mobile AI.

I’m using “mobile” in the MUD sense here, i.e. monsters or characters controlled by the game. Strictly speaking, a mobile is an object that can (in game terms) move or otherwise act of its own volition, and which is controlled by the game itself. Those objects controlled directly by players are characters or personae; those controlled by independent computers or processes are bots. The game can’t tell the difference between personae and bots (and in some cases, neither can the players), but that doesn’t matter – it just lets them get on with their own thing. Mobiles, however, it can control, which is where AI (Artificial Intelligence) comes in.
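
To pin the distinction down, here’s a minimal sketch in Python (the names are invented for illustration, not taken from any real MUD codebase) of how a game might tag each in-world entity with what controls it. The point is simply that the game supplies behaviour only for its own mobiles, and that personae and bots look the same from inside.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Controller(Enum):
        GAME = auto()        # a mobile: the game itself drives its behaviour
        CONNECTION = auto()  # a persona or a bot: the game can't tell which

    @dataclass
    class Entity:
        name: str
        controller: Controller

    def update(entity: Entity) -> None:
        """Called every game tick for each entity that can act of its own volition."""
        if entity.controller is Controller.GAME:
            pass  # run the mobile AI outlined later in this article
        # Entities driven over a connection are left to get on with their
        # own thing; their commands arrive from outside the game.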

The field of Artificial Intelligence is a large one, and there are many powerful strategies it can bring to bear. Everything from vision systems to production scheduling, from game theory to robotics, from belief systems to speech synthesis, falls under its auspices. What’s more, it’s all inter-related: if you’re writing a conversational system, for example, you need to be able to model what you know (knowledge representation), what you think your fellow interlocutor knows (belief systems), the effect you want your words to cause (planning), what any response given means (parsing), and you have to be able to do what you intend (acting).
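As an illustration of how tangled together those pieces are, here’s a toy skeleton of a conversational agent (the names are invented purely for illustration, not taken from any real system), with one stub standing in for each sub-field just mentioned:

    class ConversationalAgent:
        """Each field or method below stands in for an entire AI sub-field."""

        def __init__(self):
            self.knowledge = {}   # knowledge representation: what the agent knows
            self.beliefs = {}     # belief system: what it thinks its interlocutor knows

        def respond(self, utterance: str) -> str:
            meaning = self.parse(utterance)   # parsing: what the response given means
            self.beliefs.update(meaning)      # revise beliefs about the other speaker
            goal = self.plan(meaning)         # planning: the effect we want our words to cause
            return self.act(goal)             # acting: do what we intend

        def parse(self, utterance: str) -> dict:
            return {"last_heard": utterance}  # stub

        def plan(self, meaning: dict) -> str:
            return "acknowledge"              # stub

        def act(self, goal: str) -> str:
            return "I see." if goal == "acknowledge" else "..."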

There are many competing ideas on how to go about programming AI. These tend to come in waves (or, for the cynical, bandwagons). At the moment, for example, neural networks and parallel systems are still big, the idea being that you can get complex behaviours to emerge from vast numbers of simple processes that are interconnected at a low level. Fifteen years ago, expert systems were the cutting edge, driven by the hypothesis that intelligence is due to interaction between bodies of complex knowledge.

There are also philosophical differences as to where intelligence lies, and whether it can be partitioned into sub-systems at all. Is consciousness best defined in terms of processes or logic systems? Is it something tangible, spiritual, or simply delusional – a trick played by the brain on itself to get better behavioural results?

Fortunately, we don’t have to worry about things like that here. We’re only talking MUDs, not self-aware automata. We can afford to take a pragmatic point of view, because we know that there isn’t the processing power available to us even to approach the complexity of the top research projects. For MUDs, it’s enough that something behaves generally sensibly and occasionally surprisingly; no-one’s going to think a mobile is a real player for long (if at all), but that doesn’t mean we have to make them dumb.

An Outline for Mobile AI

You’ll always find someone willing to argue with you concerning AI, but the following is a reasonably non-contentious (because it’s very simplified) outline of how an entity with intelligence (artificial or otherwise) can be modelled:

[Diagram: an outline of mobile AI – local events flow through the senses into episodic memory and the emotional system; the emotional system feeds goals to the planner, which draws on episodic and semantic memory and drives the effectors.]

The basic mechanism is:

  • Events local to the mobile that occur in the game world are detected (either passively or actively) by the mobile’s senses and parsed into meaningful units.
  • The resulting observations are used to update the mobile’s internal view of the world (which may be incomplete and/or inconsistent), stored in its episodic memory.
  • Sensory information is also used to inform the mobile’s emotional system (which is defined by its personality, instincts etc.).
  • Inconsistencies in the world model (episodic memory) also inform the emotional system.
  • The emotional system generates goals: things that the planner wants to do. It inserts these at the head of a plan – a sequence of operations that the mobile intends to do.
  • The planner manipulates the plan, informed by its beliefs about the current and past states of the world, to generate a new plan that describes its intentions in more detail.
  • When the plan indicates instructions for the mobile to undertake some action in the game world, this is executed by the appropriate effector.
  • Occasionally the planner can get into a loop or a deep recursion, which the emotional system can detect and interrupt.
  • The planner stores plans and plan segments in its semantic memory, which it can later retrieve to re-use for other plans.
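
To see how those pieces might hang together in practice, here’s a deliberately tiny sketch of one “tick” of such a mobile, in Python. It isn’t a real implementation – every name is invented for illustration, the emotional system is reduced to a single mood number, and the planner to a goal-expansion step – but the data flow matches the outline: senses feed episodic memory and the emotions; the emotions push goals onto the head of the plan; the planner expands goals (re-using segments stored in semantic memory, and being cut short if it recurses too deeply); and concrete actions are handed to effectors.

    from collections import deque

    class Mobile:
        def __init__(self, personality):
            self.personality = personality   # defines the emotional system
            self.episodic_memory = []        # the mobile's (possibly inconsistent) world model
            self.semantic_memory = {}        # stored plan segments, keyed by goal, for later re-use
            self.plan = deque()              # the sequence of operations the mobile intends to do
            self.mood = 0.0                  # a whole emotional system, reduced to one number

        def tick(self, local_events, effectors):
            # Senses: detect local events and parse them into observations.
            observations = [("saw", event) for event in local_events]
            # Observations update the world model held in episodic memory.
            self.episodic_memory.extend(observations)
            # Sensory information (and, in a fuller version, inconsistencies
            # found in episodic memory) informs the emotional system.
            self.feel(observations)
            # The emotional system generates goals and inserts them at the head of the plan.
            for goal in self.generate_goals():
                self.plan.appendleft(goal)
            # The planner refines the plan into more detailed intentions.
            self.refine_plan()
            # Steps that are concrete actions are executed by the appropriate effector.
            while self.plan and self.plan[0][0] == "act":
                _, action = self.plan.popleft()
                effectors[action]()

        def feel(self, observations):
            threats = sum(1 for _, event in observations if event in self.personality["fears"])
            self.mood -= threats

        def generate_goals(self):
            return [("goal", "flee")] if self.mood < 0 else []

        def refine_plan(self, depth=0):
            # The emotional system interrupts a planner stuck in a loop or deep recursion.
            if depth > 10:
                self.mood -= 1
                return
            if self.plan and self.plan[0][0] == "goal":
                _, goal = self.plan.popleft()
                # Re-use a plan segment from semantic memory if one exists; otherwise
                # invent one (here, trivially, "just do it") and store it for later.
                steps = self.semantic_memory.setdefault(goal, [("act", goal)])
                self.plan.extendleft(reversed(steps))
                self.refine_plan(depth + 1)

    # Example: a cowardly orc that runs away when it sees a dragon.
    orc = Mobile({"fears": {"dragon"}})
    orc.tick(["dragon"], {"flee": lambda: print("The orc flees!")})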

This all seems very complicated, especially given that I said it was simple (!), but don’t panic. Next time, I’ll explain how, in practical terms, we don’t need it all for MUDs.
