Artificial intelligence is a wide field and can mean many things to many people. Some restrict the definition to passing the Turing test. Others claim it is defined by artificial life, expert systems, fuzzy logic, genetic programming, natural language processing, neural nets, or robotics. Some exclude certain fields and include others not mentioned. Despite my title, I'm not going to define artificial intelligence at all here. I will, however, define what I think AI is not: the mere presence of a scriptable or hard-coded mechanism that can be accessed or executed by a computer-controlled agent is not AI. That may well enable AI, but it is no more AI than the C language is AI. With that opinion out of the way, what I am going to define instead are some areas and characteristics of artificial intelligence that I think are possible, practical, and useful for muds. Specifically, I'm going to discuss my own design and implementation.
Some examples of applications...
Tactics... Assume dragons are highly intelligent beings. Dragons, as a class or generic type of game agent, log player activities regarding themselves, both successful and unsuccessful. Tactics are chosen based on a dragon's perception of the make-up of an attacking party and according to their class (type or racial) memory. Survival tactics may generate short-term goals or plans to acquire spells or items to increase power, or fight, flee, and surrender behavior. Players obviously share information and rumor about the best way to kill dragons. AI game dragons likewise share information and rumor on players. By a wider contextual window, as I mentioned above, I mean a mechanism for acquiring accumulated historical data and using it in tactical decisions. And by the term tactics, I don't mean to limit it to combat. Game merchants engage in tactical trades based on rumor and local price fluctuations. Typically, merchants do not really compete with players economically.
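As a rough illustration of the class-memory idea, here is a minimal sketch in Python. All names here (ClassMemory, record, choose_tactic) are my own illustrative assumptions, not part of any real mud codebase; the point is only that tactic choice is weighted by outcomes pooled across every member of the class.

```python
import random
from collections import defaultdict

class ClassMemory:
    """Shared 'racial' memory of tactic outcomes, pooled across all dragons.
    Illustrative sketch only; names and weighting scheme are assumptions."""

    def __init__(self):
        # tactic -> [successes, attempts]
        self.outcomes = defaultdict(lambda: [0, 0])

    def record(self, tactic, success):
        stats = self.outcomes[tactic]
        stats[1] += 1
        if success:
            stats[0] += 1

    def success_rate(self, tactic):
        wins, tries = self.outcomes[tactic]
        # untried tactics get a neutral prior so they are still explored
        return wins / tries if tries else 0.5

    def choose_tactic(self, candidates):
        # weight the choice by historical success; small floor keeps
        # losing tactics from vanishing entirely
        weights = [self.success_rate(t) + 0.05 for t in candidates]
        return random.choices(candidates, weights=weights)[0]

memory = ClassMemory()
memory.record("breathe_fire", success=True)
memory.record("melee", success=False)
preferred = memory.choose_tactic(["breathe_fire", "melee"])  # biased toward breathe_fire
```

Every dragon instance would consult and update the same ClassMemory object, which is what makes the information "rumor" shared across the species rather than individual experience.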
Strategies... Dragons have both long-term plans or goals and short-term goals. There are species plans common to all dragons, like acquiring gold, gems, maidens, etc.: goals that are intrinsic to dragons. There are also individualized plans or short-term goals that arise from local conditions. If not enough gold or maidens are available locally, a dragon will abandon its lair and take up residence elsewhere. Too many high-level parties about, with partial defeats, robberies, and near-death experiences, and likewise a dragon will move elsewhere. Large amounts of nearby gold and gems, plump maidens, and an absence of negative experiences may attract more dragons to an area; it might affect their reproduction rate, and so on. The bottom line here is that NPCs don't just exist in a room and react to direct stimuli, but acquire and use information from outside it in forming long-term strategic goals, and perform actions to attain those goals (aka procedural reasoning).
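The goal-selection logic above can be sketched very simply. The thresholds and goal names below are hypothetical, chosen only to show how accumulated local conditions trigger an individualized goal (relocate) that overrides the intrinsic species goal (hoard gold).

```python
class DragonStrategy:
    """Hypothetical sketch: a dragon weighs accumulated local conditions
    against intrinsic species goals when choosing its next long-term goal.
    Thresholds and goal names are illustrative assumptions."""

    RELOCATE_AFTER = 3     # bad experiences tolerated before moving on
    MIN_LOCAL_GOLD = 100   # below this, the lair is not worth keeping

    def __init__(self):
        self.local_gold = 0
        self.bad_experiences = 0  # robberies, partial defeats, near-deaths

    def choose_goal(self):
        if self.bad_experiences >= self.RELOCATE_AFTER:
            return "relocate"     # too many high-level parties about
        if self.local_gold < self.MIN_LOCAL_GOLD:
            return "relocate"     # not enough gold available locally
        return "hoard_gold"       # species-intrinsic goal

dragon = DragonStrategy()
dragon.local_gold = 500
dragon.bad_experiences = 1
goal = dragon.choose_goal()  # plenty of gold, few bad experiences: stay and hoard
```

The counters would be fed by the same event logging described under tactics, so strategy emerges from history rather than from scripted triggers.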
This sort of AI is quite common in other games, from the simple AI of Mortal Kombat to the more complex AI of the Age of Empires series, but not in many muds. I'm not sure why, although I suspect some of the difficulty lies in designing goals that are not end-game strategies or winning-game positions but cyclical strategies.
Many AI games and implementations fail to recognize parasitical relationships, or goals held in common with other NPCs and players: neutrality as a goal, for example, or influence, bribery, and favors. Also missing is memory of individuals and their behavior, and the ability to sort it out, analyze it, and assign a value to it over and above more generic behavior. I won't be talking much about this here; perhaps in later posts.
Story and Language... These are related to NLP (natural language processing). A great many player events involve communication among the players themselves and public communications in rooms. State objects should be able to monitor and parse this information and regurgitate it in rumor systems, or into a format suitable for agents' goals to query. Players discuss robbing the local bank in the middle of the day in a busy town square, then proceed to the bank at some later point and find more town watch than usual in the area, or perhaps double or triple the guards inside the bank. Gathering public communications, parsing and processing them, and passing them to state-monitoring objects is part of extracting data useful for AI agents. A bank object's agent registers with and listens to this particular state object, and makes plans and undertakes actions to ensure its goal of not being robbed is not threatened. The converse of the ability to parse and process communications is the ability to generate communications. Suppose the players decide to rob the bank. Events like this generate story elements which may be turned into published accounts in newspapers or town criers, which in turn become another source of information for AI agents that monitor local events. Perhaps these events and public player conversation become the main source for NPC banter. Perhaps each story event is assigned an importance value and logs are extracted monthly to generate web-page town histories, national histories, and world histories, in addition or as a supplement to material generated by game masters and player bards.
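The bank scenario boils down to a publish/subscribe pattern: a state object hears public speech, matches it against keywords agents have registered interest in, and notifies them. A minimal sketch, with all names (RumorMonitor, register, hear) being my own assumptions rather than anything from the actual implementation:

```python
class RumorMonitor:
    """Hypothetical state object: parses public speech in a room and
    notifies registered agents whose keywords appear. Real parsing would
    be far richer than this word-set intersection."""

    def __init__(self):
        self.listeners = []  # list of (keyword set, callback) pairs

    def register(self, keywords, callback):
        self.listeners.append((set(keywords), callback))

    def hear(self, speaker, text):
        words = set(text.lower().split())
        for keywords, callback in self.listeners:
            if keywords & words:          # any keyword mentioned?
                callback(speaker, text)

alerts = []
monitor = RumorMonitor()
# the bank agent registers interest in robbery talk
monitor.register(["rob", "bank", "heist"],
                 lambda who, what: alerts.append((who, what)))
monitor.hear("Grimble", "let's rob the bank at noon")
# the bank agent can now plan to double its guards before the attempt
```

The same monitor, run in reverse, is a plausible seed for generated communications: matched events become rumor entries that NPCs repeat as banter.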
Agents should attempt goal-oriented communication with players. For example, perhaps a plate-armored character wanders into a smithy in a village which is experiencing a severe shortage of steel. Typically this is a one-way interaction: the player wants something. Perhaps he wants a sword repaired; the merchant either can do it or can't. Now, if merchants examined players for goods the merchant has a compelling need for, they might make use of language processing to make offers or suggest trades. Proactive goals. The Ultima series offered up conversation that was immediately relevant or leading to the player. Maybe too leading. Yet the principle is the same. Players might even visit certain agents shopping specifically for information.
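The smithy example reduces to scanning the player's visible goods against the merchant's current needs and generating an offer. A sketch under assumed names (merchant_offer, the item and price values are all invented for illustration):

```python
def merchant_offer(needs, player_inventory, prices):
    """Hypothetical proactive trade: look through what the player is
    carrying for goods the merchant urgently needs, and generate an
    above-market offer for the first match."""
    for item in player_inventory:
        if item in needs:
            premium = round(prices.get(item, 1) * 1.5, 2)  # pay 50% over market
            return f"I'll pay {premium} gold for your {item}, steel is scarce here."
    return None  # nothing of interest; fall back to the usual one-way shop talk

offer = merchant_offer(needs={"steel_ingot"},
                       player_inventory=["sword", "steel_ingot"],
                       prices={"steel_ingot": 10})
```

The offer text itself is where language generation comes in; here it is a canned template, but the trigger condition is the interesting part.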
Back to reality...
I've been working on an interface to a procedural reasoning system (PRS) for programming tactics and strategies into mud objects. I call the PRS language Apollo. Some samples of my MPL (mud programming language) Aphrodite appear in an earlier [article].
Integration is done by introducing new variant types into the MPL language, along with native library mechanisms to set and query them, as well as to execute predefined methods on these objects. The languages are similar, but they were developed independently, are processed independently, and have been converging over time and iteration. I have the ability to register and log interest in other objects and update state on a given object's Apollo variables. However, at this time the rules governing action selection are static and programmed by the Apollo object's creator. Inheritance in Apollo is non-existent, although Apollo objects may be inherited.
Apollo objects come in three varieties, Models, Rules, and Plans...
Models are objects which define the interface between the mud environment and the PRS engine. They are basically interface definitions of all the variables that will be used by the PRS engine, along with the mud objects that provide their source. Expressions can be used in conversions and assignments.
cats := ithaca.cat.count;
rats := ithaca.rat.count;
critical_population := ithaca.rat.count < 50;
grain_stored := ithaca.grain.produced - ithaca.grain.sold;
Listeners are installed on the variables defined on the RHS of a model expression. When the value of an RHS variable is changed via regular MPL execution, an evaluate() message is issued from the interface to all those Apollo objects that refer to that particular model definition.
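This listener mechanism is essentially the observer pattern. Here is a minimal sketch of the idea in Python, modeling the critical_population expression from the example above; ModelVariable and the callback wiring are my own illustrative constructions, not the actual Apollo interface.

```python
class ModelVariable:
    """Sketch of the listener mechanism: assigning to an RHS mud variable
    re-evaluates every Apollo model expression that refers to it."""

    def __init__(self, value):
        self._value = value
        self._listeners = []

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        self._value = new_value
        for evaluate in self._listeners:
            evaluate()  # issue evaluate() to interested Apollo objects

    def listen(self, callback):
        self._listeners.append(callback)

# model expression:  critical_population := ithaca.rat.count < 50;
rat_count = ModelVariable(200)
model = {}

def evaluate():
    model["critical_population"] = rat_count.value < 50

rat_count.listen(evaluate)
evaluate()            # initial evaluation: not critical yet
rat_count.value = 30  # an MPL assignment triggers re-evaluation automatically
```

The attraction of doing it this way is that model values are always current without the PRS engine polling the mud world.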
Rules objects contain definitions which list plans, their priorities, and the names of the plan definitions. There are local rules variables, similar to static variables, that keep track of the current state. Rules also contain some predefined methods: init, reset, and evaluate. The evaluate method processes the rules and selects the next plan. Rules definitions will not select plans that are not in context; they select plans based on context and current conditions, and may access model variables and global variables of the active mud object via the this pointer. The plan selected by the rules set is the in-context plan. The in-context plan object will be executed by a call from the MPL at some appropriate mud time.
plan rats_feed priority 5;
plan rats_reproduce priority 5;
plan rats_flee priority 1;
if (this.attacked) // hypothetical guard; these adjustments need a condition
plan rats_flee priority +5;
plan rats_defend priority +5;
if (critical_population) // see rats_model
plan rats_reproduce priority +2;
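A rules evaluation along these lines can be sketched as: start from the declared base priorities, apply each "+n" adjustment whose condition currently holds in the model, then pick the highest-priority plan that is in context. The function name and data shapes below are my own assumptions about how such an evaluator might look, not the actual Apollo engine.

```python
def evaluate_rules(base_priorities, adjustments, model, in_context):
    """Sketch of rules evaluation: base priorities plus conditional '+n'
    adjustments, then select the highest-scoring in-context plan."""
    scores = dict(base_priorities)
    for condition, plan, delta in adjustments:
        if model.get(condition):                  # e.g. critical_population
            scores[plan] = scores.get(plan, 0) + delta
    # rules will not select plans that are not in context
    candidates = {p: s for p, s in scores.items() if p in in_context}
    return max(candidates, key=candidates.get) if candidates else None

selected = evaluate_rules(
    base_priorities={"rats_feed": 5, "rats_reproduce": 5, "rats_flee": 1},
    adjustments=[("critical_population", "rats_reproduce", 2)],
    model={"critical_population": True},
    in_context={"rats_feed", "rats_reproduce", "rats_flee"},
)
# rats_reproduce now scores 7 and is selected
```

A learning system, as noted below, would amount to letting experience rewrite the base_priorities and adjustments tables instead of leaving them static.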
What is currently missing here is learning, as I mentioned above. A system of learning should be able to change the initial rules priorities as well as write new rules and plans.
Plan objects are definitions that include a series of procedures invoked to accomplish a goal, along with the context in which that plan is valid. A plan procedure may set the context value back to that of a previously accomplished plan. Each plan definition is written in a procedural language that has the ability to execute other plan procedures or issue messages to mud objects (i.e. execution).
this.hungry = true; // Uses variable defined on object
this.attacked = false;
execute this.find_food(); // message to self
if (this.hungry == false)
achieved; // plan status keyword; triggers rules re-evaluation
The plan language is able to access global variables defined on the local object via a this pointer. Plan state is set via the keywords achieved, continued, or failed; these determine plan context during rules evaluation. Setting the context status of the current plan also causes the evaluate loop to run on the rules definition. Note that continued amounts to temporarily suspending execution of the current plan in order to engage in rules re-evaluation.
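The status-keyword loop described above can be sketched as follows. The Plan class, the string statuses, and the reevaluate callback are all illustrative assumptions standing in for the real Apollo machinery; the point is that a terminal or continued status hands control back to rules evaluation, which may switch plans.

```python
class Plan:
    """Sketch of plan status handling: each step returns 'achieved',
    'continued', or 'failed'. Any status hands control back to rules
    evaluation; 'continued' merely suspends rather than ends the plan."""

    def __init__(self, name, steps):
        self.name = name
        self.steps = steps  # callables taking the agent, returning a status

    def run(self, agent, reevaluate):
        for step in self.steps:
            status = step(agent)
            if status in ("achieved", "failed"):
                reevaluate()        # terminal status: rules pick the next plan
                return status
            if status == "continued":
                reevaluate()        # suspend; rules may switch to another plan
        return "achieved"

log = []
agent = {"hungry": True}

def find_food(a):
    a["hungry"] = False             # message to self: execute this.find_food()
    return "achieved"

plan = Plan("rats_feed", [find_food])
status = plan.run(agent, reevaluate=lambda: log.append("evaluate"))
```

Keeping the re-evaluation hook inside the plan loop is what makes the system reactive: a plan that reports continued can be preempted mid-goal when the model changes.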
This is just scratching the surface. No attempt has been made here to describe how the procedural reasoning engine is designed and works. Hopefully, if I get around to it, I will write a part 2, as well as some other AI-related goodies.