Currently, the specification of directorial control goals is defined using the format shown in the example above. For authoring agents, Thespian provides three alternative approaches: a GUI, a spreadsheet, or direct specification in Python. Note that the GUI and spreadsheet approaches have both been used by non-technical users to create agents. We start by discussing the third approach.

Authoring Interface I:

For authors who are comfortable writing code, agents can be coded directly. For example, the following is a definition of an agent, including its horizon and depth of reasoning, actions, state features, goals and beliefs.

classHierarchy['NormAgent'] = {
   'actions': {'type': 'XOR',
               'base': {'type': 'XOR', ...}},
   'goals': [{'entity':    ['self'],
              'direction': 'max',
              'type':      'state',
              'key':       'init-norm',
              'weight':    .5},

             {'entity':    ['self'],
              ...
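Since the listing above is truncated, the following is a hedged, self-contained sketch of what a complete definition of this form might look like. The horizon, depth and second goal entry are illustrative assumptions, not Thespian's actual NormAgent definition:

```python
# Hypothetical sketch of a Thespian/PsychSim-style agent class entry.
# Field names follow the fragment above; the concrete values are assumptions.
classHierarchy = {}
classHierarchy['NormAgent'] = {
    'horizon': 2,          # how many turns ahead the agent plans (assumed)
    'depth':   3,          # depth of recursive reasoning about others (assumed)
    'goals': [
        {'entity':    ['self'],
         'direction': 'max',
         'type':      'state',
         'key':       'init-norm',
         'weight':    .5},
        {'entity':    ['self'],
         'direction': 'max',
         'type':      'state',
         'key':       'being-enjoyable',   # hypothetical second goal
         'weight':    .5},
    ],
}

# Goal weights are typically normalised so they sum to 1.
total = sum(g['weight'] for g in classHierarchy['NormAgent']['goals'])
print(total)  # 1.0
```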


Authoring Interface II:

Alternatively, the author can use a graphical interface for defining the agents. All the components defined in the example above can be entered through this interface.



Authoring Interface III:

Finally, a project-specific format for text input can be defined. Part of the agent modeling is done within Thespian (in the code that supports the interpretation of the text input), so the author only needs to change a few parameters to define the agents. For example, below is a text file that defines two agents who negotiate with each other. The two agents have different goals in terms of how much they care about themselves vs. how much they care about the other person's experience. They may also have different beliefs about the other person's goals.
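To make the idea concrete, here is a minimal Python sketch of such a parameterisation: two negotiators differing only in how much they weight their own experience against the other's. The names, fields and values are illustrative, not Thespian's actual text-input format:

```python
# Hypothetical parameters for two negotiating agents. 'care_self' and
# 'care_other' weight the agent's utility; 'belief_other_care_self' is the
# agent's (possibly wrong) belief about the other's selfishness.
agents = {
    'seller': {'care_self': 0.8, 'care_other': 0.2,
               'belief_other_care_self': 0.5},
    'buyer':  {'care_self': 0.4, 'care_other': 0.6,
               'belief_other_care_self': 0.9},
}

def utility(agent, own_payoff, other_payoff):
    """Weighted sum of own and other's payoff, as described in the text."""
    a = agents[agent]
    return a['care_self'] * own_payoff + a['care_other'] * other_payoff

# The seller values its own payoff far more than the buyer does.
print(utility('seller', 10, 0))  # 8.0
print(utility('buyer', 10, 0))   # 4.0
```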


To further facilitate authoring, Thespian can simulate potential users' behaviors and generate all the potential interaction paths.
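The enumeration idea can be sketched as follows. Thespian's actual simulation also filters paths through the agents' decision models; this only shows exhaustive generation over assumed user choices:

```python
from itertools import product

# Given the dialogue acts a user could choose at each turn, enumerate every
# interaction path up to a fixed horizon. Act names are illustrative.
USER_ACTS = ['greet', 'ask_price', 'walk_away']
HORIZON = 2

paths = [list(p) for p in product(USER_ACTS, repeat=HORIZON)]
print(len(paths))  # 9 potential interaction paths
```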

An additional program, developed in the Unity game engine, helps the author view and edit the paths, as well as supply surface sentences for the dialogue acts.


As with many other generative approaches to interactive storytelling, the stories that can be experienced or told with The Virtual Storyteller are part human-authored, part system-determined. In particular, if stories are to emerge, it is impossible to determine beforehand what content and processes to write in order to come up with a satisfactory story domain.

Rather, we conceive of the creation process in The Virtual Storyteller as a continuous cycle of writing content, seeing what the system makes of this content, and coming up with new ideas. An authoring paradigm that has proven useful is not only to correct the system when it is not doing what was expected (’debugging’), but also to accept that the system may take the space of potential stories in directions not initially considered (’co-creation’) [1].

In this process, it is considered important to maintain a tight feedback loop between the steps, making small, incremental changes to the content. Furthermore, it is considered important to actively consider whether certain content can be reused in other situations. This helps create density in the space of stories.

Content to be created

What needs to be created for a particular story domain? Let’s consider the simple case in which we re-use domain-independent parts of the character models (such as event appraisal and action planning) for a new story domain. In this case, authoring consists of creating a set of knowledge representations:

Type | Explanation | Example from LRRH domain
Setting | Facts about the story world (e.g., a topology, location of characters and objects) | The Wolf is in the forest; Red has a birthday cake
Ontology | Gives semantics to these facts. Created in the Protégé tool [2] | A forest is a type of location
Threads | Characters and their initial goals, arranged for a specific purpose (e.g. to achieve a conflict) | Wolf, Red and Grandma; Red wants to bring a cake to Grandma
Goals | What does a character want to achieve, under which circumstances is this possible, and how important is it to achieve? | Bring a cake to Grandma, eat something, seek revenge
Goal selection rules | When is a character motivated or caused to adopt a goal? | If you are hungry, eat something
Actions | What can a character do, under which circumstances is this possible, and what are the effects? | Skip to somewhere, eat, steal something, cry
Action selection rules | In which circumstances will a character select a certain action that was not planned? | If you meet someone you haven’t met before, greet them
Events | What can happen (unintentionally, e.g., a car accident or dropping a vase), under which circumstances is this possible? | Become hungry
Expectations | What can a character reasonably expect to happen as a consequence of some event or action? | If someone is offered a cake, they can be expected to eat it
Beliefs | What possible inferences can be made given a certain context? | If someone does not greet you back, they do not like you
Framing operators | What aspects of the setting can be retroactively defined in service of the story? | The Wolf in LRRH decides halfway that he is a mean character
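A goal selection rule such as "if you are hungry, eat something" can be sketched as a condition/goal pair. The rule encoding below is an assumption for illustration, not the Virtual Storyteller's actual Prolog representation:

```python
# Minimal sketch: goal selection rules as (condition, goal) pairs evaluated
# against a character's state. State keys and goal names are illustrative.
def select_goals(state, rules):
    """Return the goals whose triggering condition holds in this state."""
    return [goal for condition, goal in rules if condition(state)]

rules = [
    (lambda s: s.get('hungry', False),      'eat_something'),
    (lambda s: s.get('cake_stolen', False), 'seek_revenge'),
]

print(select_goals({'hungry': True}, rules))  # ['eat_something']
```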


This knowledge is currently created as a combination of RDF/Turtle syntax (setting), RDF/OWL knowledge edited in the Protégé tool (ontology) and Prolog files (goals, actions, etc). Eventually we would like to have a tool that supports authoring, for instance by presenting pre-defined fields, checking syntax and semantics, providing visual organisation of the content, and checking interrelatedness of content (e.g. actions that can be strung together in a plan but also, e.g., finding goals that can never occur).


Example setting

Example of the setting of the LRRH world (in RDF/Turtle):

# Little Red Riding Hood
:red
    a           red:LittleGirl ;
    swc:at      :reds_house ;
    swc:has     :birthday_cake ;
    swc:owns    :birthday_cake ;
    rdfs:label  "Little Red Riding Hood" .

# The forest
:forest
    a           red:Forest ;
    rdfs:label  "the forest" .

# Path from Red’s house to the forest
:path_reds_house_forest
    a           swc:Path ;
    swc:from    :reds_house ;
    swc:to      :forest ;
    rdfs:label  "the path leading to the forest" .
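Such a Turtle setting is simply a set of (subject, predicate, object) triples. A minimal Python sketch of the same content, with subject names like `:red` chosen for illustration:

```python
# The LRRH setting as plain triples; a query helper shows how the setting
# can be consulted, e.g. "where is Red?".
setting = {
    (':red',    'rdf:type', 'red:LittleGirl'),
    (':red',    'swc:at',   ':reds_house'),
    (':red',    'swc:has',  ':birthday_cake'),
    (':red',    'swc:owns', ':birthday_cake'),
    (':forest', 'rdf:type', 'red:Forest'),
    (':path1',  'rdf:type', 'swc:Path'),
    (':path1',  'swc:from', ':reds_house'),
    (':path1',  'swc:to',   ':forest'),
}

def at(entity):
    """All locations the entity is at, according to the setting."""
    return [o for s, p, o in setting if s == entity and p == 'swc:at']

print(at(':red'))  # [':reds_house']
```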


Example action

Example action for taking something from someone (in Prolog): 

    arguments([agens(Agens), patiens(Patiens), target(Target)]),
    % Two different characters and a location
    condition(true, [
        rule(Agens, owlr:isNot, Target),
        rule(Agens, owlr:typeOrSubType, swc:'Character'),
        rule(Target, owlr:typeOrSubType, swc:'Character'),
        rule(Loc, owlr:typeOrSubType, swc:'Location')
    ]),
    % Agens (who executes the action) is mean
    condition(true, [
        fact(Agens, swc:hasAttribute, red:mean)
    ]),
    % Target has the thing
    condition(true, [
        fact(Target, swc:has, Patiens)
    ]),
    % Agens does not own the thing (otherwise, this is TakeBack)
    condition(false, [
        fact(Agens, swc:owns, Patiens)
    ]),
    % At the same location
    condition(true, [
        fact(Agens, swc:at, Loc),
        fact(Target, swc:at, Loc)
    ]),
    % Agens has the thing
    condition(true, [
        fact(Agens, swc:has, Patiens)
    ]),
    % Target no longer has the thing
    condition(false, [
        fact(Target, swc:has, Patiens)
    ])
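The condition blocks above check facts against the world state: positive conditions must all hold, negative ones must not. A sketch of the same logic in Python, with a hand-made world state (the fact tuples are illustrative):

```python
# World state as a set of fact triples.
world = {
    ('grandma', 'swc:has',          'birthday_cake'),
    ('wolf',    'swc:hasAttribute', 'red:mean'),
    ('wolf',    'swc:at',           'grandmas_house'),
    ('grandma', 'swc:at',           'grandmas_house'),
}

def condition(polarity, facts, world):
    """True iff all facts hold (polarity True) or none are required to (False)."""
    holds = all(f in world for f in facts)
    return holds if polarity else not holds

# Preconditions of "take", following the Prolog fragment above.
preconditions_ok = (
    condition(True,  [('wolf', 'swc:hasAttribute', 'red:mean')], world) and
    condition(True,  [('grandma', 'swc:has', 'birthday_cake')], world) and
    condition(False, [('wolf', 'swc:owns', 'birthday_cake')], world) and
    condition(True,  [('wolf', 'swc:at', 'grandmas_house'),
                      ('grandma', 'swc:at', 'grandmas_house')], world)
)
print(preconditions_ok)  # True
```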

Creation Process with Linear Logic Champagnat November 2009

 An authoring tool gives the storyworld and the narrative program.

It is then transformed into a sequent of Linear Logic. An automatic translator transforms the sequent into a Petri net. Finally, the Petri net performs the execution of the model and can interact with the Interactive Storytelling Rendering.

How to Create Content in Scenejo Spierling November 2009

Creating content in Scenejo mainly consists of creating the ‘knowledge base’ for each virtual actor, in other words, a database of word patterns (utterances that the bot can ‘understand’ as a stimulus) and corresponding verbal templates to be uttered as a response. All content is mapped to an XML structure, which is an extension of AIML (Artificial Intelligence Markup Language). Therefore, it is an advantage if authors are knowledgeable in writing AIML. The XML structure can either be written in a text editor, or a graphical interface can be used.

In general, plain AIML can also be run with Scenejo, which means that it is possible to ignore the Scenejo authoring concepts and just feed a chatbot’s word pattern base (such as an AIML file) into one or more ‘actors’. Using only one actor, this can result in a conventional chatbot interaction between a user and a bot, whereas using two or more actors leads to a conversation between the bots. This can entice one to experiment with emerging funny conversations that have not been foreseen by the authors of the individual pattern bases. However, in many cases it can also lead to rather meaningless chat, especially when non-sequitur answers occur. At the same time, building some commonplace answers into your bots always helps to overcome the biggest challenge in this kind of interaction: having to react to some unforeseeable user input in a meaningful way.

The Scenejo authoring tools allow authors to model some structures based on this question/answer principle:

  • Abstraction of ‘direct discourse’-utterances into so-called dialogue acts: For example, the utterance “Hello” can be abstracted to the dialogue act “greeting”. On the other hand, the abstract act of greeting can be expressed in many ways of direct discourse, such as “Hi”, “Howdy”, “Good morning”, etc.
  • Connection of several single acts to dialogic pairs or sequences: For example, the act of greeting demands an answer from the conversational partners, for example a greeting in return, or in general, questions demand answers. Therefore, Scenejo allows the definition of short dialogue sequences, narrowing down the context for the pattern recognition of possible following acts.
  • Pre-conditions and effects: For example, giving the answer of a greeting in return can be conditioned by a predicate value that is checked before, such as “mood > 0”, and refusing a greeting in return can lead to the effect of increasing a resentment value. These changing states can be freely defined, and authors need to think of meaningful parameters for their stories.
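The pre-condition/effect mechanism from the last bullet can be sketched as follows. The act names, the `mood` predicate and the resentment effect are illustrative, following the example in the text:

```python
# A dialogue act fires only if its pre-condition holds; refusing it has an
# effect on the state, as described above.
state = {'mood': 1, 'resentment': 0}

def greet_back(state):
    if state['mood'] > 0:        # pre-condition "mood > 0"
        return 'greeting_in_return'
    state['resentment'] += 1     # effect of refusing the greeting
    return 'refuse_greeting'

print(greet_back(state))                       # greeting_in_return
state['mood'] = 0
print(greet_back(state), state['resentment'])  # refuse_greeting 1
```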


We suggest the following steps of creation:

STEP 1: Think about three characters (one of which is human) and their potential dialogues or debates, and think about a main structure with possible outcomes of a discussion. Find a reason for the user/player actor to interact – either a reason to interrupt an ongoing conversation, or to contribute certain details! In the case of LRRH, we had the idea to let the user decide how to behave in an encounter with two foreign persons in an obscure setting: TheLoneWolf and LittleRedRidingHood.

STEP 2: It is then necessary to analyse the intended conversation in order to find typical interesting situations, which depend on certain states of affairs. Meaningful parameter states have to be identified early in the process – as well as critical incidents or events, which turn the ‘story’ in a different direction. In the small LRRH scene, we decided to work with the values of a so-called ‘chat-up’ level and a ‘danger’ level. If the user does not intervene in the flattery of TheLoneWolf, the chat-up level increases and LittleRedRidingHood is more likely to follow the wolf to his cavern. This means that later dialogue acts have to be constrained to depend on the values of these levels.

STEP 3: A crucial part, of course, is to begin writing the dialogues for the bot actors. As a start, a linear dialogue script can be written. However, this soon has to be analysed and annotated in order to be structured for interactivity. It is helpful if single utterances are then abstracted (generalised) into dialogue acts, thinking about what these acts can ‘do’ to the states of affairs. The more utterances actually affect some states, the more entertaining the result of the interactive conversation will be.

STEP 4: A real challenge is then to think about possible utterances of user actors, as these can hardly be influenced, only motivated with a strong mission for their interaction. Ideally, at each turn of the dialogue, some potential user utterance can have an effect – but that is a lot of work. In our small example, we only provided a few meaningful user interaction points (for example, intervening in the flattery of the wolf). It is recommended that many different potential user utterances (patterns containing wildcards) be generalised to a few dialogue acts.
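The generalisation recommended in STEP 4 can be sketched with pattern matching. The patterns and act names below are illustrative (Scenejo itself uses AIML-style word patterns rather than regular expressions):

```python
import re

# Many possible user utterances are generalised to a few dialogue acts.
PATTERNS = [
    (r'\b(hi|hello|howdy|good morning)\b', 'greeting'),
    (r'\bleave (her|him|them) alone\b',    'intervene'),
]

def classify(utterance):
    """Map a free-form utterance to a dialogue act, or 'unknown'."""
    for pattern, act in PATTERNS:
        if re.search(pattern, utterance.lower()):
            return act
    return 'unknown'

print(classify('Howdy stranger!'))        # greeting
print(classify('Hey, leave her alone!'))  # intervene
```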

STEP 5: Finally, the structured dialogue (from step 3, plus step 4) has to be implemented with the tools. The Scenejo graph structure helps visualise dialogue acts that are connected into sequences.
The Scenejo architecture, which is strictly ‘character-based’, requires that the word bases for each bot are entered and stored separately. According to the chatbot principle, as a first precondition for the successful utterance of a dialogue act, a word pattern has to be matched as an input. A linear and completely intertwined dialogue between two bots can be realised by exactly aligning these input patterns with the utterances of the partner bot.


Creation Process in PaSSAGE Thue November 2009


Story events for PaSSAGE are created using the Aurora Neverwinter Toolset, the content creation tool that is included with Neverwinter Nights, a computer role-playing game by BioWare Corp.  Required skills for authors include beginner to intermediate level programming skills, and familiarity with using dialogue trees and navigating 3-dimensional environments is beneficial.

Each event in PaSSAGE exists primarily as a collection of scripts and conversations. 


Scripts are pieces of program code that describe the courses of action, triggers, conditions for role passing, hints for player steering, and character actions for events. The following script controls the actions that the Wolf (in our case, a Troll) performs when Red provokes him into fighting her: we retrieve the actor playing the role of "guardian" in this event, cancel the player’s current conversation, make the Troll speak a few lines, cause the Troll to attack the player, and then set a flag to record that the combat has occurred.




Conversations are collections of lines of dialogue that are organized into a tree; every odd (red) layer of the tree (treating the Root as layer #0) is a non-player character line, while every even (blue) layer contains one or more lines for the player to choose from.  The figure below shows a section of the conversation tree for Red’s first encounter with the Wolf (in our case, a Troll).

Conversation Editor

Creating an Event

In addition to actor actions and lines of dialogue in conversations, events need three more types of scripts: adjustments to the player model in response to player actions, annotations for each course of action stating which types of player will prefer it, and specifications of what conditions to check before role passing occurs.

Adjustments to the Player Model

In the figure of the conversation tree above, the highlighted line has been associated with a script which increases the player’s preference toward combat (the variable PM_FIGHT is increased by a large amount).

Annotations for each Course of Action

The figure below shows how the two courses of action ("branches") in the example event are annotated.  In this case, the "Ingredient" branch has been annotated as being very good for players who are Tacticians (who enjoy solving puzzles) and Power Gamers (who enjoy increasing their character’s wealth and power).
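Putting the player-model adjustments and branch annotations together, branch selection can be sketched as follows. The player types match those named above; the numeric values are assumptions for illustration:

```python
# The player model is a vector of preference counters; each branch is
# annotated with how well it suits each player type; the branch with the
# best fit (dot product) is chosen.
player_model = {'fighter': 40, 'tactician': 5, 'power_gamer': 10}

branch_annotations = {
    'Ingredient': {'fighter': 0, 'tactician': 3, 'power_gamer': 3},
    'Combat':     {'fighter': 3, 'tactician': 0, 'power_gamer': 1},
}

def fit(branch):
    """How well a branch fits the current player model."""
    ann = branch_annotations[branch]
    return sum(player_model[t] * ann[t] for t in player_model)

best = max(branch_annotations, key=fit)
print(best)  # Combat: the player's fighter preference dominates
```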

Annotating Courses of Action


Conditions for Role Passing

This figure shows an example of the conditions which must be satisfied by an actor before it may play the role of Threshold Guardian in this event; in this case, it must be a creature in the class "guardian", and it must be closer than five meters to the player before any action will take place.

Conditions for Role Passing


Final Step: Add the Event to a Set

The final step in creating an event is to add it to one of the event sets that is associated with the phases of Campbell’s monomyth, as shown in the figure below.

Adding an Event to an Event Set

Creation Process with Enigma Kriegel November 2009

As mentioned in the Tool Architecture description, authoring work is split between a principal author (PA) and numerous contributors, whom the PA invites. In general the system design assumes minimal technical knowledge from contributors; however, the PA needs a deep technical understanding of how the system works.

Creation process for the principal author:

In order to collect Red Riding Hood themed stories with the Enigma authoring client, the principal author (PA) first has to create a story world in which the invited contributors can create stories. This involves defining which characters, objects and locations are available, and setting up an initial repertoire of actions, speech acts, properties and types (the contributors will be able to create more of these) to give the contributors a starting point. All of this is done by editing XML files. The PA should also provide some backstory (such as in Story World Concept) for the contributors to read before they go about the authoring task.

Finally, the PA also has to make sure that graphical resources, which allow a comics visualisation of characters, places and events, are in place. Using the comics system, this process can be much faster than it would be if 3D graphics were involved; the comics system is also useful for rapid prototyping. The comics library for the Red Riding Hood story world that was used to create the Example Scene was relatively easy to set up: it involved scanning in four hand-drawn characters and finding some background images on the internet. Of course, for a serious authoring project, this can and should be extended, for example by creating more expressions for the characters and original background images that fit perfectly with their style.

Creation process for invited contributors:

Contributors tell stories by creating events in the authoring client. First they select a subject, then an action. Depending on the action’s signature (number and type of parameters) they have to make additional selections in order to create the event. For example, if the author selects the action steal, they also will have to select an item to steal and a person to steal from.

Authors can also create new actions through a wizard. Since the graphics library will not contain any content for visualizing such an action, the author can provide a narration text that will be used as a placeholder for its visualisation. The PA or an additional “artist in the loop” could later add a visualisation for this action.

In the case of dialogue, contributors can reuse existing speech acts (units of dialogue) or create new ones. In the latter case they cannot simply enter a new line of dialogue; they also have to provide a name to identify the dialogue line as a speech act.

Finally authors can also control the narrative time and place and cause scene changes, character entries and exits, etc.


The user interface of the authoring tool will also ask contributors to annotate the stories they create in order to collect additional semantic context information for the processing of the stories by the Enigma Server. After creating a new event, authors can specify how characters’ emotions changed due to this event and which properties change. For example a character can have a property with the name “awake” and the data type boolean that is changed from true to false through the action “go to sleep”. A contributor specifies this by choosing properties and their values from lists. If a property that is needed to describe an event is not yet part of the domain model, the contributor can also define new properties at this stage (this might also involve defining new data types).
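The "awake" example can be sketched as a property-change annotation attached to an event. The field names below are illustrative, not the Enigma XML schema:

```python
# A character with a boolean property, and an event that records which
# property it flips (property name, old value, new value).
character = {'name': 'Red', 'properties': {'awake': True}}

event = {'action': 'go to sleep',
         'changes': [('awake', True, False)]}

def apply_event(character, event):
    """Apply each annotated property change, checking the old value first."""
    for prop, old, new in event['changes']:
        assert character['properties'][prop] == old
        character['properties'][prop] = new

apply_event(character, event)
print(character['properties']['awake'])  # False
```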

After finishing the story, authors will also be asked to specify which goals characters had during the story, at which point in the story they started, succeeded or failed, and which events contributed to these goals. We will have to determine through user trials of the tool whether most authors are willing to perform this annotation and whether the concepts involved in it (properties, types, goals, etc.) are understandable for non-experts. If they are not, the PA might have to perform the annotation himself. Alternatively, we might adopt an intermediate solution where the annotations are written in natural language by the contributors and the PA translates them into a machine-readable format afterwards.

As illustrated in the figure below, the authoring approach supported by our system follows three main stages: knowledge acquisition, simulation and analysis, and finally story visualisation.


  • Knowledge Acquisition

The creation of a story world (labelled no. 1 in the figure) is the initial stage where drafts of story elements are created by the author. They describe diverse story elements (e.g. characters’ psychology, representative scenes or environment description). The next step (labelled no. 2 in the figure) corresponds to the elicitation of all knowledge required to describe the story world, such as the various states (i.e. initial and goal states) and the various actions described through their validity conditions and their consequences. In terms of Planning, this corresponds to domain implementation where each part of the planning domain is created (i.e. propositions, operators, states and goal). The domain description also includes a formalisation of the initial state and the goal state, which correspond to the scene’s or characters’ objectives. 

  • Simulation and Analysis

When solutions rely on a sophisticated plan, the various causal dependencies generated by HSP planning may be difficult to recognise. Therefore, we wanted to explore whether the set of possible plans could be represented visually in order to control the unfolding of the generated content. Moreover, the combinatorial nature of content generation can quickly make the number of possible paths too large to exploit with a brute-force approach. As an alternative to the offline generation of a complete narrative (formally, a solution plan), an interactive mode allows a step-by-step generation of a solution, including the visualisation of all possible outcomes (labelled no. 3 in the figure).

Starting from the initial state, the user can expand the plan at each step using a tree representation until the goal state is reached. After each action is selected by the user, the system automatically offers a list of possible subsequent actions. For instance, the system will only offer the action of Emma accepting an invitation once Rodolphe has proposed it. This simulation has both a formal component (access to the planning domain, inspection of operators and world states) and a visual component (a tree structure providing a natural visualisation of actions and their consequences). This dual visualisation is meant to support collaboration and explanation between system developers and content creators.
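The step-by-step expansion can be sketched with a toy STRIPS-like domain: from the current state, only actions whose preconditions hold are offered, so "accept_invitation" appears only after "propose_invitation". The encoding and action names are assumptions:

```python
# Each action has preconditions (facts required) and add effects.
actions = {
    'propose_invitation': {'pre': set(),       'add': {'invited'}},
    'accept_invitation':  {'pre': {'invited'}, 'add': {'accepted'}},
}

def applicable(state):
    """Actions the system offers next: preconditions satisfied by the state."""
    return [a for a, d in actions.items() if d['pre'] <= state]

state = set()
print(applicable(state))                          # ['propose_invitation']
state |= actions['propose_invitation']['add']
print(applicable(state))                          # both actions now offered
```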

In addition, authors can interact with the solution generation process at any time (labelled no. 4 in the figure). This module also includes a dynamic environment simulation feature, which allows reproducing changes in the world that are not triggered by the planning system but would normally occur within the story, without having to simulate them in the complete 3D environment. Then, several analysis tools (labelled no. 5 and 6 in the figure) allow the validation of the generated narrative content. For instance, tracing the evolution of the world state through the development of the story plan is an effective way of ensuring its consistency with respect to the characters’ psychology, by allowing analysis of how emotion intensities vary as the plan evolves.

  • Story Visualisation

Finally, when the result has been validated, the last stage is to visualise the final result using the run-time engine. We can observe that the early step of this production process is a specific case of knowledge engineering applied to planning formalisms (i.e. by integrating knowledge into computer systems in order to solve complex problems normally requiring a high level of human expertise).


Creation Process with Cyranus Iurgel October 2009


IDtension is an Interactive Drama engine without any authoring tool per se. So far, content entry has been performed by the engine’s designer himself (Nicolas Szilas). Here follow the successive tasks that must be performed by the author (AUT) and the IS engineer (ENG) when using the text-based version of IDtension.

Step 1: getting an idea of IDtension.

ENG must provide the author with an in-depth presentation of IDtension, or AUT must read IDtension’s papers (Szilas, 2003; Szilas, 2007 – available on request). This introduction must cover the following concepts:

  • General goal of IDtension: highly interactive first person Interactive Drama - principle of reciprocity

  • Notion of generativity

  • Algorithmic principle of IDtension: the simulation of narrative – the “mass-spring network” analogy

  • General model of narrative underlying IDtension

  • Narrative actions

  • Goal-task structure

  • Narrative effects

  • Example simulation

Finally, AUT must of course have played with IDtension as a user (demo of "The Mutiny" scenario available on request).


Step 2: The setting.

As in many other narrative forms, AUT needs to define the context of the story:

  • Where does it take place? A limited number of different places is preferred, because moving from place to place is not what IDtension is best at. A confined place, such as a train or an apartment, is a good choice.

  • Who are the characters? IDtension benefits from handling several characters, say 5 or 6, because this allows more variability in the story. Stories with very few characters should be discarded.

  • What is at stake? What are the goals and values of the characters?

  • What problems (“conflicts”, to borrow from screenwriting vocabulary) will they encounter?

These items have precise equivalents in the IDtension formalism, but in a first phase, especially for the new writer, it is better to start with free text, without formal constraints.


Step 3: Describe a scene

AUT should write a scene with characters, including their dialogues. This is a first draft, and probably much of the content will not be included in the end, but it guides further authoring. The scene should be described with branching options, even if branching is not explicitly handled in IDtension.

The scene should include motivations of the characters (why they act and react this way).

Once AUT gets accustomed to IDtension, s/he will describe the scene in a more structured way, by providing:

  • Values: Thematic axes along which each task is morally evaluated. Characters will be more or less attached to the values. Note that values are not simple character attributes, because they allow some judgement.
  • Characters: After having decided their names, you must provide the values they will be attached to. Note that if a value exists in the story and a character has not been explicitly attached to this value, then it is considered that the character does not care about this value (attachment 0). Other properties might be useful in some contexts (for example, if being small is a condition for triggering an obstacle). Properties can be discrete or continuous (more or less courageous, for example). You must also provide the objects that the character possesses (if relevant for the given scenario).
  • Goals: They are the concrete objectives the characters want to achieve in the story. This concept is classical in screenwriting. In IDtension, it is better to have a goal that can be carried out by several characters, to promote the diversity of stories.
  • Tasks: They are concrete acts that enable a goal to be reached, the means to an end. It is better to have several tasks for the same goal. Tasks must also be designed to illustrate and contrast the values of the story. Reminder: actions in IDtension are made of tasks, for example encouraging someone to perform a given task.
  • Obstacles: they are concrete events that make a task fail. They also trigger subgoals: because an obstacle is met, another branch of the story (a goal for the character to aim at) can dynamically open.

Step 4: sketching the goal-task structure

From the drafted scene, an initial and minimal structure must be designed. This is a difficult step, because it requires a deep knowledge of the IDtension formalism. We strongly recommend that this step be performed by both AUT and ENG together. The output is a drawing, such as:

Goal-task structure example

This drawing represents several IDtension structural elements organized around a single goal (here "have_food"). Several tasks enable whoever has the goal to reach it. But these tasks are hindered by obstacles (red diamonds). Each obstacle has a cause (the condition, a capital letter): when the condition is true, the obstacle will trigger (and make the task fail) with the probability written in red; if the condition is false, it might still trigger, but with the probability written in green. Note that the classical case is "1/0". Note also that these probabilities might be bypassed by the narrative engine, for narrative reasons.
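This triggering rule can be sketched in a few lines. The function and probability values below are illustrative, not IDtension's actual implementation:

```python
import random

# An obstacle fires with one probability when its condition is true and
# another when it is false; the classical case "1/0" is deterministic.
def obstacle_triggers(condition, p_true=1.0, p_false=0.0, rng=random):
    p = p_true if condition else p_false
    return rng.random() < p

# With the classical 1/0 setting the outcome depends only on the condition:
print(obstacle_triggers(True))   # True
print(obstacle_triggers(False))  # False
```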

Values are represented by the scale at the left (there could be several values). Some tasks are more or less attached to values, always negatively, meaning that the task violates the value.

There are other data represented in the drawing above that will not be detailed here, because they are only useful for advanced AUT.

There is also a limited amount of data that is not represented in the drawing.

Step 5: program the initial story

First ENG initializes a new story.

ENG, assisted by AUT, enters the content into the system, which consists of:

  1. an XML file describing the structure. It is equivalent to the drawing mentioned above (but much less readable!). Examples (translated to English for readability):

For a goal:

    <!--*** Have food ***-->

This XML declaration of a goal specifies the name of the goal, its general importance for the character who decides to pursue it, the various interests of each character in that goal (if they are not the actor of the goal), and the fact that the goal is recurrent (once it is reached, it will come up again after a certain delay).
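Since the XML itself is omitted above, here is a hypothetical reconstruction of what such a goal declaration might look like, based only on the fields the paragraph lists. All element and attribute names are invented, not IDtension's actual schema:

```xml
<!--*** Have food ***-->
<goal name="have_food" importance="0.8" recurrent="true" delay="10">
  <!-- interest of non-actor characters in this goal (hypothetical) -->
  <interest character="Wolf" value="0.9"/>
  <interest character="Red"  value="0.3"/>
</goal>
```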

For a task:


This specifies the name of the task, its parameters, its obstacles, how the obstacles’ parameters are matched to the task’s parameters, who initially knows of the existence of the task (starter), its preconditions (beyond having the corresponding goal in mind), its consequences, the fact that other characters in the scene see the consequence of the task, the rules to attach (and instantiate) the task to the goal, and finally the goal targeted by the task.

The structure also contains the description of obstacles, objects, characters.


  2. complete the spreadsheet containing all text information, for text generation. Example:


    Eat [victim]
    eat me
    eat you
    eat you
    eating [victim]
    eating me
    eating you
    eating you



  3. modifying other parameterization files, if needed (for example to add an introduction, change the menu labels, etc.)


Then, let’s run the story!

After correcting some authoring bugs, one should have the initial story. OK, it is basic, but it is the starting point. Now AUT has a better understanding of the creation process, and can write further.


Step 6: content entering

Steps 3, 4 and 5 can now be repeated. There are many features in IDtension, which requires ENG to be part of the process until a detailed manual is written.