Redcap

The Virtual Storyteller is a story generator based on principles of emergent narrative [1]. There is no predetermined plot; autonomous Character Agents are used to simulate the ‘lives’ of characters in a story world. This yields a particular event sequence (the fabula) that can then be used as a basis for generating and presenting a narrative text. A Character Agent can be replaced by an interface for human participation.

Architecture of the Virtual Storyteller

One of the goals of the Virtual Storyteller project is to serve as an experiment in emergent narrative authoring [2]. Authoring here means writing character models and supplying specific actions, goals etc. for a particular story ‘world’. We investigate how an author may think and work to end up with the content and processes that make up such a world.

What is particular about the Virtual Storyteller in comparison to other character-centric approaches is that it also pays attention to the role that virtual characters can play as "drama managers" of their own stories, inspired by improvisational acting [3]. To this end, the Virtual Storyteller supports several out-of-character mechanisms (i.e., mechanisms at the story level). Most notably, characters in the Virtual Storyteller can justify the adoption of new character goals and support the creation of plans for these goals, by selecting story world events (which are unintentional at the character level) and by filling in details of the story world setting during execution [4].

The Virtual Storyteller source code is available from SourceForge. More information can be found on the Virtual Storyteller homepage.

Creating an interactive story about Little Red Riding Hood (LRRH) with The Virtual Storyteller means two things:

  1. Letting go of the idea of reproducing the original story.
  2. Trying to implement some of the physical and psychological processes that can plausibly be believed to drive the LRRH world and its characters.

The following describes the small story world that was created for the LRRH authoring workshop at ICIDS ’08. We followed an iterative authoring cycle, using the LRRH story as inspiration for the authoring decisions made along the way. The idea is also to use feedback from the simulation as further inspiration for new content.

We created three character agents to play the roles of Little Red Riding Hood (Red), Grandma, and Wolf. To enable a tight feedback loop between authoring and simulation outcomes, we started with a simple and minimal design of a story world (e.g., one goal and actions that can achieve it, so that the planner can turn these actions into a goal plan). The initial setup of the story world contained the goal (and accompanying actions) to bring a cake to Grandma. To enable the expression of this goal, a small story world geography was authored (a path from Red’s house to the forest, and from the forest to Grandma’s house) along with some actions (to skip from one location to another, and to give something to someone). This world is depicted here (images created by hand):

[Figure: hand-drawn map of the story world, showing Red’s house, the forest, and Grandma’s house]

The following story world was created through a series of authoring cycles; the creation process took approximately 15 hours of work.
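To make the schema style concrete early on, the initial goal (Red wants Grandma to have the cake) might be written along the following lines. This is an illustrative sketch only: the actual goal schema syntax of the Virtual Storyteller may differ, and the goal_schema predicate, its fields, and the red:'BringCakeTo' type are assumptions modeled on the Prolog action schema shown at the end of this page.

  % Illustrative sketch of a goal schema (syntax is an assumption,
  % modeled on the action_schema format shown at the end of this page)
  goal_schema([
      type(red:'BringCakeTo'),
      arguments([agens(Agens), patiens(Patiens), target(Target)]),
      % The goal can be adopted while Agens still has the cake...
      preconditions([
          condition(true, [
              fact(Agens, swc:has, Patiens)
          ])
      ]),
      % ...and is achieved once Target (Grandma) has it
      success_conditions([
          condition(true, [
              fact(Target, swc:has, Patiens)
          ])
      ])
  ]).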

First cycle

A predictable story "emerged" during simulation: Red skipped to the forest, then to Grandma’s house, and gave her the cake. We chose to author content so that Grandma would eat the cake, and also wanted to enable Wolf to eat the cake, by stealing it. So we added a goal to eat something, and an action to take something from someone else. To enable justification of this goal, we added an event to become hungry.
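The event to become hungry could be sketched in the same schema style as the actions. Again, the event_schema predicate and the red:hungry attribute are assumptions for this sketch, modeled on the action schema shown at the end of this page:

  % Illustrative sketch of an event schema (syntax is an assumption)
  event_schema([
      type(red:'BecomeHungry'),
      arguments([patiens(Patiens)]),
      duration(1),
      preconditions([
          % Any character that is not yet hungry can become hungry
          condition(true, [
              rule(Patiens, owlr:typeOrSubType, swc:'Character')
          ]),
          condition(false, [
              fact(Patiens, swc:hasAttribute, red:hungry)
          ])
      ]),
      effects([
          condition(true, [
              fact(Patiens, swc:hasAttribute, red:hungry)
          ])
      ])
  ]).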

Second cycle

In the simulation, both Grandma and Wolf could not adopt any goals: for them, the preconditions of the available goals did not match the initial state of the story world. So they attempted to justify goals (out-of-character) by selecting the event to become hungry. On her way to Grandma, Red met Wolf, who had adopted the goal to eat something and took the cake from Red. However, since Red still wanted to bring the cake to Grandma, she took the cake back from Wolf. Wolf still wanted to eat the cake, so he took it from Red again. This "cake fight" continued until Red skipped to Grandma’s house before Wolf managed to take the cake from her. Then a similar situation occurred with Grandma, who did not know that Red was going to give her the cake. Because she was hungry, Grandma took the cake from Red. This also happened to satisfy Red’s goal that Grandma have the cake. Now Red had no more goals to adopt, so she selected the event to become hungry and adopted the goal to eat something. This caused Red to take the cake back from Grandma, and eat it herself.

 

Some of this behaviour was expected, some was surprising and inspiring (e.g., Red could be an assertive girl), and some was undesired, resulting from an underspecification of the domain: only mean persons take something from someone when it does not belong to them; nice people ask. Furthermore, if the cake can be taken away from Red without the possibility of taking it back (e.g., if Red is not mean), a response to this event is needed. We chose to have Red cry, as a plausible dramatic response for little girls.

So in the implementation phase, we added a framing operator that can establish that a character is mean; we further decided that only one character in the story world can be mean. We added a "cry" reactive action, triggered when someone takes something from a character without the character’s consent. We constrained the trigger of this reactive action so that it applies only to little girls (although it would be fun if the wolf also cried when someone stole something from him).
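Such a framing operator might look as follows. This is a sketch under assumptions: the framing_schema predicate and its fields are illustrative, modeled on the action schema at the end of this page; only the red:mean attribute itself appears in the actual action example below.

  % Illustrative sketch of a framing operator (syntax is an assumption)
  framing_schema([
      type(red:'BecomeMean'),
      arguments([agens(Agens)]),
      preconditions([
          condition(true, [
              rule(Agens, owlr:typeOrSubType, swc:'Character')
          ]),
          % Only one character in the story world may be mean
          condition(false, [
              fact(_, swc:hasAttribute, red:mean)
          ])
      ]),
      effects([
          condition(true, [
              fact(Agens, swc:hasAttribute, red:mean)
          ])
      ])
  ]).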
 
Third cycle

In the simulation, Wolf framed himself to be mean (out-of-character, using the framing operator), so that he could plan to take the cake away from Red. After this mean action by Wolf, Red started crying. This was as expected. As authors, we then considered how the simulation might continue from this point. Red might go to Grandma to seek support. In revenge, Grandma might poison a cake and feed it to Wolf. This might also open up the possibility that Red herself poisons the cake in an attempt to poison Grandma, in case Red happens to be the mean character (one idea was to add a goal specifying that mean characters try to poison others). Wolf might be allowed to satisfy his hunger by eating Grandma, or by following Red and eating them both. Red might be given the option to be distrustful and avoid interaction with Wolf.

We implemented only one of these ideas. We added a goal to seek support, along with speech actions for Red to tell Grandma what had happened and to ask her what to do. We also added a goal for Grandma to avenge her granddaughter by poisoning the wrongdoer, and the actions needed to form a plan for this goal. Grandma baked a cake, poisoned it, went up to the wolf and gave him the cake. The wolf, being hungry again, ate it and died. Red and Grandma lived happily ever after.

As with many other generative approaches to interactive storytelling, the stories that can be experienced or told with The Virtual Storyteller are part human-authored, part system-determined. Especially if stories are to emerge, it is impossible to determine beforehand which content and processes to write in order to arrive at a satisfactory story domain.

Rather, we conceive of the creation process in The Virtual Storyteller as a continuous cycle of writing content, seeing what the system makes of this content, and coming up with new ideas. An authoring paradigm that has proven useful is not only to correct the system when it does not do what was expected (’debugging’), but also to accept that the system may take the space of potential stories in directions not initially considered (’co-creation’) [1].

In this process, we consider it important to maintain a tight feedback loop between the steps, making small, incremental changes to the content. Furthermore, it is important to actively consider whether certain content can be reused in other situations. This helps increase the density of the space of potential stories.

Content to be created

What needs to be created for a particular story domain? Let’s consider the simple case in which we re-use domain-independent parts of the character models (such as event appraisal and action planning) for a new story domain. In this case, authoring consists of a set of knowledge representations:

Each type of knowledge is listed below, with an explanation and an example from the LRRH domain.

Setting: Facts about the story world (e.g., a topology, the locations of characters and objects). Example: the Wolf is in the forest; Red has a birthday cake.

Ontology: Gives semantics to these facts; created in the Protégé tool [2]. Example: a forest is a type of location.

Threads: Characters and their initial goals, arranged for a specific purpose (e.g., to achieve a conflict). Example: Wolf, Red and Grandma; Red wants to bring a cake to Grandma.

Goals: What does a character want to achieve, under which circumstances is this possible, and how important is it to achieve? Examples: bring a cake to Grandma, eat something, seek revenge.

Goal selection rules: When is a character motivated or caused to adopt a goal? Example: if you are hungry, eat something.

Actions: What can a character do, under which circumstances is this possible, and what are the effects? Examples: skip to somewhere, eat, steal something, cry.

Action selection rules: In which circumstances will a character select a certain action that was not planned? Example: if you meet someone you haven’t met before, greet them.

Events: What can happen unintentionally (e.g., a car accident or dropping a vase), and under which circumstances is this possible? Example: become hungry.

Expectations: What can a character reasonably expect to happen as a consequence of some event or action? Example: if someone is offered a cake, they can be expected to eat it.

Beliefs: What possible inferences can be made given a certain context? Example: if someone does not greet you back, they do not like you.

Framing operators: What aspects of the setting can be retroactively defined in service of the story? Example: the Wolf in LRRH decides halfway through that he is a mean character.
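As an illustration, the goal selection rule "if you are hungry, eat something" might be written along these lines. The goal_selection_rule predicate and its fields are assumptions, modeled on the Prolog schema style used elsewhere on this page:

  % Illustrative sketch of a goal selection rule (syntax is an assumption)
  goal_selection_rule([
      goal(red:'EatSomething'),
      arguments([agens(Agens)]),
      % A hungry character is motivated to adopt the goal to eat
      conditions([
          condition(true, [
              fact(Agens, swc:hasAttribute, red:hungry)
          ])
      ])
  ]).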

 

This knowledge is currently created as a combination of RDF/Turtle syntax (the setting), RDF/OWL knowledge edited in the Protégé tool (the ontology), and Prolog files (goals, actions, etc.). Eventually we would like to have a tool that supports authoring, for instance by presenting pre-defined fields, checking syntax and semantics, organising the content visually, and checking the interrelatedness of content (e.g., finding actions that can be strung together in a plan, but also goals that can never occur).

 

Example setting

Example of the setting of the LRRH world (in RDF/Turtle):

# Little Red Riding Hood
:red
    a   red:LittleGirl ;
    swc:at      :reds_house ; 
    swc:has     :birthday_cake ;
    swc:owns    :birthday_cake ;
    rdfs:label  "Little Red Riding Hood" .

# The forest   
:forest
    a   red:Forest ;
    rdfs:label  "the forest" .

# Path from Red’s house to forest
:forest_path1a
    a   swc:Path ;
    swc:from    :reds_house ;
    swc:to      :forest ;
    rdfs:label  "the path leading to the forest" .

 

Example action

Example action for taking something from someone (in Prolog): 

  action_schema([
    type(red:'TakeFrom'),
    arguments([agens(Agens), patiens(Patiens), target(Target)]),
    duration(1),
    preconditions([
        % Two different characters and a location
        condition(true, [
            rule(Agens, owlr:isNot, Target),
            rule(Agens, owlr:typeOrSubType, swc:'Character'),
            rule(Target, owlr:typeOrSubType, swc:'Character'),
            rule(Loc, owlr:typeOrSubType, swc:'Location')
        ]), 
        % Agens (who executes the action) is mean
        condition(true, [
            fact(Agens, swc:hasAttribute, red:mean)
        ]),       
        % Target has the thing
        condition(true, [
            fact(Target, swc:has, Patiens)
        ]),
        % Agens does not own the thing (otherwise, this is TakeBack)
        condition(false, [
            fact(Agens, swc:owns, Patiens)
        ]),
        % At the same location
        condition(true, [
            fact(Agens, swc:at, Loc),
            fact(Target, swc:at, Loc)           
        ])    
    ]),
    effects([
        % Agens has the thing
        condition(true, [
            fact(Agens, swc:has, Patiens)
         ]),
         % Target no longer has the thing
         condition(false, [
            fact(Target, swc:has, Patiens)
         ])
    ])
  ]).