Character Design
Decision-theoretic, goal-based agents are used to model each character in the story, with the character's motivations encoded as the agent's goals. Each agent has multiple, potentially competing goals, e.g. keeping itself safe vs. keeping others safe, which can differ in relative importance. Thespian agents also hold recursive beliefs about themselves and others, e.g. my belief about your belief about my goals, which form a "Theory of Mind".
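The sketch below (in Python, using assumed names rather than the actual Thespian/PsychSim API) illustrates this structure: each agent carries a set of weighted goals and nested Agent models of other characters, so that beliefs can recurse.

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    # Goal name -> relative importance; multiple, competing goals coexist here.
    goals: dict = field(default_factory=dict)
    # Recursive beliefs: e.g. red.beliefs["Wolf"].beliefs["Red"] is Red as
    # imagined by the Wolf as imagined by Red ("my belief about your belief").
    beliefs: dict = field(default_factory=dict)

red = Agent("Red", goals={"keep_self_safe": 0.6, "keep_others_safe": 0.4})
red.beliefs["Wolf"] = Agent("Wolf", goals={"eat_people": 1.0})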
The "Theory of Mind" capacity enables the agents to reason about others when making their own decision, and thus makes them "social characters". To make the agents’ behaviors more human-like and socially aware, Thespian models social normative behaviors and emotion. By default, Thespian agents act following norms, unless they have other more impressing goals.
When deciding what to do, the agents use a bounded lookahead policy. They project a limited number of steps into the future, considering not only their own actions but also other characters' responses, predicted with their mental models of those characters, and their own responses in return. The agents then choose the action with the highest expected reward. Thus, they act both true to their motivations and in reaction to the current state of the interaction.
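A minimal sketch of such a bounded lookahead, reusing the Agent and reward() sketches above; actions_of and apply_action are hypothetical domain helpers (one enumerates a character's available actions, the other applies an action to the state), and the rollout below is a simplification rather than the actual policy.

def predict_response(other, state, actions_of):
    # The other character's response, predicted with *my mental model* of them.
    return max(actions_of(other, state), key=lambda a: reward(other, a))

def choose_action(agent, state, actions_of, apply_action, horizon=2):
    def rollout(state, my_action, steps):
        # One round: my action, then the predicted responses of everyone I model.
        s = apply_action(state, my_action)
        for other in agent.beliefs.values():
            s = apply_action(s, predict_response(other, s, actions_of))
        if steps == 0:
            return reward(agent, my_action)
        return reward(agent, my_action) + max(
            rollout(s, a, steps - 1) for a in actions_of(agent, s))
    # Pick the action whose bounded projection yields the highest expected reward.
    return max(actions_of(agent, state), key=lambda a: rollout(state, a, horizon - 1))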
For example, in the scenario shown above, the wolf will react to Red differently depending on whether somebody else is close by, and on who that is. The wolf will choose different actions when the hunter is near than when the woodcutter is near, because it has different mental models of these two characters.
The user is also modeled as a Thespian agent, based on the character whose role the user plays. This model considers not only the goals of the user's character but also goals associated with game play. It allows the other agents to form mental models of the user in the same way as of any other character, and allows the director agent to reason about the user's beliefs and experience.
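Continuing the earlier sketches, a user model might then look like an ordinary Agent whose goals mix the character's story goals with game-play goals; the specific goals and weights here are assumptions for illustration.

user = Agent("Red (user)", goals={
    "keep_self_safe": 0.4,      # goal of the character Red
    "keep_granny_safe": 0.3,    # goal of the character Red
    "explore_the_woods": 0.2,   # game-play goal, e.g. curiosity
    "advance_the_story": 0.1,   # game-play goal, e.g. making progress
})
# Other characters (and the director) hold this model in their beliefs, so they
# reason about the user exactly as they do about any other character.
wolf = Agent("Wolf", goals={"eat_people": 1.0})
wolf.beliefs["Red"] = user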
Plot Design
Thespian provides a proactive directorial control approach, which coordinates the agents' behaviors, without breaking their motivations, in order to achieve the author's desired effects. Its director agent projects into the future to detect potential violations of the author's directorial goals, which are expressed as partial-order or temporal constraints on key events in the story. An event can be either an action by a character or a state of a character, including the user. An example set of directorial goals is given below.
orders = [["Wolf knows Granny's location", "Wolf eats Granny"], ["Wolf eats Granny", "Wolf eats Red"]]
earlierThan = ["Wolf knows Granny's location", 10]
laterThan2 = ["Wolf knows Granny's location", 10, "Wolf eats Granny"]
earlierThan2 = ["Wolf eats Granny", 3, "Wolf eats Red"]
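The sketch below shows one way the director might test a projected event sequence against such goals. It assumes that each pair in orders must occur in that relative order and that an earlierThan goal of the form [event, step] requires the event to occur by the given step; the exact semantics of these constraints in Thespian may differ.

def violates(timeline, orders, earlier_than):
    # timeline: event names in the order they are projected to occur.
    step = {event: i for i, event in enumerate(timeline)}
    for first, second in orders:
        if first in step and second in step and step[first] > step[second]:
            return True  # key events projected in the wrong relative order
    event, deadline = earlier_than
    if event not in step or step[event] > deadline:
        return True      # the event is projected too late, or not at all
    return False

projected = ["Wolf knows Granny's location", "Wolf eats Granny", "Wolf eats Red"]
print(violates(projected,
               orders=[["Wolf knows Granny's location", "Wolf eats Granny"],
                       ["Wolf eats Granny", "Wolf eats Red"]],
               earlier_than=["Wolf knows Granny's location", 10]))  # -> False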
If the director agent detects a potential violation, it explores alternative ways of adjusting the characters' behavior to reach the directorial goals. It takes a least-commitment approach to coordinating the agent-characters. During the interaction, the director agent maintains a space of character configurations consistent with the characters' prior behavior. Each of these configurations is equally valid, in the sense that all of them would drive the character to act in exactly the same way up to the current point of the interaction. When the director predicts a future violation of the plot design, it constrains that space so that the remaining configurations drive the agent to act in a way that eliminates the violation. In this way, from the user's perspective the characters are always well-motivated, and the user can interact freely with them.
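As a rough sketch, with assumed names rather than Thespian's implementation, this least-commitment filtering can be pictured as keeping only the goal configurations that reproduce the character's behavior so far and, when a violation is predicted, further dropping those whose projected behavior would cause it.

def consistent_with_history(config, history, choose):
    # Keep a configuration only if it would have produced exactly the observed
    # action at every past decision point, so behavior so far is unchanged.
    return all(choose(config, state) == action for state, action in history)

def constrain_configurations(configs, history, choose, would_violate):
    candidates = [c for c in configs if consistent_with_history(c, history, choose)]
    # Of the equally valid candidates, keep only those whose projected future
    # behavior avoids the predicted violation of the plot design.
    return [c for c in candidates if not would_violate(c)]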