Enigma is an experimental platform for the collaborative authoring of the behaviour of autonomous virtual characters in interactive storytelling applications. It originated from our experience in creating character-driven interactive storytelling applications such as FearNot! in the EU FP6 project ECircus, which was done without any supporting authoring technology. The main idea is to overcome the knowledge-acquisition bottleneck of generative interactive storytelling systems through a combination of crowd-sourcing and machine learning.
A client application that runs directly in the browser allows contributors, invited by a principal author (PA), to create and tell a short story within a predefined story universe, e.g. the Little Red Riding Hood universe. Every story created in this client is submitted to a server, where the collected stories are processed by machine learning algorithms to infer generative character models that can drive a virtual actor in an interactive drama. These virtual characters can also run in the background while a story is being created in the client, which allows them to make suggestions at certain points in the story (more about this in  and ).
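The story representation and submission step described above can be sketched as a minimal data model. All class and field names here are assumptions for illustration; the actual Enigma schema is not specified in this section.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical data model: a story is an ordered sequence of events,
# each naming an acting character, an action, and optional targets.
# Field names are illustrative, not the actual Enigma schema.
@dataclass
class StoryEvent:
    actor: str
    action: str
    targets: list = field(default_factory=list)

@dataclass
class Story:
    universe: str        # e.g. the predefined story universe's name
    contributor: str     # the invited contributor who tells the story
    events: list = field(default_factory=list)

    def to_json(self) -> str:
        """Serialise the story for submission to the collection server."""
        return json.dumps(asdict(self))

story = Story(universe="Little Red Riding Hood", contributor="alice")
story.events.append(StoryEvent("Wolf", "greet", ["Red Riding Hood"]))
story.events.append(StoryEvent("Red Riding Hood", "tell-destination", ["Wolf"]))

payload = story.to_json()  # this JSON would be POSTed to the server
```

Collecting many such serialised stories on the server side gives the machine learning algorithms a uniform corpus of symbolic event sequences to learn character models from.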
User-friendliness of the authoring client is essential if many contributors are to be mobilised. Although the end goal is the creation of a database of symbolic story knowledge, we assume that providing this knowledge explicitly and implicitly in the process of telling stories will be easier for invited contributors than editing it directly. We still have to verify this hypothesis through a series of experiments, however.
For the same reasons (user-friendliness and appeal) we decided against a purely text-based tool and opted instead for visualisation through a comics generation system . We originally planned to use a 3D graphics authoring interface, but later found the comics system to be a better choice. It allows a thin authoring client (the comics are generated on the server side, and a URL to the generated picture is returned) that can be easily distributed. Comics also give us a natural way to represent the event sequences we need in the authoring tool: with the way they are structured, one panel corresponds to one event. Finally, this form of visualisation allows for uncomplicated graphics content generation independent of any specific tools (characters are represented by a series of annotated head and body pictures; scenes are simply panoramic pictures). Below is a screenshot of the client's main editing window:
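The one-panel-per-event mapping can be sketched as follows. The field names, the default scene, and the caption format are assumptions for illustration; the section does not specify the actual panel description format the client sends.

```python
# One panel = one event: a minimal sketch of how a thin client could
# turn a story's event list into panel descriptions for the rendering
# server. All dictionary keys here are assumptions, not the real API.

def events_to_panels(events):
    """Map each story event to one comic panel description."""
    return [
        {
            "panel": i + 1,                       # panel order = event order
            "scene": e.get("scene", "forest"),    # panoramic background picture
            "characters": [e["actor"]] + e.get("targets", []),
            "caption": f'{e["actor"]} {e["action"]}',
        }
        for i, e in enumerate(events)
    ]

events = [
    {"actor": "Wolf", "action": "greets Red Riding Hood",
     "targets": ["Red Riding Hood"], "scene": "forest path"},
    {"actor": "Grandmother", "action": "opens the door", "scene": "cottage"},
]
panels = events_to_panels(events)
# The client would submit these descriptions to the server and receive
# back a URL to the generated comic picture.
```

Because the client only builds and submits such descriptions while the server does the rendering, the client itself stays thin and easy to distribute.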