I’m currently working on modeling stories through Answer Set Programming. My last research post was about using retroactive continuity in storytelling as a rationalization mechanism (as exemplified by Abelson’s Goldwater Machine and my adviser’s Terminal Time) for story explanation. As more work goes into using logic programming to represent stories and characters, there are snags along with moments of small, novel discovery. Yesterday, Adam Smith helped me work out a few snags in the event calculus for this story system. Overall, I want to anecdotally describe what working with believability in technology and expressive intelligence is like, along with giving some insights on formal models of story.
Earlier today, I was walking to the bus stop for a ride to work. As I walked up the sidewalk, I was audibly grooving to myself, “I’m into nuggets y’all,” finger snapping and head bobbing to the groove in my imagination. “What strange emergent behavior,” I thought to myself. Even though there was no one there to notice, I laughed to myself, “what would God or some other omniscient being be thinking of these things we do in secret?” I suppose if there were some higher order of consciousness, they’d be thinking that it was an adorable, yet very human, thing to do. Perhaps these gods would be more amused by the things we do when we aren’t trying to make sense than by the things we do when we are.
Since the Intelligent Narratives Workshop at the Foundations of Digital Games conference, I’ve been thinking hard about how we represent the requirement of “making sense” for believability. In our story space, we can create characters with traits, roles, and the potential to perform actions. We then create a dictionary of actions and rules/consequences of these actions. A major issue in our first system was the incoherence of why the characters choose which actions to perform. Once we’ve declared that there exist human-like characters in this story, then it is expected that these characters act along the lines of:
1. they abide by common human behavior
2. their behavior results in dramatically compelling events
3. these events are disclosed as if being presented by a great storyteller.
All three of these are separate approaches to generating stories (character-goal driven, story-goal driven, and author/audience-goal driven). As a research problem, I find that story and discourse should be investigated separately, even though they seem inseparable in practice. With this in mind, 1 and 2 are the current primary pursuits of this story generator. (In regards to 3, Peter, another lab mate, sent me this interesting link on fiction conventions.)
Adam, more seasoned in logic programming, had suggested early on that I build a variety of small systems, while I am prone to build the one system that does everything. You can imagine the snags I find myself in. Here’s a great pragmatic programming guide for Answer Set Programming that addresses the aforementioned snags.
One of the high-level snags in the story generator is how characters behave. They do things that may not break the rules of the defined actions, but they do them in ways that are uninteresting or unconventional.
So, yesterday, instead of refactoring the current system, Adam recomposed a simpler version of this storyteller to address issues of “making sense.” These smaller systems can be built to capture the essence of small story vignettes. For instance, there is a story space involving two characters, Michael and Noah, where Noah is a vendor who has 1 coffee and Michael has 1 unit of cash. We recognize the pattern of selling to occur when an item is exchanged (or ptrans-ed) from a vendor. We, of course, want to see Noah sell Michael a cup of coffee. This story is possible within 1 timestep, and we designate an occurrence of selling whenever money is given to a vendor and another item is given back. Without further constraints, the story generator gives a number of stories such as “Michael gives Noah cash for nothing in return,” “Noah gives Michael coffee for free,” or, perhaps, “Noah gives Michael coffee that Michael then gives back to Noah,” none of which would qualify as selling.
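To give a flavor of what an encoding like this looks like, here is a minimal clingo-style sketch of the Michael/Noah vignette. This is not Adam’s actual code; every predicate name (`has`, `gives`, `sold`, and so on) is my own illustrative choice.

```prolog
#const horizon = 1.
character(michael).  character(noah).
vendor(noah).
has(noah, coffee, 0).
has(michael, cash, 0).

% At each step, a character holding an item may give it to someone else.
0 { gives(A, B, I, T) : character(B), A != B } 1 :- has(A, I, T), T < horizon.

% Effect and frame axioms: given items move; everything else persists.
has(B, I, T+1) :- gives(A, B, I, T).
gave_away(A, I, T) :- gives(A, B, I, T).
has(A, I, T+1) :- has(A, I, T), not gave_away(A, I, T), T < horizon.

% The "selling" pattern: cash goes to a vendor, a non-cash item comes back.
sold(V, C, I, T) :- gives(C, V, cash, T), gives(V, C, I, T), vendor(V), I != cash.

sale_happened :- sold(V, C, I, T).
% Uncomment to rule out the degenerate stories:
% :- not sale_happened.
```

Left unconstrained, the solver happily enumerates exactly the degenerate answer sets described above (cash for nothing, free coffee, and so on); the commented-out integrity constraint is what forces every answer set to contain an actual sale.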
As we went along we’d think of new story elements to create. Suppose we have three characters: Sherol, Adam, and Peter. In this case, Adam is the vendor who has coffee and both Sherol and Peter have cash. Now we introduce a new action called propel, which we designate to be the act of throwing an object at another character: a transfer of an object that no one receives. Wanting to create the experience of “premeditated attack,” we designate that if one character purchases a weapon (with cash) prior to the attack, then it was premeditated. For the weapon, we create a new object called a rock, and the rock is carried by Adam (who also has coffee). You can imagine how amusing the output becomes as we create these story elements to give a sense of coherency but allow the characters to act freely within the rule system.
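A sketch of how this larger vignette might be set up, assuming the same illustrative `has`/`gives`/`sold` predicates as in a selling encoding (again, these names are mine, not the system’s):

```prolog
#const horizon = 4.
character(sherol).  character(adam).  character(peter).
vendor(adam).
weapon(rock).
has(adam, coffee, 0).   has(adam, rock, 0).
has(sherol, cash, 0).   has(peter, cash, 0).

% propel: A throws item I at B. The object leaves A's hands,
% but no has/3 fact is added for B, so it is lost forever.
0 { propels(A, B, I, T) : character(B), A != B } 1 :- has(A, I, T), T < horizon.
gave_away(A, I, T) :- propels(A, B, I, T).

% "Premeditated attack": the attacker bought the weapon before throwing it.
premeditated_attack(A, B, T2) :-
    sold(Vendor, A, W, T1), weapon(W),
    propels(A, B, W, T2),
    T1 < T2.
```

The pattern rule at the bottom is where the designation happens: the solver doesn’t plan an attack; it simply labels any answer set in which a purchase happens to precede a throw.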
Within our output, we noticed that there were instances where one character would give another character the money to buy the rock that is then propelled. We designated this to be “paid assassination.” In another instance, we noticed that sometimes when one character threw something at another, the receiving character would throw something back. That was called “counter attack.” Imagine this: “Adam sells Sherol coffee. Adam then gives cash back to Sherol. Sherol uses the money to buy a rock from Adam. Sherol throws the rock at Peter. Peter throws cash at Sherol.” This story satisfies the expectations of: selling, premeditated attack, paid assassination, and counter attack.
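The two observed patterns can be designated the same way, assuming `gives(Giver, Receiver, Item, T)`, `sold(Vendor, Buyer, Item, T)`, and `propels(Thrower, Target, Item, T)` predicates (illustrative names):

```prolog
% "Paid assassination": someone else funds the attacker's weapon purchase.
paid_assassination(Payer, Attacker, Victim) :-
    gives(Payer, Attacker, cash, T1),
    sold(Vendor, Attacker, rock, T2),
    propels(Attacker, Victim, rock, T3),
    T1 < T2, T2 < T3.

% "Counter attack": the targeted character throws something back.
counter_attack(B, A, T2) :-
    propels(A, B, I1, T1),
    propels(B, A, I2, T2),
    T1 < T2.
```

Note that nothing here requires the thrown-back object to be a weapon, which is exactly why Peter throwing cash at Sherol still counts as a counter attack.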
You might ask, “well, why would Peter throw money?” First, because we hadn’t given these objects any value in relation to each other. Second, the act of throwing merely loses the object forever and doesn’t transfer it to Sherol. Regardless, in small rule sets these occurrences are easily handled, while in the bigger system, trying to reconcile or repair why Peter marries the hot dog can become a daunting challenge. By defining story patterns as we go, these higher-level actions (such as selling or counter-attack) help to make sense of a character’s behavior. This reminds me of Mark Riedl’s implementation of motivation through frames of intention in IPOCL. These story patterns can be used as (more informed) story-goal constraints, directing higher-level behavior built from primitive actions by requiring either the occurrence or the omission of these patterns.
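Requiring or omitting a pattern is a pair of one-line integrity constraints over the pattern predicates (sketched here with the same illustrative names as before):

```prolog
% Require that a premeditated attack occurs somewhere in the story...
attack_occurred :- premeditated_attack(A, B, T).
:- not attack_occurred.

% ...or instead forbid any counter attack from occurring.
:- counter_attack(B, A, T).
```

This is the appeal of the approach: the character-level choice rules stay untouched, and the story-goal is expressed purely as which answer sets survive.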
With 60 lines of story generation code, Adam was able to create a variety of short story vignettes with the ability to designate patterns of story.
As a researcher working on creating believable behavior, it’s my academic pursuit to recreate what we can already do with the benefits of technology. Answer set programming has turned out to be quite powerful in getting the job done, and I find that getting used to it is the greater challenge. Now that I’m beginning to understand how to use ASP, I’m able to experience more and more of these small novel discoveries. Watching characters interact with one another in a story space with no rule-sets would be nonsense; watching characters interact while abiding by the rules of primitive actions is somewhat amusing; but being able to detect and designate higher-order patterns makes it far more emotionally engaging.
As gods of our virtual simulations, we can watch the characters that we design (somewhat in our own image), in pursuit of believability, act on their own free will. Perhaps that’s one of the main reasons I love what I’m doing. I get to figure out new ways to recreate myself by shaping the rule-sets that determine free will for the characters in my story space. As I get deeper into building these models, I find myself more and more amused by the sorts of emergent behaviors that virtual characters are capable of, sometimes because their behavior makes so much sense and sometimes because it doesn’t.
“McNuggets McNuggets WHAT?!”