In my last entry, I began exploring what makes an avatar believable, considering abstraction as one piece of the equation. The responses I received to that post encouraged me to dive deeper into the subject, and I pulled out some research I did on believable non-player characters in computer game and simulation environments that I think applies here. My thinking now is that there is a threshold of believability, and therein lies the key; abstraction and concreteness are secondary issues. So what is the threshold of believability, and how do we create avatars that pass it? Ultimately, it’s important to realize that if avatars are to affect the behavior of the people who drive them, how avatars interact with other avatars becomes important, and we can greatly enhance the virtual environment by populating it with autonomous (non-player) avatars. So I’ll speak to that here as well.
While creating avatars that pass a requisite threshold of believability presents multiple design and implementation hurdles, it is important to keep in mind that the threshold is one of subjective perception rather than objective reality. Pimentel and Texeira (1993) observe that created avatars do not have to look like actual people in the physical world; rather, the idea is to achieve just enough realism that disbelief can be suspended for a period of time. They state, “This is the same mental shift that happens when you get wrapped up in a good novel” (p. 15). Loyall (1997) states, “Believability is similar to the intuitive notion of adequate characters in traditional non-interactive media such as animated films or books. In these traditional media, characters are adequate if they permit viewers to suspend their disbelief” (p. 113).
Reaching the threshold of believability depends upon several factors: the avatar’s behavior should appear independent of external directives (i.e., a non-player avatar should not obviously be programmed), the avatar should be predictably rational (or justifiably irrational, as appropriate), and the avatar should be able to communicate naturally with other avatars. Taken in combination, these factors establish intelligent behavior as a foundation for believability. In addition to these behavioral characteristics, there are physical characteristics of believability such as avatar appearance (including the level of animation realism within the simulated environment) and the quality of voice synthesis (if voice synthesis is used rather than a text-based or live-voice communication system), as well as any cultural characteristics unique to the avatars within the context of the simulation scenario.
An initial review of the literature suggests an innovative approach to modeling avatar behavior. In her 1998 book Affective Computing, Rosalind Picard of the MIT Media Laboratory states, “The evidence is mounting for an essential role of emotions in basic rational and intelligent behavior. Emotions not only contribute to a richer quality of interaction, but they directly impact a person’s ability to interact in an intelligent way. Emotional skills, especially the ability to recognize and express emotions, are essential for natural communications with humans” (p. 2).
Picard goes on to develop a framework she terms “affective computing”; that is, a form of computing that relates to, derives from, or otherwise seeks to deliberately influence the emotional state of the user. In creating a system by which avatars may interact with users within certain emotional contexts, we address a critical component of the problem of making avatars “personalized, intelligent, believable, and engaging” (p. 184). Loyall asserts that an avatar’s ability to problem-solve intelligently and competently is not as important as whether the avatar is “responsive, emotional, social, and in some sense complete” (p. 113). As described by Picard, there are five emotion components of a completely affective computing system:
- Emotional behavior;
- Fast primary emotions;
- Cognitively generated emotions;
- Emotional experience: cognitive awareness, physiological awareness, and subjective feelings;
- Body-mind interactions.
The third emotion component, cognitively generated emotions, is especially useful within the context of affective computing. Current production and experimental systems (including several popular computer entertainment RPGs and simulations) are based upon models that synthesize non-player avatar emotions through cognitive mechanisms. Computational methods drawn from numerical analysis, database manipulation, and probability and statistics are well suited to the rule-based systems that are the most common functional inputs for cognitive emotion synthesis. It is therefore logical to conclude that emotion synthesis through “computationally friendly” cognitive mechanisms represents the best approach to implementing avatars capable of intelligent interaction with human-driven avatars. Specifically, two theoretical designs are germane to computationally facilitated emotion synthesis: the Ortony, Clore, and Collins (OCC) Cognitive Model and Roseman’s Cognitive Appraisal Model.
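To make the idea concrete, here is a toy sketch of the kind of rule-based, probability-weighted emotion synthesis described above. The situations, candidate emotions, and weights are all invented for illustration; a real system would derive them from the simulation scenario.

```python
import random

# Toy rule base: each situation maps to candidate emotions with probabilities.
# These rules and weights are made up purely to illustrate the mechanism.
RULES = {
    "player_attacks_ally": [("anger", 0.7), ("fear", 0.3)],
    "player_gives_gift":   [("gratitude", 0.8), ("suspicion", 0.2)],
}

def synthesize_emotion(situation: str, rng: random.Random) -> str:
    """Sample an emotion for the matching rule, weighted by probability."""
    emotions, weights = zip(*RULES[situation])
    return rng.choices(emotions, weights=weights, k=1)[0]
```

The probabilistic choice keeps a non-player avatar from reacting identically every time, which is one small step toward behavior that does not look obviously programmed.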
The original intent of Ortony, Clore, and Collins in publishing their 1988 book, The Cognitive Structure of Emotions, was to delineate a cognitive appraisal model of emotions. While they felt AI systems needed to be able to reason about emotions, they never contended that machines would come to have emotions, or would need to represent them programmatically. Perhaps ironically, their model is ideal for programmatic synthesis of emotion and for representing avatar emotional response; in fact, the OCC model is considered the standard for synthesizing emotional responses in computers (Picard, 1998). They proposed that three aspects of the world elicit either positive or negative emotional responses from people: events that are of concern to us, the actions of the individuals or entities we perceive to be responsible for those events, and objects in the world around us. This structure is the basis for a specification of 22 emotion types, along with a rule-based system used to generate them. Once an emotion appropriate to a situation or to a player action is synthesized, the non-player avatar can react believably to the emotional condition established.
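As a minimal sketch (nothing close to the full 22-type specification), the OCC model’s three-way split can be expressed as three appraisal functions: one for events, one for the actions of responsible agents, and one for objects. The emotion labels follow OCC terminology, but the signed numeric appraisal inputs are my own simplifying assumption for illustration.

```python
def appraise_event(desirability: float) -> str:
    """Event branch: positive desirability yields joy, negative yields distress."""
    return "joy" if desirability > 0 else "distress"

def appraise_action(praiseworthiness: float, self_agent: bool) -> str:
    """Agent branch: pride/shame toward one's own actions,
    admiration/reproach toward another agent's actions."""
    if self_agent:
        return "pride" if praiseworthiness > 0 else "shame"
    return "admiration" if praiseworthiness > 0 else "reproach"

def appraise_object(appealingness: float) -> str:
    """Object branch: liking or disliking based on appeal."""
    return "liking" if appealingness > 0 else "disliking"
```

A non-player avatar would feed situation data into appraisals like these, then select animations, dialogue, or actions keyed to the resulting emotion label.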
Where does this all lead? I’m very interested in establishing what I call a “Believability Quotient” to measure avatar believability. I’m going to propose doing this with a Dungeons & Dragons-like point system in which avatar characteristics are listed and ranked. I’ll thank my friend David Arneson for leading me down that path. Look for that in the next blog post.
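As a teaser, here is a hypothetical sketch of what such a point system might compute. Every characteristic name and weight below is invented for illustration; the actual Believability Quotient will be the subject of the next post.

```python
# Hypothetical characteristic weights (invented for illustration), loosely
# mirroring the believability factors discussed above. Weights sum to 1.0.
WEIGHTS = {
    "behavioral_autonomy": 0.3,  # behavior not obviously scripted
    "rationality": 0.2,          # predictably rational, or justifiably irrational
    "communication": 0.2,        # natural communication with other avatars
    "appearance": 0.2,           # animation realism and visual quality
    "voice": 0.1,                # quality of voice synthesis, if used
}

def believability_quotient(scores: dict) -> float:
    """Combine 0-10 rankings per characteristic into a 0-100 score."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS) * 10
```

A perfect 10 on every characteristic yields a quotient of 100, and the weights make explicit which characteristics the scorer considers most important.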
Loyall, A. B. (1997). “Some Requirements and Approaches for Natural Language in a Believable Agent.” In Trappl, R. & Petta, P. (Eds.), Creating Personalities for Synthetic Actors: Towards Autonomous Personality Agents. Berlin: Springer.
Ortony, A., Clore, G. L., & Collins, A. (1988). The Cognitive Structure of Emotions. Cambridge: Cambridge University Press.
Picard, R. (1998). Affective Computing. Cambridge, MA: The MIT Press.
Pimentel, K. & Texeira, K. (1993). Virtual Reality: Through the New Looking-Glass. Intel/Windcrest McGraw Hill.