Category Archives: virtual worlds

Second Chance for Second Life?

by Hap Aziz

Over at The Chronicle of Higher Education website, Jeffrey R. Young has an article titled, “Remember Second Life? Its Fans Hope to Bring VR Back to the Classroom.” I do remember Second Life, and I actually used it in some college courses I taught about eight or nine years ago. It was primarily a tool for gathering with students for additional lecture time outside of the classroom, and often it was a combination of socializing and course content Q&A. Fortunately, my students were comfortable with technology (the course was on the subject of digital design); otherwise I would not have been able to provide the technical support to get them signed up, logged in, and comfortable in the environment. The technology is smoother now, but I still wouldn’t recommend it for students not confident in their online computing skills.

The history of Second Life is interesting in that it began as a possible game-world framework, but the development environment was so robust that SL morphed into an open-ended virtual space with no particular purpose. This was both its advantage and its curse, as enthusiastic users who saw potential in the technology worked at finding a purpose for the platform. Many higher education institutions acquired space in SL, and educators used it for lectures, office hours with remote students, and a variety of other activities somehow connected with learning. And while individual users may have designed unique personal avatars, the education spaces, for the most part, were representations of real campus locations (or at least could have been real). There are a number of reasons SL was unable to sustain itself at its heyday level of engagement, and Young explores them in his article in connection with the latest wave of Virtual Reality innovation. Second Life, in fact, is looking to ride the new VR wave with its Project Sansar (indeed, if you go to the SL site, you’ll see that you can explore SL with the Oculus Rift, which is a step in that direction).

Will the addition of 3D VR breathe new life into Second Life? As a technology, there is no question that VR has great novelty out of the gate. But I still believe that without some sort of meta-narrative point to drive engagement, SL could go through another bubble-burst cycle. By “meta-narrative,” I mean that Second Life itself needs to have a point, rather than offer itself up as an environment where users can do anything they want. Why enter a virtually real world to “just hang out and look around” when we can much more easily accomplish that in the really real world?

Filed under avatars, colleges and universities, education, emerging technologies, future technology, games, Hap Aziz, higher education, higher education institutions, holograms, narrative, simulation, technology, virtual classrooms, virtual college, virtual reality, virtual worlds

The Significance of Experiencing Learning

In a previous blog entry, I wrote about the future of education as depicted in Science Fiction, noting that even that genre does not often offer a vision of the learning enterprise. And when it does, the teaching and learning endeavor is most often portrayed as largely unchanged from the present-day approach. Yes, there are exceptions such as the direct-to-brain information downloading technique used for skills training in The Matrix, but that’s rare. (Hogwarts, from the fantasy world of the Harry Potter stories, is an absolute disaster as an education model.)

If we’re going to imagine the future, it is the direct-to-brain (d2b) downloading process that seems to be most interesting as a truly new education paradigm. Not only would it effectively address learning outcomes achievement, it would dramatically reduce the time required to acquire knowledge and master skills (at least as the fictional process is defined). To be sure, there are obvious technology hurdles to be overcome: creating the brain-machine interface and determining how to encode information so that it can be accessed through the standard memory recollection process are two of the more obvious challenges. But let’s say we crack the technology. Could people actually learn that way and ultimately retain what they learned?

To run through this thought experiment, it would be helpful to use a fictional model that defines the process and provides a framework for our assumptions. While the concept of digitally compressed information fed into the brain has been used several times in Science Fiction (Scalzi’s Old Man’s War series, Whedon’s Dollhouse, the Wachowskis’ Matrix trilogy), it is the Star Trek: The Next Generation episode “The Inner Light” that takes as its central theme the digital transfer of information and what actually takes place in the “learner’s” mind during the process.

Written by Morgan Gendel, “The Inner Light” is about remembering the experiences of a lifetime without having to live through that life in real time. Briefly, the technical scenario within the plot is this: an alien probe finds Captain Picard and creates a wireless link to his brain. Through the link, the probe downloads an entire lifetime’s worth of experiences into Picard’s brain. From his perspective, it is all completely real, and he thinks he is living that life: having children, learning to play the flute, suffering the death of his best friend, having grandchildren, and watching his wife grow old and eventually die. In real time, however, only 25 minutes have elapsed. When the download is complete and the link is broken, Picard discovers the entire life he lived was just an interactive simulation of experiences placed in his memory… and that he now knows how to play the flute as he learned it in his simulated life.

What interests me about this particular concept of d2b downloading is that it addresses the context of experience in memory. Whatever a person learns, whether it is the alphabet, discrete facts such as names or dates, complex lines of reasoning, or sequenced physical skills like playing the flute, the act of learning is wrapped in a broader experience of what the person was doing during the learning activity. How important is this, especially when it comes to having the learning “stick”?

In 1890, William James noted that human consciousness appeared to be continuous. John Dewey observed much the same thing, and in 1932 wrote:

As an individual passes from one situation to another, his world, his environment, expands or contracts. He does not find himself living in another world but in a different part or aspect of one and the same world. What he has learned in the way of knowledge and skill in one situation becomes an instrument of understanding and dealing effectively with the situations which follow. The process goes on as long as life and learning continue.

Dewey is telling us that learning is a continuum, and lessons learned (formal or not) become the foundation for lessons yet to be learned. Certainly this makes sense to us intuitively, and there is research indicating that pre-established schemas expedite memory consolidation in the brain. Which is a way of saying that we learn things more quickly if we already have a context for understanding what we’re learning.

But what are the implications for d2b learning as Picard experienced it? What Picard experienced, while not logically flowing from his past life (he was, after all, just “dropped” into a new life story), was a narrative built upon concepts he already understood: marriage, friendship, birth, death, and so on. And when he learned a particular skill, playing the flute, it made sense to him because he already knew what a flute was, what playing a flute involved, and so on. Nothing in the experience was so “alien” that it would not fit into the pre-existing schema he had been constructing since his own birth.

Perhaps more significant is that the skills that Picard learned had a subjective real-time element even though the simulation was digitally compressed. In Picard’s mind, he learned to play the flute because he actually practiced playing the flute, over years in subjective time. Therefore, when he picked up the flute in the real world, he was drawing on the memories of his experience of practice. It wasn’t that he just woke up with a new skill that came out of nowhere.

Interestingly, there is evidence that mental practice can improve real-world performance at activities such as sports or music. One study had participants mentally practice a sequence on an imaginary piano daily, and those participants displayed neurological changes similar to those seen in participants who practiced physically. It’s possible that mental practice and physical practice both activate the same brain regions involved in skill learning.

Experience, though, is multifaceted, and it is not simply a dispassionate sequence of events, recorded and played back in some documentary style. In learning, there is the idea of how engaged the learner is with the subject matter at hand, and again it doesn’t matter if the topic is the Pythagorean Theorem or Lord Byron’s poem “She Walks in Beauty.” Jennifer Fredricks talks about three types of engagement that may influence learning: cognitive engagement (what we are thinking about our learning), behavioral engagement (what we are doing while we’re learning), and emotional engagement (what our feelings are about our learning). It seems difficult to imagine that a simple d2b data dump would involve all three of those categories, unless the d2b transfer allowed a person to live what was being learned.

Admittedly, this is all conjecture over a Science Fiction idea, and for now, there is no way to run any actual tests. The potential of d2b learning is intriguing in that it may provide a solution for many of today’s education challenges, provided the technology is even possible. At the same time, it raises many questions regarding the true nature of the learning process. We are analog beings that make use of our senses in real time to learn from the world around us. If we somehow could bypass our senses and compress years of experience into minutes of transfer time, how would we interpret the experience? How would we remember what we learned, and what would those memories feel like to us? Based on what we know today, I’d say that learning is not possible without experience. Whether it is real or virtual may not matter, but without an experiential framework, transferred information is just noise without meaning.

Filed under consciousness, direct-to-brain, education, experience, future technology, Hap Aziz, learning, narrative, neuroscience, Science Fiction, simulation, Star Trek, technology, virtual identity, virtual worlds

Skylanders and Online Education: Flying the Friendly SkyEd Skies

by Martin LaGrow

Recently I posted a blog entry called Reimagining Online Education in which I proposed that academic institutions should emulate social media games and take learning management systems in a more interactive direction (https://hapaziz.wordpress.com/2011/12/19/reimagining-online-education/). After writing the post, I purchased the Activision game Skylanders: Spyro’s Adventure for my kids (OK, maybe for myself too), and quickly became enthralled. It is engaging, self-directed, and self-paced; it rewards mastery and progressively scaffolds on previous achievements: everything online education should be. It raises the question, “Are there elements of Skylanders game play that would translate to a new online learning environment?”

First, a little background on the game. Skylanders is a first of its kind: a toy-based role-playing game that works across multiple platforms, including an online component. What makes Skylanders unique are the “Portal of Power” and the action-figure-style characters that bring the game to life. The characters work through RFID; each one stores in its memory statistics, points earned, unlocked features, and so on. This enables the player to use the figurines on multiple systems, all while progressively advancing their statistics and abilities. A character can be reset at any time if the user wishes to start over.
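As a rough illustration only (not Activision’s actual data format, which I don’t know), the kind of persistent character record a figure might carry could look something like the sketch below; the field names, level-up threshold, and reset behavior are assumptions for the sake of the example.

```python
from dataclasses import dataclass, field

@dataclass
class SkylanderFigure:
    """Hypothetical sketch of the state a toy figure might persist via RFID."""
    name: str
    level: int = 1
    experience: int = 0
    unlocked_abilities: list[str] = field(default_factory=list)

    def gain_experience(self, points: int) -> None:
        """Add points and level up at an assumed threshold of level * 1000."""
        self.experience += points
        while self.experience >= self.level * 1000:
            self.experience -= self.level * 1000
            self.level += 1

    def reset(self) -> None:
        """Wipe progress so the figure can start over, as described above."""
        self.level, self.experience = 1, 0
        self.unlocked_abilities.clear()

# Because the record travels with the toy, any console or PC portal that
# reads the figure picks up where the last session left off.
spyro = SkylanderFigure(name="Spyro")
spyro.gain_experience(2500)
spyro.unlocked_abilities.append("Flame Breath")
```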

Aside from the portal and the characters, game play is very much like any other RPG. The game plays out in a structured order: users must complete one chapter before proceeding to the next (but can repeat chapters at any time). The player is accompanied by a number of additional characters that provide guidance and direction, even reminders if the user seems to lose focus on the objective of a challenge. And interaction involves more than just arcade-style action, though that is abundantly available; the user must complete several logic-based puzzles and solve problems along the way, which keeps the game mentally stimulating. Various tokens, gems, and rewards push the gamer to travel every path, seek out and defeat every challenge, and ultimately provide a sense of achievement by rewarding mastery.

Finally, there is an online interactive piece that is separate from console play. By plugging your portal into your Mac or PC, you can participate in a Sims-type world where you develop your own living space and interact with other Skylanders in real time. Challenges exist there as well, but the game play does not relate to the console version of the game.

The possibilities for leveraging this kind of interaction in online education are limitless. Imagine an online program where each course is a software “world,” accessible via game console or live online environment. Each course world consists of chapters combining content and application. Students demonstrate mastery of a level by completing quizzes and solving problems. Successes and achievements are stored locally in the student’s RFID-based avatar, which can be uploaded to a central system at regular intervals. Interaction and guidance are provided by guide characters. In an Algebra course: “I see you’re having trouble solving the equation. Why not try balancing before solving?” “It looks as though you’ve mastered slope/intercept. Would you like to practice again, or do you accept the final chapter challenge?”
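As a sketch of what that might involve (the record fields, endpoint, and sync interval are all hypothetical; no existing learning management system works this way as far as I know), the locally stored progress and its periodic upload to a central server could look something like this:

```python
import json
import urllib.request
from dataclasses import dataclass, field, asdict

@dataclass
class CourseProgress:
    """Hypothetical record a course world might keep on the student's avatar."""
    student_id: str
    course: str
    chapters_completed: list[str] = field(default_factory=list)
    quiz_scores: dict[str, float] = field(default_factory=dict)
    achievements: list[str] = field(default_factory=list)

def sync_to_lms(progress: CourseProgress, endpoint: str) -> int:
    """Upload locally stored progress to a central server (endpoint is a placeholder)."""
    payload = json.dumps(asdict(progress)).encode("utf-8")
    request = urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

# Record a completed chapter locally, then sync at the next regular interval.
record = CourseProgress(student_id="s12345", course="Algebra I")
record.chapters_completed.append("Chapter 3: Slope and Intercept")
record.quiz_scores["chapter-3-quiz"] = 0.92
# sync_to_lms(record, "https://lms.example.edu/api/progress")  # hypothetical endpoint
```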

For students requiring real interaction, an online commons area can provide as much or as little of it as they’d like. Different areas would be opened to students based on which courses they take. Instructors could host live office hours by meeting with students in pre-established areas of the commons.

Today’s students are accustomed to interacting in virtual worlds for recreation, and menu-and-text presentation does not easily engage them in course work. From Mario Brothers to Zelda, they are no strangers to spending hour after hour mastering skills and garnering achievements on a console or computer. Present learning management systems are not tapping into this intrinsic drive. When they do, you may see a whole new level of achievement and mastery from students who just don’t want to turn off their Algebra course and go to bed.

Filed under avatars, children, education, education technology, future technology, games, online education, simulation, technology, virtual worlds

Establishing Avatar Believability

by Hap Aziz

In my last entry, I began exploring the topic of what makes an avatar believable, and I considered abstraction as a piece of the equation. The responses I received to that post encouraged me to dive deeper into the subject, and I pulled out some research I did on believable non-player characters in computer game and simulation environments that I think is applicable to this discussion. My thinking now is that there is a threshold of believability, and therein lies the key; abstraction and concreteness are secondary issues. So what is the threshold of believability, and how do we create avatars that pass it? Ultimately, it’s important to realize that if avatars are to affect the behavior of the people who drive them, how avatars interact with other avatars becomes important, and we can greatly enhance the virtual environment by populating it with autonomous (non-player) avatars. So I’ll speak to that here as well.

While creating avatars that pass a requisite threshold of believability presents multiple design and implementation hurdles, it is important to keep in mind that the threshold is one of subjective perception rather than objective reality. Pimentel and Texeira (1993) observe that created avatars do not have to look like actual people in the physical world; rather, the idea is to achieve just enough realism that disbelief can be suspended for a period of time. They state, “This is the same mental shift that happens when you get wrapped up in a good novel” (p. 15). Loyall (1997) states, “Believability is similar to the intuitive notion of adequate characters in traditional non-interactive media such as animated films or books. In these traditional media, characters are adequate if they permit viewers to suspend their disbelief” (p. 113).

Reaching the threshold of believability depends upon several factors: the subjective perception that an avatar’s behavior is independent of external directives (i.e., the avatar should not obviously appear programmed if it is non-player driven), that the avatar is predictably rational (or justifiably irrational, as appropriate), and that the avatar can communicate naturally with other avatars. Taken in combination, these factors establish intelligent behavior as a foundation for believability. In addition to the behavioral characteristics of believability, there are physical characteristics such as avatar appearance (including the level of animation realism within the simulated environment) and quality of voice synthesis (if voice synthesis is used rather than a text-based or live-voice communication system), as well as any unique cultural characteristics applicable to the avatars within the context of the simulation scenario.

An initial review of the literature indicates an innovative approach to modeling avatar behavior. In her 1998 text titled Affective Computing (p. 2), Rosalind Picard of the MIT Media Laboratory states “The evidence is mounting for an essential role of emotions in basic rational and intelligent behavior. Emotions not only contribute to a richer quality of interaction, but they directly impact a person’s ability to interact in an intelligent way. Emotional skills, especially the ability to recognize and express emotions, are essential for natural communications with humans.”

Picard goes on to create a framework that she terms “affective computing”; that is, a form of computing that relates to, derives from, or otherwise seeks to deliberately influence the emotional state of the user. In creating a system by which avatars may interact with users within certain emotional contexts, we address a critical component of the problem of making avatars “personalized, intelligent, believable, and engaging” (p. 184). Loyall asserts that an avatar’s ability to solve problems intelligently and competently is not as important as whether the avatar is “responsive, emotional, social, and in some sense complete” (p. 113). As described by Picard, there are five emotion components of a completely affective computing system:

  1. Emotional behavior;
  2. Fast primary emotions;
  3. Cognitively generated emotions;
  4. Emotional experience: cognitive awareness, physiological awareness, and subjective feelings;
  5. Body-mind interactions.

The third emotion component, cognitively generated emotions, is especially useful within the context of affective computing. Current state-of-the-art and experimental systems (including several popular computer entertainment RPGs and simulations) are based upon models that synthesize non-player avatar emotions through cognitive mechanisms. Computational methods facilitated through numerical analysis, database manipulation, and probability and statistics are well suited to negotiating the rule-based systems that are the most common functional inputs for cognitive emotion synthesis. It is therefore logical to conclude that emotion synthesis through “computationally friendly” cognitive mechanisms represents the best approach to implementing avatars that are capable of intelligent interaction with human-driven avatars. Specifically, there are two theoretical designs germane to computationally facilitated emotion synthesis: the Ortony Clore Collins (OCC) Cognitive Model and Roseman’s Cognitive Appraisal Model.

The original intent of Ortony, Clore, and Collins in publishing their 1988 book, The Cognitive Structure of Emotions, was to delineate a cognitive appraisal model of emotions. While they felt there was a necessity for AI systems to be able to reason about emotions, they never contended that machines would come to have emotions or need to be able to represent them programmatically. Perhaps ironically, their model is ideal for programmatic synthesis of emotion and for representing avatar emotional response; in fact, the OCC model is considered the standard for synthesizing emotional responses in computers (Picard, 1998). They proposed that there are three aspects of the world that elicit either positive or negative emotional responses from people: events that are of concern to us, the actions of individuals or entities that we perceive to be responsible for those events, and objects in the world around us. This structure is the basis for the specification of 22 emotion types as well as a rule-based system used to generate those types. Once an emotion appropriate to a situation or to a player action is synthesized, the non-player avatar can react believably to the emotional condition established.
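To make the rule-based idea concrete, here is a minimal sketch of OCC-style appraisal for a non-player avatar. It is my own drastic simplification for illustration: it covers only a handful of the 22 emotion types, and the numeric appraisal fields are invented rather than taken from the OCC specification.

```python
from dataclasses import dataclass

@dataclass
class Appraisal:
    """Simplified appraisal along the OCC dimensions: how an event bears on the
    avatar's goals, which agent is responsible, and how praiseworthy that agent's
    action is judged to be."""
    desirability: float      # > 0 helps the avatar's goals, < 0 hurts them
    agent_responsible: str   # "self", "other", or "none"
    praiseworthiness: float  # judgment of the responsible agent's action

def synthesize_emotion(a: Appraisal) -> str:
    """Map an appraisal onto a small subset of OCC emotion types."""
    if a.agent_responsible == "other":
        if a.desirability > 0:
            return "gratitude" if a.praiseworthiness > 0 else "joy"
        return "anger" if a.praiseworthiness < 0 else "distress"
    if a.agent_responsible == "self":
        return "pride" if a.praiseworthiness > 0 else "shame"
    return "joy" if a.desirability > 0 else "distress"

# Example: a player-driven avatar rescues the NPC's companion (a desirable event
# caused by another agent and judged praiseworthy), so the NPC reacts with gratitude.
print(synthesize_emotion(Appraisal(desirability=1.0,
                                   agent_responsible="other",
                                   praiseworthiness=0.8)))
```

Once a rule like this selects an emotional state, the avatar’s animation, dialogue, and subsequent goals can be conditioned on it, which is the “react believably” step described above.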

Where does this all lead? I’m very interested in establishing what I call a “Believability Quotient” to measure avatar believability. I’m going to propose doing this with a Dungeons and Dragons-like point system in which avatar characteristics are listed and ranked. I’ll thank my friend David Arneson for leading me down that path. Look for that in the next blog post.
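Since the Believability Quotient hasn’t been defined yet (that’s the promise of the next post), the following is purely a placeholder sketch of what a point-based version might look like; the characteristic names are drawn loosely from the factors discussed above, and the weights and rankings are invented for illustration.

```python
# Hypothetical "Believability Quotient": each characteristic is ranked 1-10
# and weighted; both the characteristics and the weights are illustrative only.
BQ_WEIGHTS = {
    "behavioral_autonomy": 0.25,       # behavior doesn't look externally scripted
    "rationality": 0.20,               # predictably rational (or justifiably irrational)
    "natural_communication": 0.20,     # communicates naturally with other avatars
    "appearance_realism": 0.15,        # visual and animation quality
    "emotional_expressiveness": 0.20,  # recognizes and expresses emotion
}

def believability_quotient(rankings: dict[str, int]) -> float:
    """Combine 1-10 rankings into a single weighted score out of 10."""
    return sum(BQ_WEIGHTS[name] * rankings.get(name, 0) for name in BQ_WEIGHTS)

sample_avatar = {
    "behavioral_autonomy": 7,
    "rationality": 8,
    "natural_communication": 6,
    "appearance_realism": 5,
    "emotional_expressiveness": 7,
}
print(f"BQ: {believability_quotient(sample_avatar):.1f} / 10")  # BQ: 6.7 / 10
```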

References:

Loyall, A. B. (1997). “Some Requirements and Approaches for Natural Language in a Believable Agent.” In Trappl, R. & Petta, P. (Eds.), Creating Personalities for Synthetic Actors: Towards Autonomous Personality Agents. Berlin: Springer.

Ortony, A., Clore, G. L., & Collins, A. (1988). The Cognitive Structure of Emotions. Cambridge: Cambridge University Press.

Picard, R. (1998). Affective computing. Cambridge, MA: The MIT Press.

Pimentel, K. & Texeira, K. (1993). Virtual Reality: Through the New Looking-Glass. Intel/Windcrest McGraw Hill.

Filed under avatars, believability quotient, ego-investment, games, simulation, virtual worlds

Further Examination of Avatar Ego Investment

by Hap Aziz

There is a desire in many of us to mold characters as we see fit: in literature it is what authors do with their characters, in games it is what developers do with player characters, and so on. Bailenson and Beall (2006) observe that extending one’s own identity was a common practice well before the development of the computer; in fact, this extension activity has always been a fundamental way in which people express themselves, through both abstract and tangible means. Prior to the computer age, however, identity extension was a time-consuming and expensive activity that yielded only minor changes. It is with the introduction of the computer into our playgrounds of personality that we see how significant the identity-extension techniques and technologies people develop can become.

I asked the question in a recent entry, “how deep does ego-investing go?” with an eye toward understanding the types of boundaries there are in the human-avatar experience*. Bailenson and Beall discuss a phenomenon they term Transformed Social Interaction (TSI), which they define as a mechanism for improving or degrading the quality of interpersonal interaction based on several characteristics. These communication characteristics appear to me to be strongly related to human components of ego, and thereby TSI becomes a proxy for understanding the factors contributing to ego investment. These are the components making up TSI:

  • Sensory abilities – here we can augment the normal human senses in an avatar, or we can actually create new sensory abilities based on information gathering from within our virtual environment (for example, we could give an avatar a “weather sense”)
  • Situational context – this is where we can change the scale or point of view within a virtual environment, even to the point of viewing a scene from the perspective of someone else’s avatar
  • Self-representation – here we decouple the appearance of the avatar as well as the behavior of the avatar from the human connected to the avatar in order to make changes

I like the TSI mechanism because it gives me a frame of reference to break down the elements that contribute to ego investment; in my (admittedly limited) observations, people tend to build their avatars in these three areas (although the situational context category is an ongoing process rather than a “stop and change” action).

Of the three areas, I believe that ego investment resides most in self-representation: both the appearance and the behavioral aspects. Bailenson and Beall go into greater detail about the decoupling of appearance and behavior, and I think that is worth a deeper look. I want to get my head wrapped around why the decoupling is important (I have the intellectual aesthetic sense that it is, but it is important to be able to articulate why). There are also likely ramifications to a lack of decoupling, which I’m trying to grasp as well. I will follow up on the topic of decoupling appearance and behavior in an upcoming blog post.
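In the meantime, as a way of getting at why the decoupling matters, here is a minimal sketch of my own (not anything from Bailenson and Beall) in which an avatar’s appearance and its behavior are held as separate, independently swappable parts. Because they are decoupled, either one can be transformed without touching the other: a familiar look can be kept while the driving behavior changes, or the look can change while the driver stays the same.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Appearance:
    """The self-representation other users see (illustrative fields only)."""
    model: str
    height_cm: int
    voice: str

# A behavior is modeled simply as a function from a perceived event to the
# action the avatar takes; it could be driven by a human or by an algorithm.
Behavior = Callable[[str], str]

def human_driven(event: str) -> str:
    return f"ask the user how to respond to '{event}'"

def scripted_greeter(event: str) -> str:
    return "wave and say hello" if event == "someone approaches" else "idle"

@dataclass
class Avatar:
    appearance: Appearance
    behavior: Behavior

    def react(self, event: str) -> str:
        return self.behavior(event)

# Because appearance and behavior are decoupled, either can change independently:
avatar = Avatar(Appearance(model="casual_humanoid", height_cm=175, voice="natural"),
                behavior=human_driven)
avatar.behavior = scripted_greeter          # same look, different driver
avatar.appearance.model = "tall_robot"      # same driver, different look
print(avatar.react("someone approaches"))   # -> "wave and say hello"
```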

*On a side note, I wonder what the relationship is between the human-avatar experience and the human-computer experience. The obvious premise is that those comfortable with computers are more likely to experience strong avatar reactions.

Resources

Bailenson, J.N., Beall, A. C. (2006). Transformed social interaction: Exploring the digital plasticity of avatars. Avatars at Work and Play, v. 34: 1-16.

Filed under avatars, ego-investment, games, simulation, virtual worlds

Welcome, 2012!

Another year over, and a new one about to begin! We wish you all the very best, and we look forward to sharing thoughts and ideas on education, technology, and playing games. And speaking of games, we highly recommend Skylanders by Activision. It’s a wonderful combination of avatars, virtual worlds, and in-game guided tutorials. This is the way online learning should be.

Filed under avatars, games, simulation, virtual worlds