Category Archives: future technology

The Limits of Natural Intelligence: How Smart Might Aliens Be?

by Hap Aziz

If you consume a lot of Science Fiction (like I do), you will see a lot of depictions of alien life, whether it has come to Earth or we have encountered it in space or on other planets. There was the unstoppable Blob, consuming everything in its path. The very human-like Klaatu, who came here with his interplanetary police robot, Gort. The terrifying Alien, with acid for blood, planting its offspring inside people. The cute and mysteriously powerful E.T., able to heal with a touch and carry bicycles through the sky. Master of logic and mental discipline, Mr. Spock from the planet Vulcan. Frail and spindly Martians (not to be confused with Uncle Martin), come to wage a War of the Worlds. Predators here to hunt humans for sport. A David Bowie fallen to Earth. Alf.

I’ve only named a few, but you get the point. There are thousands upon thousands of fictional aliens that we have encountered and gotten to know, quite well in some cases. Some have strange and unusual physical or mental abilities. Others are super-smart. And many are very much just like us, only with more advanced technology that makes them appear much smarter than we are. But what about real aliens, if there are such things? How different from us could they be? And more specifically, since this is a learning blog after all, how much smarter could they be? Obviously there’s no way to test or make measurements at this time. Perhaps someday we’ll come across some real aliens and can ask them, but until then we’ll have to make some assumptions and try puzzling things out on our own.

The first assumption I’ll make has to do with the origins of life and, subsequently, intelligent life. Let’s break the possibilities into two potential origin stories: creation and evolution. If we go down the creation path, organic intelligence likely cannot be benchmarked. In other words, God (or whatever we want to designate as the creator) could give life any starting level of intelligence. In that case, it becomes impossible to come to any meaningful conclusions regarding the level of alien intelligence humans may encounter in the universe. So let’s put that possibility aside for this discussion. Now, if we establish a premise that says God created life but also created the rules that life (and the universe) will abide by, then we can settle on the evolutionary premise from both the believer’s and the atheist’s perspectives, and build from there.

So what is (or should be) the evolutionary premise regarding the development of intelligent life? If we think of evolutionary development, or more accurately natural selection, as incremental or “micro-adjustments” in response to external circumstances and stimuli, then it is reasonable to expect that these adjustments will rise to the level needed to answer those circumstances and stimuli, but not much further. That’s a complex thought, so let’s consider it in terms of physical evolution specifically, which should provide a little more clarity:

A giraffe’s neck will evolve to the height of the leaves it wants to eat, but not higher.

If we’re talking about intelligence, what this means is that life will self-select to be just as smart as it needs to be in order to be successful in its environment. Now, if we take natural laws to be the same throughout the universe (and for the most part we have no reason to believe otherwise), environments on planets that would support earth-like life are not likely to be wildly different from our own. Consider this the first stage of intelligence setting for animal life. (Life that evolves in more challenging environments may be more capable than humans would be in those environments, but I won’t address that here.)

The second stage is not about the physical environment but rather survival competition from other animals. This is where things get interesting, because the most physically successful animal will not need to be the most successful regarding intelligence. It makes sense that an animal that can run down its prey, kill it with superior strength, and then tear it apart and eat it will not have to go to extreme lengths to outsmart it. And the animal at the top of the physical pyramid needs only as much intelligence as it takes to find and hunt its prey. It’s going to be those animals below the physical apex that will develop other mechanisms: either taking down their prey by hunting in packs, or defending themselves with tools or weapons against stronger and faster competition. (It is worth noting that the discovery of tools is also thought to have led to an increase in the size of the human brain. For a much more in-depth dive on this subject, look up Acheulian technology.)

Match up a lion and an unarmed human on the open plain, and we know who wins that fight. But try the same match-up when the human has tools, or perhaps several humans working together, and the outcome is not assured in the lion’s favor. If there is competition between species of similar intelligence and physical prowess, there will be some differentiator that ultimately gives one species an advantage over the other. But again, it will be an incremental advantage (and possibly some luck) that leads one species to prevail. The intelligence competitors will be eliminated, while the physical competitors will be subjugated or kept at bay. At that point, incremental improvements to intelligence are no longer driven by the environment or by other species. The top dog is set, and barring a giant asteroid strike that triggers a reset, there will be little brain change moving forward. The refinements that come over the next tens of thousands of years are refinements of the available tools, including the tool of language, once it is invented.

Going back to the original premise: given an earth-like environment, the intelligent species that ultimately rises to the top should be in the neighborhood of human intelligence. It would not likely be too much more, as nature would not create an apex predator of overwhelmingly great power beyond what is needed to overcome environmental conditions. And environmental conditions are set by the laws of nature.

Now, none of the above is to say that there are not environmentally harsh planets floating around with the potential for life unlike our own. There’s the planet K2-141b, which has oceans of lava, rains rocks, and experiences supersonic winds gusting to over 3,000 miles per hour. Can it support intelligent life? Likely not as we understand life, but who is to say there aren’t lava people there? In any case, that’s beyond the conjectures I’m making here.

So barring life that evolved in conditions vastly different than those we consider favorable, is that it, then? Is there a cap to natural intelligence? Perhaps, but we need not stop the thought experiment. What about intelligence augmented by the inventions of intelligence? In other words, what about all the science-fiction constructs with which we’re familiar such as brain-computer connections? Enhancing the mind and thought processes with cybernetic implants that are tied into some 8G network of the future? Interestingly, that kind of augmentation may be necessitated by the environmental conditions that we humans are creating for ourselves and will need to overcome as a matter of survival.

Consider Artificial Intelligence. Without going into detail here, AI is being recognized as a potential threat to human life. Karen Hao writes about it in the MIT Technology Review here. Stephen Hawking believed that AI’s impact could be cataclysmic unless it was strictly controlled: “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization.” Elon Musk is convinced that AI is far more dangerous than nukes, and he’s told audiences, “it scares the hell out of me.”

Then there’s the “singleton hypothesis,” which predicts that AI combined with a totalitarian government could come to control everything. And if you’re interested in a fictional take on what that could be like, check out the movie Colossus: The Forbin Project (released in 1970!). We won’t go any further down that path here.

It would seem, then, that humanity has created the need to go beyond “simple” evolutionary methods of enhancing intelligence to artificial methods of our own invention (bringing us to stage three). Again, Elon Musk has thoughts on the subject, and he proposes the idea that humans will need to merge with AI to develop a symbiotic super intelligence, preventing us from lagging behind our AI creations. At that point, a potential singularity-like inflection point for human intelligence, it becomes impossible for us to know how far intelligence might go.

But that does bring us back full circle to the original question of how smart aliens might be. My guess is we’ll see cybernetically enhanced biological intelligences that have solved the challenges of interstellar space flight. Imagine the Borg from Star Trek. Whether or not they’re friendly to purely biological intelligence (if we haven’t yet enhanced ourselves) is the big question. And if they’re not cybernetically enhanced, then they’ll probably be a lot like us.


Filed under aliens, artificial intelligence, Blob, Borg, brain, Civilization, consciousness, direct-to-brain, Elon Musk, evolution, future technology, God, Karen Hao, MIT, planets, Predator, robots, Science Fiction, singleton hypothesis, smart technologies, Star Trek, Stephen Hawking, technology, Xbox

Imaginary Mice

by Hap Aziz

If you really want to innovate, don’t waste your time building a better mousetrap. Learn how to speak to the mouse.

In my previous blog post, I took a surface look at the condition of teaching and learning in the modern world, and how it hasn’t changed all that much through the years, despite all the research and the money poured into projects and products that were supposed to transform the process and improve outcomes. I made the point that there should be a better way to leverage technology to improve the student experience by facilitating greater variability, and I listed three practices on which to focus: Competency Assessment, Curriculum Customization, and Calendar Adjustment. In this post, I will take some segments of a student’s typical academic journey (focusing on what happens in the United States) and provide examples of what greater variability would look like, asking some fundamental questions to drive my thinking forward.

If we think of a student’s academic journey, we need to consider how and when it begins. There are a few variables already here, but for the most part, there is a consistent time frame and age range. Typically, children will begin their student lives at about 5 years of age, entering kindergarten. Which brings me to my first question:

Why do we start nearly all children at about five years old? Interestingly, other countries do not all start kindergarten at this age. In Denmark, for example, children wait until the year in which they turn six before enrollment. The study “The Gift of Time? School Starting Age and Mental Health,” by Thomas Dee (a Stanford Graduate School of Education professor) and Hans Henrik Sievertsen (of the Danish National Centre for Social Research), demonstrated that delaying kindergarten by one year reduced inattention and hyperactivity by 73 percent when the children were assessed at age 11. While learning outcomes did not appear to be significantly different, there were some important findings related to mental health. And this raises some other questions. Did the later school start date allow for greater mental development through unstructured play (those of you familiar with my blog know my enthusiasm for play as a learning catalyst)? Play does, after all, aid in the emotional and intellectual maturation process for children.

Back to the question of when children should enter kindergarten: we may want to consider waiting longer… but not necessarily for all children. Children should begin school when they are emotionally and intellectually ready to begin, and that presents an opportunity for a technology-assisted assessment solution to help determine readiness.

Once we have made some decisions regarding age-appropriate school enrollment, we can turn our attention to the time of year that school starts. Under our current system, the start of the school year is the same for all students in all grade levels in a school district (and often across much broader geographic regions). Within the United States, the school year generally starts in the fall and ends in the spring. Which raises the question:

Why is the school year the same for everyone at every grade level? When you think about this question from the standpoint of children going to school for the first time, the developmental differences can be tremendous with just a few months’ difference in birth dates. But there is much more to this than the start date of the school year. Consider the length of the year along with the amount of subject matter covered during that time. Neither of those elements is subject to adjustment, which means that no matter the student’s situation or ability, every student has exactly and only the same amount of time to complete the material. What happens if a student finishes (or masters) the material before the end of the year (or term/semester)? There are management techniques that can help here. Extra material can be assigned. Assignment pacing can keep students from speeding through the material (a practice that serves the schedule rather than the student’s needs). These are not ideal strategies, but they work to a degree. On the other hand, what if a student is unable to master all of the material in the allotted time? Here the solutions are far from ideal: they amount to lowered expectations and a willingness to accept (sometimes much) less than mastery of the material. In this case, we accept the lack of mastery and simply award lower grades all the way down to “D,” while a grade of “F” requires repeating.

The logistics of managing individual learning calendars for students would become quite complex, but again, this is an opportunity for an edtech tool to fill the need. Time on task for learning should not be dictated by a single, inflexible calendar for all students. Rather, there should be a “Goldilocks” solution, where the amount of time a student spends on a subject is neither too much nor too little, but just right.
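As a thought experiment, here is a minimal sketch of what the core decision rule of such a tool might look like. Everything in it (the function name, the thresholds, the score format) is hypothetical, invented for illustration rather than drawn from any real product:

    # Hypothetical "Goldilocks" pacing rule: a student advances when assessments
    # show mastery, not when a fixed calendar says time is up.

    MASTERY_THRESHOLD = 0.90  # assumed cut-off; a real tool would calibrate this

    def next_step(recent_scores, weeks_on_topic, min_weeks=2):
        """Decide whether a student should advance, continue, or get extra help."""
        if not recent_scores:
            return "assess"  # no data yet: run a competency assessment first
        mastery = sum(recent_scores) / len(recent_scores)
        if mastery >= MASTERY_THRESHOLD and weeks_on_topic >= min_weeks:
            return "advance"    # mastery shown: more time would be "too much"
        if mastery < 0.5 and weeks_on_topic > 8:
            return "intervene"  # stalled: change the approach, not just the time
        return "continue"       # not there yet: more time is "just right"

    # Two students on the same topic end up on different calendars.
    print(next_step([0.95, 0.92, 0.94], weeks_on_topic=3))  # advance
    print(next_step([0.55, 0.60, 0.62], weeks_on_topic=3))  # continue

The point of the sketch is only that the decision hinges on evidence of mastery plus time on task, not on a date shared by every student in the district.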

There will be a cascade effect that pushes all subsequent grade levels “out of sync,” as it were. With students working on subjects for varying lengths of time, trying to line up grade levels and school years loses meaning. And that brings us to the next question:

Why do we have grade levels? (We could tie this question back to the previous two and ask why we have age-based grade levels.) This isn’t a completely new idea, as there are educators already testing the idea of school without grade levels. Perhaps the best example of this approach is the work being conducted at the Northern Cass School District in Fargo, North Dakota. The district began an experiment to provide students with “personalized learning” untethered from standard grade levels as implemented across the country. In an interview published in November of last year, Jessica Stoen, Northern Cass’s Personalized Learning Coach, had this to say:

We’re meeting kids where they’re at. In a traditional system, you typically would have all of your seven and eight-year-olds come to your room, because that’s the age they’re at. And you teach them those state standards. Although some of them may already know them or not need them. And others might not be ready for them. Now, it’s – well, if they can prove their proficiency, why are we going to make them sit in the class that they can already prove to us that they know that content or understand that?

There are benefits to grouping students of similar ages together, and Northern Cass does so for activities such as field trips and classes such as gym. This is a thoughtful approach, no doubt adding to the complexity of schedule management. (For a deeper read into the Northern Cass journey, read this article in The Hechinger Report.) The Northern Cass experiment was meant to run for three years, through 2020. I don’t know whether COVID-19 has caused any change or disruption to the work going on there, but I have contacted the district Superintendent to ask about the current status. I will provide an update when I learn more.

The idea of “personalized learning” as implemented at Northern Cass is definitely a big step in the direction of reinventing learning. It’s possible to go even further in this direction (resulting again in added complexity that will require additional edtech support and much greater curriculum development resources). Without getting too far into the weeds here, education research has revealed two related themes: 1) learning outcomes are at least in part dependent on student engagement with the content, and 2) engagement is enhanced when there is a narrative framework woven throughout the content. Much of the student experience currently is about the assimilation of data with minimal narrative “glue” to tell the underlying and engaging stories. This presentation by Hans Rosling is a wonderful example of sharing data through storytelling.

This leads to the final question:

Why don’t we create educational materials specifically tailored to each individual student, wrapped inside stories that are relevant and resonate with the student’s life and place in the world? The answer to this question is simple: to do so would be too cost-prohibitive, too resource-intensive, and too time-consuming. But the simple answer is not the right answer. That’s the answer for today, not for tomorrow or perhaps the day after. And this is where Dr. Porcaro at the Chan Zuckerberg Initiative (referenced in my previous post) can step up and go big with edtech. Building curriculum with connections to the student’s own narrative could be accomplished through the use of interviews, assessments, and AI. Pre-developed curriculum templates based on different story scenarios can be populated with details of the student’s interests. The resulting curriculum path can map out a journey to a set of life circumstances that the student feels invested in and pursues based on a desire legitimately cultivated.
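To make the template idea concrete, here is a minimal sketch of how a pre-developed story scenario might be populated with a student’s interests. The scenario text, profile fields, and student details are all invented for illustration; a real system would draw them from the interviews and assessments described above:

    # Hypothetical sketch: one curriculum template, two students, two stories.

    SCENARIO_TEMPLATE = (
        "On your journey to become a {goal}, your {vehicle} has a problem: "
        "{challenge}. Use {skill} to work out a solution before {deadline}."
    )

    def build_lesson(student_profile, skill):
        """Wrap a required skill inside a story built from the student's interests."""
        return SCENARIO_TEMPLATE.format(skill=skill, **student_profile)

    # The same fractions objective, wrapped in two different students' narratives.
    marine_fan = {"goal": "marine biologist", "vehicle": "research boat",
                  "challenge": "only 3/4 of the fuel you need", "deadline": "sunset"}
    space_fan = {"goal": "astronaut", "vehicle": "lunar lander",
                 "challenge": "an oxygen tank at 2/3 capacity", "deadline": "liftoff"}

    print(build_lesson(marine_fan, "adding fractions"))
    print(build_lesson(space_fan, "adding fractions"))

The learning objective stays constant; only the narrative wrapper varies, which is what keeps the approach affordable relative to authoring every lesson from scratch.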

But that’s only possible if we skip mousetrap design and start speaking to the mouse.


Filed under artificial intelligence, cost of education, creativity, edtech, education, education course content, education technology, future technology, Hap Aziz, learning, learning outcomes, narrative, storytelling, teaching, technology

A Failure of Imagination

by Hap Aziz

I strongly believe that most of the world’s challenges go unanswered not because of a deficiency of skills or a lack of resources, but because of a failure of imagination. Clearly, I like to channel my inner optimist, but I am convinced that with enough brainwork and sufficient time, people are at least able to conceive of solutions to the problems they face. That’s not the same as implementing a solution, especially if there are significant risk barriers–for example, it’s not a stretch to think that a COVID-19 vaccine could be released in the next several months, yet the majority of the population might refuse to take it. But the larger point still stands: solutions are there to be discovered or invented.

The challenge of effectively teaching something while also continually improving the teaching process is one of those challenges that seems to be stuck in neutral, spinning its wheels faster and faster and going nowhere, or at least not very far. For the past several years, standardized test scores have either stagnated or dropped in both the reading and mathematics domains. However, this isn’t an outcome issue as much as it is a process issue in the way we (society) approach the teaching and learning endeavor. If the definition of insanity is continually banging your head against a brick wall and expecting a different result, why do we continue to educate future generations pretty much the same way year after year and expect improved results?

Perhaps that is a controversial question, but it shouldn’t be. For a bit of perspective, compress the timeline of learning history and look at the past 200 years, from 1820 to 2020. What has fundamentally changed? Smartboards instead of chalkboards. Computers instead of quills. Kindles instead of books. The Internet instead of libraries. It would seem that our new inventions are purposed to mimic our old tools, for the most part. Sure, there have been some “new” developments: virtual reality, computer simulations, and online learning. But they have not had any significant impact in either practice or outcomes. Consider the framework that hasn’t changed: Basing academic cycles on the calendar year. Grouping students of the same chronological age together for content mastery as though they are all at the same point of their cognitive development. Building one-size-fits-all curriculum for all students. Even with the forced changes as a result of COVID-19, at-home schooling for many students appears to be little more than a broken and sub-par classroom experience, with “teachers” often ill-equipped to provide any true learning support. Keep in mind that this comes after decades upon decades of education research and hundreds of billions of dollars spent. What has all the time, effort, and money bought?

Dr. David Porcaro, Director of Learning Engineering at the Chan Zuckerberg Initiative, was recently interviewed on the topic of “Learning Science” and why investing in it is a good idea. If you’re wondering what learning science is, Dr. Porcaro says this:

Learning science research helps us to create tools and resources that do what we really hope they will–improve students’ ability to learn. And in an increasingly noisy world of edtech, products that really help will be those that teachers, parents and students adopt.

The premise behind learning science is to take the data from how learners interact with edtech products, and use that data to build better edtech products. That’s one way of operationalizing the CZI mission to help individual learners achieve their potential. As Dr. Porcaro frames it, learners vary, and that variability needs to be accounted for in edtech and in teaching practices. I emphasize the phrase and in teaching practices because Dr. Porcaro does not address what I consider the broader teaching practices, which have gone fundamentally unchanged, as I mentioned above. And it is the broader teaching practices that would most significantly address student variability. I would suggest examining three particular practices if student variability is a consideration.

Competency Assessment
When the typical student begins kindergarten, the lesson plan is the same for all students, as though they all have the same or very similar academic skills. However, this is not at all the case–some students may be able to read while others cannot; some may know simple math facts while others do not; some may have difficulty recognizing colors and shapes while others do not. Yet all students start in almost the same place. A better practice would be to assess student competency, and then provide a custom curriculum from the beginning allowing for variation in student competency. (More on this below.)

Curriculum Customization
No matter the grade level and the class, the curriculum is the same for all students (with larger groupings for advanced or developmental classes, for example). Not only should curriculum be customized based on competency assessments, it should be customized based on student interests, longer-term goals, and so on.

Calendar Adjustment
The standard academic calendar is based on a school year, divided into semesters or terms. The model says that whatever subject matter mastery a student has attained when the allotted time runs out, the learning switches to the next topic and moves on. Rather, there should be mastery goals, and students should be able to stick with the subject matter until the desired mastery level is obtained. Some students will move quickly through content at some points, while other students may take more time. The result will be variation in what students learn over specific time intervals, but the interval is not the defining metric; mastery is.

These three practices would work in combination to provide a very rich and customized experience for each student. Competency assessments would take place frequently throughout a student’s academic career to ensure there is sufficient feedback on where the student is, and so that appropriate “course corrections” may be made during schooling. The type of edtech tools required in these situations would be able to customize the curriculum and the calendar individually. Some students may master the desired subject matter in 10 years. Other students may take 13 years. Some students may spend six months on Trigonometry, and other students may need a year to work through the same material. The idea is that at the end of secondary education, students should all be closer to mastery of all the subject matter areas they covered.

There is actually quite a bit more space to explore in these three practices, and I may do a deeper dive in a future post. In the meantime, though, I wanted to present these ideas as some thoughts for consideration, and to touch on the idea that it’s the practice and not the education technology that will need to change if we are going to see real changes in the way students learn.


Filed under education, education course content, education funding, education technology, effective practices, future technology, Hap Aziz, high school, higher education, instructional design, learning, learning outcomes, Learning Science, teaching, technology, Uncategorized

Second Chance for Second Life?

by Hap Aziz

Over at The Chronicle of Higher Education website, Jeffrey R. Young has an article titled, “Remember Second Life? Its Fans Hope to Bring VR Back to the Classroom.” I do remember Second Life, and I actually used it in some college courses I taught about eight or nine years ago. It was primarily a tool where I could gather with students for additional lecture time outside of the classroom, and often it was a combination of socializing and course content Q&A. Fortunately, my students were comfortable with technology (the course was on the subject of digital design); otherwise I would not have been able to provide the technical support to get the students signed up, logged in, and comfortable in the environment. The technology is smoother now, but I still wouldn’t recommend it for students not confident in their online computing skills.

The history of Second Life is interesting in that it began as a possible game world framework, but the development environment was so robust that SL morphed into an open-ended virtual space with no particular purpose. This was both its advantage and its curse, as enthusiastic users who saw potential in the technology worked at finding a purpose for the platform. Many higher education institutions acquired space in SL, and educators used it for lectures, office hours with remote students, and a variety of other activities somehow connected with learning. And while individual users may have designed unique personal avatars, the education spaces, for the most part, were representations of real campus locations (or at least could have been real). There are a number of reasons SL was unable to sustain itself at its heyday level of engagement, and Young explores them in his article in connection with the latest tech wave of Virtual Reality innovation. Second Life, in fact, is looking to ride the new VR wave with its Project Sansar (indeed, if you go to the SL site, you’ll see that you can explore SL with the Oculus Rift, which is a step in that direction).

Will the addition of 3D VR breathe new life into Second Life? As a technology, there is no question that VR has great novelty out of the gate. But I still believe that without some sort of meta-narrative point to drive engagement, SL could go through another bubble-burst cycle. By “meta-narrative,” I mean that Second Life itself needs to have a point, rather than offer itself up as an environment where users can do anything they want. Why enter a virtually real world to “just hang out and look around” when we can much more easily accomplish that in the really real world?


Filed under avatars, colleges and universities, education, emerging technologies, future technology, games, Hap Aziz, higher education, higher education institutions, holograms, narrative, simulation, technology, virtual classrooms, virtual college, virtual reality, virtual worlds

The Seduction of the Senses

Back in October of 2011, I wrote an almost tweet-length blog entry on the transformation of education through an accident of technology (read it here). While I didn’t provide any details regarding that particular technology, if you have heard me speak on the topic, you know that I’m referring to the invention of the alphabet.

My basic premise is this: human beings evolved to learn a particular way, which is through the use of all our senses in combination with lived experiences and traditions passed down from generation to generation, usually in one-to-one (or one-to-few) relationships. There were natural limitations to that education paradigm regarding the storage of information, the ability to pass on information without personal presence, and the facilitation of one-to-many teaching and learning relationships. The invention of the alphabet (first hieroglyphic and then later phonetic) essentially removed those limitations over time. It did so, however, at the expense of introducing an entirely new barrier to learning content: the requirement to learn how to code and decode symbolic information–the requirement to learn how to read and write before learning actual content.

The invention of the alphabet changed the way in which humans learn, and our model of education reflects the necessary prerequisite of literacy before learning: the first years of schooling are focused on teaching our children how to code and decode the alphabet in order to unlock content stored and conveyed primarily through text. Ultimately, the way in which our civilization has set up the learning enterprise is not the way we humans are built to learn; yet here we are at a point in history where a convergence of modern technologies is dangling the promise of another possible transformation of education. The digital technologies that appeal to our dominant senses of sight and sound have become sophisticated enough to meaningfully engage and (apparently) facilitate learning without the need to code and decode the alphabet. Hand some iPads to a room full of three-year-olds and watch what they learn to do without having to read a word.

This phenomenon hasn’t been lost on educators. There are studies on the use of video games to enhance the education experience (“Effect of Computer-Based Video Games on Children: An Experimental Study” and “Digital Game-Based Learning in high school Computer Science education: Impact on educational effectiveness and student motivation“); there are books and articles published on the subject (What Video Games Have to Teach Us About Learning and Literacy and “4 Innovative Ways to Teach with Video Games: Educators from around the Country Share Their Best Practices for Using Educational and Consumer Games to Improve Students’ Engagement and Performance“); organizations have been created and conferences are held to share the latest best practices and even how to secure grant and investment funding for new and innovative learning video games (Higher Education Video Game Alliance and GDC Education Summit); and there are even education games being produced by Nobel Laureates (Nobelprize.org). Intuitively this seems to make sense, and I’m not going to present or argue data here. At the very least there are the educators who feel it might be beneficial to have learners as engaged in course content as players are in their game content.

Several questions come to mind when we consider the use of video games in education. How do we align gameplay with course learning objectives? What technology is required to play games, and how do we ensure access across the digital divide? What is the time commitment necessary to play the game to the point of content relevancy? Perhaps one of the most important questions to answer relates to the cost of game production. The new generation of computer games that is so attractive to so many educators and education policy makers is very expensive to produce in terms of time, development personnel, and funding. Everyone seems to want to build the AAA game title in order to excite students about the history of English literature, but who can realistically hire dozens of developers and pay millions of dollars over the course of a year or more to produce that game? How did we get to the point where this is a serious question?

This is all a result of the seduction of our senses when it comes to modern video games. Everyone loves the breathtakingly realistic game visuals and film-like quality. And just like a blockbuster motion picture, the soundtrack and voice talent can tremendously enhance the experience. Make no mistake: these are characteristics that draw in game players, and educators see these as the same characteristics that will draw in learners. However, these characteristics aren’t what make games effective for either entertainment or education.

When imagination is combined with the power of abstraction, the artifact used to engage players (or learners) is a secondary consideration. That’s why a person is able to get as much enjoyment out of reading The Lord of the Rings trilogy as from seeing the films. Or why the same person can play either Call of Duty or chess and enjoy them both as games of war. The power of abstraction is amazingly effective when it comes to experiential engagement.

And it’s that power of abstraction that may allow us to “dial back” on the need for the AAA educational game with the AAA development requirements. As much as I welcome the digital media revolution that is poised to re-engage all of our senses in learning, I would suggest a more technologically humble approach to educational game design that would leverage less resource-hungry production models and recommit to the process of coding and decoding symbolic information: the old-school text adventure game from the genre of Interactive Fiction computer games.

What makes Interactive Fiction (IF) so appealing in the context of education are the same things that are problematic in using more intensely multisensory, simulation-like games. IF games are less difficult, less resource-intensive, and less costly to develop. As a result, they can be customized for specific learning scenarios, and it is conceivable that micro-teams of instructors and storytellers might build IF game scenarios for individual assignments, tightly aligned with course learning objectives. There is existing research that addresses the learning efficacy of IF games (much of it dated from the mid- to late-1980s, mainly because that was when IF games peaked in popularity), and the findings are largely positive regarding learner engagement.
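To illustrate just how small that development footprint can be, here is a minimal sketch of a playable IF scene. It is my own toy example, not drawn from any actual courseware, and the two-room Mendel scenario is invented; but a micro-team could flesh out exactly this kind of structure around a single learning objective:

    # A hypothetical two-room Interactive Fiction scene in plain Python.
    rooms = {
        "lab": {"text": "You are in Mendel's garden lab. Pea plants line a bench.",
                "exits": {"north": "library"}},
        "library": {"text": "Notebooks here describe dominant and recessive traits.",
                    "exits": {"south": "lab"}},
    }

    location = "lab"
    print(rooms[location]["text"])
    while True:
        command = input("> ").strip().lower()
        if command == "quit":
            break
        if command in rooms[location]["exits"]:
            location = rooms[location]["exits"][command]
            print(rooms[location]["text"])
        elif command == "look":
            print(rooms[location]["text"])
        else:
            print("You can go:", ", ".join(rooms[location]["exits"]))

A complete lesson would add puzzles, vocabulary checks, and branching outcomes, but the engine itself stays this small, which is the economic argument for the genre.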

While the traditional IF game was truly a text-only experience, the genre has expanded to include simple illustrations that supplement the narrative experience. In this way, a visual component is added, and the development effort remains low. The result is something that might be more akin to an Interactive Graphic Novel (IGN) rather than the traditional IF game. Consider the IF game 80 Days, designed by Inkle Studios. In a field of games dominated by 3D simulations and fast-paced shooters and RPGs, 80 Days is a testament to the power of abstraction and solid narrative. In a review of the game published in PC Gamer magazine, the reviewer (Andy Kelly) wrote the following:

80 Days can be funny, poignant, and bittersweet. It can be sad, scary, exciting, and sentimental. It all depends on the path you take and the choices you make. The story deals with issues like racism and colonialism far more intelligently than most games manage. Every trip is a whirlwind of emotions, and by the end you feel like you’ve gone on a personal, as well as a physical, journey.

And because there are so many branching paths, it’s extremely replayable. I’ve gone around the world seven times now, and every journey has felt like a new experience. Every time you complete a circumnavigation, additional stories and events unlock, giving you even more incentive to try again. It’s also brilliantly accessible and easy to play, making it the perfect game to share with someone who never, or rarely, plays them.

In other words, this IF game is exactly what we look for in an engaging game experience. What’s interesting to note is that the game was widely praised and recognized for the quality of gameplay. The New Yorker magazine listed it as one of The Best Video Games of 2014. Not only did 80 Days make Time magazine’s Top 10 list, but it was ranked as the number 1 game for 2014. The fact that 80 Days garnered so many awards and accolades is a strong indicator that the IF genre doesn’t need to take a backseat to AAA titles.

I am not advocating an abandonment of the use of AAA games in education. Rather, it’s important that we use development resources wisely, matching gameplay to learning outcomes. It may make complete sense to pair robust multimedia experiences with particular capstone courses, for example, or in classroom settings that ultimately touch a large number of students. And as the cost in time and development declines while the capability of the production technology improves, we’ll no doubt see more opportunities to integrate AAA games into curriculum. In the meantime, graphically enhanced Interactive Fiction is a tool that can help educators provide engaging and pedagogically relevant gameplay learning experiences to their students in relatively short order at relatively low cost.


Filed under computer games, digital divide, education, education course content, education funding, education technology, future technology, games, gamification, government funding, grant funding, Hap Aziz, higher education, instructional design, Interactive Fiction, Interactive Graphic Novels, learning, learning outcomes, narrative, play, simulation, text adventure, Text Adventure Development System, text adventure games, Uncategorized, video games

The Significance of Experiencing Learning

In a previous blog entry, I wrote about the future of education as depicted in Science Fiction, noting that even that genre does not often share a vision of the learning enterprise. And when it does, the teaching and learning endeavor is most often portrayed as rather unchanged from the present-day approach. Yes, there are exceptions, such as the direct-to-brain information downloading technique utilized for skills training in The Matrix, but that’s rare. (Hogwarts, from the fantasy world of the Harry Potter stories, is an absolute disaster as an education model.)

If we’re going to imagine the future, it is the direct-to-brain (d2b) downloading process that seems to be most interesting as a truly new education paradigm. Not only would it effectively address learning outcomes achievement, it would dramatically reduce the time required to acquire knowledge and master skills (at least as the fictional process is defined). To be sure, there are obvious technology hurdles to be overcome: creating the brain-machine interface and determining how to encode information so that it can be accessed through the standard memory recollection process are two of the more obvious challenges. But let’s say we crack the technology. Could people actually learn that way and ultimately retain what they learned?

To run through this thought experiment, it would be helpful to use a fictional model that defines the process and provides a framework for our assumptions. While the concept of digital compression of information fed into the brain has been used several times in Science Fiction (Scalzi’s Old Man’s War series, Whedon’s Dollhouse, the Wachowskis’ Matrix trilogy), it is the Star Trek: The Next Generation episode “The Inner Light” that is based on the central theme of the digital information transfer and what actually takes place in the “learner’s” mind during the process.

Written by Morgan Gendel, “The Inner Light” is about remembering the experiences of a lifetime without having to live through that life in real time. Briefly, the technical scenario within the plot is this: an alien probe finds Captain Picard and creates a wireless link to his brain. Through the link, the probe downloads an entire lifetime’s worth of experiences into Picard’s brain. From his perspective, it is all completely real, and he thinks he is living that life: having children, learning to play the flute, suffering the death of his best friend, having grandchildren, and watching his wife grow old and eventually die. In real time, however, only 25 minutes have elapsed. When the download is complete and the link is broken, Picard discovers the entire life he lived was just an interactive simulation of experiences placed in his memory… and that he now knows how to play the flute as he learned it in his simulated life.

What interests me about this particular concept of d2b downloading is that it addresses the context of experience in memory. Whatever a person learns, whether it is the alphabet, discrete facts such as names or dates, complex lines of reasoning, or sequenced physical skills like playing the flute, the act of learning is wrapped in a broader experience of what the person was doing during the learning activity. How important is this, especially when it comes to having the learning “stick”?

In 1890, William James noted that human consciousness appeared to be continuous. John Dewey observed much the same thing, and in 1932 wrote:

As an individual passes from one situation to another, his world, his environment, expands or contracts. He does not find himself living in another world but in a different part or aspect of one and the same world. What he has learned in the way of knowledge and skill in one situation becomes an instrument of understanding and dealing effectively with the situations which follow. The process goes on as long as life and learning continue.

Dewey is telling us that learning is a continuum, and lessons learned (formal or not) become the foundation for lessons yet to be learned. Certainly this makes sense to us intuitively, and there is research indicating that a pre-established schema expedites memory consolidation in the brain. Which is a way of saying that we learn things more quickly if we already have a context for understanding what we’re learning.

But what are the implications for d2b learning as Picard experienced it? What Picard experienced, while not logically flowing from his past life (he was, after all, just “dropped” into a new life story), was a narrative built upon concepts he already understood: marriage, friendship, birth, death, and so on. And when he learned a particular skill–playing the flute–it made sense to him in that he already knew what a flute was, what playing a flute involved, and so on. Nothing going on was so “alien” that it would not fit into the pre-existing schema he had been constructing since his own birth.

Perhaps more significant is that the skills that Picard learned had a subjective real-time element even though the simulation was digitally compressed. In Picard’s mind, he learned to play the flute because he actually practiced playing the flute, over years in subjective time. Therefore, when he picked up the flute in the real world, he was drawing on the memories of his experience of practice. It wasn’t that he just woke up with a new skill that came out of nowhere.

Interestingly, there is evidence that mental practice can improve real-world performance at some activities such as sports or music. One study had participants mentally practice a sequence on an imaginary piano for some time daily, and the participants displayed the same neurological changes as those who practiced physically instead. It’s possible that mental practice and physical practice both activate the same brain regions involved in skills learning.

Experience, though, is multifaceted; it is not simply a dispassionate sequence of events, recorded and played back in some documentary style. In learning, there is the idea of how engaged the learner is with the subject matter at hand, and again it doesn’t matter if the topic is the Pythagorean Theorem or Lord Byron’s poem “She Walks in Beauty.” Jennifer Fredricks talks about three types of engagement that may influence learning: cognitive engagement (what we are thinking about our learning), behavioral engagement (what we are doing while we’re learning), and emotional engagement (how we feel about our learning). It seems difficult to imagine that a simple d2b data dump would involve all three of those categories, unless the d2b transfer allowed a person to live what was being learned.

Admittedly, this is all conjecture over a Science Fiction idea, and for now, there is no way to run any actual tests. The potential for d2b learning is intriguing in that it may provide a solution for many of today’s education challenges, provided the technology is even possible. At the same time, it presents many questions regarding the true nature of the learning process. We are analog beings that make use of our senses in real time to learn from the world around us. If we somehow could bypass our senses and compress years of experience into minutes of transfer time, how would we interpret the experience? How would we remember what we learned, and what would those memories feel like to us? Based on what we know today, I’d say that learning is not possible without experience. Whether it is real or virtual may not matter, but without an experiential framework, transferred information is just noise without meaning.


Filed under consciousness, direct-to-brain, education, experience, future technology, Hap Aziz, learning, narrative, neuroscience, Science Fiction, simulation, Star Trek, technology, virtual identity, virtual worlds

Preventing Credit Card Fraud

Today (June 2, 2014) on Fox 35 Good Day Orlando, the topic of discussion was how credit card companies and cell phone service providers will be teaming up to prevent credit card fraud. There are several ways this could be accomplished, most commonly by linking the location of the credit card with the location of the credit card holder’s cell phone.
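One plausible version of that location check, sketched below with invented coordinates and an assumed 50-kilometer threshold; nothing here reflects any card network’s or carrier’s actual system:

    from math import radians, sin, cos, asin, sqrt

    def distance_km(lat1, lon1, lat2, lon2):
        # Haversine great-circle distance between two points, in kilometers.
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    def flag_transaction(txn_location, phone_location, threshold_km=50):
        # Flag for review when the card is used far from the cardholder's phone.
        return distance_km(*txn_location, *phone_location) > threshold_km

    # Card swiped in Orlando while the phone is also in Orlando: not flagged.
    print(flag_transaction((28.54, -81.38), (28.55, -81.40)))   # False
    # Card swiped in New York while the phone stays in Orlando: flagged.
    print(flag_transaction((40.71, -74.01), (28.55, -81.40)))   # True

In practice the threshold would need to account for travel, shared devices, and phones left at home, which is presumably why providers would pair location with other signals.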


Filed under cell phone, consumer protection, credit card fraud, future technology, Hap Aziz

The Future of Shopping

Today’s topic on Fox 35 Good Day Orlando was “The Future of Shopping.” It was a quick look at how technology is changing the way we shop, and what retailers are doing to motivate people to look away from the Internet long enough to come into an actual shop location. Technologies such as holography, 3D printing, and even good old Bluetooth connectivity to your cell phone are all part of the story.


Filed under 3D printing, future technology, Hap Aziz, holograms, Internet, mobile technologies, Science Fiction, shopping, smartphones, technology

Imagining the Future of Education through Science Fiction

by Hap Aziz

Readers of Science Fiction are quite often drawn to the predictive capacity of the genre. From rockets to robots to nanotechnology to cyborg implants to virtual reality… these things and more have been the domain of Science Fiction literature since early in the 20th century, and concepts like these are the foundation of the genre moving forward. It’s not difficult to see the seeds of our current technology in the story lines from past works by authors such as Robert Heinlein, Isaac Asimov, and Arthur C. Clarke. But Science Fiction has never been only about the technology. Indeed, Science Fiction has always asked the big “What If?” questions on topics such as social customs and norms, political systems, cultural conflicts, and the concept of identity that transcends gender, race, and even species. Consider novels such as Stranger in a Strange Land and Fahrenheit 451; television programs such as The Twilight Zone and Star Trek; movies such as Blade Runner and Planet of the Apes–Science Fiction has always captured our collective imagination with the Big Idea.

Given the breadth of Big Ideas in the body of Science Fiction literature, it’s rather surprising that the topic of education has not received a more robust treatment, other than mention as a supporting plot element, for the most part. And in the majority of those mentions, the format of education isn’t much different from the model in place today: the interaction between a student and teacher, often within a cohort of students, usually in a face-to-face, technology-mediated environment. In episodes of Star Trek, set hundreds of years into the future, there are scenes of young children in what appear to be fairly standard-looking classrooms (with more tech hardware). Consider Yoda teaching the Jedi younglings like an elementary school teacher from the 19th century. Battle School in Orson Scott Card’s novel Ender’s Game is basically a military boarding academy with video games and zero gravity gymnasiums. Even in Flowers for Algernon, a story in which the main character’s IQ is dramatically improved through a surgical procedure performed on his brain, Charlie still learns primarily by reading books. In the majority of these stories, while the human capacity to learn or the actual learning process is enhanced by technology, the act of learning is fundamentally unchanged from the way in which people have learned since the beginning of time.

There are, however, a few notable exceptions. In John Scalzi’s novel Old Man’s War, soldiers’ learning is significantly enhanced through the use of the BrainPal, a neural implant that can download information directly into the human brain at a tremendous rate. Similarly, in the movie The Matrix, people can acquire new skills simply by downloading the appropriate data file. This is also quite like the technology used in Joss Whedon’s television series Dollhouse, in which the brain is literally a blank slate ready for a completely different mind (with its own set of memories and skills) to be imprinted. In the episode of Star Trek: The Next Generation titled “The Inner Light,” an entire lifetime of events is loaded into Captain Picard’s brain in about 25 minutes–with an artifact of that experience being the ability to play an instrument he never saw before he “lived” his alternate life.

What all those exceptions have in common is that they fundamentally alter the method by which information is loaded into the human brain, and they do so in a digital rather than analog fashion. The result is that the time required to load the desired information is much reduced from the traditional input methods of using our own analog senses to acquire knowledge, then disciplining the mind to retain that knowledge and training the body to function appropriately (memorization and practice). All other methods of instruction, no matter how we reinvent them or try to integrate assistive technology, still encounter the analog gateway (and in some cases, barrier) of our senses. The “data transfer rate” effectively comes down to the learner’s ability to absorb what’s coming through that gateway. I remember when I was in high school and I wanted to record songs from my record albums onto cassette tape so that I could take them with me to play on my Walkman. I had a cassette recording deck connected to my record turntable, but I could only record in real time–at the actual speed that the records played across that analog gateway.

If I’m imagining the future of education as a storyline in Science Fiction, I see the need for a digital-to-analog converter that serves as a high-speed interface to the brain. That’s what would enable the story examples I cited above, facilitating the speedy transfer of knowledge and possibly eliminating (or minimizing) the need to practice for skills mastery. Right now it takes a lifetime to acquire a lifetime’s worth of knowledge, and even then there is no guarantee that we can successfully access more than a fraction of what we have acquired. Now when I want to digitize my CD collection so I can store it on my portable MP3 player, the ripping process takes a fraction of the time that playing all the songs would.

Perhaps I’ve planted the seeds for a Science Fiction story I should write: What would it be like if several lifetimes flashed before our eyes at the moment of death? Somehow we’d have to experience all those lifetimes… and that’s just another way of saying we’d need to figure out how to become life-long learners several times over.


Filed under education, education technology, future technology, Hap Aziz, life-long learning, Science Fiction

Legacy Systems and the Anchors that Work Against Change

by Hap Aziz

Back in October 1995, a small computer company called Be, Inc. (founded by former Apple executives) released a new computer into the marketplace. This machine was called the BeBox, and from 1995 to 1997, fewer than 2,000 of these computers were produced for developers. The BeBox had a lot of unique features going for it, such as dual CPUs, built-in MIDI ports, and something called a GeekPort that allowed hardware experimenters both digital and analog access directly to the system bus. One of my personal favorite features of the BeBox was the pair of “Blinkenlight” stacks on both sides of the front bezel. Functioning like a graphic equalizer, they depicted the real-time load of each of the CPUs in the machine.

But as exciting as the hardware was to computer geeks like me, the real revolution was in the Be Operating System, or the BeOS, as it was called. Written specifically to run on the BeBox hardware, BeOS was optimized for digital media applications, and it actually took full advantage of modern computer hardware with its capabilities of symmetric multiprocessing, true preemptive multitasking, and a 64-bit journaling file system (which for practical purposes meant you could shut off power at any time without going through a shut-down process, and when you turned the machine back on, you would find that your open files were still intact).

BeOS was able to accomplish all sorts of things that Windows, the Mac OS, and Linux could not by shedding nearly all of the legacy “baggage” that the other operating systems continued to carry. The Be team was free to rethink the underlying software systems paradigm at the very deepest levels, and the results were truly astounding to those that saw the BeBox in operation.

The moral of the story is that the greatest transformation is possible when we rethink processes and technologies that have been in place for years, decades, and even generations. This is significant when we think of education, because the majority of our education systems are indeed legacy systems, designed and implemented to facilitate processes that were put into practice over a century ago. Even our “modern” Student Information Systems and Learning Management Systems are limited by the “legacy anchor,” and as a result, we see little true transformation in the teaching and learning space. Education timelines are based on year “blocks” of content, and each block is targeted to a particular age group of students (why is every student of approximately the same age grouped in the same grade?). The foundation of the classroom experience is still the lecture, and with online courses we work to “fit the lecture” into an asynchronous mode. Assessment and evaluation processes are, well, pretty much the same as they have been, only with more variation in execution. Schools and institutions of learning are hardly any different from what they were in the 1700s–a group of students go to a building where they meet in a room and work with a single instructor. Even in the online environment, we build virtual analogs to the physical world: a group of students go to a URL where they meet in discussion forums and still work with a single instructor.

What would true transformation look like, given the technologies that are available now? How would we write a new, legacy-free education operating system for the 21st century? Those are two very big questions that could spawn a series of lengthy discussions (and, frankly, I need to write a book about it), but I have a few principles that I would offer up:

  • Education should be non-linear from the perspective of time spent on task. That is to say, a concept such as “4th Grade Mathematics” where all 9 year old children are expected to learn the same content over the same amount of time should go away. Little Julie may master fractions and long division in three months while little Stanley may take half a year. At the same time, little Stanley might be happily absorbing 18th century American literature, while little Julie is still working on more basic reading comprehension skills.
  • Places of education should be built to meet specific learner needs rather than be built around the same specifications of classroom space, administration space, cafeterias, gymnasiums, and so on. Why does every elementary school look like every other elementary school, and not just across stretches of geography, but across time as well? The elementary school I attended in the 1960s would function with little modification for my daughter who is in elementary school now. Surely learners (at any age group) are not a monolithic group with singular needs, yet we build places of education as though they are.
  • Education should offer multiple pathways forward rather than a single path that results in matriculation to the “next grade” or failure and repetition of the previous grade. In the world of computer game design, multiple pathways forward are commonplace, allowing players with various skills to progress according to their particular strengths–and in making progress, the player is often able to “circle back” and solve particular challenges that he or she was unable to complete earlier in the game. In the same way, a learner may bypass a particularly challenging content area, yet come back with greater skills acquired in a different “track” better able to solve the original challenge.
  • In fact, the idea of “grade levels” is in many respects antithetical to the concept of the lifelong learner. Why measure start points and end points as set dates on a calendar? Rather, education milestones should be set and achieved on a skills-mastery framework, and this process is ongoing for the true lifelong learner. The ramifications of this would be profound on a social level (the singular graduation moment may no longer exist), but from the perspective of personal growth and fulfillment, the benefits could be tremendous, and there will certainly be just as many–if not more–opportunities for celebrations of achievement.

Ultimately, bringing significant transformative change to the education-industrial complex will require rethinking of almost every segment of the teaching and learning process, including the manner in which we engage technologies to support that process. Being willing to discard our legacy baggage will be extremely difficult for many. Yet doing so will be the only way in which we might remix our 21st century technologies of smart devices, mobile connectivity, social media, the Internet, and more into an educational system that meets the diverse needs of our 21st century learners.


Filed under children, colleges and universities, computer games, creativity, education, education technology, effective practices, emerging technologies, face-to-face instruction, future technology, games, Hap Aziz, higher education, Internet, Learning Management Systems, learning outcomes, legacy systems, online education, smartphones, social media, Student Information System, tablets, technology, virtual college