Eric Klopfer, Professor and Director of the Scheller Teacher Education Program and The Education Arcade, MIT
If you are as old as I am, or at least familiar with games as old as I am, you’ll know that many of the early computer games were text adventures. These games set you on a narrative-driven quest, letting you interact with the environment through two-word commands. Commands like “GO WEST” and “LIFT TABLE” were quite common. To me, these worlds were equally fascinating and frustrating. These were whole worlds contained on a floppy disk. But unlocking certain parts of that world with just the right two-word command (and no Internet to search for help) was occasionally infuriating. What I wouldn’t have done at the time to have the ability to interact with that world in a more human-centric language.
Fast forward to today, and Artificial Intelligence (AI) provides a pathway to natural language interactions with computers. Players are no longer constrained to short, pre-determined commands, and AI opens up tremendous opportunities for more natural interactions in games.
However, much of the current focus on AI in education is on “solving the two sigma problem” with AI tutors. For those unfamiliar with the two-sigma problem, it is based on the idea that a good human tutor can produce a two-standard-deviation improvement in a student’s learning. However, the approach that many technology companies are taking to this problem oversimplifies what a “tutor” is. While looking for a graph to illustrate the two sigma problem for an upcoming talk - to explain the problems with this framing - I found an illustration accompanying an article entitled something like “Technology solves the two sigma problem.” This was just what I was looking for. Except that the headline came from an article more than half a dozen years old, about how online human tutors would solve this problem. Long story short - they didn’t.
What many of the AI tutors miss is that learning does not happen in isolation. Learning happens through connections to people and meaningful context. This is what good games do - they connect learning to meaningful problems, and they connect people, sometimes in the game and sometimes around the game. They aren’t about rewards or superficial fun. They are about creating problem spaces that learners care about enough to struggle through challenges that they want to overcome.
In our book, Resonant Games (available free online at https://www.resonant.games), we outline a series of design principles for learning games that are all about connecting learners with each other, and with the world around them. If we honor those principles, there are some interesting opportunities ahead for the overlapping space of AI and games. For example, we’ve been developing some recent prototypes that take advantage of multimodal AI (AI that can interpret and generate text and images) to allow students to use a hand-drawn graph as an input to a game.
One of our key design principles is designing games to be part of Kolb’s Experiential Learning cycle. Games provide a concrete experience for learners. However, learning happens when that concrete experience is combined with resources and reflection. With the ability to analyze and interact with data from in-game actions, we can more tightly integrate that challenging action with reflection, as game characters react and respond to the actions and words of players.
I am excited about this potential in the learning games space, though we have many challenges ahead - and a lot of opportunities for this community to engage.