Open Doors
  Who Is Who  

Amsterdam - 14, 15, 16 November 2002

and the art of innovation

JC Herz

[Note: we've added, as an addendum to this transcript, another paper by JC Herz: Gaming the System: What Higher Education Can Learn from Multiplayer Online Worlds].

(silence, tape hiccup: sorry, ed)... but you would have to work three times as hard, and be discriminated against. And you know, this was the design consensus. It was really the designers giving it up to the citizens. There are tons of stories like this - even to the point where now almost every PC game that ships comes with tools, you know "mods": make modifications, make characters, make skins.

This makes a lot of sense. Players extend the value of the game by evolving it in the directions that they like. It also results in some very significant technological achievements. Because if you think about it, if you sell about a million copies of a game, and one per cent of those people start to tinker with it, that is an unpaid R&D staff of 10,000! And not only that, but the network finds the right people to work on the right problems.

So Quake II comes along and there is a guy called Steven Polge, who's a Motorola engineer and hard-core gamer (engineers and hard-core gamers overlap to a considerable degree). Now, A.I. (artificial intelligence) is a really difficult problem in general, but in games in particular, because it is not enough to just craft an A.I. that plays well. You have to programme one that could fool you into thinking it was human. And this is a tough problem.

This guy, Steven Polge, programmed a plug-in for Quake II called the ReaperBot, with the most cunning A.I. that anyone had ever seen. It was instantly adopted by the entire community. In fact, he was hired by Id Software's rival, Epic Games.

The point is not that Quake has great A.I. - because A.I. still sucks everywhere. The point is that this system of design found the needle in the haystack. This guy, whom software developers would never ordinarily have found, came out of the woodwork.

This has extended even further in things like Half-Life, a first-person shooter that was extensively re-designed by the fan community. Its multi-player mode was every man for himself. The mod community basically started re-engineering the genetic material. (You can read about this in the essay which follows: ed).

So from this primordial pool of amateur mods emerged a few masterpieces that arguably surpass the original game. The first to surface was Counterstrike, which converted Half-Life's "every man for himself" multi-player death match into a squad-based combat game that cast players as members of either a terrorist or a counter-terrorist team, each with unique weapons and capabilities.

The game is played in a variety of maps and scenarios with varying objectives - including hostage rescue, assassinations, terrorist escapes, and bomb defusing missions. Originally envisioned by a player in Vancouver, Counterstrike, like all mods, was a labour of love.

I quote: "My initial motivation was probably the same as everyone else's in the mod scene", he explained on one of the many mod fan-sites; "I just wanted to customize the game to fit my vision of what a game should be. First and foremost it is my vision, not anyone else's. I don't spend ten-plus hours a week working on a mod for free just to make a mod that satisfies everyone. I made a mod I am happy with, and if someone else happens to like it too, then that is a bonus".

It turned out that lots of people did in fact like it very much. Le pulled together a team of fellow players, happy to contribute time and energy to soup up the hottest mod on the net. His first recruit - and project co-leader - lives in Blacksburg, Virginia. Counterstrike map makers had help from England (one was studying geography at Cambridge University), Germany, South Africa, New Jersey, Colorado, and Irvine, California.

Up to this point the Counterstrike story is similar to many Open Source development sagas. There is a lot of congruence between gamers and Open Source coders - with the former applying their talents to entertainment experiences rather than server protocols or operating systems.

But there are also a couple of critical differences. One is that unlike most Open Source software communities - where forking, the proliferation of different versions, is a real concern - the mod community thrives on its ability to introduce the maximum number of mutations in parallel. And they have little concern about whether these mutations inter-operate. There is a lot more speciation. And because this sort of modular proliferation is built into the evolutionary process, the time overhead of project management is radically reduced. Cats are difficult to herd - but they breed like crazy just fine on their own.

The second difference, from a business perspective, lies in the game companies' ability not only to cultivate this elite unpaid R&D community, but also to capture the best mutations of their products for direct commercial gain without alienating the player community by doing so.

"That is a challenge for us", says Gabe Newell, CEO of Valve, the company behind Half-Life. "These teams get formed in funny ways. It is not that they are setting out to do anything like this, so ownership and people's roles can be kind of vague. Sometimes you just have to give them advice", he says; "they ask questions like, 'should we get a lawyer?', and you say, 'yes, you should have a lawyer'; or, 'should we incorporate?', and we say, 'well, it depends on where you are'. A lot of times these are multi-national groups, people working in different countries with different legal issues and taxation issues, so a lot of times we are just trying to help them understand what is going on. I mean, for us it is a long-term investment because it is a very social community. What goes around comes around. Everything you do follows you for a long time. So we try hard to be helpful to these people. In the Half-Life community everything we do is very visible, so we try to be careful that everyone in the community isn't suddenly going to turn on us and savage us for something that we do."

So you have all this feeding back into the core technology. And one of the arguments about this stuff is, "oh well, that is just the hard core; these guys have been modding and hacking things up for years, they're modders and programmers and they would have done this stuff anyway".

But I think that the interesting counter-argument to that is to look at The Sims. I am sure that a lot of you guys have seen or played with The Sims. It is a sort of neighbourhood-level virtual doll house. And The Sims is noteworthy because it illustrates the level of engagement a game can achieve when its designers incorporate crafting into the culture of the game.

Four months before the game shipped, the developers released tools that allowed players to create custom objects for the game's environment: architecture, props, and custom characters. These tools were rapidly disseminated amongst Sims players, who immediately began creating custom content for this world that they couldn't yet play in.

In the months leading up to the game's release, a network of player-run websites sprang up to showcase and exchange handcrafted Sims objects and custom characters. By the time the game was released, there were 50 Sims fan sites, 40 artists pumping content into the pipeline, and 50,000 people collecting that content. One quarter of a million boxes flew off the shelves in the first week.

A year later, there were dozens of people programming tools for Sims content creators, 150 independent content creators, half a million collectors, and millions of players reading 200 fan sites in fourteen languages. While most of these sites are labours of love, a few are profitable as well.

At this point, 90 per cent of The Sims' content is produced by the player population, which has achieved an overwhelming amount of collective expertise in all things Sim.

It feeds on itself. It is completely bottom-up, in a distributed, self-organising way. None of these people are on the Maxis payroll: these people aren't being paid by the game developers. In fact, it's the reverse.

So why do they invest hundreds of thousands of hours fiddling with 3D models and maps?

Among hard-core gamers, there's an element of competition, and wanting to be noticed on a global scale. But for casual gamers - those who furnish the Sims virtual doll house, and the level-swapping and map-making community - the practice of creating skins and custom objects is a kind of twenty-first-century folk art: a form of self-expression for the benefit of themselves and their immediate community.

It sounds odd to put StarCraft maps and Sims dinette sets in the same category as fiddle music and quilting - but socially, they are congruent. Hence the appeal of sites like the Mall of The Sims as a showcase for items such as the following: "Mermaid's cave rug decorating pack! Now you can turn your favourite floor tiles into rugs, or bath mats, or welcome mats, throw rugs, or anything else you can dream of! Just place floor tiles as usual in any size/shape desired, indoors or out, then place these colourful rug edgings around the outside, pulling away from the rug as you go! Brought to you by the Mermaid's Cave, Store G16, Level 2, here at The Mall of The Sims."

Some people simply like to make virtual rug edgings, and there's a 125k craft tool that lets them do that. Almost 400,000 downloads at the last count!

Unlike the R&D being done in the mod community, this doesn't have a thing to do with making the most unprecedented, kick-ass, Formula 1 game experience that blows people away. It is a form of social expression, not unlike swapping MP3 playlists and recipes. Games which are object-oriented at every level of experience provide a substrate for personal construction projects, which are all too rare in the current landscape of corporate capitalism.

In some sense these mass-market digital crafts fairs are an anthropological throwback. It is nothing like web surfing. Yet if you look forward - to a network where every object is live, the much-trumpeted world of web services - that experience will be closer to The Sims than to the current generation of client-server browsers.

The Sims objects are not self-contained executable programs. And they are not static data either. They function in prescribed ways, interact semi-autonomously, and exhibit behaviours within a dynamic framework. New objects contain behaviours that reconfigure the local environment. The Sims don't know how to play soccer, for instance. But if a soccer ball - a software object containing all the rules for playing soccer - is dropped into their midst, they will form teams and start playing soccer. Player-created plug-ins and mods intersect with game engines in a similar fashion.
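The soccer-ball example can be sketched in code. This is purely illustrative - The Sims' actual object format is proprietary, and every class and method name below is invented - but it shows the design idea: the behaviour lives in the object, not in the agent, so dropping a new object into the world reconfigures what the agents can do.

```python
# Illustrative sketch only: The Sims' real object format is proprietary.
# The idea: objects carry their own behaviours, and agents discover new
# activities by querying whatever objects are dropped into the world.

class SimObject:
    """Base class: every object advertises the activities it enables."""
    def activities(self):
        return []

class SoccerBall(SimObject):
    """A ball that brings the rules of soccer along with it."""
    def activities(self):
        return ["form_teams", "play_soccer"]

class Sim:
    def choose_activity(self, world):
        # A Sim knows nothing about soccer; it simply asks the
        # objects around it what can be done here.
        options = [a for obj in world for a in obj.activities()]
        return options[0] if options else "idle"

world = []
sim = Sim()
print(sim.choose_activity(world))   # no ball yet -> "idle"
world.append(SoccerBall())          # drop the ball into their midst
print(sim.choose_activity(world))   # -> "form_teams"
```

The point of the pattern is that the Sim class never changes: adding soccer to the game is purely a matter of shipping (or player-crafting) a new object.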

There are two quantum leaps here. One is implementing a technology platform that allows this to happen. The other is the idea that end-users, rather than professional coders, are equipped to design these objects - and that there is a social ecology that supports the making and trading of such objects among ordinary people.

Amidst all the corporate prognostications about object-oriented code for the rest of us, it is online games that furnish a tangible vision of that future.

Thank you


JT: This all obviously raises a lot of questions. My first immediate one is: are there examples of The Sims community meeting in real life, as a result of these online lives?

JCH: Yes, they do. These meetings are starting to crop up, like the eBay phenomenon where people are starting to meet in real life. But in fact there is a physical dimension, in that The Sims in general happens in families. It is not just, "look what this person in my town made"; it is, "look at what my nine-year-old daughter made, and look how proud I am".

JT: We can probably make connections between what you told us about, and the maintenance and continuous design of software environments. How much success have you had in persuading companies to model their behaviours on this kind of example?

JCH: More now than in the dotcom absurdity. Because there's a sort of ecological efficiency when you say, "over 90% of the content is created by the users". Or that The Sims has sold 17 million copies - over 400 million dollars in sales - and that it is bigger than Star Wars. Then their ears start to perk up a little and they say, "oh well, let's check this out, it might be a good idea".

JT: But then they will ask: how can we get some of this? Do they get it that the actual dynamics are precisely against a centralised form of design?

JCH: I think they do - but the people that most need it are the most resistant. It is the record companies that really need this, and they are the most defensive.

Gaming the System: What Higher Education Can Learn from Multiplayer Online Worlds
by J.C. Herz
Computer games and higher education are species that share an ancestor, but have diverged wildly in their evolution. The earliest computer games were part and parcel of academic computer science departments in the ‘60s – one year after the first PDP-1 was delivered to MIT in 1961, the first computer game, Spacewar, had been written by a young graduate student named Steve Russell. The game was, in fact, the students’ way of learning to use the computational behemoth:

“I thought it was this great thing, and I was itching to get my hands on it,” Russell recalls. “And so a bunch of us started talking about how you could really do a lot more with the computer and the display. Space was very hot at the time – it was just when satellites were getting up and we were talking about putting a man on the moon…So I wrote a demo program that had two spaceships that were controlled by the switches on the computer. They were different shapes. They could fire torpedoes at one another, and they could navigate around the screen with the sort of physics you find in space.

“And then Pete Samson wrote a program which displayed the star map sort of as you’d see it looking out the window, and I incorporated that as a background. And then Dan Edwards looked at my code for displaying outlines and figured out a way to speed it up by a factor of two or three, which gave him enough time to compute the effect of gravity on the two spaceships. And that made it a much better game, because with the stars in the background, you could estimate the motion of the ships much better than when they were just on a dead black background. And with the spaceships affected by gravity, it made it a bit of a challenge, and you got to try to do orbital mechanics – there was the star in the center of the screen, and it attracted them just as the sun would.”

Even in this first incarnation, computer games exhibited all their signature qualities as a learning experience: All the knowledge and skills acquired in the process of creating Spacewar were a means to an end: programming physics simulations, allocating resources, representing scale and perspective – all of these were necessary to make the game better and lo, they were mastered. All of this learning and teaching occurred in a collaborative, highly social context, another hallmark of computer games. And the benefits, social and technological, were shared – Spacewar was made available to anyone who wanted a copy. Within a year of its completion, there was a copy of the game on every research computer in America.

For years, computer games flourished in academic computer labs. Ironically, although they were never sanctioned activities, games provided a social nexus for undergraduates and graduate students to cluster, discussing thorny problems while waiting their turn to go head-to-head in Spacewar, or collaboratively figuring out how to better allocate network resources so they could play early online games (and later, stage multiplayer Doom marathons) with minimal disruption to the network as a whole (it’s amazing how innovative groups of students become when cherished activities are on the verge of being banned).

As computers moved out of the lab into the living room, these budding programmers dedicated their time (and sometimes dropped out of school) to create games for a burgeoning class of enthusiasts. Their products were fly-by-night affairs – labors of love, stored on floppy disks, packaged in Ziploc bags. They were programmed quickly, played enthusiastically, then deleted (there was never enough room on the hard drive, then as now). Because games were processor intensive and consumer PCs were so slow, game designers had to be resourceful, using every known loophole to squeeze extra processing cycles out of putt-putt computers like the TRS-80 and Commodore Amiga. These pokey machines were inferior to academic mainframes in every respect. But because they were accessible, they enabled a growing population of hobbyists to hack away for fun and profit, sharing expertise, if only to show off.

Over the years, a community took root and flourished, informally and organically. When the Internet became accessible to non-academics in the early ‘90s, the computer game community (all early adopters) embraced it, and exploded their already-robust bulletin-board, magazine and modem culture onto ftp sites, and later, the web. After Id Software open sourced the Doom level editor in 1994, there was an explosion of player modifications, as gamers took 3D engines and editing tools into their own hands. As in any Darwinian environment, the fittest creations survived, garnering fame (and gainful employment) for their authors along the way.

By the end of the millennium, nearly every strategy and combat game on the market came with a built-in level editor and tools to create custom characters or scenarios. Nourished by the flexibility of these tools and the innate human desire to compete and collaborate, a dynamic, distributed ecosystem of official game sites, fan pages, player matching services and infomediaries flourished – and continues to grow in an unrestrained fashion, on a global basis. As the player population expands, so does the game industry, which now rivals the Hollywood box office, exceeding $7 billion in annual sales.

Meanwhile, the computers keep getting faster. As Moore’s law kicks in and hard drives grow in size and shrink in price, commercial games get better looking and more sophisticated. Graphic accelerators smooth out the edges and goose the frame rate. Faster chips process real-world physics. High-bandwidth connections throw distant opponents into virtual arenas. At every step along the way, gamers have embraced the many-to-many potential of computer networks, not just to compete, but to collaborate, invent, and construct a networked model for learning and teaching.

If a gamer doesn’t understand something, there is a continuously updated, distributed knowledge base maintained by a sprawling community of players from whom he can learn. Newbies are schooled by more skilled and experienced players. Far from being every man for himself, multiplayer online games actively foster the formation of teams, clans, guilds, and other self-organizing groups. The constructive capabilities built into games allow players to stretch the experience in new and unexpected directions, to extend the play value of the game, and in so doing garner status – custom maps, levels, characters, and game modifications are all forms of social currency that accrue to the creators of custom content, as they are shared among players.

In terms of the speed and volume of learning – the rate at which information is assimilated into knowledge, and knowledge is synthesized into new forms – the networked ecosystem of online gaming is vastly more dimensional than the 19th-century paradigm of classroom instruction. Primarily, this is because games fully leverage technology to facilitate “edge” activities – the interaction that happens through and around games as players critique, rebuild, and add onto them, teaching each other in the process. Players learn through active engagement, not only with software but with each other.

In universities, it is widely accepted that much of the learning happens outside the classroom. But universities have no coherent strategy for leveraging that “edge” activity online. There are online syllabi and course catalogs, threaded message boards that graft section discussions online, and e-mail between students (and sometimes even between students and teachers). But these activities are not integrated in a constructive way – they don’t comprise the kind of socially contextualized learning to which young people weaned on Playstations are increasingly accustomed.

It’s not a question of whether such learning will happen – the current generation of students is notoriously good at “getting around” institutions that fail to address their needs. The question is whether the university will assume leadership in the innovation process, or whether the standard applications and conventions will be rigged together without faculty oversight and disseminated by undergraduates who may or may not share the institution’s pedagogical agenda. Perhaps it is better if students evolve their own best practices in cyberspace despite their universities, with no regard to disciplinary boundaries or departmental turf, in the cool shade of institutional ignorance. There is, in fact, a good case to be made for this scenario.

But regardless of whether university administrations choose to assume an attitude of benign neglect or take an active role, it is necessary to understand that the dynamics of networked learning differ fundamentally from classroom instruction, and from traditional notions of distance learning. Where classroom instruction is one-to-many, and traditional distance learning (i.e. correspondence schools and most online “courses”) is one-to-one, networked learning environments are many-to-many, with their own design principles and criteria by which people and their projects are evaluated.

Online games are an object lesson for academia, not because universities need to be making games, but because online games illustrate the learning potential of a network, and the social ecology that unlocks that potential. As higher education strives to transform itself via information technology, it must examine not only the hardware and software necessary to achieve that transformation, but also the cultural infrastructure necessary to leverage those resources. To this end, it is useful to examine the knowledge economy that drives networked games, and derive lessons where appropriate.


The development cycle for a computer game, circa 2001, is 18 months, from the generation of the design specification to the release of the product (production typically involves 12-20 people, with costs ranging from $5-7 million). But for many games, and particularly the stronger-selling PC titles, that process begins before the “official” development period, and extends afterwards, with a continuous stream of two-way feedback between the developers and players.

Perhaps the most salient example of this phenomenon is in-game artificial intelligence, one of the great engineering hurdles in any game. In first-person combat games, there is a marked difference between real and computer-generated opponents – human opponents are invariably smarter, less predictable, and more challenging to play against. There is no comparison between a multiplayer deathmatch (elimination combat with up to eight people on the same 3D map) and a single-player game with AI opponents. Because of this discrepancy, first-person shooters are, de facto, online multiplayer games; several have dispensed with single-player mode altogether.

AI, however, like all engineering challenges, is subject to the million monkeys syndrome: put a million gamers into a room with an open, extensible game engine, and sooner or later, one of them will come up with the first-person shooter equivalent of Hamlet. In the case of Id Software’s Quake II, it was a plug-in called the ReaperBot, a fiendishly clever and intelligent AI opponent written by a die-hard gamer named Steven Polge (who was subsequently employed by Id’s main rival, Epic Games, to write AI for the Unreal engine). Polge’s ReaperBot was far and away the best Quake opponent anyone (inside or outside Id Software) had ever seen, and the plug-in rapidly disseminated within the million-strong player population, who quickly began hacking away at its bugs, even though such modifications were technically illegal. Needless to say, these improvements in game AI were incorporated into the core technology of first-person shooters, to everyone’s benefit, not least the game companies.

The salient point here is not that Quake has great AI, but that its architecture, the very nature of the product, enables distributed innovation to happen in a parallel, decentralized fashion. Of course, not all players roll up their sleeves and write plug-ins. But if even 1% contribute to the innovation of the product, even if they are only making minor, incremental improvements or subtle tweaks, that’s ten thousand people in research and development.

Most of the players who tinker with combat games aren’t programmers. They don’t have to be, because the editing and customization tools in today’s games require no programming skill whatsoever. Levels of combat games can be constructed in a couple of hours by anyone familiar with basic game play. Real-time strategy games offer similar capabilities. New maps, with custom constellations of opposing forces, can be generated with a graphical user interface.

Notably, historical and quasi-historical simulations like Sid Meier’s Gettysburg allow gamers to replay military conflicts under different conditions (“What if General Lee had been there?” “What if Pickett hadn’t charged?”). Which is not to say that the software delivers any definitive answer that a military tactician could not. The point is, the flexibility of the framework allows and encourages non-expert, individual players to ask the questions, explore the solution space around a particular scenario, and create new scenarios that might not have occurred to the game’s designers.

In a commercial context, this tool-based, user-driven activity has several important functions. It extends the life of the game, which both enhances the value of the product (at no incremental cost) and increases sales: the longer people play the game, the longer they talk about it, effectively marketing it to their friends and acquaintances. Will Wright, author of the best-selling Sim City series, compares the spread of a product in this fashion to a virus: “Double the contagious period,” he says, “and the size of the epidemic goes up by an order of magnitude. If I can get people to play for twice as long, I sell ten times as many copies.” Wright’s formula bears out on the bottom line – his latest game, The Sims, has spawned two expansion packs and racked up $340 million in sales since its 2000 release.
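Wright’s back-of-the-envelope claim can be checked with a simple branching-process sketch. The model and every number below are illustrative assumptions, not Maxis data: if each player recruits new players at a fixed rate while actively playing, doubling the time played doubles the average number of recruits per player, and that doubling compounds across several generations of word of mouth.

```python
# A sketch of Will Wright's word-of-mouth arithmetic as a branching
# process. All numbers here are illustrative assumptions, not data.

def epidemic_size(r, generations):
    """Cumulative players after some generations of word of mouth,
    where each player recruits r new players on average."""
    return sum(r ** g for g in range(generations + 1))

RECRUITS_PER_MONTH = 1.5  # assumed recruits per player per month of play
short = epidemic_size(RECRUITS_PER_MONTH * 1, 4)  # players quit after 1 month
long = epidemic_size(RECRUITS_PER_MONTH * 2, 4)   # double the contagious period

print(f"1-month play: ~{short:.0f} players; 2-month play: ~{long:.0f}")
print(f"ratio: {long / short:.1f}x")  # roughly an order of magnitude
```

With these assumed numbers, four generations of recruiting yield about 13 players versus about 121, a ratio of roughly 9x: doubling the contagious period does indeed move the epidemic by close to an order of magnitude.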

The Sims, which scales Wright’s SimCity down to the neighborhood level, is noteworthy because it illustrates the level of engagement a game can achieve when its designers incorporate player feedback and collaboration before, during, and after the product is released. Four months before the game shipped, its developers released tools that allowed players to create custom objects for the game’s virtual environment: architecture, props, and custom characters.

These tools were rapidly disseminated among Sim City players, who began creating custom content immediately. In the months leading up to the game’s release, a network of player-run web sites sprang up to showcase and exchange “handcrafted” Sims objects and custom characters. By the time the game was released, there were 50 Sims fan sites, 40 artists pumping content into the pipeline, and 50,000 people collecting that content. A quarter million boxes flew off the shelves in the first week. A year later, there were dozens of people programming tools for Sims content creators, 150 independent content producers, half a million collectors, and millions of players reading 200 fan sites in 14 languages.

At this point, more than 90% of The Sims’ content is produced by the player population, which has achieved an overwhelming amount of collective expertise in all things Sim. The player population systematically trains itself, generating more sophisticated content as it learns. This is a completely bottom-up, distributed, self-organizing process.

The relevance of this, for institutions of higher learning, is not that students should create courses. Rather, it is that online content needs to leverage the social ecology that drives networked interaction in order to become meaningful. An online learning environment, whether it is an Internet-only experience or the complement to an offline course, must give participants the tools to actively engage in the construction of the experience. It is not enough to absorb the content, and then sit around gabbing so your section leader can see that you’ve absorbed the content. There has to be a way for students to take the content and “run with it,” such that their fellow students, in section and across the class, can immediately use and benefit from that effort.

Moreover, the system must acknowledge that contribution. In the world of online games, that acknowledgment is quantified in various ways – players know how many times their contribution has been downloaded and how it’s been rated by the community. Even if a player’s contribution isn’t very good, he or she still has some concrete acknowledgement that it has been used, if only by 44 people out of a population of millions (and to that player, 44 people seems like a lot). This acknowledgement fuels participation, and invests the player in the experience, because it transforms knowledge into social capital. Not only does the player “own” their learning (because they’ve had a hand in its construction), but that ownership is worth something in a social context where one’s status derives from peer acknowledgement – an incentive more powerful than grade point average or teacher approval.

One might say, “Oh well, it’s easy to talk about constructive participation and peer-to-peer learning in games, they’re full of digital objects you can map and sculpt and hone – classes are verbal, and you can’t evaluate verbal contributions the same way.” But in fact, you can – look at Slashdot (www.slashdot.org), a site dedicated to technology news and discussion. Instead of the standard magazine format (staff writers generate articles, readers discuss them) or conventional online communities (loudmouths talk, lurkers read, and the occasional flame war flares up), Slashdot’s architecture harnesses the collective intelligence of the network to drive discussion.

Any registered Slashdotter can submit a mini-article or comment – these often point to outside sources like newspapers and magazines, which are hyperlinked when possible so that readers can check out the evidence for themselves. These submissions are filtered by moderators, and rated on a scale of 1-5. Readers can then designate their “threshold,” the minimum score that a comment needs to have in order to be displayed. So, for instance, if you set your threshold at 2, any comments with scores of 2 or above appear on your screen.

For an individual, a higher cumulative point score (the way your comments have been rated by the community) corresponds to “karma” that makes you eligible to moderate. Moderation privileges are doled out on a continuous basis – every 30 minutes the system checks the number of comments that have been posted, and gives eligible users “tokens.” When a user acquires a certain number of tokens, he or she becomes a moderator, and is given a number of points of influence to play with. Each comment they moderate deducts a point. When they run out of points (or when their points expire, after three days), they are done serving until the next time it is their turn.
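The threshold mechanic described above is simple to state precisely. The sketch below is illustrative only - the field and function names are invented, not taken from Slashdot’s actual codebase (Slash) - but it captures the core filtering rule that lets each reader tune how much of the discussion they see:

```python
# Minimal sketch of Slashdot-style threshold filtering; names are
# illustrative, not drawn from Slashdot's actual implementation.

comments = [
    {"author": "alice", "score": 5, "text": "Insightful analysis..."},
    {"author": "bob",   "score": 1, "text": "me too"},
    {"author": "carol", "score": 3, "text": "A useful counterexample..."},
]

def visible(comments, threshold):
    """Return only the comments at or above the reader's chosen threshold."""
    return [c for c in comments if c["score"] >= threshold]

# A reader with threshold 2 never sees bob's low-rated comment:
for c in visible(comments, threshold=2):
    print(c["author"], c["score"])  # alice 5, then carol 3
```

The design choice worth noticing is that filtering happens per reader, not globally: nothing is deleted, so a reader who sets their threshold to 1 still sees everything.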

What this system does is a) reward people who make verbal contributions that are valuable to the group as a whole, b) prevent the discourse from being dominated by people who simply like to hear themselves talk and c) give listeners a larger influence, and a greater sense of involvement in the discussion. If you are designated as a moderator, you have to read more closely than you otherwise would (at least for three days), in order to determine which arguments are worth exerting your influence on.

The complex exchange of social capital is what differentiates this networked experience from a non-networked one. In order to “network” a course, the question is not “how can the content be delivered digitally?” but “what are the students getting out of this experience that they wouldn’t be getting in the classroom, or in a library – how does the structure of the experience make them useful to each other?” How can the collective consciousness of 20, or 100, or 600 students be brought to bear on the learning process?


The dynamics that drive mastery and knowledge exchange in and around computer games derive from the social ecology of computer games – the conventions of interpersonal interaction that define status, identity, and affiliation both within the games and in the virtual communities that surround them. Commercial game culture is structured to harness innate human behavior: competition, collaboration, hunger for status, the tendency to cluster, and the appetite for peer acknowledgement. In other words, the forces that hone games, and gamers, have more to do with anthropology than code.

Beyond the technological infrastructure, there is a cultural infrastructure in place to leverage these interpersonal dynamics. As discussed above, tools and editing modes allow players to create assets (levels, modifications, skins) to extend the game experience. But more important than the stand-alone benefit of these assets is their value as social currency. The creator of a popular level, object, or plug-in may not receive monetary remuneration. But he garners notice, and even acclaim, from his fellow gamers.

Game modifications, or “mods,” are reviewed on thousands of game sites, from fan pages to high-traffic news destinations like GameSpy. These rotating showcases serve dual functions in computer gaming’s attention economy. For gamers looking to download new content, they sift for quality. For content creators, they offer widespread exposure. Because game culture is global, well-designed mods are lauded by an international array of web sites in half a dozen languages. Even game levels and character models (a.k.a. “skins”), which require less time and skill, are circulated on six continents (probably seven – field researchers in Antarctica have satellite web access, and a lot of time on their hands).

But even on a more local, limited basis, player-generated content circulates among peer groups, particularly among high-school and college students, for whom games are a nexus of friendly rivalry and bragging rights. New levels, skins, and modifications provide social fodder, and bring novelty to the networked game marathons that are now ubiquitous in college dorms, high school computer labs, and offices populated by tech-savvy twenty-somethings.

These group dynamics are best represented by the vast network of self-organized combat clans that vie for dominance on the Internet. No game company told players to form clans – they just emerged, in the beta test for Quake, and have persisted for years. There are thousands of them. The smallest have five members; the largest have hundreds, and have developed their own politics, hierarchies, and systems of governance. They are essentially tribal – each has a name, its own history, monikers, signs of identification (logos and team graphics). Clans do occasionally cluster into trans-national organizations, adopting a shared moniker across national boundaries and operating under a loose federalist structure. Generally, however, clans are comprised of players in the same country, because proximity reduces network lag. In games that require quick responses, this is a real factor.

The clan network may seem anarchic – it is fiercely competitive and has no centralized authority. But beneath the gruesome aesthetics and inter-clan bravado, it is a highly cooperative system that runs far more efficiently than any “official” organization of similar scale, because clans, and the players that comprise them, have a clear set of shared goals. Regardless of who wins or loses, they are mutually dependent on the shared spaces where gaming occurs, whether those spaces are maintained by gamers for gamers, like ClanBase (www.clanbase.com/faq.php#what), or owned and operated by game publishers, like Sony, Electronic Arts, or Blizzard Entertainment, the developer of hit games like StarCraft, WarCraft, and Diablo II.

In an educational context, the salient lesson here is that the vibrancy of these shared spaces stems from the relationships, not only between individuals but between the individual and the group, and between groups. Individuals do not view themselves merely as isolated participants, even within games where they are competitors, because the game establishes ongoing relationships on many levels. Between players, obviously, there is rivalry. But gamers also consider themselves to be part of a group – their pack or clan or loose amalgamation of friends that gets together, online or offline, to play. There is a sense of common identity, and shared goals, to which the individual brings all of his knowledge, tactical skills and constructive abilities.

“Mastering the game” in an online, networked environment is a team sport. There are ways for groups to form, bond, and to collectively succeed. There are almost no such mechanisms in the academy. Even the message boards associated with class sections, a natural group division, don’t give the section any reason to band together. There is the usual vocal minority of know-it-all show-offs, the big middle group, who pipe up when they need clarification, and the inveterate lurkers. Course after course, semester after semester. It’s interesting to ponder what would happen if students’ individual grades were affected by the performance of their section. Graft that collaborative activity onto the ethernet, and you would have online learning in turbodrive.


Underlying the dynamics of networked environments is the process whereby individuals are evaluated and rewarded by the system itself, rather than by a specific individual. This process is perhaps most evident in massively multiplayer role-playing games like EverQuest, Ultima Online, or Asheron’s Call (maintained by Sony, Electronic Arts, and Microsoft, respectively). Unlike most games, whose playing fields exist only while participants are actively engaged, these online worlds persist, whether or not an individual player is logged on at any given time. This sense of persistence gives the game depth, and is psychologically magnetic: the player is compelled to return habitually (even compulsively) to the environment, lest some new opportunity or crisis arise.

The persistence of the virtual environment allows players to build value, according to the standard conventions of role-playing games (RPGs). In an RPG, a player’s progress is represented not by geographical movement (as in console adventure games like Mario or Tomb Raider, where the object is to get from point A to point B, defeating enemies along the way), but by the development of his character, who earns experience points by overcoming in-game challenges. At certain milestone point-tallies, the character is promoted to a new experience level, gaining access to new tactics and resources – but also attracting more powerful enemies. The better the player becomes, the more daunting his challenges become. Thus, the player scales a well-constructed learning curve over several months as he builds his level-one character into a highly-skilled, fully equipped level 50 powerhouse.
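The progression loop described above – experience accrues from challenges, milestone tallies trigger promotion, and the curve steepens as the character grows – can be sketched as follows. The numbers and the quadratic formula are invented for illustration; no particular game uses these values:

```python
# Illustrative sketch of an RPG progression loop. The level cap,
# multipliers, and the quadratic curve are assumptions for the example.

def xp_for_level(level):
    """Experience required to reach a given level -- a curve that
    steepens as the character grows (quadratic, for illustration)."""
    return 100 * (level - 1) ** 2


class Character:
    def __init__(self, name):
        self.name = name
        self.level = 1
        self.xp = 0

    def overcome(self, challenge_difficulty):
        """Earn experience for a challenge; promote the character at
        each milestone tally, up to the level-50 cap."""
        self.xp += challenge_difficulty * 10
        while self.level < 50 and self.xp >= xp_for_level(self.level + 1):
            self.level += 1
```

The design point the essay makes lives in `xp_for_level`: because the requirement grows faster than linearly, each promotion demands more than the last, which is what keeps the learning curve well-constructed over months of play.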

These characters embody not only skills and resources acquired in the course of play, but also reputations and connections formed and nurtured when the player joins a band of fellow adventurers, or a larger clan, guild, or tribe. Over the course of several years (Ultima Online is in its fourth year), much of the player’s learning is concretized, qualitatively and quantitatively, in that character’s profile – how they rate in various attributes (strength, speed, dexterity, physical resilience, intelligence, charisma), what their affiliations are, and what sort of combat skills and arcane spells they have at their disposal – as well as where they fall on the good-to-evil continuum.

The character is a reflection of every action a player has taken in the virtual environment – kind of an existential self-portrait. Not surprisingly, players are emotionally invested in the statistical profiles of these characters, far more so than they would be in a simple score tally (or grade point average). In a sense, the RPG game persona is the most fully dimensional representation of a person’s accumulated knowledge and experience in the months and years they spend in an online environment.

In a deeply networked learning environment, it’s not unreasonable to imagine the mechanisms of evaluation shifting to this model, which in some ways mirrors the principles of a liberal education – that students should, in the course of their undergraduate education, apprehend the modes of thinking inherent in physical and social sciences, history, literature, philosophy, logic (in its contemporary designation as “quantitative reasoning”), and the arts. Instead of a binary framework where those requirements are either met or not met, they might be considered attributes that are continuously strengthened, concentrating in the student’s field of study, just as an RPG character’s experience heightens the attributes specific to the in-game profession he has chosen.

In this framework, courses, projects, and extracurricular activities are all experiences which allow a student to incrementally progress along a number of axes, from quantitative analysis, fluency in a foreign language, and aesthetic knowledge, to leadership and communications skill. Depending on the type and difficulty of the challenges a student assumes, and how well they acquit themselves, experience points accrue along these axes (e.g. multivariable calculus allows a student to earn up to four points of quantitative reasoning experience, which could map to conventional grades; directing a play might translate into one point of leadership experience).
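A minimal sketch of this proposed scheme, using the essay’s own examples: each activity offers a ceiling of points along one or more named axes, and a student’s profile accumulates what they actually earn. This is a hypothetical model of the proposal, not an existing system; the class names and point caps come from the paragraph above:

```python
# Hypothetical sketch of the "experience axes" evaluation scheme
# proposed in the text. Activities and point caps are the essay's
# examples; the data structures are assumptions for illustration.

from collections import defaultdict


class StudentProfile:
    def __init__(self):
        # Accumulated experience along each named axis.
        self.axes = defaultdict(int)

    def complete(self, activity, earned_points):
        """Record points earned along each axis an activity touches,
        capped at the activity's maximum for that axis."""
        for axis, max_points in activity.items():
            self.axes[axis] += min(earned_points.get(axis, 0), max_points)


# The essay's examples: multivariable calculus offers up to four points
# of quantitative reasoning; directing a play offers one of leadership.
multivariable_calculus = {"quantitative reasoning": 4}
directing_a_play = {"leadership": 1}
```

Because the profile is cumulative and multi-axis, it behaves like the RPG character sheet the essay describes: a continuum of development rather than a binary met/not-met requirement.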

Leveling up from year to year reflects more, in this context, than a certain number of hours of class and a certain assortment of grades (which tend to lose meaning outside the context of a particular course, given that grading scales vary between professors and between departments). Unlike a transcript, this persona-based representation of individual performance gets close to representing the sum of a student’s experience, along a variety of axes, and who they are on the day they graduate, rather than what they were doing in the spring semester of their sophomore year. It gives them a way to understand their development as a continuum, and how their cumulative achievement reflects both their strengths and the gaps in their development. This sense of actualized knowledge is the most powerful convention that higher education can borrow from persistent multiplayer online worlds.

Because life, for a 21st century undergraduate, is a persistent multiplayer online world.

updated Monday 31 March 2003
Doors of Perception 2002. We are happy for this text to be copied and distributed, as long as you include this credit: "From Doors of Perception: www.doorsofperception.com".