(This is Part One of a multi-part article on computer games as pioneering attempts at virtual reality, all as seen from the viewpoint of a young and then not-so-young gaming fanatic from 1981 through the present.)
In 1981, one day before my first microcomputer was delivered to my apartment by a kindly salesperson, I bought a game for it. It was called Morloc’s Tower and, though I didn’t know it at the time, it was closely related to Temple of Apshai, one of the most successful early computer role-playing games, first published in 1979. The picture on the back of the box showed a simple silhouette of a warrior traversing a two-dimensional maze and facing down an equally simple silhouette of some sort of monster. I was disappointed to discover that this image was from the Apple ][ version, while my computer was a TRS-80 Model III, where everything on the screen was reduced to an even simpler collection of giant pixels. But it really didn’t matter. Just looking at that picture, as crude as it was even in the Apple version, had caused me to have an epiphany.
A warrior, a monster, a maze: We were easily entertained in 1981.
I’m sure everyone who follows technology and has even a rudimentary knowledge of the principles that underlie it has experienced a moment when they’ve looked at some revolutionary new piece of consumer tech, as home computers still were in the summer of 1981, and caught a glimpse of where that technology would eventually lead. I had already spent a decade playing arcade games, from Pong to Space Invaders to that spanking new sensation Donkey Kong, but it wasn’t until I saw that screenshot of a role-playing game being played out on a microcomputer display that I realized these things we called “computer games” or “video games” could be more than just games. They could be worlds.
By the time my computer arrived the next afternoon, I had already worked out a full-blown vision of what I thought computers would be capable of in 20 years, a period of time that turned out to be fairly close in some ways and completely off in others. We would have wraparound computer screens as tall as we were (wrong, but it turned out not to matter) that would offer us a window into a virtual universe, though I probably didn’t think of it quite in those words, because the term “virtual reality” hadn’t been popularized yet. This universe would be rendered graphically based on a mathematical model stored in the computer and it would be populated by artificial intelligences with whom — I was already thinking of them as beings, not things — we could interact through speech and body movements. The graphics would be so realistic that the AIs would look like real people and the landscape so intricately depicted that you would be able to see individual blades of grass waving in the wind. So realistic would this computer-generated world be that you wouldn’t even feel like you were playing a game, per se. You would be having a full-blown experience, spending a few hours vacationing in a world that just didn’t happen to be your own. The components of the world would be so fully realized and the characters so artificially intelligent that the game designer wouldn’t even need to provide a story. You would create one yourself through the ways in which you interacted with the elements of this virtual world.
Dungeon Master: The most realistic computer role-playing game of the 1980s.
I was so taken with this vision of an electronic universe constructed inside a computer that I started collecting advanced texts on 3D graphics and artificial intelligence. The ones on graphics were intimidating and made me wish I hadn’t slept through high school trigonometry. The ones on AI were less intimidating, but also made the basic problems sound a great deal more difficult.
I obviously wasn’t the only person thinking along these lines, because I quickly began to discover that Morloc’s Tower and its cousin Temple of Apshai weren’t the only games that were attempting to realize some version of that vision. Somewhere in the 80s the term “virtual reality” became common and it was clear to me that it was the goal that a lot of programmers were working toward in the guise of computer games. Over the next decade, games like the Ultima series, the Infocom adventures, the subLOGIC (later Microsoft) Flight Simulator, and the undeservedly obscure but extremely influential Dungeon Master were all, in one way or another, doing exactly what I hoped computer games would do: creating worlds. By late in that decade Origin Systems, the company that published the Ultima games, even adopted “We Create Worlds” as its motto. And these worlds that they were creating were ones that I, the person sitting in front of the computer, could in one way or another move around through and interact with, worlds with genuine inhabitants, never mind that those inhabitants were quite simple-minded compared to actual human beings. The fact that these worlds were depicted on a 14″ computer monitor and that their inhabitants couldn’t pass a Turing Test if they had a cheat sheet scribbled on the backs of their virtual hands didn’t matter. A perfect simulation of reality turned out to be far less important than I had thought.
In fact, even perfectly rendered graphics — or graphics at all — turned out not to be essential. The first game where I really felt that I’d fallen through the computer display and into the world of the game like Alice tumbling down the rabbit hole into Wonderland was Infocom’s 1982 adventure game Deadline, where the player took the part of a detective with 12 hours to investigate a locked-room murder and to interrogate the occupants of the mansion where it had taken place. Your interface into that world was purely through text. You typed commands using the computer’s keyboard; the game described your environment by printing text back at you on the computer’s screen. The sheer sense of verisimilitude I felt while playing Deadline was a revelation. That mansion was alive! People moved through it on their own schedules and could be observed by the player either openly or, if a suitable hiding place was available, covertly, and sometimes they would behave differently depending on whether they were aware of your presence. You could collect clues, explore hallways, unlocked rooms and the grounds inside the mansion’s fence. You could stop characters in their tracks, talk to them, and ask them questions, which had to be phrased properly before the character would understand them but would often yield interesting (if frequently and deliberately misleading) answers. By the end of the 12 hours, you either had to accuse someone of the crime or get thrown out of the house. Fail to find the correct culprit and you could either revert to a saved game position or start the game over, trying new tactics.
Deadline: Virtual reality with no graphics in sight.
Deadline fascinated me and I still think it’s the best game Infocom ever published, even better than the more famous Zork series, but some players found talking to the characters boring and later detective adventures from Infocom were far less ambitious. I enjoyed the company’s science fiction and fantasy games too, but none had the phenomenal sense of reality that Deadline exuded. In fact, Deadline might have been a peak moment in the use of artificial intelligence in games. Even today, the only interactions you’re likely to have with computer-controlled characters either involve fighting them or selecting conversational gambits from onscreen menus.
It was, however, text adventures such as this (and simpler ones being produced by a programmer named Scott Adams, no relation to the creator of Dilbert) that inspired me to learn to program. In late 1981 I ordered a book of adventure games written in the BASIC programming language, all of which had been published commercially in the late 1970s but were now sufficiently dated that the authors had released the source code to be used for educational purposes. I typed one into my computer and, once I’d finished combing out all the typos, was astonished at how vivid a world it created, all based on typed instructions and fairly simple data structures. I was so excited by this that I stayed up almost 48 hours straight, dissecting the program so that I knew exactly how it worked and then writing a similar program of my own. It turned out to be remarkably easy to create something out of variables and computer data structures like arrays that felt very much like a real world, one with which the player could interact freely.
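That room-and-exit idea is easy to see in modern terms. Here is a minimal sketch, in Python rather than the period BASIC and with room names invented purely for illustration, of how a couple of simple data structures (the modern equivalent of those BASIC arrays) can encode a small explorable world and a loop that responds to typed directions:

```python
# A minimal sketch of an early text adventure's guts: rooms, exits,
# and a command loop built from nothing but dictionaries and lists.
# Room names and descriptions are invented for illustration.

rooms = {
    "clearing": {"desc": "A grassy clearing. A trail leads north.",
                 "exits": {"north": "cave"}},
    "cave":     {"desc": "A dark cave. Daylight glimmers to the south.",
                 "exits": {"south": "clearing"}},
}

def play(commands):
    """Run a list of typed direction commands; return the transcript."""
    location, transcript = "clearing", []
    for cmd in commands:
        if cmd in rooms[location]["exits"]:
            location = rooms[location]["exits"][cmd]
            transcript.append(rooms[location]["desc"])
        else:
            transcript.append("You can't go that way.")
    return transcript
```

A real game of the era layered a verb-noun parser and object tables on top, but the core world model really was this small.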
Still, the reality quotient of games continued to increase and 3D graphics, which were a tougher nut for me to crack intellectually, became rapidly more important. The subLOGIC Flight Simulator, the second edition of which was published the same year as Deadline, was another early milestone in virtual verisimilitude, a stealth attempt to create virtual reality in the guise of a computer game. Even though it ran at about one frame per second on my Commodore 64, I was startled by the sheer volume of the world it depicted. You viewed that world entirely from the cockpit of a small plane and it largely consisted of lines representing roads and rivers, with the occasional wireframe building or bridge and the even more occasional texture-mapped surface. But the fact that I had hyper-realistic control of the way that Piper Cherokee Archer moved through the skies of the game’s four areas (Seattle, New York, Chicago and Los Angeles, if I’m remembering correctly) made it easy to believe that there was a world inside my computer. And in a sense there was, except that instead of being made of atoms and molecules, it was made of patterns of electrons stored in a matrix of silicon.
The subLOGIC Flight Simulator: A world made of electrons and silicon.
But what really made the subLOGIC Flight Simulator so astonishing was the sense that every experience you or anyone else had in it was unique, just as every experience you have in the real world is unique. Almost every game of Donkey Kong was identical to every other and there were no doubt other players who typed the same commands and were told the same things in Deadline as I did. But the subLOGIC Flight Simulator offered a nearly infinite variety of possible game sequences and it was very likely that the one you were experiencing was different, in at least small ways, from the ones experienced by other players. Although I’m pretty sure the terms hadn’t been coined yet, the subLOGIC Flight Simulator was probably the first example of what later came to be called either an “open world” or a “sandbox” game. You could go anywhere within the game’s database of maps and you could choose to do just about anything your plane was capable of, including crash into the ground like a bug hitting the windshield of a race car. There was no real goal to the game except the ones you made up for yourself. You were like a child playing in a very big sandbox for the sheer joy of it.
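A wireframe world like this needs surprisingly little mathematical machinery: every road, river and bridge boils down to 3D line endpoints pushed through a perspective projection, which divides by depth to get a screen position. This is a hedged sketch of that one operation, not subLOGIC's actual code; the focal length and screen center are arbitrary illustrative values.

```python
# Perspective projection: the core trick behind wireframe 3D worlds.
# A point in camera space lands on screen by dividing x and y by its
# depth z, so distant things crowd toward the screen center.
# focal, cx and cy are invented illustrative constants.

def project(x, y, z, focal=256.0, cx=160, cy=100):
    """Project camera-space (x, y, z) to integer screen coordinates.

    z is the distance in front of the camera; points behind the
    camera must be clipped, not drawn.
    """
    if z <= 0:
        return None                    # behind the camera: clip
    sx = cx + int(focal * x / z)
    sy = cy - int(focal * y / z)       # screen y grows downward
    return sx, sy
```

Draw a line between the projections of two endpoints and you have a wireframe road; do that a few hundred times per frame and you have, at one frame per second on a Commodore 64, a world.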
In the 1990s the development of realistic computer-generated worlds really began to take off (no flight-simulator pun intended). For me, the turning point came when the game development firm Blue Sky Productions (later renamed Looking Glass Studios) joined forces with game publisher Origin Systems to create Ultima Underworld: The Stygian Abyss, published in 1992. Earlier Ultima games had given the player a top-down view of the imaginary land of Britannia, with pre-drawn animated characters traveling from city to city fighting pre-drawn animated monsters. But Ultima Underworld (which was only loosely related to the mainline Ultima games) gave you a first-person three-dimensional look at its underground universe, a 10-level dungeon illuminated by flickering torchlight and populated by three-dimensional humanoids who were sometimes your friends and sometimes your enemies, but who were rendered in real-time with surprising realism given that the game came out in an era when computer CPUs rarely ran faster than 33 MHz. The wonderful game Dungeon Master, which had debuted on the 8 MHz Atari ST in 1987, had tried something similar and succeeded extraordinarily well by the standards of its time, but Ultima Underworld was the first time I really felt like I was entering that graphically vivid universe I had envisioned when I bought Morloc’s Tower back in 1981. Blue Sky Productions couldn’t make their characters look quite like real people or show individual blades of grass waving in the breeze (not that there were any breezes to be found in its underground environment), but the game was such a stunning leap toward the type of world-building I had longed for that just the playable demo of the first level of the dungeon made my head spin.
Welcome to the Stygian Abyss!
As revolutionary as it was, Ultima Underworld was not the most influential worldbuilding game of 1992. That role fell to an unexpected candidate, Wolfenstein 3D from Id Software, an attempt to remake a popular Apple ][ game from the 1980s called Castle Wolfenstein into a high-speed three-dimensional experience. At the time I was working as a moderator on the old CompuServe Information Service, the sort of proprietary online service we hung out on back in the days before the Internet invaded the homes of ordinary people. I found the game in the file upload area of a forum dedicated to PCs (when we were still in the transitional phase between MS-DOS and Windows as the operating system of choice). It ran under DOS and, despite having less realistic (and technically less sophisticated) graphics than Ultima Underworld, it rocketed along at speeds that made Ultima Underworld seem to crawl by comparison, thanks to a rendering trick the programmers at Id had come up with. Programming purists complained that it wasn’t using true 3D graphics, which was true — you could not, for instance, look up and down or climb up to a higher level that looked down on a lower one — but the level design was so clever that you barely noticed. Wolfenstein 3D was addictive in a way that few games had ever been and it spawned a brand-new game genre: the first-person shooter.
Wolfenstein 3D: The first first-person shooter.
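That rendering trick was raycasting over a flat 2D map: for each column of the screen, cast a single ray until it hits a wall cell, then draw one vertical slice whose height shrinks with distance. The sketch below is a deliberately simplified ray-marching version (the real engine steps exactly from grid line to grid line, which is why it was so fast), with a toy map and constants invented for illustration.

```python
import math

# Wolfenstein 3D-style "2.5D" rendering in miniature: the world is a
# 2D grid of cells, yet one ray per screen column plus a 1/distance
# wall height produces a convincing first-person view.

MAP = ["#####",
       "#...#",
       "#...#",
       "#####"]

def cast_ray(px, py, angle, step=0.01, max_dist=20.0):
    """March a ray from (px, py) until it enters a wall ('#') cell.
    Returns the distance travelled, or max_dist if nothing is hit.
    (The real engine uses exact grid stepping instead of small steps.)"""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if MAP[int(y)][int(x)] == "#":
            return dist
        dist += step
    return max_dist

def wall_height(dist, screen_h=200):
    """Nearer walls fill more of the column; this 1/d falloff is what
    sells the 3D illusion from purely 2D map data."""
    return min(screen_h, int(screen_h / max(dist, 1e-6)))
```

Because every wall slice is vertical and every floor is flat, there is nothing to look up or down at, which is exactly the limitation the purists complained about.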
When Wolfenstein 3D came out in July 1992, I had, ironically, just finished writing a book called Flights of Fantasy that used my newfound knowledge of computer programming and 3D graphics to explain how to program three-dimensional animation of the type found in flight simulators. (It was published in 1993 by Waite Group Press and came with a working flight simulator on disk that I co-wrote with my friend Mark Betz. I was in charge of writing the graphics animation code and Mark wrote the flight-simulation mechanics. The book spent several weeks on computer book bestseller lists and I’d like to think it taught a generation of young programmers how to write both 2D and 3D games.) The moment I saw Wolf 3D, as it was affectionately known, I proposed to Waite Group Press that my follow-up book, Gardens of Imagination, be about Wolfenstein 3D-style graphics. The contract was in the mail almost immediately.
The book that (I hope) launched a thousand careers.
The programmers at Id Software, meanwhile, weren’t resting on their laurels. Although dozens of Wolf 3D clones began to appear, Id was already at work on the next generation of the Wolfenstein graphics engine, one that came even closer to true 3D graphics. They used it to create the revolutionary game Doom, which appeared in December of 1993. Doom made Wolf 3D look like ancient technology and it deservedly became one of the most popular computer games ever published.
Doom: The game that revolutionized 3D gameplay.
While I’ve probably played Doom more than any other game I’ve ever owned — hell, I still play it, albeit in versions with revamped graphics engines that keep it from looking atrociously dated on widescreen monitors — it was really Ultima Underworld that came closest to my 1981 vision of computer games as worlds inside the computer. And while Doom and its follow-up Quake were the games that were really shaking up the gaming industry in the mid-1990s, another company, Bethesda Softworks, was quietly reinventing the Ultima Underworld model and creating the game series that would eventually go on to become probably the most influential in gaming history: the Elder Scrolls. I was lucky enough to live about two miles from Bethesda’s offices while they were developing the first game in the series, Arena, and either because I wrote Flights of Fantasy or because I was a moderator on one of Compuserve’s gaming-related forums — I never was quite sure — I wangled an invitation to see the game while it was still in development. I was stunned by what I saw. Although the graphics look crude now, largely because they were designed for much smaller monitors than we have today and only used 256 different colors, it was the greatest leap yet in the direction of my 1981 vision. The designers at Bethesda were creating a genuine virtual world, one that was vast, detailed and alive.
A glimpse of the immense virtual world of The Elder Scrolls: Arena.
Those of you who follow the computer gaming world know that Bethesda is still creating such games today and each one — the latest is The Elder Scrolls V: Skyrim — comes closer to that vision of a perfect virtual world that I had 33 years ago. Skyrim may be the most successful game that Bethesda has yet published, though their Fallout games, which create their own virtual post-apocalyptic United States, are probably close in terms of sales.
In 1997 and 1998, a revolution in computer graphics occurred, one that raised the realism component of computer games to new heights and made virtual reality of the kind I had envisioned in 1981 genuinely attainable. But this post is getting too long and I’ll be back to talk about it later.
In Part Two (and possibly Part Three) of this article, I’ll talk about the revolution in gaming brought about in the late 1990s by graphics accelerator boards and how 3D virtual reality games have essentially split into three types — those that tell stories, those that create worlds and those that do both. Stay tuned.