Author: Tom Bissell
One designer told me that the idea of designing a game with any lasting emotional power was unimaginable to him only a decade ago: “We didn’t have the ability to render characters, we didn’t know how to direct the voice acting—all these things that Hollywood does on a regular basis—because we were too busy figuring out how to make a rocket launcher.” After decades of shooting sprees, the video game has shaved, combed its hair, and made itself as culturally presentable as possible. The sorts of fundamental questions posed by Aristotle (what is dramatic motivation? what is character? what does story mean?) may have come to the video game as a kind of reverse novelty, but at least they had finally come.
At DICE, one did not look at a room inhabited by video-game luminaries and think, Artists. One did not even necessarily think, Creative types. They looked nothing like gathered musicians or writers or filmmakers, who, having freshly carved from their tender hides another album or book or movie, move woundedly about the room. But what does a “game developer” even look like? I had no idea. The “game moguls” I believed I could recognize, but only because moguls tend to resemble other moguls: human stallions of groomed, striding calm. A number of DICE’s hungrier attendees wore plush velvet dinner jackets over Warcraft T-shirts, looking like youthful businessmen employed by some disreputably edgy company. There was a lot of vaguely embarrassing sartorial showboating going on, but it was hard to begrudge anyone that. Most of these people sit in cubicle hives for months, if not years, staring at their computer screens, their medium’s governing language—with its “engines” and “builds” and “patches”—more akin to the terminology of auto manufacture than a product with any flashy cultural cachet. (In actual fact, the auto and game industries have quite a bit in common. Both were the unintended result of technological breakthroughs, both made a product with unforeseen military applications, and both have been viewed as a public safety hazard.)
There was another kind of DICE attendee, however, and he was older, grayer, and ponytailed—a living reminder of the video game’s homely origins, a man made phantom by decades of cultural indifference. An industry launched by burrito-fueled grad school dropouts with wallets of maxed-out credit cards now had groupies and hemispherical influence and commanded at least fiduciary respect. Was this man relieved his medium’s day had come or sad that it had come now, so distant from the blossom of his youth? It was surely a bitter pill: The thing to which he had dedicated his life was, at long last, cool, though he himself was not, and never would be.
Like any complicated thing, however, video games are “cool” only in sum. Again and again at DICE, I struck up a conversation with someone, learned what game they had worked on, told them I loved that game, asked what specifically they had done, and was told something along the lines of, “I did the smoke for Call of Duty: World at War.” Statements such as this tended to freeze my conversational motor about as definitively as, “I was a concentration camp guard.”
Make no mistake: Individuals do not make games; guilds make games. Technology literally means “knowledge of a skill,” and a forbidding number of them are required in modern game design. An average game today is likely to have as much writing as it does sculpture, as much probability analysis as it does resource management, as much architecture as it does music, as much physics as it does cinematography. The more technical aspects of game design are frequently done by smaller, specialist companies: I shook hands with the CEO of the company that did the lighting in Mass Effect and chatted with another man responsible for the facial animation in Grand Theft Auto IV.
“Games have gotten a lot more glamorous in the last twenty years,” one elder statesman told me ruefully. Older industry expos, he said, usually involved four hundred men, all of whom took turns unsuccessfully propositioning the one woman. At DICE there were quite a few women, all of whom, mirabile dictu, appeared fully engaged with rampant game talk. At the bar I heard the following:
Man: “It’s not your typical World War II game. It’s not storming the beaches.”
Woman: “Is it a stealth game, then?”
Man: “More of a run-and-gun game.”
Woman: “There’s stealth elements?”
The industry’s woes often came up. When one man mentioned to another a mutual friend who had recently lost his job, his compeer looked down into his Pinot Noir. “Lot of movement this year,” he said grimly. Fallen comrades, imploded studios, and gobbled developers were invoked with a kind of there-but-for-the-power-up-of-God-go-we sadness.
Many had harsh words for the games press. “They don’t review for anyone but themselves,” one man told me. “Game reviewers have a huge responsibility, and they abuse it.” This man designed what are called “casual games,” which are typically released for handheld systems such as the Nintendo DS or PSP. In many cases developer royalties are attached to their reviewer-dependent Metacritic scores, and because game journalists can be generally relied upon to overpraise the industry’s attention-hoarding AAA titles (shooters, RPGs, fighting games, and everything else aimed at the eighteen-to-thirty-four male demographic—many of which I myself admire), the anger from developers who worked on smaller games was understandable. Another man introduced himself to tell me that, in four months, his company would release its first game on Xbox Live Arcade, the online service that allows Xbox 360 owners access to a growing library of digitally downloaded titles. This, he argued, is the best and most sustainable model for the industry: small games, developed by a small group of people, that have a lot of replay value, and, above all, are fun. According to him, pouring tens of millions into developing AAA retail titles is part of the reason why the EAs of the world are bleeding profits. The concentration on hideously expensive titles, he said, was “wrong for the industry.” (For one brief moment I thought I had wandered into a book publishing party.)
Eventually I found myself beside Nick Ahrens, a choirboy-faced editor for Game Informer, which is one of the sharpest and most cogent magazines covering the industry. “These guys,” Ahrens said, motioning around the room, “are using their childhoods to create a business.” The strip-mining of childhood had taken video games surprisingly far, but childhood, like every natural resource, is exhaustible.
DICE’s first panel addressed the tricksy matter of “Believable Characters in Games.” As someone whose palm frequently seeks his forehead whenever video-game characters have conversations longer than eight seconds, I eagerly took my seat in the Red Rock’s Pavilion Ballroom long before the room had reached even 10 percent occupancy. The night before there had been a poker tournament, after which a good number of DICE attendees had carousingly traversed Vegas’s great indoors. Two of my three morning conversations had been like standing at the mouth of a cave filled with a three-hundred-year-old stash of whiskey, boar meat, and cigarettes.
“Believable characters” was an admirable goal for this industry to discuss publicly. It was also problematic. For one thing, the topic presupposed that “believability” was quantifiable. I wondered what, in the mind of the average game designer, believability actually amounted to. Oskar Schindler? Chewbacca? Bugs Bunny? Because video-game characters are still largely incapable of actorly nuance, they frequently resemble cartoon characters. Both are designed, animated, and artisanal—the exact sum of their many parts. But games, while often cartoonish, are not cartoons. In a cartoon, realism is not the problem because it is not the goal. In a game, frequently, the opposite is true. In a cartoon, a character is brought to life independent of the viewer. The viewer may judge it, but he or she cannot affect it. In a game, a character is more golemlike, brought to life first with the incantation of code and then by the gamer him- or herself. Unlike a cartoon character, a video-game character does not inhabit closed space; a video-game character inhabits open situations. For the situations to remain compelling, some strain of realism—however stylized, however qualified—must be in evidence. The modern video game has generally elected to submit such evidence in the form of graphical photorealism, which is a method rather than a guarantee. By mistaking realism for believability, video games have given us an interesting paradox: the so-called Uncanny Valley Problem, wherein the more lifelike nonliving things appear to be, the more cognitively unsettling they become.
The panel opened with a short presentation by Greg Short, the co-founder of Electronic Entertainment Design and Research. What EEDAR does is track industry trends, and according to Short, he and his team have spent the last three years researching video games. (At this, a man sitting next to me turned to his colleague and muttered, “This can’t be a good thing.”) Short’s researchers identified fifteen thousand attributes for around eight thousand different video-game titles, a task that made the lot of Tantalus sound comparatively paradisaical. Short’s first PowerPoint slide listed the lead personas, as delineated by species. “The majority of video games,” Short said soberly, “deal with human lead characters.” (Other popular leads included “robot,” “mythical creature,” and “animal.”) In addition, the vast majority of leading characters are between the ages of eighteen and thirty-four. Not a single game EEDAR researched provided an elderly lead character, with the exception of those games that allowed variable age as part of in-game character customization, which in any event accounted for 12 percent of researched games. Short went on to explain the meaning of all this, but his point was made: (a) People like playing as people, and (b) They like playing as people who almost precisely resemble themselves. I was reminded of Anthony Burgess’s joke about his ideal reader as “a lapsed Catholic and failed musician, short-sighted, color-blind, auditorily biased, who has read the books that I have read.” Burgess was kidding. Mr. Short was not, and his presentation left something ozonically scorched in the air. I thought of all the games I had played in which I had run some twenty-something masculine nonentity through his paces. Apparently I had even more such experiences to look forward to, all thanks to EEDAR’s findings. Never in my life had I felt more depressed about the democracy of garbage that games, at their worst, could be.
The panel moderator, Chris Kohler, from Wired magazine, introduced himself next. His goal was to walk the audience through the evolution of the video-game character, from the australopithecine attempts (Pong’s roving rectangle, Tank’s tank) to the always-interesting Pac-Man, who, in Kohler’s words, was “an abstraction between a human and symbol.” Pac-Man, Kohler explained, “had a life. He had a wife. He had children.” Pac-Man’s titular Namco game also boasted some of the medium’s first cut scenes, which by the time of the game’s sequel, Ms. Pac-Man, had become more elaborate by inches, showing, among other things, how Mr. and Ms. Pac-Man met. “It was not a narrative,” Kohler pointed out, “but it was giving life to these characters.” Then came Nintendo’s Donkey Kong. While there was no character development to speak of in Donkey Kong (“It’s not Mario’s journey of personal discovery”), it became a prototype of the modern video-game narrative. In short, someone wanted something, he would go through a lot to get it, and his attempts would take place within chapters or levels. By taking that conceit and bottlenecking it with the complications of “story,” the modern video-game narrative was born.
How exactly this happened, in Kohler’s admitted simplification, concerns the split between Japanese and American gaming in the 1980s. American gaming went to the personal computer, while Japanese gaming retreated largely to the console. Suddenly there were all sorts of games: platformers, flight simulators, text-based adventures, role-playing games. The last two were supreme early examples of games that, as Kohler put it, have “human drama in which a character goes through experiences and comes out different in the end.” The Japanese made story a focus in their increasingly elaborate RPGs by expanding the length and moment of the in-game cut scene. American games used story more literarily, particularly in what became known as “point-and-click” games, such as Sierra Entertainment’s King’s Quest and Leisure Suit Larry, which are “played” by moving the cursor to various points around the screen and clicking, the result being story-furthering text. These were separate attempts to provide games with a narrative foundation, and because narratives do not work without characters, a hitherto incidental focus of the video game gradually became a primary focus. With Square’s RPG–cum–soap opera Final Fantasy VII in 1997, the American and Japanese styles began to converge. A smash in both countries, Final Fantasy VII awoke American gaming to the possibilities of narrative dynamism and the importance of relatively developed characters—no small inspiration to take from a series whose beautifully androgynous male characters often appear to be some kind of heterosexual stress test.
With that, Kohler introduced the panel’s “creative visionaries”: Henry LaBounta, the director of art for Electronic Arts; Michael Boon, the lead artist of Infinity Ward, creators of the Call of Duty games; Patrick Murphy, lead character artist for Sony Computer Entertainment, creators of the God of War series; and Steve Preeg, an artist at Digital Domain, a Hollywood computer animation studio. The game industry is still popularly imagined as a People’s Republic of Nerds, but these men were visual representations of its diversity. LaBounta could have been (and probably was) a suburban dad. The T-shirted Boon could have passed as the bassist for Fall Out Boy. Murphy had the horn-rimmed, ineradicably disgruntled presence of a graduate student in comparative literature. As for the interloping Preeg, he would look more incandescent four nights later while accepting an Academy Award for his work on the reverse-aging drama The Curious Case of Benjamin Button.