At first glance, sound design for film and television and sound design for video games seem to be very similar. Both aim to immerse audiences in their constructed worlds. Whether you're watching a Star Wars movie or playing a Star Wars game, the trademark whoosh of a TIE fighter flying by instantly transports you into Star Wars' sci-fi realm. The audio in both film and video games also heightens and supports the emotional state and excitement of viewers and players. When the evil breathing of Darth Vader creeps into the soundscape, even if he's not on screen, a fantastic chill runs up our spines, because we know the Dark Side is afoot!
However, there are several ways in which audio for games is markedly different from that of film. First and most importantly, film sound design is temporally scripted down to the last floor squeak. When the director creates a scene, he knows exactly how many gunshots will be fired in that scene, and when they occur. He also knows where in "space" those gunshots occur (one back on the balcony, two off to the left, etc.). Because everything is predetermined, the sound effects team can script a perfect timeline of effects for each scene.
In video games, on the other hand, no one knows how many sound events are going to occur at any one point -- this is why audio for games is called "adaptive audio."
If a sound designer makes an absolutely humongous laser blast for his top-down shooting game, and the first thing the player does is mash the fire button as quickly as possible, then there may be 15 sudden "gunfire" events happening in three seconds... all from the player's gun alone! God forbid the player should decide to simultaneously activate and detonate 10 mines at the same time.
The difference between knowing the events of a sequence and not knowing them makes designing the audio of video games a far less pre-visualized craft than doing it for film or television. It completely changes the strategies an audio designer uses to create sounds.
Also, because the narrative in a game is often driven by the player, sound effects often cue the player to an advancement in the story. For example, there will often be an ambience shift when a player moves from a regular area to a "mini-boss" area. Maybe the cries of the damned get louder, and the sound of chains and whips enters the ambience. In that respect, a sound designer is actually helping to convey the story of the game, alerting the player to a new chapter in the game.
Finally, sound design helps the player quickly identify state changes, especially in casual games. A classic example is the mushroom power-up sound in Super Mario Bros. Because the player is usually focused on looking ahead, the sound cue is often all he needs to know that his game state has changed.
Newcomers to game audio development should keep these strategies in mind as they start designing sounds. They need to be aware of how their sound designs will support the emotional state of the gameplay, and how they will help to create the "world reality" of the game as it's being played. At the same time, it's important to be aware of the non-linear, non-narrative aspects of game sound effects, and how the sound design will work to enhance gameplay, which does not necessarily fall into a linear timeline.
Audio Gets Organized
When creating sound effects, knowing how to organize the work -- and knowing exactly what to track -- can easily make the difference between a successful, creative, and compelling outcome, and a failed project.
Audio designers absolutely need to record and track each version of their sound effects: when they were submitted, when they were reviewed, and what feedback the client or game development team had about them.
For smaller games, a simple Google Documents spreadsheet will suffice. But for larger projects, audio designers typically use a database or project management application like Basecamp to track which effects have been done, and what stage of the pipeline they are in.
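Whatever the tool, the underlying record is the same: each effect, its pipeline stage, and a dated history of feedback. Here's a minimal sketch of that idea in Python; the stage names and the `EffectRecord` class are invented for illustration, not taken from any particular tracking product.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical pipeline stages; real teams will name their own.
STAGES = ["draft", "submitted", "reviewed", "approved"]

@dataclass
class EffectRecord:
    name: str
    stage: str = "draft"
    history: list = field(default_factory=list)

    def advance(self, feedback=""):
        """Move the effect to the next pipeline stage, logging the date and feedback."""
        i = STAGES.index(self.stage)
        if i < len(STAGES) - 1:
            self.stage = STAGES[i + 1]
        self.history.append((date.today().isoformat(), self.stage, feedback))

laser = EffectRecord("laser_powerup")
laser.advance("first pass sent to client")
laser.advance("client wants a brighter attack")
print(laser.stage)  # reviewed
```

Even a spreadsheet row per effect captures the same three columns: name, stage, and dated notes.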
A word of advice: Back up everything! We have two external hard drives, and we back up our project folder to both of them each day. Our team even intentionally uses two different brands of hard drive. You can never be too careful.
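The daily two-drive routine is easy to script. The sketch below, using only the Python standard library, copies a project folder to every backup location with a date stamp in the folder name; the demo paths are throwaway temp directories standing in for real external drives.

```python
import shutil, tempfile
from datetime import date
from pathlib import Path

def backup(project_dir, backup_roots):
    """Copy the whole project folder to every backup location, stamped with today's date."""
    stamp = date.today().strftime("%Y%m%d")
    copies = []
    for root in backup_roots:
        dest = Path(root) / f"{Path(project_dir).name}_{stamp}"
        shutil.copytree(project_dir, dest, dirs_exist_ok=True)
        copies.append(dest)
    return copies

# Demo with temp folders standing in for the project and two external drives.
project = Path(tempfile.mkdtemp()) / "project"
project.mkdir()
(project / "laser.wav").write_bytes(b"RIFF")   # placeholder audio file
copies = backup(project, [tempfile.mkdtemp(), tempfile.mkdtemp()])
print([c.name for c in copies])
```

Run it from a scheduler (cron, Task Scheduler) and the daily backup takes care of itself.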
Having a consistent naming convention for all audio files is another critical way audio designers keep their work organized, and therefore usable. (We'll discuss naming conventions in more detail in the Pre-Production section.)
An important part of sound creation is the concept of layering. One audio file does not have to represent an entire sound effect. For example, in Jurassic Park, the T. rex roar used was a blend of elephant, lion, alligator, and a couple of other animal sounds. As a game example, when we designed one of the laser power-ups in Fittest, we used two shotgun blasts mixed together to handle the beginning of the laser sound, then a sampled laser mixed with an analog synth to handle the "pyew" decay of the sound.
Audio designers build sounds in this way using a multi-track digital audio workstation (DAW), which allows them to mix and equalize (EQ) multiple sound files together. Ambiences almost have to be created this way. A city ambience might be three layered ambient sounds, for example: people walking and talking, traffic, and some industrial hums. The summation of layered sounds gives the fullness that audio designers are looking to achieve in their sound effects.
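At its core, layering is just summing sample streams with a gain per layer, then keeping the result inside the legal amplitude range. This toy sketch shows the arithmetic on raw sample lists (a DAW does the same thing at scale, with EQ and much better resampling); the layer names and values are invented.

```python
def mix(layers, gains):
    """Sum several sample streams into one, scaling each layer and clipping to [-1.0, 1.0]."""
    length = max(len(layer) for layer in layers)
    out = []
    for i in range(length):
        # Layers shorter than the mix contribute silence once they run out.
        s = sum(g * (layer[i] if i < len(layer) else 0.0)
                for layer, g in zip(layers, gains))
        out.append(max(-1.0, min(1.0, s)))
    return out

# Three hypothetical city-ambience layers as raw sample values.
crowd   = [0.2, 0.3, 0.2, 0.1]
traffic = [0.4, 0.4, 0.4, 0.4]
hum     = [0.1, 0.1]
print(mix([crowd, traffic, hum], [1.0, 0.5, 1.0]))
```

The per-layer gains are the "mix" the DAW's faders give you; clipping is why real mixes leave headroom instead of slamming every layer at full volume.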
For an excellent example of layered ambiences, see Brad Meyer's "Sound Concepting: Selling the Game, Creating its Auditory Style" (Gamasutra.com, December 16, 2008) about creating the sound design in Spider-Man: Web of Shadows.
Not every sound effect requires layering. Many sounds are represented by just one audio file, and there are several ways to acquire those sounds. One option is to use synthesizers to create those sounds. This works especially well for alien sounds. There's nothing better than a top-end synth to make a growly, indistinct, crackling effect to represent alien mind control. If the sound is more natural, another option is to record it, either in a studio or in the field, using a portable hard disk recorder. Another option is to purchase top-notch samples of audio through any number of sound effect companies and web sites.
For student audio designers and other beginners, we highly recommend purchasing audio files. It's easier, cheaper, and less risky (both in terms of taking chances with the game and personal safety) to pay $2 for a shotgun blast than to try to record one -- trust us on this!
Qualities of Great Game Sounds
After the raw sounds have been created or collected, they need to be cleaned up and produced. The two most important qualities of the final sounds are these: they should have crystal-clear sonic impact, and they should require as little memory as possible.
Sounds should be as short as possible, too, while still having the emotional impact necessary to enhance the gameplay. Audio designers use effects such as compression and limiting to make their sounds both more uniform and louder. Sound effects tend to be pretty loud and "shiny," and this fits the aesthetic of a great game. The goal is for a game's sound effects to emotionally influence and excite the player, and clean, crisp, and loud sound effects are a strong step in the right direction. For a great example of clean sound design, listen to the phenomenal audio in BioShock.
Finally, sound designers use a two-track audio editing program to trim off any unused portion of the audio file. Every tenth of a second shaved off saves memory, and being nice to game programmers by saving them space is a tenet of being a successful game audio designer.
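Trimming is conceptually simple: find the first and last samples that rise above a silence threshold and discard everything outside them. A two-track editor does this visually; the sketch below does the same thing on a plain list of sample values (the threshold value is an arbitrary choice for the example).

```python
def trim_silence(samples, threshold=0.01):
    """Strip leading and trailing samples whose amplitude falls below the threshold."""
    start = 0
    end = len(samples)
    while start < end and abs(samples[start]) < threshold:
        start += 1
    while end > start and abs(samples[end - 1]) < threshold:
        end -= 1
    return samples[start:end]

clip = [0.0, 0.0, 0.005, 0.6, -0.4, 0.2, 0.0, 0.0]
print(trim_silence(clip))  # [0.6, -0.4, 0.2]
```

Five trimmed samples out of eight is a contrived ratio, but across hundreds of effects the saved memory is real.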
Where sound design in games shapes the ambient soundscape for the player to live in, music for games creates an emotional context for the player to be in.
Again, while there are similarities between music for film and games, stylistically their function is quite different. Game music's function is specifically geared to create and reflect the emotional state of the player, whereas in film, the music reflects the emotional state of the characters on screen and gives the audience a visceral response to that state.
How an audio designer approaches each game scene must always be seen through the lens of the player and the environment she is in. The music must immerse the player in the gameplay and not simply describe the scene.
Where Stems Divide
One great departure between music for games and music for films is the use of stems. In film, a stem is a sub-mix of each element of the final sound mix for the film, for example, dialogue, sound effects, Foley sounds (the post-production sounds that were created, as opposed to the field-recorded ones), and music. In games, a musical score is broken up into sub-mixes, or stems, of the score itself. In other words, a composer might write a full orchestral piece for a game, but because of the way the character interacts with the environment, only parts of the score will play at certain points in the character's exploration of the surroundings.
For instance, there may be a low bass ostinato going on as the character explores a cave, but as soon as he turns a corner and finds a pulsating jewel, the music cross-fades to violins and a harp playing the same cue. Troels Folmann, the composer of Tomb Raider as well as countless movie and game trailers, has developed a methodology called micro-scoring that perfectly describes this procedure. Micro-scoring breaks the score into a variety of smaller parts that are then assembled by the game engine in real time, depending on the player's actions and interactions with the environment. (See the interview with Troels B. Folmann on GSoundtracks.com, for more information.)
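In engine terms, micro-scoring amounts to keeping every stem playing in sync and moving each stem's gain toward a target set by the game state. The sketch below is a simplified illustration of that idea, not Folmann's actual implementation; the state names, stem names, and gain values are all invented.

```python
# Target gain for each stem per game state (hypothetical names and values).
STEM_GAINS = {
    "explore":  {"bass_ostinato": 1.0, "violins": 0.0, "harp": 0.0},
    "discover": {"bass_ostinato": 0.2, "violins": 1.0, "harp": 0.8},
}

def crossfade_step(current, state, rate=0.1):
    """Move each stem's gain a fraction of the way toward the target for this state."""
    target = STEM_GAINS[state]
    return {stem: g + (target[stem] - g) * rate for stem, g in current.items()}

# The player turns the corner and finds the jewel: fade from cave to discovery.
gains = {"bass_ostinato": 1.0, "violins": 0.0, "harp": 0.0}
for _ in range(30):                 # ~30 audio update frames of fading
    gains = crossfade_step(gains, "discover")
print(round(gains["violins"], 2))
```

Because every stem plays the same cue at the same position, the crossfade sounds like one piece of music reorchestrating itself rather than a track change.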
Any current DAW will be able to break out the respective parts of a composition by either bouncing or sub-mixing them. Game audio designers might also wind up slicing up these mixes into REX or other similar sliced sample formats so they can control the tempo of the pieces depending, for instance, on whether the character is running, walking, or slowing down.
Pre-production work on a video game's musical score is necessary because as the audio specialists are composing, they need to be able to anticipate which part of the score they will want to break up into smaller but complementary components. Good communication with the rest of the game development team is paramount in order to be clear about what will be needed from the score. And throughout all of this, the audio specialists must adhere to a logical and consistent naming scheme for all the cues.
One of the most common and egregious errors that audio designers make is not having a way of knowing which take is which of a particular cue -- or for that matter, not being able to tell which cue is which at all! If neither the audio designer nor the game development team can tell which files relate to which sounds, the audio work is nearly useless.
One thing we do is to make our own time stamp on a file so we know when the cue was done. The file name "cue1_010809" shows the date January 8, 2009 for cue1. Whatever naming conventions are used, the most important consideration is that it be consistent. It doesn't help anyone if the naming and dating conventions change from day to day.
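A convention like this can be enforced in code so a stray hand-typed file name never slips through. This small sketch builds and parses names in the cueN_MMDDYY style described above (the two-digit year is assumed to mean 20xx):

```python
from datetime import date

def cue_filename(cue, when=None):
    """Build a date-stamped cue name in the cueN_MMDDYY style."""
    when = when or date.today()
    return f"{cue}_{when.strftime('%m%d%y')}"

def cue_date(filename):
    """Recover the date from a cue file name (assumes a 20xx year)."""
    stamp = filename.rsplit("_", 1)[1]
    return date(2000 + int(stamp[4:6]), int(stamp[0:2]), int(stamp[2:4]))

name = cue_filename("cue1", date(2009, 1, 8))
print(name)             # cue1_010809
print(cue_date(name))   # 2009-01-08
```

Generating names from one function, rather than typing them, is what keeps the convention consistent from day to day.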
Being organized to this degree is crucial when handing off a project to the development team regardless of how big or small the project is.
And again, another make-it-or-break-it point of organization is for work to always be backed up. This can't be stressed enough. Again, we recommend backing up to two different locations on a daily basis -- a hard drive and a DVD, or an online storage space and a server. This may seem like overkill, but the last thing anyone wants is for two weeks of work to disappear without a trace. And it does happen!
Game audio designers who are just starting out with orchestral scores will undoubtedly be doing them with one or a combination of the many sample libraries currently available. Some of these include Vienna Symphonic Library, East West Quantum Symphonic Library, Sonic Implants, Garritan, and ProjectSam Symphobia.
For percussion, Stylus RMX is another virtual instrument that's used (some might say overused) by many composers for percussion loops and for those big booms that are heard throughout so many trailers as well as games themselves. Both East West Storm Drum2 and ProjectSAM True Strike are reliable. As with any tool, it takes a little time to learn how to manipulate them in order to make them sound as realistic as possible.
One of the most important tricks audio designers can learn is to use either controller 7 (volume) or 11 (expression) to shape the attack and release of the instruments in a score. For example, a legato string line should swell into the line as well as fade, rather than abruptly begin and end. This behavior can be modified by judicious use of controllers 7 and 11 in a DAW.
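The swell itself is just a curve of controller values from 0 to 127 that rises into the note and falls away. The sketch below generates such a curve with a half-sine shape; the shape and tick count are arbitrary choices for illustration, and a DAW would let you draw the same curve by hand.

```python
import math

def swell(ticks, peak=127):
    """Generate CC 7/11 values (0-127) that rise into a note and fall away."""
    # Half-sine shape: silent at the edges, loudest in the middle.
    return [round(peak * math.sin(math.pi * t / (ticks - 1))) for t in range(ticks)]

curve = swell(9)
print(curve)
```

Pasting a curve like this onto CC 11 under a legato string line is what turns an abrupt sample attack into a phrase that breathes.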
Another trick of experienced game audio designers is that they become intimately familiar with all the articulations at their disposal. Rather than use one legato string patch for all string lines, an experienced audio professional will listen to how different articulations are used in the different instruments for greater effect in orchestral writing.
What's Yours Is Mine
Audio pros aren't shy about co-opting another composer's gestures (yes, we mean stealing). Sites like Audio G.A.N.G. and I.A.S.I.G., as well as the sites of individual composers, offer samples; we highly recommend that both professionals and aspiring audio designers go to these sites and listen to what other composers are doing and how they are doing it.
Finally, good game audio designers take the time to mix their work as well as they possibly can. Those who lack the inclination or ability should let someone capable mix their work for them. However, our advice to aspiring audio designers is to learn how to mix well. It is its own reward. Having music mixed well is as important, or nearly so, as the writing.
Comparing their mixes to other composers' lets audio designers hear how well they are emulating sounds and emotional responses. It's perfectly acceptable to use other people's tricks and ideas. They probably stole them from someone else themselves!
Dialogue and Voice Acting
With games becoming more story-driven and immersive, vocal recording and production is becoming a very important part of game development. And, just as in composing and sound design, organization is the key to survival. A simple spreadsheet is an audio designer's best friend for keeping track of what dialogue needs to be recorded, the naming convention of the dialogue, and any additional performance notes you need. For a great example of a dialogue spreadsheet, see the work done on Prince of Persia: The Sands of Time.
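A dialogue sheet can even be generated from the game's script data so the columns never drift. This sketch writes a CSV with one hypothetical row; the column names and the sample line are invented for illustration, so match them to whatever your team's sheet actually tracks.

```python
import csv, io

# Hypothetical columns for a dialogue tracking sheet.
FIELDS = ["line_id", "file_name", "character", "text", "performance_note", "status"]

rows = [
    {"line_id": "d001", "file_name": "d001_guard_halt", "character": "Guard",
     "text": "Halt! Who goes there?", "performance_note": "alarmed, shouted",
     "status": "to record"},
]

buf = io.StringIO()  # swap for open("dialogue.csv", "w", newline="") to write a real file
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Note that the file_name column bakes in the naming convention, so the recorded takes land in the pipeline already correctly named.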
Oftentimes, the audio designer is also the additional dialogue recording (often called ADR) supervisor. It's the ADR supervisor's job to get a clean recording, and a good emotional (or unemotional, if need be) take for each line.
The best and fastest way to get a clean recording is to book a recording studio with a vocal isolation booth and state-of-the-art microphones and preamps. For students who don't have access to a studio, and anyone else making a low-budget project, an effective way to fake a vocal booth is to lay a mattress against the wall, take two boom mic stands and put them at 90 degrees to the wall, and hang heavy blankets over the mic stands. Presto! Instant isolation booth!
In any professional recording session, time is money. The ADR supervisor needs to work efficiently and clearly with the vocal talent. Here are some tactics used to help guide and explain the vision to the talent: