By Sander Huiberts – Creative Design Practices – HKU University of the Arts Utrecht – School of Music & Technology
This article addresses the findings of a practice-based investigation into the process of game music creation over the past forty years. In this research project, four composition students of the Utrecht School of the Arts created game music under the limitations that applied to early game composers and reflected upon the implications for their design process. By doing this, we hoped to gain an understanding of the way the creative design process of a game composer is influenced by such limitations, point out the challenges that arise when composing under these drastic restrictions, and discern the techniques that could still be relevant for game composers today.
The project was initiated by the Creative Design Practices research programme at the Utrecht School of the Arts, which studies creative design processes and investigates how designers collaborate. Four students signed up for the project, which started in the spring of 2012: Alexander Wttewaall, Pablo Ham, Stijn Frishert and Yme de Jong.
This investigation of the design processes of game music formed part of a larger research project on the history of game music that was started in 2012 in cooperation with Muziekinstituut Multimedia (MiMM) and Utrecht University.
In this project we focused on the process of creating game music across several phases of game audio history. The resources available on game music history mainly consist of the artifacts and their descriptions (games and the corresponding soundtracks) and of resources concerning the technology of the platforms (Collins 2008, McDonald n.d.), while information about the actual creation of music for these early platforms is more difficult to find. These three types of resources are displayed in the figure below.
By doing this project from a designer’s perspective we wanted to unravel the design properties of several phases and how these can influence the work of a game composer.
Based on general resources (Collins 2008, McDonald n.d., Huiberts 2010), we defined five key phases that are relevant for game composition: monophonic chiptunes, polyphonic chiptunes, MOD, MIDI, and Red Book Audio.
The students composed music for each stage, and the results were discussed in weekly peer feedback sessions. For the final phase, Red Book Audio, they composed a song, after which fragments were recomposed using the techniques and properties of the earlier stages. The final result is a ‘medley’ that demonstrates the composition techniques of all phases in one song.
The five phases of game music production
In this project, we focused on practice-based research and thus took the approach of researching by making, as opposed to researching prior to creating. Our main objective was to gain insight into how the design process of a composer is influenced by the limitations and characteristics of the workflow, and this information can therefore be seen as an addition to the theory in books and articles on the history of game music (Collins 2008, McDonald n.d.). We didn’t require the composers to work with the original hardware, as it would not have been feasible to accomplish and compare all phases within the short timespan of the project. Therefore, we decided to emulate the results with modern-day DAWs, with the exception of MOD. Furthermore, we focused on the game music only as a linear, non-interactive artifact, so the music has not been integrated into a game engine. Still, we hope that the results are useful as an additional perspective on the history of game music creation processes for composers and educators. Finally, we questioned whether there are techniques that can still have value for modern composition.
1. Monophonic Chiptunes
For this phase, a synthesizer capable of producing only a single square waveform without (dynamic) envelopes or filtering was chosen as the playback target (Collins 2008). It comes as no surprise that the limitations of these early game consoles can be regarded as quite restrictive compared to modern-day standards: how does composing a tune with only one note at a time compare to a modern recording session on an iPad using 48 mono audio tracks? For the composer, this means that only one note can be played at a time, and usually ‘rhythmic creativity’ is used to create ‘ghost tracks’, for instance by playing low notes during the absence of notes in a melody, which suggests the presence of a bass track.
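As a rough sketch of this restriction (not the original hardware, just simple sample arithmetic), a bare square-wave voice with no envelope or filtering can be rendered like this; a monophonic tune is then nothing more than notes placed end to end:

```python
# A minimal sketch of the monophonic square-wave voice: one waveform,
# no envelopes, no filtering, and never more than one note at a time.
SAMPLE_RATE = 44100

def square_note(freq_hz, dur_s, amplitude=0.5):
    """One note of a bare square wave: constant amplitude, hard edges."""
    n_samples = int(SAMPLE_RATE * dur_s)
    samples = []
    for i in range(n_samples):
        phase = (i * freq_hz / SAMPLE_RATE) % 1.0  # position in the cycle
        samples.append(amplitude if phase < 0.5 else -amplitude)
    return samples

# A monophonic 'tune' is just notes concatenated end to end --
# there is no way to let two notes overlap.
tune = square_note(440.0, 0.25) + square_note(330.0, 0.25)
```

Note how the absence of an envelope means every note starts and stops at full amplitude, which is exactly what makes long notes so harsh on this kind of synthesizer.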
Mono Adventure Scroller:
In this phase, the most apparent characteristic of the design process was prioritization. The composer continuously has to decide which track function to assign the single voice to at any given moment. This demands a very programmatic approach to using notes functionally in order to express a polyphonic idea in a monophonic composition, especially when functional tracks such as melody, harmony, rhythm or bass are used at the same time. In the peer discussions we referred to the techniques found in classical solo pieces such as J.S. Bach’s Violin Partita No. 2 in D minor (BWV 1004). We compared these with monophonic game music, such as can be heard in Ski or Die (1990). Although one note at a time might seem extremely limited for composition, all four students were surprised by the possibilities of this challenging phase.
The students mentioned that maintaining the musical pulse was their most important goal. If this pulse is not kept intact, the structure of the song can easily become unclear. In most compositions this was done by placing a bass note on the first count. After that, the melody had the highest priority. The snare was placed on the second count unless there was a more important note in the lead melody to overrule the snare. The melody was often simplified, and by using very fast rhythmic patterns (e.g. 1/32nd notes) an accompaniment could be suggested without taking up the other costly positions in timing. A mentioned pitfall is the thought that fast arpeggios are necessary, as these can clog the composition and be hard to listen to. This can be heard in the following illustration:
An example of how fast arpeggios and rests can be combined in a more musical way:
A consequence of all the rhythmic tricks involved in this phase is that long note durations are, in a sense, ‘expensive’, as they do not allow other notes to be played at the same time. One of the students mentioned that during monophonic composition you gradually develop a kind of system that can be used to arrange the notes and keep the composition consistent.
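The prioritization described above can be sketched as a simple voice allocator. The track names and the priority ordering below are my own illustration of the students' reported approach (bass anchors the pulse, melody overrules the snare), not their actual system:

```python
# A sketch of monophonic prioritization: each 16th-note step can hold
# at most one event, chosen from competing tracks by a fixed priority.
PRIORITY = {"bass": 3, "melody": 2, "snare": 1}  # assumed ordering

def flatten(steps):
    """Collapse polyphonic step candidates into a monophonic line."""
    line = []
    for candidates in steps:
        if not candidates:
            line.append(None)  # a rest
        else:
            # keep only the highest-priority event for this step
            line.append(max(candidates, key=lambda ev: PRIORITY[ev[0]]))
    return line

steps = [
    [("bass", "C2"), ("melody", "E4")],   # count 1: the bass wins
    [("snare", "D1"), ("melody", "G4")],  # melody overrules the snare
    [("snare", "D1")],                    # snare survives on its own
    [],                                   # a rest keeps the groove light
]
print(flatten(steps))
# -> [('bass', 'C2'), ('melody', 'G4'), ('snare', 'D1'), None]
```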
Because of the displaced alignment of note timing, several other musical styles were examined, such as Karawitan (Indonesian gamelan music) and salsa.
8 Bit Salsa:
There’s another reason why long notes aren’t used frequently in monophonic game compositions. The absence of amplitude envelopes (the volume curve) makes the sound of the pulse wave rather harsh and especially long notes are difficult to ‘digest’ for a listener. Therefore it’s very tempting to use short, staccato notes. One of the students exemplified this with the following statement referring to the non-subtlety of the early chip synthesizers:
“It’s almost like composing for fire trucks.”
As timbre and amplitude are fixed, the frequency of notes is the only way to make a distinction between tracks, and frequency ranges could thus be regarded as instruments. Because of the nostalgic associations connected to the unmodulated pulse wave, the compositions all clearly refer to retro game music, which inherently affects the mood of the composition.
2. Polyphonic Chiptunes
For the second phase, polyphonic chiptunes, the students were asked to compose for a small number of voices, and they chose to adopt the specifications of the Nintendo NES. The NES sports a total of five sound channels: two pulse channels with ‘variable duty cycles’, a triangle channel, a noise channel and a sample channel. Compared to the previous phase, this phase offers a more diverse sound character, and the availability of volume control allows the composer to incorporate dynamics in a more flexible way.
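The ‘variable duty cycle’ is what gives the two pulse channels their range of timbres. A minimal sketch of the idea (plain sample arithmetic, not an emulation of the NES APU):

```python
# A sketch of a variable duty cycle: the fraction of each cycle spent
# 'high' changes the harmonic content, and thus the timbre, of the pulse.
def pulse_period(length, duty):
    """One cycle of a pulse wave as +1/-1 samples.
    `duty` is the fraction of the cycle spent high."""
    high = int(length * duty)
    return [1] * high + [-1] * (length - high)

thin = pulse_period(16, 0.125)   # 2 of 16 samples high: thin, nasal tone
square = pulse_period(16, 0.5)   # 8 of 16 samples high: the 'full' square
```

On the NES the available duty cycles are 12.5%, 25%, 50% and 75%, so a composer effectively gets four pulse timbres to distinguish the two pulse channels from each other.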
Compared to monophonic composition, this phase offers more freedom, and the presence of various channels makes the placement of notes less rigid. Within each channel, however, the monophonic restrictions still apply. One of the students stated: “Polyphonic composition is not a substitute for monophonic composition, but encloses it. Every voice in that sense consists of a monophonic composition.”
Although each channel can only play one note at a time, different timbres can be used, and of course the different channels can be used in contrast to or in harmony with each other, for instance in a contrapuntal way. The composers no longer have to prioritize and choose between several functions at the same moment: kick drum and bass can be played at the same time. Furthermore, the melody can be truly separated from the rhythm and accompaniment tracks, so the composer is more likely to use longer notes in the melody track.
Because it is possible to use separate tracks, there are more ways of creating contrast between melodies, such as long notes versus short notes, or monophonic lines versus chords. Thanks to the noise generator, melody can also be contrasted with percussion. Doubling two voices can make a phrase more dynamic, and tracks can be muted for even more contrast.
SpolyFarers, a track with more contrast between tracks and parts:
A dictating sound
Obvious, but it must be mentioned: the characteristic sound of the sound generators still makes every composition inherently sound like a chiptune (Collins 2008). The following tracks all share the properties of chiptunes.
Breakfast at Koopa Kastle:
To Victor goed the spoils:
3. MOD trackers
Music made in MOD or music trackers combines control data, such as note information, with samples or short waveforms. Therefore, MODs sound the same on every platform they are played on. This phase specifically required the composers to work in a MOD tracker instead of their usual tool, Logic, and all four composers chose to work with MilkyTracker for Mac OS X.
Orchestrating vs. Programming
None of the students had worked with a tracker before, and the rather ‘non-intuitive’ interface of MOD trackers is not exactly inviting for new users. One composer stated:
“The MOD tracker approaches composition with the jargon of a programmer”
Reading the manual from A to Z proved fruitful for one of the composers, as there are MOD-specific control commands that ease the creation of MOD files and add their characteristic control information, such as arpeggios, glissandi and delay effects, which can be heard in the following tracks.
Hello Milky World:
The Straight and Narrow:
For the composer, there is still overlap with the previous two phases: a channel can only play one note at a time. Fundamental differences are the use of samples and the techniques for controlling the sound via the interface of the tracker. In addition to note composition, we see music production-related tasks as well, such as mixing and effects. New is the ability to use pre-recorded samples, which opens up a wide range of sounds within MODs.
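The arpeggio command mentioned earlier is a good example of such tracker-specific control: in ProTracker-style trackers, effect ‘0xy’ rapidly cycles a note with two semitone offsets within a single row, faking a chord on one channel. A sketch of the frequency sequence it produces:

```python
# A sketch of the classic tracker arpeggio command ('0xy'): within one
# row, the player cycles the base note with offsets of x and y semitones
# at the tick rate, suggesting a chord on a single monophonic channel.
def arpeggio_freqs(base_hz, x, y, ticks=6):
    """Frequencies played per tick for one row with effect 0xy."""
    offsets = [0, x, y]  # cycle: base, +x semitones, +y semitones
    return [base_hz * 2 ** (offsets[t % 3] / 12) for t in range(ticks)]

# '047' fakes a major chord: root, major third, perfect fifth.
freqs = arpeggio_freqs(261.63, 4, 7)
```

Because the cycling happens many times per second, the ear blends the three pitches into a shimmering chord, which is a big part of the characteristic chiptune sound the composers noted.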
Compared to modern DAWs (Digital Audio Workstations), making MODs was found to be rather cumbersome due to the reduced overview of the composition’s structure. A logical consequence that was mentioned is that it is very tempting to start thinking in patterns, resulting in musical structures that are often seen in dance music. The hexadecimal values used to enter data aren’t exactly intuitive, and arranging notes from top to bottom instead of left to right was completely new for the composers. As one of them said:
“Composing in a MOD-tracker feels like you’re working with Teletext!”
According to the students, there are many specific things a composer should be aware of. Layering square waves might cause the sound to clip, custom samples should have a power-of-two length (otherwise an instrument might sound out of tune), and some effects (such as phasing) have to be created manually by layering sounds that are out of tune. Furthermore, due to the downsampling of sound samples, most MODs tend to sound ‘dull’ compared to the results of the other phases, which of course can be compensated for.
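A sketch of the tuning arithmetic behind the power-of-two advice (my interpretation of the students' remark, not their exact reasoning): a looped single-cycle sample of length N played back at rate R sounds at R/N Hz, so halving a power-of-two length gives exact octaves, while odd lengths land in between:

```python
# A sketch of why single-cycle sample length matters for tuning:
# a looped sample of N samples at playback rate R has a fundamental
# frequency of R / N Hz.
def loop_pitch(playback_rate, sample_length):
    """Fundamental frequency (Hz) of a looped single-cycle sample."""
    return playback_rate / sample_length

print(loop_pitch(8363, 32))  # ~261.3 Hz, close to middle C
print(loop_pitch(8363, 31))  # ~269.8 Hz, noticeably sharp
```

The 8363 Hz figure is the classic Amiga/ProTracker reference sample rate for the note C-2, which is why power-of-two sample lengths drop so neatly onto the tracker's pitch grid.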
Been there done that?
Above, it was suggested that a MOD tracker is a rather non-musical tool for composing music. Yet the composers found that the tool itself also influenced their creations in a positive way. Especially for chiptunes, a tracker can be a great tool for creating the very specific sound that is associated with certain types of chip music. The addition of pitch and modulation control information (such as ‘sweep’ or ‘arpeggio’ effects) in particular was found to define this character.
4. General MIDI
For the MIDI phase, the students delivered a standard MIDI file which could then be played back through any General MIDI compatible device, just like early PC games and consoles did (cf. Collins 2008). Unlike MODs, the General MIDI file contains only the control data, not the sounds.
This phase offers polyphony within one track, which changes the way one regards polyphonic chords. It makes it possible, for instance, to effectively score a piano part in one track instead of splitting the notes across several tracks. Working in a General MIDI compatible sequencer was easy for all the composers, as it is similar to the way they are used to producing musical tracks (most DAWs still use or support MIDI). To quote a student’s evaluation:
“Working in MIDI is way more musical than the hexadecimal hell called Mod tracker.”
Compared to MOD trackers, this way of working was found to be preferable: a piano roll or score window gives a better overview than the ‘event’-based view in a MOD tracker that represents notes with numbers, simply because it is more musical. Yet there are some issues that can make working with General MIDI rather unrewarding.
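The polyphony-within-one-track point can be made concrete at the byte level: MIDI note-on messages on the same channel simply overlap in time, so a chord is just several note-ons at the same moment. A minimal sketch using the raw MIDI channel-voice message format:

```python
# A sketch of raw MIDI note-on messages: status byte 0x90 | channel,
# followed by the note number and velocity (all values 0-127).
def note_on(channel, note, velocity):
    """Encode a single MIDI note-on message as three bytes."""
    return bytes([0x90 | channel, note, velocity])

def chord(channel, notes, velocity=80):
    """A chord is nothing more than simultaneous note-ons
    on the same channel -- no track splitting needed."""
    return b"".join(note_on(channel, n, velocity) for n in notes)

c_major = chord(0, [60, 64, 67])  # C4, E4, G4 in one 'piano' track
```

This is exactly what a monophonic chip channel or a single tracker channel cannot express, and why scoring a piano becomes so much more natural in this phase.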
The students experienced the following limitations: to their ears, some instruments sound appalling on certain playback platforms, and there is no real control over the way a MIDI file is going to sound on a different device. Even though control messages are standardized, they are interpreted differently across platforms, and the students found that most controller data on many synthesizer instruments seemed to have a sweet spot around 60-90 within the 0-127 range.
The frustration that arises from the sound character of the MIDI sound banks is new compared to all the other phases. MOD sounds differ so much from real instruments that it is still rewarding to make a composition; in General MIDI, a violin sounds like a bad interpretation of a violin, no matter what. Because of the nasal character, it is difficult to achieve pleasant-sounding results. As one student said: “A good caricature is better than a pale imitation.” Still, the same student also mentioned that being able to focus on which notes are played (the music composition) rather than on the way things sound (the music production) can be liberating, as one is fully able to focus on the structure of the composition. It comes as no surprise that the use of MIDI controllers is a very important element in making the instruments come alive.
Sega FightingSpirit, using an OPL3 FM MIDI player:
5. Red Book and Medleys!
For the final phase, the students chose a fictional game genre or setting and composed a song in their current workflow using Logic. Then, they recomposed fragments of the composition for all the other phases, creating a medley that illustrates and simulates how the same composition might have sounded throughout history.
The medleys and their corresponding reference Red Book Audio track can be heard below. A short description is provided as well.
Medley 1: Yme de Jong
A heroic piece of music about a young hero about to explore the world.
The Brass Knight, Red Book:
The Brass Knight, complete medley:
Medley 2: Stijn Frishert
Adventurous, promising and arousing music for the intro of a game.
A Journey Begins, Red Book:
A Journey Begins, Medley:
Medley 3: Pablo Ham
Music for a science-fiction real-time strategy game (like StarCraft). In the first piece of the intro, spaceships take off; in the second piece a space battle occurs; and during the last piece, the enemy is beaten.
Space Farers, Red Book:
Space Farers, complete medley:
Medley 4: Alexander Wttewaall
Music for an epic drama game, like Zelda/Skyrim but more serious.
By composing for past game technology, we became aware of the implications of the restrictions in the various phases of game composition. Comparing the earlier phases with modern music production techniques, we see a more integrated production phase in modern workflows, whereas in the older phases composition involved far fewer production tasks. All students became aware of this difference and appreciated the focus that could be achieved when production was left for later. According to them, it can be rather tempting in most modern technological composition workflows to start right away with production-related tasks, for example searching for the perfect sample before finding the notes.
In the first four phases we see all kinds of creativity used to achieve special effects, such as creating delays with delayed notes, as opposed to the (VST) effects of modern production techniques. The MOD control commands are interesting, and integrating parts made with a MOD tracker into modern compositions, or adapting their techniques (for example the arpeggios) in modern software, could be valuable even today. Exploring these techniques could be useful for recreating a retro sound in modern software, but also for other types of music.
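The ‘delay by delayed notes’ trick can be sketched as score manipulation (the parameter names below are my own illustration): instead of running the audio through an effect, the echo is written into the note data itself as extra, quieter copies a fixed number of ticks later:

```python
# A sketch of a delay effect written into the score: every note gets
# quieter echo copies at multiples of delay_ticks, mimicking feedback.
def add_note_echo(notes, delay_ticks, feedback=0.5, repeats=2):
    """notes: list of (tick, pitch, velocity); returns notes + echoes."""
    out = list(notes)
    for tick, pitch, vel in notes:
        v = vel
        for r in range(1, repeats + 1):
            v = int(v * feedback)  # each repeat is softer than the last
            if v > 0:
                out.append((tick + r * delay_ticks, pitch, v))
    return sorted(out)

echoed = add_note_echo([(0, 60, 100)], delay_ticks=12)
# -> [(0, 60, 100), (12, 60, 50), (24, 60, 25)]
```

The cost, of course, is that every echo occupies a real voice, which is exactly why such tricks demand the careful prioritization described in the monophonic phase.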
The older techniques have rather constricting technological limitations, while in modern composition tools very few concessions have to be made. The composers usually work without artistic limitations because of the endless possibilities of their DAWs, but said that the restrictions did benefit their creativity.
Monophonic composition was evaluated as an interesting challenge for composition students, as there is a lot to be learnt from such an exercise. The composer needs to be creative and learns a lot about song structure and the functions of the various voices within a composition. The semi-polyphonic techniques used in the monophonic phase are still relevant when composing for a single instrument or voice, and could be worth exploring in regular polyphonic composition as well. When the students had fewer notes at their disposal, finding the right notes and refining them became important, as wrong notes were no longer masked. Still, they found that the limited resources (voices) stimulated creativity. As one of them mentioned:
“Even though the synthesizer might sound harsh, a good composition will always work.”
The author wishes to thank Alexander Wttewaall, Pablo Ham, Stijn Frishert and Yme de Jong for their enthusiasm during the work sessions and sharing their great creations, Rens Machielse, Jaap van der Velden, Than van Nispen, Loek Dikker, MiMM and Utrecht University.
NES specifications can be found in Collins (2008), on http://nintendo.wikia.com/wiki/Nintendo_Entertainment_System and in ‘NES Audio: Brief Explanation of Sound Channels’, http://www.youtube.com/watch?v=la3coK5pq5w.
An example of a designer of retro computer game music can be found on the ‘Boing Boing’ website:
Suzanne Ciani: music of Atari, pinball, and Star Wars Disco. Boing Boing, last accessed 01-07-2013. http://boingboing.net/2012/03/30/suzanne-ciani-music-of-atari.html
Ski or Die features 1.5-minute guitar solos, which can be heard, for instance, in:
Collins, K. (2008). From Pac-Man to Pop Music. Ashgate Publishing, UK.
Super Nintendo Technical Documents:
McDonald, G. (n.d.). A History of Video Game Music. Gamespot. http://www.gamespot.com/features/a-history-of-video-game-music-6092391/ (Last accessed 04-05-2010)
- Logic Studio and Ableton Live
- 8-bit sounds – YMCK Magical 8-bit plug-in
- NES emulation (4 voices) – MFB-Synth2
- MOD tracks MilkyTracker
- Logic Pro
- YMCK Magical 8bit Plug
- MilkyTracker, Guitar Pro
- Orchestral sample libraries
- ProTools LE8
- Logic Pro 7
- YMCK (8bit plug in)
- Yamaha PSR-280 MIDI-keyboard for playback
- Logic Pro
- YMCK Magical 8bit Plug
- MIDI to GM QuickTime player