It’s a Jungle out there… An AudioJungle

As the Trimester comes to a close, it doesn't look like any of the game developers I worked with have the energy left to even think about publishing their game. That leaves me with a bunch of sound effects and a couple of music tracks I want the world to see and use!

I'm not without a plan: I'm going to use two main platforms to release my full work, SoundCloud and AudioJungle. All seven sound effects and both music tracks will also be teased on my Twitter, YouTube and my official website. The reason I'm teasing my work on these sites is that, although small, I still have followings there and I want as much exposure as I can get. Below I'll explain the specific reasoning behind using SoundCloud and AudioJungle, but first it's important to understand which audiences I'm targeting.

Target Audiences
My overall career goal is to work in video game sound, either as a sound designer, a composer or some combination of both. With that in mind, when I release my work to the public I want to target two audiences:

  1. Game Developers
  2. Peers (other video game sound fanatics)

Essentially, I want game developers to see my work because they might want to hire me for their projects, which will help me expand my portfolio, experience and (hopefully) bank account. Peers, on the other hand, will help me by providing feedback on my work or networking opportunities. I know that if I ever meet a developer looking for ambient music or hip-hop, neither of which is my forte, I've got a few peers I could forward their details to. So, with the importance of my demographic explained, how am I going to reach them?

1. SoundCloud
SoundCloud is one of the easiest and most popular ways for content creators to share their music and sounds; the name literally positions it as the cloud for sound, the audio side of the internet. I've been using SoundCloud for about a year now, and I'll admit that when I started I didn't really know what I was going to use it for, but as I explored it more and more I discovered its true potential. I noticed that many of my followers, and the people I was following, were others interested in video game sound and music. At first I was a little disappointed that the slim amount of attention my tracks were gaining wasn't coming from any game developers. Once I got over this, I realised the huge potential my SoundCloud had for gathering feedback and criticism. Since everyone was a content creator of some kind, we all had something to say about each other's work, although sometimes you have to probe to get more information than this:


When you do get a juicy comment, it usually has some good suggestions, like this one:

So I'm going to upload all the sound effects and music from Scooch onto SoundCloud so that I can get some exposure to my target audience. Hopefully they'll give me some constructive feedback in the comments, or I might have to probe with some private messages!

2. AudioJungle
I recently discovered the potential of AudioJungle while watching one of my peers' presentations. I'd heard of it before: back in my days as a game developer, other developers would talk about it as a good place to source royalty-free sound and music for relatively low prices... That's when all the pieces fell into place and I realised what I was missing.
Straight away, even as I type, I'm registering an account on AudioJungle. Essentially, AudioJungle is the sound and music portion of a larger group called Envato Market, which provides a free marketplace for content creators to sell their work. The finances work like this: you name a price for your content, they take half, and they add a small percentage on top for the buyers, called a buyer fee. You can join as either an exclusive or a non-exclusive author, which basically means you get a bigger percentage of your named price if you only sell your content on AudioJungle. "Only" and "sell" are the most important words here: after looking into this, it means I can still use other platforms like SoundCloud and YouTube to promote my work and remain an exclusive author on AudioJungle, as long as there aren't any transactions going on elsewhere. This is how the math works:
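Envato's actual rates vary by author status and have changed over time, so here's only a rough sketch of the arithmetic involved; the 50% share comes from the "they take half" arrangement described above, while the $2 buyer fee is a placeholder figure of mine, not their real number:

```python
# Hypothetical AudioJungle-style pricing breakdown. The rates below are
# illustrative placeholders, not Envato's actual figures.

def payout(list_price: float, author_share: float = 0.50,
           buyer_fee: float = 2.00) -> dict:
    """Split a named price into author earnings, the marketplace cut,
    and the total a buyer actually pays."""
    author_earnings = list_price * author_share
    marketplace_cut = list_price - author_earnings
    return {
        "buyer_pays": list_price + buyer_fee,
        "author_gets": author_earnings,
        "marketplace_gets": marketplace_cut + buyer_fee,
    }

print(payout(20.00))
# With a $20 named price and a 50% share, the author keeps $10 while the
# marketplace keeps the other $10 plus the $2 buyer fee.
```

Bumping `author_share` up is exactly what the exclusive-author deal does, which is why exclusivity matters so much to the final payout.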


For a more detailed breakdown, you can go to their site and see all the terms.
They even have a review team that looks at uploaded items to see if they're ready to sell, and they give you a little breakdown article of things that will make your work sellable. Although it's a bit basic, it provides a good little checklist to make sure your uploads are flexible enough to be used in a few different mediums.
The only problem I have with AudioJungle is that it doesn't have a specific section for video game assets, so it can be hard for a developer to easily pin down assets created to fit their needs. All things considered, though, I think AudioJungle is a good place to publish my work, as long as I keep promoting on YouTube, Twitter and my website so that my audience has a bigger chance of finding it.

To sum up, my game plan for publishing and promoting my work through the next week and into the holidays will be:

  1. Upload all the Scooch assets onto all of my sites with an announcement
  2. Upload the assets onto Audio Jungle and price according to some market research

Although the splash might be small at first, I’ve discovered recently that the more splashes you make the bigger the ripples become. This is what I’m going to focus on through the holiday break, making more online splashes.

Thanks for reading as always and have a great holiday!






Project 2 Reflection – The Plan, The Process, The Problems

What I Did

For my second big project this Trimester I worked on a small puzzle driving game called Scooch where the player takes on the role of a robotic car that has to prove its parking skills to the world by traversing a few tricky parking lot challenges. I was in charge of all audio, from the two music tracks that needed to be created to the small list of intricate sound effects.
We agreed on an asset list and a reference track for the main music theme, and when discussing the menu track we decided an edited version of the main theme would be appropriate. After this one productive meeting we almost cut communication entirely, which was a huge mistake, but I'll get into that in just a moment.

Here’s the asset list I was given:

  • Engine Noise (loop)
  • Tire screech (loop)
  • Pickup time bonus (-bloop-)
  • Traffic cone hit (cartoonish whack sound)
  • Solid object hit (the sound a cartoon car makes when it crashes into things)
  • Menu button (-blip-)
  • Restart/menu exit (-tshh-)
  • Level completed chime (-dut-dada-)
  • Main Game Music
  • Menu Music

The immediate challenge was the fundamental sound effects: the engine noise and the tire screeches needed to be carefully produced, and because most of these sounds needed to play alongside the main game track, I also had to think about all these elements together right from the start.
I needed to test these things as I developed them, so my first priority should have been getting a copy of the build from the developers so I could test my sounds, then focusing on the engine and tire screech loops. I didn't do this, and it hurt the overall quality of the sounds; by the time I did ask for and receive a copy of the build, it was more than too late.


Communication was the problem here; there was a definite lack of it, much like in my first project. I'm confident the solution I put forward in my first project reflection will still work if I just put it into practice: push for regular meetings. Weekly meetings would be ideal on small agile productions like these. Even if the meetings don't always get held, the pressure to communicate that a meeting will be cancelled encourages dialogue far better than doing absolutely nothing.


The above is something I don't want to see in my budding career ever again, so I'm going to do everything I can next Trimester to use a better forum than Facebook chat for messaging team members. Having the ability to contact the entire team instead of just a single member might also have gotten me the build I was after faster. That said, in contrast with the last project, it was a huge help to receive creative direction from only one person; it made creative decisions a lot easier.
To sum up on communication, my goals for next Trimester are:

  • Organise weekly meetings
  • Use Discord or Slack for team discussion
  • Allow for one-on-one discussion so that creative direction can be easily discussed

Looking at some of my assets, I can see lots of room for improvement on both the technical side and the sound design side. Unfortunately, a lot of the sound design issues could have been solved with better time management. I'll still go through a couple of elements and explain why I think they could be better.

Music Theory in Sound Design

Something I put into practice for this project was using music theory to tie all my sound effects and music together. The main theme and menu theme are both in the key of F major, so I used notes in and around that key to create different tensions and resolutions. For example, the menu confirm blip is played as an F, while the cancel tone is played on the C a perfect 4th below; in the context of the music and the blips themselves, the cancel tone is tense and the confirm tone is stable, which I felt was pretty appropriate. Another example is the time bonus pickup, which plays an F and a Bb, outlining a suspended chord; this gives the sound effect a lifting feel and makes you want to go faster after it's collected. You can listen to both sounds here.
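To make those intervals concrete, here's a quick Python sketch of the equal-temperament frequencies involved. The specific octave placements (F4, C4, Bb4) are my assumption for illustration; what matters is the interval relationships, not the register:

```python
def midi_to_hz(note: int) -> float:
    """Equal-temperament frequency for a MIDI note number (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

# Hypothetical octave placements; only the intervals are taken from the text.
F4, C4, Bb4 = 65, 60, 70

print(round(midi_to_hz(F4), 2))   # 349.23 Hz, the stable confirm blip
print(round(midi_to_hz(C4), 2))   # 261.63 Hz, the tenser cancel tone
print(F4 - C4)                    # 5 semitones: a perfect 4th below F is C
print(Bb4 - F4)                   # 5 semitones again: F up to Bb, the sus4 colour
```

Both blips sitting a perfect 4th apart (in opposite directions from F) is what makes the pair feel like a matched question-and-answer against the F major music.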

I want to do more of this kind of planning in sound design for the future, I feel it was a very effective strategy and made a lot of the sounds really come together.


Another thing that held this project back was the plan. It was created late and lacked a lot of necessary details. After improving it with input from lecturers, I had a light bulb moment. It's really hard to put into words, but I suddenly saw the value of having a detailed plan. Shortly after my plan was created I had an incredibly productive week, in project work and in academia, because I knew exactly what I was going to do, step by step. The key to this productivity was the plan, and more importantly the objectives, goals and milestones I set out for myself. By making a list of everything I needed to do and prioritising that list, I never needed to think about what to do next; I just had to check the plan.

Here's a sample of my project plan (the one I made for my studies looked very similar, though it was written in Notepad):

5 | Edit and produce the following sound effects for review: Engine Noise Loop, Tire Screech Loop, Wall Collision, Cone Collision and Time Bonus Pick-up | Wednesday 9th Nov | 3
6 | Edit and produce the following sound effects for review: Menu Confirmation Blip, Retry/Restart Blip and Level Complete Fan-Fare | Thursday 10th Nov | 4
7 | Gather feedback and perform any reforms necessary on any sound effects. | Friday 11th Nov | 3, 4
8 | Use MuseScore to compose and arrange the Main Game loop, then export all MIDI files ready for further arrangement in Pro Tools. | Monday 14th Nov | 5, 6
9 | Substitute MIDI lines for higher quality instrumentation in Pro Tools; re-sample all MIDI into raw audio. Mix and master the tracks, adding any necessary compression, EQ, effects, topping and tailing. Make sure the track loops seamlessly. | Tuesday 15th Nov | 5, 6
10 | Submit main game track for review and make any necessary changes. | Wednesday 16th Nov | 5, 6
11 | Remove all elements from the main game track except for the bass line; add a piano counter melody to create the main menu track as a derived form of the main game track. | Thursday 17th Nov | 6
12 | Submit menu loop track for review and make any necessary changes. | Friday 18th Nov | 6
13 | Take all feedback and review from assets, compile into a report and perform any necessary changes. | Monday 21st Nov | 8

I would say learning how to plan properly was the single most important thing I learnt this Trimester at SAE, and I can see myself taking this skill and developing it further now that I understand how effective it can be at increasing my productivity. In future I want to take advantage of project management tools like Trello and, more importantly for game development, HacknPlan.


The reference track:

The genre was ska; not ska punk, but traditional ska, which I discovered after researching the genre. The key to it is mostly the instrumentation: jazz big band instruments with horns on the lead, guitar used as a rhythm instrument, and jump or walking bass lines.
I used this knowledge to compose a short loop of about 1:30, which I thought matched the reference track as well as my skill would allow and fit well with the gameplay. I used Kontakt 5 to put the piece together, so it's made completely out of sampled instruments. I decided to go with a clavinet instead of a guitar because I thought it filled the role while also putting a unique twist on the genre. It's written in F major and pulses at about 100 bpm; I tried to match the bpm of the reference track closely, as more often than not a game developer will choose a track based on its tempo and how it matches the gameplay.

Here is a link to my track.

I'm quite proud of how it turned out. The composition is a little basic, and although I didn't need to do a lot of mixing and mastering, I feel the slight touches I added really polish off a nice mix. It's not my most complex piece, but I feel it has just enough variation to not get boring for at least the first three loops.


As a final word, I'm ashamed to say this project went much worse than I wanted it to, and it could all have been avoided by changing some of the ways I approached it. Better communication and better planning would have made this project really shine. What I needed to do was get on top of my project plan early, so I knew exactly what I needed to do, and promote better communication with the game developers by using better forums and having more meetings. In the future, Trello, HacknPlan, Discord and Slack are going to be my best friends.


That’s a Massive Wavetable!

Massive is a wavetable synthesizer introduced to the world by Native Instruments. It is arguably the most popular wavetable synthesizer, and it's definitely one of my favourites.
Since I use it so much, I thought I'd give a little run-through of its features and a deeper look at how a wavetable synthesizer works and how it differs from FM and subtractive synthesis.

First of all, what is wavetable synthesis?

The term wavetable identifies the most important aspect of this synthesizer type: it has structures built into it called wavetables. A wavetable is essentially a collection of samples, where each sample is usually one complete cycle of a waveform. These can range from simple waves like sines or squares to really complicated ones like some of the waves shown below:


From Future Audio

The coolest part about Massive, and most other wavetable synthesizers, is that not only do you have all these waveforms at your disposal, but you can pick two of them and fade between them, which yields an almost infinite number of combinations. Massive has some classic pairings like Sine-Triangle, along with some more out-there ones like Rough-Math I and II. Here's a list of some of the combinations available in Massive by default.

This concept can be hard to grasp, but the easiest way to visualise it is described in the Massive manual, which can be found here:

“Think of these wavetables in two dimensions. The horizontal axis represents time, and the “recorded” waveforms run from left to right on the table just as in any sample editor: playback starts from the left, and when one complete waveform cycle has been played from left to right, the playback jumps back to the beginning at the left to loop the waveform.

Along the vertical axis, on the other hand, there are different waveforms one above another, like the tracks in a multi-track sequencer: at the bottom there is one waveform and at the top there is another one. Between them are a series of intermediate waveforms that gradually morph from the bottom waveform shape to the top.”

With that explained, it's easier to visualise what happens when you turn the W-T position dial on the Massive interface. What's happening is that the position on the wavetable's vertical axis is changing, so you get different degrees of waveform combination.
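As a rough sketch of that vertical-axis idea, here's a linear crossfade between two single-cycle waveforms in Python. Massive's real interpolation between table entries is certainly more sophisticated than a straight blend, so treat this as an illustration only:

```python
import math

N = 256  # samples in one single-cycle waveform

def sine_cycle(i: int) -> float:
    """One cycle of a sine wave, sample i of N."""
    return math.sin(2 * math.pi * i / N)

def square_cycle(i: int) -> float:
    """One cycle of a square wave, sample i of N."""
    return 1.0 if i < N // 2 else -1.0

def morph(position: float) -> list:
    """Blend the two cycles: 0.0 is pure sine, 1.0 is pure square,
    and values in between are the intermediate shapes on the table's
    vertical axis."""
    return [(1 - position) * sine_cycle(i) + position * square_cycle(i)
            for i in range(N)]

pure_sine = morph(0.0)
halfway = morph(0.5)   # the dial sitting in the middle: half of each wave
```

Looping any one of these 256-sample cycles at the right rate produces a pitched tone, and sliding `position` while it loops is exactly the dial-sweep effect.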
As you can hear in this example, I sweep the W-T position dial from left to right and the sine wave morphs into a square wave; also note that I sit in the middle for a while on the combination of both:

So now that we understand how wavetable synthesis works, let's dive in and create something.

Massive has all the other elements of synthesis, such as filters, modulation and noise oscillators, plus its own built-in effects, which are all really nice, especially its reverb. In this sound I will try to use at least one of each of these elements; however, I will make the wavetable dial the focal point of my sound.

I've been into sound effects lately, so instead of trying to make an instrument I think I'll go for something magical that could be used in a game as an effect.


To start off I'm only using one oscillator. I want a crystal sound, so I've decided to use the Sine-Square wavetable; as you can see, I've left the position right in the middle so the sound begins with both waves. However, as the little 6 and the green inner-most circle show, I've added an LFO.


What I'm doing here is using an LFO with a sharp attack to modify the wavetable position on the fly, so that it adds each extreme of the wavetable as a sort of vibrato effect. It makes the sound interesting and really takes advantage of the wavetables.
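The idea can be sketched in Python as a sine LFO swinging a notional wavetable-position dial around its centre. The 6 Hz rate and the modulation depth below are values I've picked for illustration, not what's actually set in the patch:

```python
import math

RATE = 48000     # audio sample rate in Hz
LFO_HZ = 6.0     # illustrative LFO rate, not the patch's actual setting

def lfo_position(n: int, centre: float = 0.5, depth: float = 0.5) -> float:
    """Wavetable position at sample n: the LFO swings the dial around
    its centre, clamped to the 0..1 range of the W-T control."""
    pos = centre + depth * math.sin(2 * math.pi * LFO_HZ * n / RATE)
    return min(1.0, max(0.0, pos))

# Over one LFO cycle the dial sweeps from the centre out to each extreme
# of the table and back, giving the vibrato-like wobble described above.
one_cycle = [lfo_position(n) for n in range(int(RATE / LFO_HZ))]
```

Feeding each `lfo_position(n)` into a wavetable morph at audio rate is what turns a static tone into that shimmering, animated crystal sound.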

I then went on to add two effects, a reverb and a chorus.

I made the reverb as large and dense as I felt necessary to give it a dramatic effect, then I turned the wet signal down so the reverb just poked through. This let it be a long-tailed reverb at a relatively low level. I used the chorus effect to give the sound a little more presence; as you can see, I've kept the wet signal fairly low on this as well so that it doesn't overwhelm the sound.

Next, I added in a little bit of noise through a Daft filter, which is essentially a low-pass filter very similar to an Acid filter. Together, these elements added ambience to the sound and filled it out as a sound effect element when a single tone is held.

Finally, I changed the main envelope attached to the master, making the attack relatively short to keep that bell chime at the beginning and letting the release run fairly long to sustain that low-level reverb, noise and LFO.


The final sound turned out like this: it definitely sounds like a crystal or some sort of magical hum, especially with that added noise. I might be able to use this as an instrument as well; it plays like a soft bell lead that could be good for a laid-back bridge or a relaxing ambient track.

So! To sum it all up: wavetable synths use wavetables, and we're all very educated on what those are by now! They're essentially sampled wave cycles, and by combining two different ones to different degrees you can create some really interesting sounds, especially if you use LFOs or modulators to sweep between the two waves.

That’s all for today, thanks for reading!


Creative Copyright

Over the course of this Trimester I worked on two big projects. One was a short interactive animation that required an ambient track and some Foley work for sound effects; the other was a small game that required two music tracks and a collection of sound effects.

Something I hadn’t really thought about until exploring the concepts of copyright was my personal ownership of the assets I created or contributed towards. Contracts are used in the real world to set out guidelines for important things such as copyright, ownership, funding and compensation.
The scary thing is that not once did I enter a contract with anybody I worked with. Of course, while studying this isn't incredibly important, as more often than not both parties are in it for the opportunity to learn and grow. That being said, developing contracts even for work done while studying is a great practice I would like to adopt next Trimester, just to prevent situations like those described in this discussion.

Tying in with my Trimester of work, having done a lot of Foley, the questions posed in this forum were definitely on my list of concerns regarding the rights around Foley recording. There were a lot of variables to consider, the first being that my partner James and I recorded and produced everything together. So the first thing to think about, even before considering the animation team, would be who out of the two of us owns the rights to what.
Since we didn't enter into any contracts, we're completely at each other's mercy. If I wanted to compile all the sounds into a library and sell it for profit, there would be little James could do about it, because there was no contract.
The short answer to this problem, and I think I'm going to turn into a broken record about this before the end of the post, is to make a contract that clearly outlines the ownership of all assets at all stages of development. Specifically for this project, James and I could simply have agreed on paper that we both own all rights to the entire collection of recordings. If we wanted to protect our own work further, we could get more specific and say, for example, that Corey owns the chandelier crash asset he produced on his own. These sorts of agreements are found in most contract templates for recording and production work, like the one found here.

The same project also required us to produce an ambient track for the animation. James and I each created our own rendition of the ambient track and left it to the animation team to decide which to use. The interesting thing about our tracks is that they both contained the Foley we recorded together. This led me to explore how recorded content is treated in a production with regard to copyright. I concluded that it would essentially be the same contract a studio musician enters into with a producer or composer. As a studio musician, you are at times required to use your instrumental skill to play someone else's written work; this is similar to a producer using specifically recorded Foley in a piece. The Foley artist is there to provide what the producer needs and would probably enter into a work-for-hire contract giving the producer full rights over the recordings, or they might tweak the terms a little and keep the rights to the recordings for their own purposes. This is a perfect example of the kind of work-for-hire agreement that would have been used here: it outlines that the producer, called the "Employer" here, becomes the sole author of the produced works. The downside is that the "Musician" holds no copyright on any of the work, although in a case like this the "Musician" party is content with holding no ownership of the material; they are in it for the pay.

In my final project for the Trimester, something very interesting happened which led me to explore royalty-free content. It began after the agreement that I would write and produce original music for the game "Scooch". Shortly after I'd written the track, the game developers expressed interest in a short loop track they'd heard and wanted to use for music in the game. Having read this horror story about not triple-checking licensing terms, I was a little concerned at first. But as it turns out, it was released as an editable track, with all stems included, under a Creative Commons license.

Creative Commons licenses are free licenses that creators can attach to their work, letting them "mix and match" (a direct quote from Creative Commons) different restrictions for their work.


(Image taken from the Creative Commons website)

This particular track was released with the Attribution and Non-Commercial terms attached, so we put Kevin MacLeod's name in the credits, and since the game wasn't going to be sold or distributed, we weren't in breach of the Non-Commercial terms.
Another point to note is that I made edits of the track: one to be used for the pause menu and one used as the main game loop. If the license had had the Share-Alike term attached, I would have had to distribute my edits under the same license as the original, which would have required extra work not scoped into the project plan.

All in all, I think the big takeaway from my interaction with copyright over the Trimester comes in two flavours.
The first is that contracts are important, allowing everybody to understand who owns what at the end of the day, and I should start using them as soon as I can. The second is that copyright infringement is a serious matter, and the licensing of materials must be triple-checked before they are used in any way.

Contracts, contracts, contracts.

Work for Hire Contract

Music Production Contract

Composer Contract






Gruntilda’s Lair – Case Study 3

Banjo-Kazooie is one of the most critically acclaimed video games of all time. Developed for the Nintendo 64 by Rare in 1998, it was particularly well revered for its quirky soundtrack, which was composed and produced entirely by Grant Kirkhope.
Banjo-Kazooie was Grant's first solo project at Rare; before that he worked alongside composer David Wise on SNES titles like Donkey Kong Country 3. David taught Grant how to make music for the Nintendo 64 console and introduced him to the sequencers and samplers used to create music for the system.

All of the above information can be sourced from Grant Kirkhope's website, this Game Grumps interview or his Wikipedia page.

Grant recorded a lot of his own samples for the titles he worked on, mostly orchestral; however, alongside the samples for orchestral pieces he also had a vast array of guitar and synth samples at his disposal, which he used on titles such as GoldenEye 007. To the point, Gruntilda's Lair was created entirely with sampled instruments, mostly orchestral, with a few splashes of well-timed sound effects, like the witch's chuckle which can be heard right off the bat at 0:02.

So, to get into the song itself, some basic details:

Key: C minor (shifting at times to C major)

Time Signature: 12/8

Tempo: 95 bpm

Length: 2:13 (Loop)

Gruntilda's Lair is a cavernous maze the player will find themselves in throughout most of the game; it is the central hub that leads to the game's other levels. To combat the problem of hearing the same song over and over again and going crazy, Grant used a few clever techniques in the arrangement to add new spins to the theme.
Firstly, when you approach one of the themed worlds, several of the instruments in the composition change. For example, when you approach the haunted mansion level, the arrangement changes from this to this, adding a spooky organ, some howling wind and some owls. It's a really clever way to switch things up, especially when you're already using those instruments in other parts of the game.

Secondly, and this leads into composition: the slight variations on the same theme, along with the constantly changing instrumentation, keep this simple song surprisingly fresh as you listen to it for hours.

The only other thing to note as far as composition goes is that the song is mainly in C minor; however, at times it transforms into C major. In the harmony and bass, the flats will occasionally naturalise, and at times there is a counter melody that sharpens the third during an arpeggio, as can be seen below, transcribed to piano sheet music:


The tempo sits at a steady 95 bpm; like most of the music Grant composed, this directly reflects the pacing of the player's movement. This tempo fits very well with the speed at which the main character runs, and given the tempo relationship between this song and the rest of the game's music, which is all similar, I'm comfortable assuming this is a big reason for the song's tempo.
Working out the time signature was a little confusing. At first I thought it was surely 4/4 because of the steady beat of the percussion on the 1st and 3rd beats, but upon closer inspection it's a compound meter of 12/8. This time signature is essentially a 4/4 bar broken up so that it has four dotted quarter notes, each of which can be divided into three eighth notes to give the song a triplet swing. This triplet effect is present in the melody lines; however, most of the harmony, bass and percussion stick to the main beats to maintain that marching band pulse.
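The arithmetic behind that is easy to check. Assuming the beat unit is the dotted quarter (my reading of the meter, given the percussion on the main beats), at 95 bpm the durations come out as:

```python
BPM = 95                 # dotted-quarter beats per minute

beat = 60 / BPM          # one dotted quarter note, in seconds
eighth = beat / 3        # each beat splits into three eighth notes
bar = 4 * beat           # 12/8: four dotted-quarter beats per bar

print(round(beat, 3))    # 0.632 s per beat
print(round(eighth, 3))  # 0.211 s per eighth, the triplet swing pulse
print(round(bar, 3))     # 2.526 s per full bar
```

So the percussion marks off every 0.632 s while the melody is free to subdivide each of those beats into three, which is exactly where the swing feel comes from.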

Great video for understanding the time signature here.

To look at the structure and arrangement, I used Pro Tools with a couple of empty tracks to create this image:


Overall it has an ABAB structure, with a C section that is a variation of the B section and a Bridge that is a variation of the A section; bookending these is a definitive Intro and Outro. The Outro is also the point where the song loops back on itself.

There are eleven instruments in total, which is typical of Grant's style: he will often replay a melody line on a different instrument in a different section to change the effect, frequently swapping the melody and bass lines or giving the lead instrument a harmonic role for a section, such as when the Xylophone changes from lead instrument to harmonic instrument at 1:00. By keeping a few extra instruments on standby, he can change the song's feel without writing any new music.

The instruments are all sampled from real instruments, according to Grant, and the library he used for this game and many others can be found online. Looking at the roles of the instruments, in groups and individually, helped me further understand their importance as elements:

The Strings
The string section comprises a Violin, Viola, Cello and Double Bass, all of which can be heard during the intro when each instrument plays the same motif in its own register. All are played pizzicato throughout the entire song, which gives off a creepy vibe. The Double Bass plays jump-bass style throughout most of the song, and the rest of the strings come in on the offbeat to make up the harmony.
What's also interesting to note is that the strings alternate left and right panning during the intro, which can be seen in this stereo analysis showing the Cello coming in a little to the right:


Unfortunately, this is the extent of the panning. As can be seen in the following analysis, the rest of the song is fairly centre-aligned, with the exception of the Tambourine, which is panned slightly left along with the Pad, and the Bassoon and Double Bass, which are panned slightly right:


Melodic Percussion
The Xylophone and the Glockenspiel both sound as if they have a moderate amount of reverb added, which I believe is to make the song sound more cavernous. Both are mainly used as lead instruments, except when the Xylophone is flipped to a harmonic role at 1:00.

Tuba and Bassoon
The Tuba and Bassoon are both used as lead instruments in the B sections, and the Tuba takes the place of the Double Bass in the C section. They both sound like fairly low-quality samples, which is typical of the time; they have very limited dynamics when dropping into their lower ranges. This is why I think the Bassoon mainly acts as a lead instrument: I believe dropping it too low would cause it to become heavily distorted due to the low sample rates of the era.

The Pad
There is a pad-like instrument that enters during the Bridge and continues into the beginning of the outro; it can be heard at 1:20. It sounds a lot like a choir or voice synth patch, or maybe even some kind of theremin pad.

Percussion
The percussion is fairly simple: a Bass Drum on the 1st beat and a Tambourine hit on the 3rd. It creates a lazy marching band feel. The Tambourine sounds like it was recorded being played extremely slowly; it may even have been slowed down for that lazy effect. The Bass Drum is being hit with a mallet, as in an orchestral ensemble, as it doesn't have the super sharp attack you would get from a kick drum. You can see the Bass Drum poking in at the 22Hz and 47Hz marks on the frequency analysis below.


The frequency analysis shows just how broad the spectrum is with the inclusion of all these orchestral instruments.
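Out of curiosity about how that kind of reading could be reproduced, here's a minimal Python sketch (not part of my original process, and the synthetic test signal is just for demonstration) that picks out the strongest low-frequency components of a signal with an FFT, the same way a spectrum analyser would reveal those bass drum peaks:

```python
import numpy as np

def dominant_low_frequencies(signal, sample_rate, top_n=2, max_hz=100):
    """Return the strongest spectral peaks below max_hz, in Hz."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    mask = (freqs > 0) & (freqs <= max_hz)
    low_freqs, low_mags = freqs[mask], spectrum[mask]
    # Indices of the top_n largest magnitudes, then their frequencies
    order = np.argsort(low_mags)[::-1][:top_n]
    return sorted(float(f) for f in low_freqs[order])

# Demo: a synthetic "bass drum" with energy at 22Hz and 47Hz
sr = 4000
t = np.arange(sr) / sr  # one second of samples
test = np.sin(2 * np.pi * 22 * t) + 0.8 * np.sin(2 * np.pi * 47 * t)
print(dominant_low_frequencies(test, sr))  # [22.0, 47.0]
```

On a real recording you'd load the WAV data instead of synthesizing it, and the peaks would be smeared across neighbouring bins rather than landing exactly on one.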

All in all, Gruntilda's Lair is a fantastic piece of video game music and is definitely some of Grant Kirkhope's best work. Not only is it an enjoyable piece on its own with its variations, but when it changes with the switching themes within the level it takes on a whole new character. I wish I had enough time to go in depth with all the different variations, but this is going to be the end. I do encourage anybody who finds this as weirdly interesting as I do to go have a listen through the full song, which, as a reminder, is at the top.

Thanks for reading,



Gruntilda’s Lair – Case Study 3

Analysis Framework

Critical Listening – Frameworks

Critical listening is a vital skill for any audio engineer, and as an aspiring audio engineer it is a skill I must develop. By being able to critically listen to a piece of music or sound I can deconstruct it and find out how it works, which will allow me to achieve similar sonic or musical results in my own projects (P. Palombi, 2014).
As an aspiring audio developer for video games and film I will need at minimum two reliable frameworks: one for sound effects, taking into consideration things such as layering, dynamics, timing and equalisation, and one for music, considering things like key, harmonic progression, melody, tempo, equalisation, dynamics and effects. This will allow me to fully deconstruct the engineering behind, hopefully, all audio elements within any chosen medium. I will use this as a guide for how I specifically plan to listen for these qualities and what tools I can use to make my listening more accurate.

Musical Framework

Musical analysis requires close examination of two broad concepts: composition and engineering. The way I analyse music at the moment draws inspiration from David L. Page's blog posts on critical listening, Paul Carr's guide/forum, The Elements of Music, and a documented guide on writing musical evaluation. Along with these I will continue to learn and adapt my analysis skills from things I pick up on my own and things learnt during my Bachelor of Audio.


When looking at the composition of a piece I will, where possible, attempt to isolate and discuss any purely musical elements I can hear within the piece as they relate to classical and contemporary music theory. I'm a great advocate of formalising world music, so I will also attempt to adapt the ideas put forward in “Towards a Global Music Theory” to help better understand music as a whole, wherever it's from or however it's presented.

Musical Key

Identifying the key of a song, and any transpositions, will give me a foundation for exploring the harmonic and melodic content. To identify the key I will use an instrument and attempt to find the tonic or root note, then continue to re-evaluate the different sections until I understand the home key and any possible transpositions. Depending on the piece, the key could be identified simply by looking at the first note and chord; if the piece involves transpositions and modal mixture it could become more difficult and I may have to seek outside assistance.
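As a rough illustration of how key-finding can be automated, here's a sketch of the well-known Krumhansl-Schmuckler approach: correlate a pitch-class histogram of the piece against the standard Krumhansl-Kessler major and minor key profiles, rotated into every possible key. This isn't the method I'll use at an instrument, just a complementary tool:

```python
import statistics

# Krumhansl-Kessler key profiles (tonic first), and pitch-class names
MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def estimate_key(pitch_classes):
    """Guess the key from a list of pitch classes (0 = C ... 11 = B)."""
    hist = [pitch_classes.count(pc) for pc in range(12)]

    def correlation(profile, shift):
        # Rotate the profile so its tonic sits on pitch class `shift`
        rotated = profile[-shift:] + profile[:-shift]
        mh, mp = statistics.mean(hist), statistics.mean(rotated)
        num = sum((h - mh) * (p - mp) for h, p in zip(hist, rotated))
        den = (sum((h - mh) ** 2 for h in hist) *
               sum((p - mp) ** 2 for p in rotated)) ** 0.5
        return num / den if den else 0.0

    candidates = [(correlation(MAJOR, s), NAMES[s] + " major") for s in range(12)]
    candidates += [(correlation(MINOR, s), NAMES[s] + " minor") for s in range(12)]
    return max(candidates)[1]  # key with the highest correlation

# A C major scale with extra weight on the tonic triad
print(estimate_key([0, 2, 4, 5, 7, 9, 11, 0, 4, 7]))  # C major
```

In practice the pitch classes would come from a MIDI transcription or a chromagram of the audio, not a hand-typed list.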

Harmonic Progressions

Harmonic progression will be discussed whenever pitches appear simultaneously, resulting in some form of identifiable chord. Along with the analysis of individual chords within a piece I will also attempt to analyse the progression of the harmony. I will need an instrument to help me identify the individual chords and will use a score or MIDI editor within a DAW to outline the harmonic progression so I can study it more closely.
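For naming individual chords, a small lookup of interval patterns can act as a sanity check alongside an instrument. A sketch (the chord table here is deliberately tiny; a real one would cover inversions, suspensions and extensions):

```python
# Interval patterns (semitones above the root) for a few common chords
CHORD_SHAPES = {
    (0, 4, 7): "major", (0, 3, 7): "minor",
    (0, 3, 6): "diminished", (0, 4, 8): "augmented",
    (0, 4, 7, 10): "dominant 7th", (0, 3, 7, 10): "minor 7th",
}
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def name_chord(pitch_classes):
    """Try each sounding note as the root and look the shape up in the table."""
    pcs = sorted(set(pitch_classes))
    for root in pcs:
        shape = tuple(sorted((pc - root) % 12 for pc in pcs))
        if shape in CHORD_SHAPES:
            return f"{NOTE_NAMES[root]} {CHORD_SHAPES[shape]}"
    return "unknown"

print(name_chord([7, 11, 2]))  # G, B, D -> G major
print(name_chord([9, 0, 4]))   # A, C, E -> A minor
```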


Melody

I will examine the melody line or lines in a song and treat them as if they were monophonic, although I will note any harmonic signifiers within the melody lines (“Global Music Theory”). Unfortunately my ear isn't as well trained as it could be, so I will either have to use an instrument to assist me with discovering certain individual notes or access the information elsewhere. However I go about it, once I have discovered a melody I might benefit from recording it on a score or as MIDI information within a DAW.

Rhythm (Time Signature and Tempo)

Originally I had this section labelled simply as Time Signature and Tempo, and although I believe those are still valuable to identify, the rhythm or “pulse” of a song is what I now believe to be the most important aspect of a song within the time domain. After reading “Towards a Global Music Theory” I now understand that an accurate identification of a song's rhythm will help in deconstructing songs from all over the world, not just Western music, which usually relies strictly on well-known time signatures for its accent patterns.

To keep things uniform, however, I will use the method presented in “Towards a Global Music Theory” for identifying a song's rhythm and time component: listening and attempting to identify groups. Groups are most likely to be present as musical events in multiples of 3 and 2, as most music from all over the world is in some way linked with these numbers. Along with these numbers, “rests”, which in music are short or long periods of silence usually linked with specific instrument lines, can be used to identify where a group begins or ends, or to help determine a group's size.

By using this method I should be able to identify a time signature that fits the music and use the grid in Pro Tools, along with perhaps a tempo tap, to discover the tempo of the song. For example, using this method I identified this Etude by Robert Schumann to be in 3/16 time (although it was already identified to me as 3/16, I wanted to try my method).
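The tempo-tap idea is simple enough to sketch: record the time of each tap, average the intervals between them, and convert to beats per minute. A minimal version:

```python
def tempo_from_taps(tap_times_s):
    """Estimate BPM from a list of tap timestamps (in seconds, ascending)."""
    intervals = [b - a for a, b in zip(tap_times_s, tap_times_s[1:])]
    avg_interval = sum(intervals) / len(intervals)
    return 60.0 / avg_interval

# Taps exactly half a second apart -> 120 BPM
print(tempo_from_taps([0.0, 0.5, 1.0, 1.5, 2.0]))  # 120.0
```

Real taps are sloppy, which is exactly why averaging over several of them (rather than using a single interval) gives a usable tempo estimate.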


Arrangement

Arrangement will look at the instrumentation of a piece and attempt to identify the associated “color” and timbral quality of the instruments. Some pieces with more organic instruments will be easier to analyze and record; pieces with synthesized or sampled sounds will need closer inspection, and greater care must be given when describing and analyzing the purpose of a synthesized texture. To do this I will listen through the song and list as many different instruments as I can identify within the piece, then describe their overall “color” and try to rationalise their purpose within the song.

Structure and Form

The final part of musical analysis will look at the structure and form of the piece. Chorus/Verse sections will be identified if applicable and the sections will be recorded in accordance with standard music theory practices, using ABCD etc. to identify individual sections and then organising them in chronological order.


Engineering Framework

The following points will discuss a song's characteristics in regard to its engineering. It is important to look at a piece not only for its musical qualities but also for the technologies involved in recording and producing those musical ideas.

Gain Staging

Gain staging, or “balance” as it is referred to in Bobby Owsinski's “The Mixing Engineer's Handbook”, is one of the most fundamental aspects of a song's engineering. Simply, it is the volume or amplitude level, and it can be looked at in two ways: as the amplitude of all elements at the same time, or broken up and viewed as the amplitude of separate elements. The easiest way to look at the overall amplitude is to drop the track into a DAW and observe the metering. Getting specifics on the separate elements may be a little more difficult and will be done aurally. I will attempt to comment on the individual loudness of each element within a piece through critical listening, which will allow me to describe key instruments and how their individual loudness complements the overall piece.
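The overall readings a DAW meter gives boil down to peak and RMS levels. As a sketch (assuming floating-point samples with full scale at 1.0), both can be computed directly from the sample values:

```python
import math

def peak_and_rms_dbfs(samples):
    """Peak and RMS levels of a float signal (full scale = 1.0), in dBFS."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    to_db = lambda x: 20 * math.log10(x) if x > 0 else float("-inf")
    return to_db(peak), to_db(rms)

# A half-scale square wave: peak and RMS are both about -6.02 dBFS
peak_db, rms_db = peak_and_rms_dbfs([0.5, -0.5] * 100)
print(round(peak_db, 2), round(rms_db, 2))  # -6.02 -6.02
```

On a sine wave the RMS would sit about 3dB below the peak, which is why the two meters in a DAW rarely agree.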

Stereo Field

Stereo field refers to the panning of individual instruments and how this affects the overall mix. It can be viewed as a whole through a DAW by using a stereo field plugin to observe the overall balance of a song while it's playing. The best way to illustrate the stereo field is, coincidentally, through illustration: I will use a rough drawing to show where each instrument is sitting in the stereo field. This will allow me to observe the different panning techniques used within a piece and help me better understand the reasons behind certain panning decisions.
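A stereo field plugin's overall balance reading can be roughly approximated by comparing the energy in the two channels; a minimal sketch:

```python
def stereo_balance(left, right):
    """Rough L/R balance: -1 = hard left, +1 = hard right, 0 = centred."""
    energy_l = sum(s * s for s in left)
    energy_r = sum(s * s for s in right)
    total = energy_l + energy_r
    return 0.0 if total == 0 else (energy_r - energy_l) / total

# A signal present only in the right channel sits at +1
print(stereo_balance([0.0] * 4, [0.5, -0.5, 0.5, -0.5]))  # 1.0
```

Running this over short windows of a track, rather than the whole thing at once, would show how the balance drifts over time, like the Cello entry noted in the case study above.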


Equalisation

Equalisation is the adjustment of individual amplitudes across the frequency spectrum. The overall spectrum of a piece can easily be viewed within a DAW; the real test, again, will be identifying individual instruments and their spectral qualities. Being able to listen for and identify certain equalisation decisions will help me make similar decisions myself to improve the overall quality of my mixes.


Dynamics

Dynamics involve adjustments to a particular sound's amplitude envelope, using tools like compressors, gates and limiters. Again, this can be viewed as a change to the overall song or as a touch applied to individual instruments. Critical listening will allow me to identify the use of compressors and the effect they have on a piece so that I can achieve similar results in my own work.


Effects

In effects I'll be looking at any examples of reverb, delay, chorus and so on: any kind of effect added to a sound to enhance it. I'll be looking to comment on whether certain reverbs are natural or digital, and at the differences between digital doubling of voices and actual recorded harmonies. Effects are a great way to add extra character to an individual sound or a piece as a whole, for example reverb on an entire mix to give it a sense of space, or a chorus on a vocal to give it some power and thickness. These effects, and above all the reasons for their use, will be what I'll be looking for when critically listening for special effects.


Recording

Finally, I'll attempt to identify how a piece was recorded; obviously with certain pieces, such as electronically produced music, this won't be applicable. In any case I will attempt to comment on things such as mic placement or type, and if I can't hear it I will attempt to source the information elsewhere. Whether a piece is electronically produced, organically produced or anything in between will be discussed in this final section, before a conclusion is made summarising the most interesting points of a song's production.


Conclusion

Finally, after all things are considered, I will conclude on the overall effect of the song and describe its most interesting points and what I learned from deconstructing its make-up, noting any special details that may help me in current or future productions.

Phil Palombi, The Importance of Critical Listening, July 6th 2014, Available from URL:


Inclusive and Ex-clusive Design

Yet again my lack of experience and knowledge has astounded me! Of all the things I've learned about design, I only very recently came across the concept of Inclusive Design. Inclusive Design is basically a way of approaching design so that the end product can be accessed by as many different groups as possible. For example, League of Legends has a colorblind mode so that colorblind players don't have trouble distinguishing between the enemy and themselves. This improves the gameplay for colorblind players, as playing such a hectic game with the added confusion of not being able to visually track yourself is not fun at all.

League Colorblind

Left – Regular Mode, Right – Colorblind Mode (Red Green Mode)

Although an important part of Inclusive Design is the representation of genders, races, religions, orientations and cultures, what really interests me is the way it's used to give people with certain disabilities a way to experience something the way other people do. Thinking about this has opened a floodgate of ideas and problems for me to solve. As an aspiring game audio developer I'm suddenly faced with a sad realisation: deaf people will not be able to enjoy the sonic characteristics of the music or sound effects that I make. Specifically, a deaf person would perceive a horror game completely differently to a hearing person; arguably they wouldn't be able to fully experience the fear, which is the whole point!


Would it be as suspenseful and terrifying if you didn’t hear her footsteps get closer? 

I communicated with a deaf person named Michael Shway and their interpreter, Denise Green, about this topic to try and gain some further insight. What I was told, although not backed by any scientific measurement, is that at the cinema Michael noticed he could perceive low rumblings. I did some further research, and this article by Geoff Leventhall states that deaf people can definitely perceive loud low frequencies (around 100Hz at 90+dB, for all those audio engineers out there), like the ones you would feel in your chest. This could be something I experiment with: trying to create soundscapes for deaf people.
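As a first step toward experimenting with that, a test tone at one of those "felt" frequencies is easy to generate. This sketch writes a 100Hz sine to a mono WAV file using only Python's standard library (the filename is just an example):

```python
import math
import struct
import wave

def write_low_rumble(path, freq_hz=100, seconds=2.0, sample_rate=44100):
    """Write a mono 16-bit WAV containing a low sine tone."""
    n_frames = int(seconds * sample_rate)
    with wave.open(path, "w") as wav:
        wav.setnchannels(1)       # mono
        wav.setsampwidth(2)       # 16-bit samples
        wav.setframerate(sample_rate)
        frames = bytearray()
        for i in range(n_frames):
            sample = 0.8 * math.sin(2 * math.pi * freq_hz * i / sample_rate)
            frames += struct.pack("<h", int(sample * 32767))
        wav.writeframes(bytes(frames))

write_low_rumble("rumble_100hz.wav")
```

Played back loud through a subwoofer (carefully!), a tone like this is felt in the chest as much as heard, which is the sensation a low-frequency soundscape for deaf players would build on.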

Drum Beat Blog gives a few examples of ways you can make games more accessible for disabled gamers. He mentions using speech recognition as a means of menu navigation for blind players, or careful use of controller vibrations to give deaf players cues. These methods could be used to make games more accessible for people with disabilities, or some of the concepts could be adapted to make games specifically for people with disabilities, like a video game for blind people where all the game mechanics are audio based and there are no visuals.

I’m inexpressibly glad that the concept of Inclusive Design has been introduced to me and I’m actually really excited to start thinking about Inclusive Design the next time I’m working on a project. Any way I can change something or add something to make it more accessible for other groups will be fully considered in the future!
