How I did it - Unity's Viking Village (Middleware)
This piece of work was created for a university assignment that required me to implement sound into a game using middleware. An explanation of audio middleware may be in order for some; if you already know what audio middleware is, feel free to scroll on down.
Audio middleware is third-party software that allows sound designers to implement audio without needing full programming knowledge. The big names of the middleware world are Wwise and FMOD. Sound designers can import audio assets into the software and edit them in multiple ways to achieve the desired result. The audio can be contained within events that are activated via collider triggers or through code. These assets are then exported and can be accessed in the game engine once the middleware has been integrated. Although middleware lets sound designers implement audio without heavy coding, I believe that understanding basic programming helps sound designers get past problems that would otherwise stop them in their tracks, and many employers may welcome a sound designer with programming knowledge.
Initially I did a walkthrough of the level to understand the environment I was creating the audio for. This helped me plan which assets I needed to create for the game and what would fit the theme of the environment.
Once I had gained an understanding of the area, I began to create the audio, which I then edited and imported into my chosen middleware, Wwise. The footsteps were placed into two random containers, one for dirt and the other for wood. Random containers randomise the order in which their audio is played back, so assets are not repeated as often. To increase the apparent number of assets, I also altered the pitch of each one on playback. This is easily done by enabling pitch randomisation, which changes the pitch randomly each time an asset plays, increasing the variety without taking up more memory.
Below shows the switch groups that I created for the footsteps. Switches allow the game engine to tell Wwise which container to play. In this example I chose my Dirt footsteps as the default switch and the Wood footsteps as the alternate; any number of switches can be created if you wish. In the game engine a trigger can be created to switch from Dirt to Wood, playing the Wood footsteps when needed. If the player exits the trigger or there is an error, the audio defaults to the Dirt container, as this is my chosen default. This is an easy way to change footsteps depending on the surface, and it is simple to implement and code.
With the footstep audio completed, I decided to test the integration of Wwise into Unity. To do this you must download the integration package from Wwise's website and import it into Unity; Wwise also allows you to integrate through the launcher. The next step was to create colliders that would work as triggers to switch between the footstep audio. The colliders are shown below:
These colliders were used to switch the footstep audio. When the player enters the collider, the wooden footsteps play back, as there are wooden planks underfoot. Once the player leaves the trigger, the audio reverts to the default, as I specified this within the event. Both of these are accomplished by dragging and dropping the event from the Wwise tab onto the object's components. From there you can choose how the event should be driven by in-game events. This is the simplest way of creating interactive areas using the Wwise integration.
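If you prefer to drive the switch from code rather than drag-and-drop, the same behaviour can be sketched in a small script. This is a minimal, illustrative example: the switch group and state names ("Footstep_Surface", "Wood", "Dirt") and the "Player" tag are assumptions, and `AkSoundEngine` comes from the Wwise Unity integration.

```csharp
using UnityEngine;

// Hedged sketch: while the player is inside this trigger the footstep
// switch is set to "Wood"; on exit it reverts to "Dirt".
public class FootstepSurfaceTrigger : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
            AkSoundEngine.SetSwitch("Footstep_Surface", "Wood", other.gameObject);
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player"))
            AkSoundEngine.SetSwitch("Footstep_Surface", "Dirt", other.gameObject);
    }
}
```

Attach this to a box collider marked "Is Trigger" and Wwise will pick the correct random container the next time the footstep event fires.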
For the footstep audio to play whenever the player's foot hits the ground, I needed to write a few short lines of code. Thankfully, Unity's first-person controller already contained footstep code; I just needed to replace the playback call so that the Wwise audio would output instead of Unity's.
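The replacement boils down to swapping Unity's `AudioSource` call for a Wwise event post. A hedged sketch of what that method might look like inside the standard first-person controller (the event name "Play_Footsteps" is an assumption; the field names follow Unity's Standard Assets controller):

```csharp
// Inside Unity's FirstPersonController script:
private void PlayFootStepAudio()
{
    if (!m_CharacterController.isGrounded)
        return;
    // Instead of m_AudioSource.PlayOneShot(...), post the Wwise event.
    // The current switch value decides whether dirt or wood samples play.
    AkSoundEngine.PostEvent("Play_Footsteps", gameObject);
}
```

Because the switch group handles surface selection, this one call covers both containers.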
Knowing that the integration of Wwise into Unity was successful, I continued importing my audio assets into the middleware. The game level is set next to the ocean, so I needed an ocean soundscape in the background to strengthen the illusion. This recording was fairly easy, as I live near the sea. I did not want the audio to overwhelm the player, so I created states in Wwise that change the volume of the audio based on the player's location. I could have used an RTPC to change the volume level, but I opted for states to gain more control and make mixing the audio easier.
The states change the volume of the wave audio relative to the player's position; the transition between volumes occurs via a trigger and gradually increases or decreases over four seconds. The image below shows the different states and the audio volume set for each.
The levels took some fine tuning, but I finally settled on my preferred volumes. These were imported into Unity, where I then created the triggers that would change the states and therefore the volume. I created three large box colliders to cover the whole area, making sure the player would always trigger the audio wherever they were in the level. I also kept spaces between the colliders to stop the audio from constantly fluctuating. The volume changes are subtle, yet you can tell that you are moving closer to the sea.
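Each of those box colliders just needs to set a Wwise state when the player enters it. A minimal sketch, assuming a state group called "OceanVolume" with states like "Near", "Mid" and "Far" (all names are illustrative):

```csharp
using UnityEngine;

// Hedged sketch: one of these sits on each of the three box colliders,
// with stateValue set in the Inspector to "Near", "Mid" or "Far".
public class OceanStateTrigger : MonoBehaviour
{
    [SerializeField] private string stateValue = "Far";

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
            AkSoundEngine.SetState("OceanVolume", stateValue);
    }
}
```

The four-second fade between volumes is handled by the state transition time set in Wwise, so the script itself stays tiny.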
Littered around the environment are torches that follow the path the player walks along. Although torches make relatively little noise, I did not want to miss the opportunity to test my skills. Within Wwise I enabled the 3D position attribute for the fire audio, which allowed me to attenuate (alter) the volume of the audio over distance. I shortened the maximum distance within the 3D position settings so the audio cuts off once the player is past my chosen threshold. I chose a small logarithmic curve to ensure that the player could hear the torch without needing to be right next to it.
Once the attenuation had been set, I was able to import the asset and choose where to place the audio in the world. I began by placing the event component onto an empty object and moving the object onto each torch. By the time I got to my third torch I realised this would take too much time and opted for an alternate strategy. This was when I thought of the most amazing idea!
To place the audio on an existing object! (I must admit at the time this seemed inventive but I soon realised that it was a simple process...)
After my revolutionary idea I pressed on to adding personality to the game. I wanted to add voices to a building to break up the area. I again enabled the 3D position attribute, this time on my pub audio, and began creating a curve for the attenuation. I increased the distance and added another logarithmic curve to create the illusion of rowdy occupants that could be heard from afar. The drop-off worked well, but I needed to further process the audio to make the player believe the voices were coming from inside the building. Sound can travel over long distances, but not all sound can travel through objects.
Sound is made up of frequencies. An easy way to think of it: all individuals have separate personalities, which is what makes every person different. Frequencies are the personalities of sound and are what make each sound different. Anyway, back to the pub audio!
High frequencies struggle to travel through solid objects, while low frequencies pass through most materials. This is why you can always hear the bass of music coming from a car even with all the doors and windows shut. To recreate this for my pub audio, I increased the low-pass filter level. A low-pass filter lets the low frequencies through while cutting off the frequencies above its cutoff point.
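In Wwise this is just a slider, but the underlying idea is simple enough to show in a few lines. This is purely an illustrative one-pole low-pass filter, not what Wwise uses internally:

```csharp
// Illustrative sketch of a one-pole low-pass filter: each output sample
// blends the input with the previous output, so fast (high-frequency)
// changes are smoothed away while slow (low-frequency) changes pass through.
public class OnePoleLowPass
{
    private float previous;
    private readonly float alpha; // 0..1; lower values sound more muffled

    public OnePoleLowPass(float alpha)
    {
        this.alpha = alpha;
    }

    public float Process(float input)
    {
        previous += alpha * (input - previous);
        return previous;
    }
}
```

Turning up the low-pass amount in Wwise is conceptually like lowering `alpha` here: the voices lose their crisp high end and sound like they are behind a wall.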
I then added the pub audio into the game, attaching it to an empty game object placed within a building.
The final piece of the project was to add music to the game. This proved to be the most complex chapter, as I was new to Wwise's interactive music, but nevertheless I pushed on. I created a small piece using paid virtual instruments and tried to make it as interactive as I could. The piece consisted of a Flute and a Nyckelharpa (an old Swedish instrument, perfect for the viking feel!). I chose these instruments because the Flute played long notes, giving a feeling of vastness and exploration, while the Nyckelharpa brought in the viking feel and worked well with the flute.
I created two melody lines that would play when walking through the level. These melodies were added to Wwise and placed into two separate music segments, simply named "Melody1" and "Melody2". The segments were then placed into a music playlist; the playlist lets you edit the playback of the music segments by choosing which melody plays next, the transition time and when to loop the pieces.
I chose to play both melodies one after the other and loop the whole piece. I then created states that would trigger each instrument and alter the volume: one state triggered the Flute, another the Nyckelharpa, and the last triggered all instruments together.
I decided that the player would start the game without music so they could take in the other audio first. To ensure this I created three separate box colliders, each controlling one of the state triggers. I also wanted the flutes to play before the other instruments, so I wrote a script that enables the other two triggers once the flute collider has been entered. This let me control when the player would hear the music if they decided to explore before heading towards the pier. I also needed to destroy the component that starts the music; otherwise the music would restart each time the player entered the trigger.
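The logic above can be sketched in one small script. All names here are assumptions for illustration: the "MusicState" state group, the "Flute" state, and the two trigger objects assigned in the Inspector.

```csharp
using UnityEngine;

// Hedged sketch of the music start trigger: entering the flute collider
// sets the music state, enables the other two trigger objects, then
// destroys this component so re-entering cannot restart the music.
public class MusicStartTrigger : MonoBehaviour
{
    [SerializeField] private GameObject nyckelharpaTrigger;
    [SerializeField] private GameObject fullEnsembleTrigger;

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player"))
            return;

        AkSoundEngine.SetState("MusicState", "Flute"); // start with flutes only
        nyckelharpaTrigger.SetActive(true);            // unlock the other triggers
        fullEnsembleTrigger.SetActive(true);

        Destroy(this); // remove the script so the music cannot be re-triggered
    }
}
```

Destroying the component (rather than the whole object) keeps the collider in the scene while guaranteeing the start logic only ever runs once.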
And that is how I did it! This required research, coding knowledge and a thorough read-through of Wwise 101 to get the results I wanted and to understand the full potential of middleware.
Thank you so much if you have read through all of this! To any wannabe sound designers out there, I hope this helps you! Anyone can contact me via the message system on my main page!