Unity Audio vs Wwise


To start with, I wanted to do a general investigation into Wwise, the audio middleware by Audiokinetic that integrates with Unity. When I started working through it I figured it would be more interesting to look at Wwise in comparison to Unity’s own audio API and Mixer components, which have been around since Unity 5.

To do that I’m going to compare the same game across three different builds. Build one is its original state, with simple scripts that call AudioSource.Play(). In the second build I’ll add another layer of complexity by using Unity’s built-in Mixer and see if there are any differences or advantages. Lastly I’ll redo the project with the Wwise API and investigate how that impacts build size and project complexity, and weigh it up against the previous two builds. Mostly I’m looking for differences in performance between the three builds, plus build size and complexity, weighed against ease of implementation and flexibility.

I refreshed an old project called “MusicVisualiser” that I started for my Five Games in Ten Weeks Challenge. The game is like a singing solar system: there are a bunch of “planets” in the night sky that each play a set piece of music when clicked. It’s a really simple concept and project, but I think it will work for this comparison as the parameters can be limited to just a few audio tracks while still letting us play with spacing, roll-off and other advanced audio features.

Let’s have a look at the game first.

These “planets” are simple native Unity sphere meshes, each with an Audio Source component and a particle system that’s triggered when it’s clicked. You can see in the Audio Source that we are not using a Mixer for the Output, so all the Audio Sources compete for resources and play at their default volume and priority.
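
Without a Mixer, any balancing has to happen per source. For context (this script isn’t in the project, it’s just a minimal sketch of Unity’s built-in AudioSource API), these are the kinds of per-source properties each planet would otherwise manage on its own:

    using UnityEngine;

    // Minimal sketch only: tuning one Audio Source in code, since without a
    // Mixer every source carries its own volume, priority and roll-off.
    [RequireComponent(typeof(AudioSource))]
    public class PlanetAudioSettings : MonoBehaviour
    {
        void Awake()
        {
            var source = GetComponent<AudioSource>();
            source.volume = 0.8f;                              // 0..1, per-source level
            source.priority = 128;                             // 0 (highest) to 256 (lowest)
            source.spatialBlend = 1f;                          // fully 3D positioning
            source.rolloffMode = AudioRolloffMode.Logarithmic; // distance attenuation curve
            source.minDistance = 1f;
            source.maxDistance = 50f;
        }
    }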

The PlayMe script just takes in the AudioSource and plays it:

    public AudioSource my_sound;

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // Work out which object was clicked. GetClickedObject() and the
            // target, my_name, _mouseState, screenSpace and offset fields are
            // declared elsewhere in the script.
            RaycastHit hitInfo;
            target = GetClickedObject(out hitInfo);
            if (target != null && target.name == my_name)
            {
                _mouseState = true;
                screenSpace = Camera.main.WorldToScreenPoint(target.transform.position);
                offset = target.transform.position - Camera.main.ScreenToWorldPoint(new Vector3(Input.mousePosition.x, Input.mousePosition.y, screenSpace.z));

                my_sound.Play();   // This is the Audio Source component!
                var expl1 = GetComponent<ParticleSystem>();
                expl1.Play();
            }
        }
    }

Pretty simple, right? This is what the project looks like in the Profiler when it’s running and being actively engaged with. At that point we are looking at two Audio Sources playing:

This is the build size from the Editor Log with our Audio Files broken out:

Build Report (Audio.Play)
Uncompressed usage by category:
Textures 0.0 kb 0.0%
Meshes 0.0 kb 0.0%
Animations 0.0 kb 0.0%
Sounds 547.5 kb 1.4%
Shaders 188.0 kb 0.5%
Other Assets 1.4 kb 0.0%
Levels 38.3 kb 0.1%
Scripts 941.9 kb 2.4%
Included DLLs 3.9 mb 10.2%
File headers 9.3 kb 0.0%
Complete size 38.6 mb 100.0%

Used Assets and files from the Resources folder, sorted by uncompressed size:
204.3 kb 0.5% Assets/SomethingLurks_AAS.wav
164.5 kb 0.4% Assets/Step2Down_AAS.wav
136.9 kb 0.3% Assets/Underwater_AAS.wav
41.8 kb 0.1% Assets/M1_M12_37_ThumPiano_Aflat1.wav

Unity Audio with Mixer

Now we add in the Mixer component to the project:

Then I add a couple of Channels to the Mixer to split the audio between left and right, and each Audio Source is dropped into one or the other of the Mixer Channels:

Adding the Mixer as the Output source

Next, for a bit more interest, I added some effects in the Mixer. Here is where we see the advantages of using the Unity Mixer: sounds can be manipulated in complex ways, and the Audio Output chain can be defined with presets, levels and so on.
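
As an aside, the same routing and effect levels can also be driven from script through Unity’s AudioMixer API. Here’s a minimal sketch (the group name "Left" and the exposed parameter "LeftVolume" are made-up names for this example, not part of the project):

    using UnityEngine;
    using UnityEngine.Audio;

    // Minimal sketch: routing an Audio Source into a Mixer group and driving
    // an exposed Mixer parameter from code.
    public class MixerRoutingExample : MonoBehaviour
    {
        public AudioMixer mixer;     // assign the Mixer asset in the Inspector
        public AudioSource source;

        void Start()
        {
            // "Left" is a hypothetical group name on the Mixer.
            AudioMixerGroup[] groups = mixer.FindMatchingGroups("Left");
            if (groups.Length > 0)
                source.outputAudioMixerGroup = groups[0];

            // "LeftVolume" must be exposed on the Mixer; values are in dB.
            mixer.SetFloat("LeftVolume", -6f);
        }
    }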

If we have a look at our Profiler while running with the new Mixer component, we cannot really see any great differences. The ‘Others’ section of the CPU Usage is a bit higher, and the Garbage Collector in the Memory graph is pumping regularly, but the Audio stats look pretty much unchanged:

Profiler Mixer

Mind you, this is a fairly low-utilisation game, so we might get wildly different stats if we were really putting the system under the pump, but I’m not performance testing here, just comparing run states between the two builds.

Next, if we build the game and have a look at the Editor Log, the only thing that’s changed is that the “Other Assets” size is about a kilobyte higher (the Complete size has not changed):

Build Report (Mixer)
Uncompressed usage by category:
Textures 0.0 kb 0.0%
Meshes 0.0 kb 0.0%
Animations 0.0 kb 0.0%
Sounds 547.5 kb 1.4%
Shaders 188.0 kb 0.5%
Other Assets 2.3 kb 0.0%
Levels 38.3 kb 0.1%
Scripts 941.9 kb 2.4%
Included DLLs 3.9 mb 10.2%
File headers 9.3 kb 0.0%
Complete size 38.6 mb 100.0%

Unity with Wwise

Next we are going to add Wwise to the project. This is the basic workflow: we register our project in the Wwise Launcher, and then in the Wwise authoring tool’s Project Explorer we are presented with three Hierarchies on the first tab.

Project Audio Explorer in Wwise

The Master-Mixer Hierarchy – does what it says.
The Actor-Mixer Hierarchy – where most of your game audio develops (use the SoundSFX defaults).
Interactive Music Hierarchy – other stuff we won’t get into.

Events Tab

The next tab along is the Events tab, where you link your audio to game events. You can define your event here (use the default Work Unit).
Once you’ve got the event there, you can associate it with the audio in the Action List.

SoundBank Tab – this is the bit that gets imported into your project.

Next you generate a SoundBank with Wwise that includes your audio and the code for the API calls to trigger sounds. You export that SoundBank into your game engine and link up the calls in your code.
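
In Unity, that linking boils down to making sure the SoundBank is loaded and then posting the event by name. Here’s a minimal sketch of the pattern, assuming the Wwise Unity integration is installed (the bank name "Main" and event name "Play_MySound" are placeholders, and the bank-loading helper’s exact signature can vary between integration versions):

    using UnityEngine;

    public class WwiseTriggerExample : MonoBehaviour
    {
        void Start()
        {
            // The SoundBank must be loaded before its events can play. This can be
            // done with an AkBank component in the editor, or in code (assumed
            // signature; it differs slightly between integration versions).
            AkBankManager.LoadBank("Main", false, false);
        }

        void OnMouseDown()
        {
            // Post the Wwise event by name on this game object.
            AkSoundEngine.PostEvent("Play_MySound", gameObject);
        }
    }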

To Get Started with Wwise

To get started, make an account with Audiokinetic and download the Wwise Launcher. The Integration package for Unity can be downloaded and installed directly from the Wwise Launcher.

In the Wwise Launcher there is a WWISE tab from which you can install and start the application. Once you open it up you need to register your project within the Launcher so Wwise can track you 🙂 (click on the key icon next to your Wwise project and select ‘Register your Project to obtain a License’). Wwise will run in Trial mode, which restricts SoundBank content to 200 media assets and cannot be used for commercial purposes. Pricing for licensing is on their site, but this is not a sales piece, so if you want it you can look it up.

There are a bunch of plugins available from Audiokinetic and their partners, as well as community offerings like AudioRain, a dedicated rain synth with 60 procedurally generated presets for rain. What’s not to love about that!

There is a Wwise SDK for authoring your own plugins and a Wwise API which allows you to integrate into any engine, tool or application.

Audiokinetic also offer certifications that cover audio integration workflows, mixing virtual soundscapes, working with sound triggering systems, and performance optimisation: https://www.audiokinetic.com/learn/certifications/

Basically in Wwise you let the Launcher do all the setting up for you. You will install the Wwise binaries from here and manage your platform versions. Projects can be integrated here and if you don’t have the necessary plugins installed the Wwise Launcher will install them for you.

Integrating the MusicVisualiser project with Wwise.
This is how big the Wwise Integration packages and binaries are.
Applying…
Done!

That’s basically it for the set up of Wwise and Integration with your Project. Next up we will have a look at what this has done to the Unity Console.

Wwise in Unity

The first thing we see is a bunch of errors that can be safely ignored: since we have not yet configured any audio files or events in our Wwise project, there is no SoundBank to generate.

Unity – Initial Errors can be ignored if you have not generated your SoundBank yet.

In the Unity Editor we have a new tab: the Wwise Picker, which contains all the elements of the Wwise project that have been imported with the project integration. There is also a WwiseGlobal Game Object in the Unity Hierarchy and all the Wwise folders in the Assets folder.

Unity Editor
The WwiseGlobal Game Object

Under the Component pull-down menu there is a whole slew of Ak (Audiokinetic) options.

Wwise Components.
Wwise Configuration Settings.

I know there has been a lot of “show and tell” in this post but I’m going to keep going and show the process of importing the audio into the Wwise Project, creating Events, and Generating the SoundBank.

Working in Wwise

In the Wwise Project Explorer I right-click on the Default Work Unit and import the audio files that were part of my project. (I’ve stripped the raw files out of my Unity project for now and removed all the Mixer components etc.)

Importing Audio Files into the Wwise Project.
This is what the files look like.
Right click on the file to create a new Event (which can be called in the Unity code).
Here is the event created for “Play”.
And all my “Play” events.

Finally, a SoundBank is generated, from which the Unity project can access the sound files through the Audiokinetic API.

Generating a SoundBank

Wwise Audio in Unity

When we go back to our Unity Editor, Refresh the Project and Generate SoundBanks, we are presented with the following in the Wwise Picker. We can now access these files and drag them onto our game objects directly. It’s that simple: drag a sound from the Picker onto a Game Object and it automagically creates a component that is immediately accessible from within the editor.

The new audio imported into the Wwise Picker.

Below, the Play_Underwater_AAS event and audio file have been added to the Sphere Game Object.

The Triggers, Actions, and Callbacks can all be configured and accessed through the API. In my case I easily integrated the functionality I wanted with only a one-line change to the attached PlayMe.cs script that we looked at above. So now, instead of the audio coming from the AudioSource component referenced by my_sound, the audio is played by an AkSoundEngine.PostEvent call.

            //my_sound.Play();
            AkSoundEngine.PostEvent("Play_Underwater_AAS", this.gameObject);
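
As a side note, when the SoundBanks are generated the integration also produces a Wwise_IDs.cs file of numeric event IDs, so the same call can be made without the string literal. A hedged sketch, assuming the generated constant follows the usual upper-case naming convention:

            // Equivalent call using the generated ID constants instead of a string
            // (constant name assumed from the generated Wwise_IDs.cs naming convention).
            AkSoundEngine.PostEvent(AK.EVENTS.PLAY_UNDERWATER_AAS, this.gameObject);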

Actually getting Wwise installed, set up and integrated with my project was very, very easy, but not without bumps. It takes a very long time for packages to download, and I had a bit of trouble upgrading my Wwise Launcher from an old version (it got stuck! and I had to remove it by hand and re-install). When I did have issues I got some very excellent help from Audiokinetic: after logging a case I was emailed directly by a real person, which honestly was so surprising and wonderful, that kind of support from a company when I’m on a trial license with no formal support agreement or rights.

So let’s have a look at the differences in performance and package size. The first thing you notice in the Profiler below is that there is very little difference in performance, but we can no longer see our audio stats as they have been abstracted away from the Unity Engine. The graph still shows the resources being used by Audio, and the Total Audio CPU seems to be up to a third lower than the native Unity Audio statistics: it looks like it’s being clamped at just over 1.2 MB instead of regular peaks over 3 MB.

Profiler with Wwise Audio running.

The Build Report is only a couple of MB larger for the total project size:

Build Report
Uncompressed usage by category:
Textures 0.0 kb 0.0%
Meshes 0.0 kb 0.0%
Animations 0.0 kb 0.0%
Sounds 0.0 kb 0.0%
Shaders 188.0 kb 0.5%
Other Assets 7.3 kb 0.0%
Levels 38.5 kb 0.1%
Scripts 1.3 mb 3.1%
Included DLLs 3.9 mb 9.7%
File headers 13.2 kb 0.0%
Complete size 40.5 mb 100.0%

Basically a 2 MB difference! The Sounds category has been zeroed out in the Build Report; we assume the audio is now packed into the generated SoundBank rather than being counted as native Unity sound assets.

I’m kinda blown away by how little additional file size there is to the build, considering the additional libraries, code and available complexity that Wwise adds. There is literally a plethora of options and effects that we can play with in the Wwise package. It’s a bit like the excitement I got after the install of my first real audio DAW: the scope is part mind-boggling and part fantastical wonder at where we can go next. (Audio does get me unusually stimulated, but that’s to be expected and tempered accordingly.)

The questions I wanted to answer with this whole experiment were: 1. Would including an audio middleware like Wwise make my project more complex and difficult to manage? 2. Would the added package make my build much larger? And 3. Would the performance of the audio package be as good as the simple Unity Audio API? The answers are: No, No, and Yes. So I’m pretty happy with that, and if the cost of the licensed version of Wwise is balanced out against the advantages it brings to the total cost of the project, then I would most definitely, one hundred percent, go for it.
