Web Audio and 3D Soundscapes: Implementation

In this tutorial we will wrap Web Audio in a simple API that focuses on playing sounds within a 3D coordinate space, and can be used for immersive interactive applications including, but not limited to, 3D games.

This tutorial is the second in a two-part series. If you have not read the first tutorial in the series, you should do so before reading this one, because it introduces the various Web Audio elements we will be using here.

Demonstration

Before we get started, here's a small demonstration that uses the simplified API that we will be covering in this tutorial. Sounds (represented by the white squares) are randomly positioned and played in a 3D coordinate space using the head-related transfer function (HRTF) that Web Audio provides for us.

The source files for the demonstration are attached to this tutorial.

Overview

The simplified API (AudioPlayer) has already been written for this tutorial and is available for download, so what we will do here is take a broad look at the AudioPlayer API and the code that powers it.

If you are new to the world of Web Audio and haven't yet read the previous tutorial in this series, please do so before continuing.

AudioPlayer

The AudioPlayer class contains our simplified API and is exposed on the window object alongside the standard Web Audio classes if, and only if, Web Audio is supported by the web browser. This means we should check for the existence of the class before we attempt to use it.

if (window.AudioPlayer !== undefined) {
    audioPlayer = new AudioPlayer()
}

(We could have tried to create a new AudioPlayer object within a try...catch statement, but a simple conditional check works perfectly well.)

Behind the scenes, the audioPlayer creates a new AudioContext object and a new GainNode object for us, and connects the GainNode object to the destination node exposed by the AudioContext object.

var m_context = new AudioContext()
var m_gain = m_context.createGain()
...
m_gain.connect(m_context.destination)

When sounds are created and played, they will be connected to the m_gain node; this allows us to control the volume (amplitude) of all the sounds easily.

The audioPlayer also configures the audio listener, exposed by m_context, so it matches the common 3D coordinate system used with WebGL. The positive z axis points at the viewer (in other words, it points out of the 2D screen), the positive y axis points up, and the positive x axis points to the right.

m_context.listener.setOrientation(0, 0, -1, 0, 1, 0)

The position of the listener is always zero; it sits at the centre of the audio coordinate system.
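
To make that explicit in code, the listener position could be set to the origin like this (zero is already the default, so AudioPlayer does not actually need to make this call; it is shown purely for illustration):

// The listener sits at the origin of the audio coordinate system.
// Zero is the default position, so this call is purely illustrative.
m_context.listener.setPosition(0, 0, 0)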

Loading Sounds

Before we can create or play any sounds, we need to load the sound files; luckily enough audioPlayer takes care of all the hard work for us. It exposes a load(...) function that we can use to load the sounds, and three event handlers that allow us to keep track of the load progress.

audioPlayer.onloadstart = function() { ... }
audioPlayer.onloaderror = function() { ... }
audioPlayer.onloadcomplete = function() { ... }

audioPlayer.load("sound-01.ogg")
audioPlayer.load("sound-02.ogg")
audioPlayer.load("sound-03.ogg")

The set of sound formats that are supported is browser-dependent. For example, Chrome and Firefox support OGG Vorbis but Internet Explorer doesn't. All three browsers support MP3, which is handy, but the problem with MP3 is the lack of seamless sound looping; the MP3 format simply isn't designed for it. OGG Vorbis is, however, and can loop sounds perfectly.
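
If you need to support a mix of browsers, one possible approach (not something AudioPlayer does for you) is to pick a file extension at runtime using the canPlayType(...) method of an audio element; the file names below are hypothetical:

// Hypothetical format check: prefer OGG Vorbis, fall back to MP3.
var probe = document.createElement("audio")
var extension = probe.canPlayType("audio/ogg") !== "" ? ".ogg" : ".mp3"

audioPlayer.load("sound-01" + extension)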

When calling the load(...) function multiple times, audioPlayer will push the requests into a queue and load them sequentially. When all of the queued sounds have been loaded (and decoded) the onloadcomplete event handler will be called.

Behind the scenes, audioPlayer uses a single XMLHttpRequest object to load the sounds. The responseType of the request is set to "arraybuffer", and when the file has loaded the array buffer is sent to m_context for decoding.

// simplified example

m_loader = new XMLHttpRequest()
m_queue = []

function load() {
    m_loader.open("GET", m_queue[0])
    m_loader.responseType = "arraybuffer"
    m_loader.onload = onLoad
    m_loader.send()
}

function onLoad(event) {
    var data = m_loader.response
    var status = m_loader.status

    m_loader.abort() // resets the loader

    if (status < 400) {
        m_context.decodeAudioData(data, onDecode)
    }
}

If the loading and decoding of a file is successful, audioPlayer will either load the next file in the queue (if the queue is not empty) or let us know that all the files have been loaded.
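
A rough sketch of the onDecode callback referenced above might look like the following; the m_buffers store and the direct onloadcomplete() call are assumptions made for illustration, not the exact AudioPlayer internals:

var m_buffers = {} // decoded AudioBuffer objects, keyed by file path (assumed)

function onDecode(buffer) {
    // Remember the decoded buffer and remove the file from the queue.
    m_buffers[m_queue[0]] = buffer
    m_queue.shift()

    if (m_queue.length > 0) {
        load() // more files are queued, so load the next one
    } else {
        onloadcomplete() // every queued file has been loaded and decoded
    }
}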

Creating Sounds

Now that we have loaded some sound files we can create and play our sounds. We first need to tell audioPlayer to create the sounds, and this is done by using the create(...) function exposed by audioPlayer.

var sound1 = audioPlayer.create("sound-01.ogg")
var sound2 = audioPlayer.create("sound-02.ogg")
var sound3 = audioPlayer.create("sound-03.ogg")

We are free to create as many sounds as we need even if we have only loaded a single sound file.

var a = audioPlayer.create("beep.ogg")
var b = audioPlayer.create("beep.ogg")
var c = audioPlayer.create("beep.ogg")

The sound file path passed to the create(...) function simply tells audioPlayer which file the created sound should use. If the specified sound file has not been loaded when the create(...) function is called, a runtime error will be thrown.
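
A safe pattern, therefore, is to create sounds only after the onloadcomplete handler has fired:

audioPlayer.onloadcomplete = function() {
    // Everything queued with load(...) is now loaded and decoded,
    // so it is safe to create sounds from those files.
    var sound1 = audioPlayer.create("sound-01.ogg")
}

audioPlayer.load("sound-01.ogg")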

Playing Sounds

When we have created one or more sounds, we are free to play those sounds whenever we need to. To play a sound, we use the aptly named play(...) function exposed by audioPlayer.

audioPlayer.play(sound1)

We can also pass a Boolean to the play(...) function to indicate whether the sound should loop. If the Boolean is true, the sound will loop continuously until it is stopped.

audioPlayer.play(sound1, true)

To stop a sound, we can use the stop(...) function.

audioPlayer.stop(sound1)

The isPlaying(...) function lets us know whether a sound is currently playing.

if (audioPlayer.isPlaying(sound1)) { ... }

Behind the scenes, the audioPlayer has to do a surprising amount of work to get a sound to play, due to the modular nature of Web Audio. Whenever a sound needs to be played, audioPlayer has to create new AudioBufferSourceNode and PannerNode objects, configure and connect them, and then connect the sound to the m_gain node. Thankfully, Web Audio is highly optimized, so the creation and configuration of new audio nodes rarely causes any noticeable overhead.

sound.source = m_context.createBufferSource()
sound.panner = m_context.createPanner()

sound.source.buffer = sound.buffer
sound.source.loop = loop
sound.source.onended = onSoundEnded

// This is a bit of a hack but we need to reference the sound
// object in the onSoundEnded event handler, and doing things
// this way is more optimal than binding the handler.
sound.source.sound = sound
			
sound.panner.panningModel = "HRTF"
sound.panner.distanceModel = "linear"
sound.panner.setPosition(sound.x, sound.y, sound.z)

sound.source.connect(sound.panner)
sound.panner.connect(m_gain)

sound.source.start()
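
The onSoundEnded handler referenced above is where the sound would be marked as stopped and its per-playback nodes released. A minimal sketch (the field names and clean-up steps are assumptions about the internals, not the exact AudioPlayer code) could be:

function onSoundEnded(event) {
    // The sound object was attached to the source node before playback started.
    var sound = event.target.sound

    // Mark the sound as stopped and release the nodes created for this
    // playback, so fresh ones can be created the next time it is played.
    sound.playing = false
    sound.source.disconnect()
    sound.panner.disconnect()
    sound.source = null
    sound.panner = null
}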

Playing sounds is obviously useful, but the purpose of audioPlayer is to play sounds within a 3D coordinate system, so we should probably set the sound positions before playing them. audioPlayer exposes a few functions that allow us to do just that.

Positioning Sounds

  • The setX(...) and getX(...) functions exposed by audioPlayer can be used to set and get the position of a sound along the coordinate system's x axis.
  • The setY(...) and getY(...) functions can be used to set and get the position of a sound along the coordinate system's y axis.
  • The setZ(...) and getZ(...) functions can be used to set and get the position of a sound along the coordinate system's z axis.
  • Finally, the helpful setPosition(...) function can be used to set the position of a sound along the coordinate system's x, y, and z axes respectively.
audioPlayer.setX(sound1, 100)
audioPlayer.setZ(sound1, 200)

console.log(audioPlayer.getX(sound1)) // 100
console.log(audioPlayer.getZ(sound1)) // 200

audioPlayer.setPosition(sound1, 300, 0, 400)

console.log(audioPlayer.getX(sound1)) // 300
console.log(audioPlayer.getZ(sound1)) // 400

The farther a sound is from the listener at the centre of the coordinate system, the quieter it will be. At a distance of 10000 (the default maxDistance of a Web Audio panner node), a sound will be completely silent.
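
Behind the scenes, the position setters most likely just store the new coordinates on the sound object (the playback code above reads sound.x, sound.y, and sound.z) and, if the sound is already playing, forward them to its PannerNode. A rough sketch of setPosition(...), not the exact AudioPlayer internals:

function setPosition(sound, x, y, z) {
    sound.x = x
    sound.y = y
    sound.z = z

    // A playing sound already has a PannerNode, so update it immediately.
    if (sound.panner !== null) {
        sound.panner.setPosition(x, y, z)
    }
}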

Volume

We can control the global (master) volume of the sounds by using the setVolume(...) and getVolume(...) functions exposed by audioPlayer.

audioPlayer.setVolume(0.5) // 50%

console.log(audioPlayer.getVolume()) // 0.5

The setVolume(...) function also has a second parameter that can be used to fade the volume over a period of time. For example, to fade the volume to zero over a two second period, we could do the following:

audioPlayer.setVolume(0.0, 2.0)

The tutorial demo takes advantage of this to fade in the sounds smoothly.
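
For example, the sounds could be started at zero volume and then ramped up, along these lines (the exact values used by the demo may differ):

audioPlayer.setVolume(0.0)      // start silent
audioPlayer.setVolume(1.0, 5.0) // fade to full volume over five seconds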

Behind the scenes, the audioPlayer simply tells the m_gain node to linearly change the gain value whenever the volume needs to be changed.

var currentTime = m_context.currentTime
var currentVolume = m_gain.gain.value

m_gain.gain.cancelScheduledValues(0.0)
m_gain.gain.setValueAtTime(currentVolume, currentTime)
m_gain.gain.linearRampToValueAtTime(volume, currentTime + time)

audioPlayer enforces a minimum fade time of 0.01 seconds, to ensure that steep changes in volume don't cause any audible clicks or pops.
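
Inside AudioPlayer, that could be as simple as clamping the requested fade time before the ramp is scheduled (a sketch; the actual variable name may differ):

// Never ramp faster than 10 milliseconds, to avoid audible clicks and pops.
time = Math.max(time, 0.01)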

Conclusion

In this tutorial, we took a look at one way to wrap Web Audio in a simple API that focuses on playing sounds within a 3D coordinate space for use in (among other applications) 3D games.

Due to the modular nature of Web Audio, programs that use Web Audio can get complex pretty quickly, so I hope this tutorial has been of some use to you. When you understand how Web Audio works, and how powerful it is, I'm sure you will have a lot of fun with it.

Don't forget the AudioPlayer and demonstration source files are available on GitHub and ready for download. The source code is commented fairly well so it's worth taking the time to have a quick look at it.

If you have any feedback or questions, please feel free to post a comment below.
