
Audio for Web games


Audio is an essential part of any video game; it provides feedback and contributes to the game's atmosphere. Browser audio support has evolved rapidly, but there are still many differences in support between browsers. We often need to decide which parts of our audio content are essential and which are nice to have, and devise a strategy accordingly. This article provides a detailed guide to implementing audio for HTML5 games, looking at which technologies will work across the widest range of browsers.

Mobile audio caveats

By far the most difficult platforms to provide audio support for are mobile devices. Unfortunately these are also the platforms people most often use to play games. There are a couple of differences between desktop and mobile browsers that may have pushed browser vendors to make choices that can make audio difficult for game developers to work with. Let's look at these now.

Autoplay


Many mobile browsers will simply ignore any requests made by your game to automatically play audio; instead playback for audio needs to be started by a user-initiated event. This means you will have to structure your audio playback to take account of that. This is usually mitigated by loading the audio in advance and priming it on a user-initiated event.

For more passive audio auto play, for example background music that starts as soon as a game loads, one trick is to detect *any* user initiated event and start playback then. For other more active sounds that are to be used during the game we could consider priming them as soon as something like a start button is pressed.

To prime audio like this we want to play a part of it; for this reason it is useful to include a moment of silence at the end of your audio sample. Jumping to, playing and pausing that silence will mean we can now use JavaScript to play that file at arbitrary points.

Note: Playing part of your file at zero volume could also work if the browser allows you to change volume (see below). Also note that playing and immediately pausing your audio does not guarantee that a small piece of audio won't be played.
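As a sketch of this priming technique (the helper names and the choice of the touchend event here are our own, not a standard API), we might briefly play and pause every audio element on the first user interaction:

```javascript
// Play and immediately pause an element so that later, programmatic
// play() calls are permitted by the browser.
function primeAudio(audio) {
  audio.play();
  audio.pause();
}

// On the first touch interaction, prime every <audio> element on the
// page, then unregister the handler so this only happens once.
function primeAllOnFirstInteraction(doc) {
  function handler() {
    var elements = doc.getElementsByTagName('audio');
    for (var i = 0; i < elements.length; i++) {
      primeAudio(elements[i]);
    }
    doc.removeEventListener('touchend', handler);
  }
  doc.addEventListener('touchend', handler);
}
```

In a real game you would call `primeAllOnFirstInteraction(document)` during setup, or prime only the sounds you need when a start button is pressed.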

Note: Adding a web app to your mobile home screen may change its capabilities. In the case of autoplay on iOS, this appears to be the case currently. If possible, you should try your code on several devices and platforms to see how it works.

Volume

Programmatic volume control may be disabled in mobile browsers. The reason often given is that the user should be in control of the volume at the OS level and this shouldn't be overridden.
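One feasible way to detect this restriction, sketched below on the assumption that a browser which disallows volume control will leave `volume` unchanged after an assignment (the `canChangeVolume()` helper is our own, not a standard API):

```javascript
// Attempt to change the volume, read it back, then restore the
// original value. Returns false on browsers (e.g. iOS Safari) that
// silently ignore volume assignments.
function canChangeVolume(audio) {
  var previous = audio.volume;
  audio.volume = 0.5;
  var changed = (audio.volume === 0.5);
  audio.volume = previous; // put things back as we found them
  return changed;
}
```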

Buffering and preloading

Likely as an attempt to mitigate runaway mobile network data use, we also often find that buffering is disabled before playback has been initiated. Buffering is the process of the browser downloading the media in advance, which we often need to do to ensure smooth playback.

Note: In many ways the concept of buffering is an outdated one. As long as byte-range requests are accepted (which is the default behavior), we should be able to jump to a specific point in the audio without having to download the preceding content. However preloading is still useful; without it, there would always need to be some client-server communication required before playing can commence.
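As a sketch of how preloading is requested (the file name is a placeholder, and mobile browsers may ignore the hint until playback has been user-initiated), the preload attribute on the <audio> element accepts the values none, metadata, and auto:

```html
<!-- Ask the browser to buffer the whole file in advance -->
<audio src="music.mp3" preload="auto"></audio>
```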

Concurrent audio playback

A requirement of many games is the need to play more than one piece of audio at the same time; for example, there might be background music playing along with sound effects for various things happening in the game. Although the situation is soon going to get better with the adoption of the Web Audio API, the current most widely-supported method — using the vanilla <audio> element — gives patchy results on mobile devices.

Testing and support

Here's a table that shows what mobile platforms support the features talked about above.

Mobile support for web audio features
Mobile browser      Version  Concurrent play  Autoplay  Volume adjusting  Preload
Chrome (Android)    32+      Y                N         N                 N
Firefox (Android)   26+      Y                Y         N                 N
Firefox OS          1.2+     Y                Y         Y                 Y
IE Mobile           11+      Y                Y         N                 Y
Opera Mobile        11+      N                N         N                 N
Safari (iOS)        7+       Y/N*             N         N                 Y
Android Browser     2.3+     N                N         N                 N

* Note: Safari 7 has issues playing if you try and start all pieces of audio simultaneously. If you stagger playback you may have some degree of success.

Note: Concurrent audio playback is tested using our concurrent audio test example, where we attempt to play three pieces of audio at the same time using the standard audio API.

Note: Simple autoplay functionality is tested with our autoplay test example.

Note: Volume changeability is tested with our volume test example.

Mobile workarounds

Although mobile browsers can present problems, there are ways to work around the issues detailed above.

Audio sprites

Audio sprites borrow their name from CSS sprites, which is a visual technique for using CSS with a single graphic resource to break it into a series of sprites. We can apply the same principle to audio so that rather than having a bunch of small audio files that take time to load and play, we have one larger audio file containing all the smaller audio snippets we need. To play a specific sound from the file, we just use the known start and stop times for each audio sprite.

The advantage is that we can prime one piece of audio and have our sprites ready to go. To do this we can just play and instantly pause the larger piece of audio. You'll also reduce the number of server requests and save bandwidth.

var myAudio = document.createElement("audio");
myAudio.src = "mysprite.mp3";
myAudio.play();
myAudio.pause();

You'll need to sample the current time to know when to stop. If you space your individual sounds by at least 500ms then using the timeupdate event (which fires approximately every 250ms) should be sufficient. Your files may be slightly longer than they strictly need to be, but silence compresses well.

Here's an example of an audio sprite player — first let's set up the user interface in HTML:

<audio id="myAudio" src="https://jPlayer.org/tmp/countdown.mp3"></audio>
<button data-start="18" data-stop="19">0</button>
<button data-start="16" data-stop="17">1</button>
<button data-start="14" data-stop="15">2</button>
<button data-start="12" data-stop="13">3</button>
<button data-start="10" data-stop="11">4</button>
<button data-start="8"  data-stop="9">5</button>
<button data-start="6"  data-stop="7">6</button>
<button data-start="4"  data-stop="5">7</button>
<button data-start="2"  data-stop="3">8</button>
<button data-start="0"  data-stop="1">9</button>

Now we have buttons with start and stop times in seconds. The "countdown.mp3" MP3 file consists of a number being spoken every 2 seconds, the idea being that we play back that number when the corresponding button is pressed.

Let's add some JavaScript to make this work:

var myAudio = document.getElementById('myAudio');
var buttons = document.getElementsByTagName('button');
var stopTime = 0;

for (var i = 0; i < buttons.length; i++) {
  buttons[i].addEventListener('click', function() {
    // data-* attributes are strings, so convert them to numbers
    myAudio.currentTime = parseFloat(this.getAttribute("data-start"));
    stopTime = parseFloat(this.getAttribute("data-stop"));
    myAudio.play();
  }, false);
}

myAudio.addEventListener('timeupdate', function() {
  if (this.currentTime > stopTime) {
    this.pause();
  }
}, false);

Note: You can try out our audio sprite player live on JSFiddle.

Note: On mobile we may need to trigger this code from a user-initiated event such as a start button being pressed, as described above.

Note: Watch out for bit rates. Encoding your audio at lower bit rates means smaller file sizes but lower seeking accuracy.

Background music

Music in games can have a powerful emotional effect. You can mix and match various music samples, and, assuming you can control the volume of your audio element, you could cross-fade different musical pieces. Using the playbackRate property you can even adjust the speed of your music without affecting the pitch, to sync it up better with the action.

All this is possible using the standard <audio> element and associated HTMLMediaElement API, but it becomes much easier and more flexible with the more advanced Web Audio API. Let's look at this next.

Web Audio API for games

Now that it's supported in all modern browsers except for Opera Mini and Internet Explorer (although Microsoft is now working on it), an acceptable approach for many situations is to use the Web Audio API (see the Can I use Web Audio API page for more on browser compatibility). The Web Audio API is an advanced audio JavaScript API that is ideal for game audio. Developers can generate audio and manipulate audio samples as well as positioning sound in 3D game space.

A feasible cross-browser strategy would be to provide basic audio using the standard <audio> element and, where supported, enhance the experience using the Web Audio API.

Note: Significantly, iOS Safari now supports the Web Audio API, which means it's now possible to write web-based games with native-quality audio for iOS.

As the Web Audio API allows precise timing and control of audio playback, we can use it to play samples at specific moments, which is a crucial immersive aspect of gaming. You want those explosions to be accompanied by a thundering boom, not followed by one, after all.

Background music with the Web Audio API

Although we can use the <audio> element to deliver linear background music that doesn't change in reaction to the game environment, the Web Audio API is ideal for implementing a more dynamic musical experience. You may want music to change depending on whether you are trying to build suspense or encourage the player in some way. Music is an important part of the gaming experience and depending on the type of game you are making you may wish to invest significant effort into getting it right.

One way you can make your music soundtrack more dynamic is by splitting it up into component loops or tracks. This is often the way that musicians compose music anyway, and the Web Audio API is extremely good at keeping these parts in sync. Once you have the various tracks that make up your piece you can bring tracks in and out as appropriate.
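Bringing a track in or out smoothly usually means routing each track through its own GainNode and fading the gains in opposite directions. The equal-power curve sketched below keeps the combined loudness roughly constant during the fade; the `crossfadeGains()` helper and the wiring in the comments are our own illustration, not part of the Web Audio API:

```javascript
// Equal-power crossfade: as t goes from 0 to 1, track A fades out and
// track B fades in, with gainA² + gainB² == 1 throughout so the
// overall loudness stays roughly constant.
function crossfadeGains(t) {
  return {
    gainA: Math.cos(t * 0.5 * Math.PI),
    gainB: Math.cos((1 - t) * 0.5 * Math.PI)
  };
}

// Sketch of applying it, assuming each source is already connected
// through its own GainNode (gainNodeA, gainNodeB):
//
//   var g = crossfadeGains(t);
//   gainNodeA.gain.value = g.gainA;
//   gainNodeB.gain.value = g.gainB;
```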

You can also apply filters or effects to music. Is your character in a cave? Increase the echo. Maybe you have underwater scenes, so apply a filter that muffles the sound.
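A low-pass filter is one feasible way to get that underwater muffle. createBiquadFilter() is a real Web Audio API method; the `muffle()` helper and the 400 Hz cutoff below are our own choices, a sketch rather than a definitive recipe:

```javascript
// Insert a low-pass "underwater" filter between a source node and the
// destination, and return the filter so it can be adjusted or removed.
function muffle(context, source) {
  var filter = context.createBiquadFilter();
  filter.type = 'lowpass';
  filter.frequency.value = 400; // cut frequencies above ~400 Hz
  source.connect(filter);
  filter.connect(context.destination);
  return filter;
}
```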

Let's look at some Web Audio API techniques for dynamically adjusting music from its base tracks.

Loading your tracks

With the Web Audio API you can load separate tracks and loops individually using XMLHttpRequest, which means you can load them synchronously or in parallel. Loading synchronously might mean parts of your music are ready earlier and you can start playing them while others load.

Either way you may want to synchronize tracks or loops. The Web Audio API contains the notion of an internal clock that starts ticking the moment you create an audio context. You'll need to take account of the time between creating an audio context and when the first audio track starts playing. Recording this offset and querying the playing track's current time gives you enough information to synchronize separate pieces of audio.

To see this in action, let's lay out some separate tracks:

<ul>
  <li><a class="track" href="https://jPlayer.org/audio/mp3/gbreggae-leadguitar.mp3">Lead Guitar</a></li>
  <li><a class="track" href="https://jPlayer.org/audio/mp3/gbreggae-drums.mp3">Drums</a></li>
  <li><a class="track" href="https://jPlayer.org/audio/mp3/gbreggae-bassguitar.mp3">Bass Guitar</a></li>
  <li><a class="track" href="https://jPlayer.org/audio/mp3/gbreggae-horns.mp3">Horns</a></li>
  <li><a class="track" href="https://jPlayer.org/audio/mp3/gbreggae-clav.mp3">Clavi</a></li>
</ul>

All of these tracks are the same tempo and are designed to be synchronized with each other.

window.AudioContext = window.AudioContext || window.webkitAudioContext;

var offset = 0;
var context = new AudioContext();

function playTrack(url) {
  var request = new XMLHttpRequest();
  request.open('GET', url, true);
  request.responseType = 'arraybuffer';

  // Decode asynchronously
  request.onload = function() {
    if (request.status == 200) {
      context.decodeAudioData(request.response, function(buffer) {
        var source = context.createBufferSource();
        source.buffer = buffer;
        source.connect(context.destination);
        console.log('context.currentTime ' + context.currentTime);

        if (offset == 0) {
          // First track: start immediately and record when playback began
          source.start();
          offset = context.currentTime;
        } else {
          // Later tracks: jump to the point the first track has reached
          source.start(0, context.currentTime - offset);
        }
      }, function(e) {
        console.log('Error decoding audio data: ' + e);
      });
    } else {
      console.log('Audio didn\'t load successfully; error code: ' + request.statusText);
    }
  };
  request.send();
}

var tracks = document.getElementsByClassName('track');

for (var i = 0, len = tracks.length; i < len; i++) {
  tracks[i].addEventListener('click', function(e){

    playTrack(this.href);
    e.preventDefault();
  });
}

Note: You can try out our Web Audio API multitrack demo live on JSFiddle.

Now let's look over the code. First we set up a new AudioContext and create a function (playTrack()) that loads and starts playing a track.

start() (formerly known as noteOn()) will start playing an audio asset. start() takes three optional parameters:

  1. when: The absolute time to commence playback.
  2. where (offset): The part of the audio to start playing from.
  3. how long: The duration to play for.

stop() takes one optional parameter — when — the time at which to stop, specified on the same clock as start()'s when parameter.

If start()'s second parameter — the offset — is zero, we start playing from the start of the given piece of audio, which is what we do in the first instance. We then store the AudioContext.currentTime at which the first piece began playing as an offset, and subtract that from subsequent currentTime readings to calculate how far playback has progressed, which we use to synchronize our tracks.

In the context of your game world you may have loops and samples that are played in different circumstances, and it can be useful to be able to synchronize with other tracks for a more seamless experience.

Note: This example does not wait for the beat to end before introducing the next piece; we could do this if we knew the BPM (Beats Per Minute) of the tracks.

You may find that the introduction of a new track sounds more natural if it comes in on the beat/bar/phrase or whatever units you want to chunk your background music into.

To do this before playing the track you want to sync, you should calculate how long it is until the start of the next beat/bar etc.

Here's a bit of code that, given a tempo (the duration in seconds of your beat/bar), calculates how long to wait until the next one begins — you feed the resulting value to start() as its first parameter, which takes the absolute time at which playback should commence. Note that the second parameter (where to start playing from in the new track) is relative:

if (offset == 0) {
  source.start();
  offset = context.currentTime;
} else {
  var relativeTime = context.currentTime - offset;
  var beats = relativeTime / tempo;
  var remainder = beats - Math.floor(beats);
  var delay = tempo - (remainder * tempo);
  source.start(context.currentTime + delay, relativeTime + delay);
}

Note: You can try our wait calculator code here, on JSFiddle (I've synched to the bar in this case).

Note: If the first parameter is 0 or less than the context currentTime, playback will commence immediately.
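The wait calculation above can be factored into a small pure function that returns both arguments for start(). The `syncStart()` name is our own, not a standard API:

```javascript
// Given the context's current time, the offset recorded when the first
// track started, and the tempo (seconds per beat/bar), compute the two
// arguments for start(): when to begin (absolute context time) and
// where in the new track to begin from (relative to the track's start).
function syncStart(currentTime, offset, tempo) {
  var relativeTime = currentTime - offset;
  var beats = relativeTime / tempo;
  var remainder = beats - Math.floor(beats);
  var delay = tempo - (remainder * tempo);
  return {
    when: currentTime + delay,
    trackOffset: relativeTime + delay
  };
}

// Usage sketch:
//   var s = syncStart(context.currentTime, offset, tempo);
//   source.start(s.when, s.trackOffset);
```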

Positional audio

Positional audio can be an important technique in making audio a key part of an immersive gaming experience. The Web Audio API not only enables us to position a number of audio sources in three-dimensional space but can also allow us to apply filters that make that audio appear more realistic.

In short, using the positional capabilities of the Web Audio API we can relate further information about the game world to the player.

We can relate:

  • The position of objects
  • The direction of objects (movement of position and recreation of the Doppler effect)
  • The environment (cavernous, underwater, etc.)

This is especially useful in a three-dimensional environment rendered using WebGL, where the Web Audio API makes it possible to tie audio to the objects and viewpoints.
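A minimal sketch of tying a sound to a position: createPanner() and setPosition() are Web Audio API methods, while the `placeSound()` helper and its coordinates are our own illustration. In a WebGL game you would update the position (and the context's listener) each frame from the object's world coordinates:

```javascript
// Route a source through a PannerNode placed at (x, y, z) in the
// 3D audio space, and return the panner for later repositioning.
function placeSound(context, source, x, y, z) {
  var panner = context.createPanner();
  panner.setPosition(x, y, z);
  source.connect(panner);
  panner.connect(context.destination);
  return panner;
}
```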

Note: See Web Audio API Spatialization Basics for more details.

See Also
