The AudioBuffer interface represents a short audio asset residing in memory, created from an audio file using the AudioContext.decodeAudioData() method, or from raw data using AudioContext.createBuffer(). Once put into an AudioBuffer, the audio can then be played by being passed into an AudioBufferSourceNode.
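For instance, here is a minimal sketch of the decodeAudioData() route; the XMLHttpRequest loading code and the file name 'viper.ogg' are illustrative assumptions, not part of this page.

// Sketch: decode an audio file into an AudioBuffer and play it.
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

var request = new XMLHttpRequest();
request.open('GET', 'viper.ogg', true); // placeholder URL
request.responseType = 'arraybuffer';

request.onload = function() {
  audioCtx.decodeAudioData(request.response, function(decodedBuffer) {
    // decodedBuffer is an AudioBuffer; play it via an AudioBufferSourceNode
    var source = audioCtx.createBufferSource();
    source.buffer = decodedBuffer;
    source.connect(audioCtx.destination);
    source.start();
  });
};

request.send();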
Objects of these types are designed to hold small audio snippets, typically less than 45 s. For longer sounds, objects implementing the MediaElementAudioSourceNode are more suitable. The buffer contains data in the following format: non-interleaved IEEE 754 32-bit linear PCM with a nominal range between -1 and +1; that is, a 32-bit floating point buffer, with each sample between -1.0 and +1.0. If the AudioBuffer has multiple channels, they are stored in separate buffers.
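As an illustration of the non-interleaved layout, each channel can be retrieved as its own Float32Array. This is only a sketch; audioCtx is assumed to be an existing AudioContext (for example, the one created above).

// Sketch: channels are stored as separate (non-interleaved) Float32Arrays.
var stereoBuffer = audioCtx.createBuffer(2, audioCtx.sampleRate, audioCtx.sampleRate);
var left  = stereoBuffer.getChannelData(0);  // Float32Array for the first channel
var right = stereoBuffer.getChannelData(1);  // Float32Array for the second channel

console.log(left instanceof Float32Array);        // true
console.log(left.length === stereoBuffer.length); // true: one value per sample-frame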
Properties
AudioBuffer.sampleRate Read only
Returns a float representing the sample rate, in samples per second, of the PCM data stored in the buffer.
AudioBuffer.length Read only
Returns an integer representing the length, in sample-frames, of the PCM data stored in the buffer.
AudioBuffer.duration Read only
Returns a double representing the duration, in seconds, of the PCM data stored in the buffer.
AudioBuffer.numberOfChannels Read only
Returns an integer representing the number of discrete audio channels described by the PCM data stored in the buffer.
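As a quick sketch of how these properties relate (audioCtx is assumed to be an existing AudioContext), a two-second buffer created at the context's sample rate reports values like the following; note that duration is simply length divided by sampleRate:

// Sketch: inspecting the read-only properties of a freshly created buffer.
var buffer = audioCtx.createBuffer(2, audioCtx.sampleRate * 2, audioCtx.sampleRate);

console.log(buffer.sampleRate);        // e.g. 44100
console.log(buffer.length);            // e.g. 88200 sample-frames
console.log(buffer.duration);          // 2 (seconds), i.e. length / sampleRate
console.log(buffer.numberOfChannels);  // 2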
Methods
AudioBuffer.getChannelData()
Returns a Float32Array containing the PCM data associated with the channel, defined by the channel parameter (with 0 representing the first channel).
AudioBuffer.copyFromChannel()
Copies the samples from the specified channel of the AudioBuffer to the destination array.
AudioBuffer.copyToChannel()
Copies the samples to the specified channel of the AudioBuffer, from the source array.
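Here is a brief sketch of the two copy methods; the ramp data and the variable names are illustrative, and audioCtx is again assumed to be an existing AudioContext.

// Sketch: copyToChannel() writes a Float32Array into a channel, and
// copyFromChannel() reads a channel back out into a Float32Array.
var buf = audioCtx.createBuffer(1, audioCtx.sampleRate, audioCtx.sampleRate);

// Build a source array and write it into channel 0
var ramp = new Float32Array(buf.length);
for (var i = 0; i < ramp.length; i++) {
  ramp[i] = (i / ramp.length) * 2 - 1;   // values rising from -1.0 towards +1.0
}
buf.copyToChannel(ramp, 0);              // source array -> channel 0

// Read channel 0 back out into a destination array
var readBack = new Float32Array(buf.length);
buf.copyFromChannel(readBack, 0);        // channel 0 -> destination array
console.log(readBack[buf.length - 1]);   // a value close to +1.0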
Example
The following simple example shows how to create an AudioBuffer and fill it with random white noise. You can find the full source code at our audio-buffer demo repository; a running live version is also available.
// Stereo
var channels = 2;

// Create an empty two second stereo buffer at the
// sample rate of the AudioContext
var frameCount = audioCtx.sampleRate * 2.0;
var myArrayBuffer = audioCtx.createBuffer(channels, frameCount, audioCtx.sampleRate);

button.onclick = function() {
  // Fill the buffer with white noise;
  // just random values between -1.0 and 1.0
  for (var channel = 0; channel < channels; channel++) {
    // This gives us the actual array that contains the data
    var nowBuffering = myArrayBuffer.getChannelData(channel);
    for (var i = 0; i < frameCount; i++) {
      // Math.random() is in [0; 1.0]
      // audio needs to be in [-1.0; 1.0]
      nowBuffering[i] = Math.random() * 2 - 1;
    }
  }

  // Get an AudioBufferSourceNode.
  // This is the AudioNode to use when we want to play an AudioBuffer
  var source = audioCtx.createBufferSource();

  // set the buffer in the AudioBufferSourceNode
  source.buffer = myArrayBuffer;

  // connect the AudioBufferSourceNode to the
  // destination so we can hear the sound
  source.connect(audioCtx.destination);

  // start the source playing
  source.start();
}
Specifications
Specification | Status | Comment |
---|---|---|
Web Audio API: The definition of 'AudioBuffer' in that specification. | Working Draft | Initial definition. |
Browser compatibility
Feature | Chrome | Firefox (Gecko) | Internet Explorer | Opera | Safari (WebKit) |
---|---|---|---|---|---|
Basic support | 14 webkit | 25 (25) | No support | 15 webkit, 22 | 6 webkit |
copyFromChannel() and copyToChannel() | ? | 27 (27) | No support | ? | No support |
Feature | Android | Chrome | Firefox Mobile (Gecko) | Firefox OS | IE Phone | Opera Mobile | Safari Mobile |
---|---|---|---|---|---|---|---|
Basic support | No support | 28 webkit | 25.0 (25) | 1.2 | No support | No support | 6 webkit |
copyFromChannel() and copyToChannel() | No support | ? | 27.0 (27) | No support | No support | No support |