Please note, this is a STATIC archive of website developer.mozilla.org from 03 Nov 2016, cach3.com does not collect or store any user information, there is no "phishing" involved.

Revision 719157 of AnalyserNode

  • Revision slug: Web/API/AnalyserNode
  • Revision title: AnalyserNode
  • Revision id: 719157
  • Created:
  • Creator: Delapouite
  • Is current revision? No
  • Comment: fixed link to Fast Fourier Transform

Revision Content

{{APIRef()}}

The AnalyserNode interface represents a node able to provide real-time frequency and time-domain analysis information. It is an {{domxref("AudioNode")}} that passes the audio stream unchanged from the input to the output, but allows you to take the generated data, process it, and create audio visualizations.

An AnalyserNode has exactly one input and one output. The node works even if the output is not connected.

Without modifying the audio stream, the node lets you get the frequency and time-domain data associated with it, using an FFT.

  • Number of inputs 1
  • Number of outputs 1 (but may be left unconnected)
  • Channel count mode "explicit"
  • Channel count 1
  • Channel interpretation "speakers"

Note: See the guide Visualizations with Web Audio API for more information on creating audio visualizations.

Properties

Inherits properties from its parent, {{domxref("AudioNode")}}.

{{domxref("AnalyserNode.fftSize")}}
Is an unsigned long value representing the size of the FFT (Fast Fourier Transform) to be used to determine the frequency domain.
{{domxref("AnalyserNode.frequencyBinCount")}} {{readonlyInline}}
Is an unsigned long value half that of the FFT size. This generally equates to the number of data values you will have to play with for the visualization.
{{domxref("AnalyserNode.minDecibels")}}
Is a double value representing the minimum power value in the scaling range for the FFT analysis data, for conversion to unsigned byte values — basically, this specifies the minimum value for the range of results when using getByteFrequencyData().
{{domxref("AnalyserNode.maxDecibels")}}
Is a double value representing the maximum power value in the scaling range for the FFT analysis data, for conversion to unsigned byte values — basically, this specifies the maximum value for the range of results when using getByteFrequencyData().
{{domxref("AnalyserNode.smoothingTimeConstant")}}
Is a double value representing the averaging constant with the last analysis frame — basically, it makes the transition between values over time smoother.

Methods

Inherits methods from its parent, {{domxref("AudioNode")}}.

{{domxref("AnalyserNode.getFloatFrequencyData()")}}
Copies the current frequency data into a {{domxref("Float32Array")}} array passed into it.
{{domxref("AnalyserNode.getByteFrequencyData()")}}
Copies the current frequency data into a {{domxref("Uint8Array")}} (unsigned byte array) passed into it.
{{domxref("AnalyserNode.getFloatTimeDomainData()")}}
Copies the current waveform, or time-domain, data into a {{domxref("Float32Array")}} array passed into it.
{{domxref("AnalyserNode.getByteTimeDomainData()")}}
Copies the current waveform, or time-domain, data into a {{domxref("Uint8Array")}} (unsigned byte array) passed into it.

Example

The following example shows basic usage of an {{domxref("AudioContext")}} to create an AnalyserNode, then {{domxref("window.requestAnimationFrame()","requestAnimationFrame")}} and {{htmlelement("canvas")}} to collect time domain data repeatedly and draw an "oscilloscope style" output of the current audio input. For more complete applied examples/information, check out our Voice-change-O-matic demo (see app.js lines 128–205 for relevant code).

var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var analyser = audioCtx.createAnalyser();

  ...

analyser.fftSize = 2048;
var bufferLength = analyser.frequencyBinCount;
var dataArray = new Uint8Array(bufferLength);
analyser.getByteTimeDomainData(dataArray);

// draw an oscilloscope of the current audio source

function draw() {

      drawVisual = requestAnimationFrame(draw);

      analyser.getByteTimeDomainData(dataArray);

      canvasCtx.fillStyle = 'rgb(200, 200, 200)';
      canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);

      canvasCtx.lineWidth = 2;
      canvasCtx.strokeStyle = 'rgb(0, 0, 0)';

      canvasCtx.beginPath();

      var sliceWidth = WIDTH * 1.0 / bufferLength;
      var x = 0;

      for(var i = 0; i < bufferLength; i++) {
   
        var v = dataArray[i] / 128.0;
        var y = v * HEIGHT/2;

        if(i === 0) {
          canvasCtx.moveTo(x, y);
        } else {
          canvasCtx.lineTo(x, y);
        }

        x += sliceWidth;
      }

      canvasCtx.lineTo(canvas.width, canvas.height/2);
      canvasCtx.stroke();
    }

    draw();

Specifications

Specification Status Comment
{{SpecName('Web Audio API', '#the-analysernode-interface', 'AnalyserNode')}} {{Spec2('Web Audio API')}}  

Browser compatibility

{{CompatibilityTable}}
Feature Chrome Firefox (Gecko) Internet Explorer Opera Safari (WebKit)
Basic support {{CompatChrome(10.0)}}{{property_prefix("webkit")}} {{CompatGeckoDesktop(25.0)}}  {{CompatNo}} 15.0{{property_prefix("webkit")}}
22 (unprefixed)
6.0{{property_prefix("webkit")}}
Feature Android Firefox Mobile (Gecko) Firefox OS IE Mobile Opera Mobile Safari Mobile Chrome for Android
Basic support {{CompatUnknown}} 26.0 1.2 {{CompatUnknown}} {{CompatUnknown}} {{CompatUnknown}} 33.0

See also

  • Using the Web Audio API

Revision Source

<p>{{APIRef()}}</p>
<p>The <strong><code>AnalyserNode</code></strong> interface represents a node able to provide real-time frequency and time-domain analysis information. It is an {{domxref("AudioNode")}} that passes the audio stream unchanged from the input to the output, but allows you to take the generated data, process it, and create audio visualizations.</p>
<p>An <code>AnalyserNode</code> has exactly one input and one output. The node works even if the output is not connected.</p>
<p><img alt="Without modifying the audio stream, the node lets you get the frequency and time-domain data associated with it, using an FFT." src="https://mdn.mozillademos.org/files/9707/WebAudioFFT.png" style="width: 661px; height: 174px;" /></p>
<ul class="audionodebox">
 <li><dfn>Number of inputs</dfn> <code>1</code></li>
 <li><dfn>Number of outputs</dfn> <code>1</code> (but may be left unconnected)</li>
 <li><dfn>Channel count mode</dfn> <code>"explicit"</code></li>
 <li><dfn>Channel count</dfn> <code>1</code></li>
 <li><dfn>Channel interpretation</dfn> <code>"speakers"</code></li>
</ul>
<div class="note">
 <p><strong>Note</strong>: See the guide <a href="/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API">Visualizations with Web Audio API</a> for more information on creating audio visualizations.</p>
</div>
<h2 id="Properties">Properties</h2>
<p><em>Inherits properties from its parent, {{domxref("AudioNode")}}</em>.</p>
<dl>
 <dt>
  {{domxref("AnalyserNode.fftSize")}}</dt>
 <dd>
  Is an unsigned long value representing the size of the FFT (<a href="https://en.wikipedia.org/wiki/Fast_Fourier_transform" title="/en-US/docs/">Fast Fourier Transform</a>) to be used to determine the frequency domain.</dd>
 <dt>
  {{domxref("AnalyserNode.frequencyBinCount")}} {{readonlyInline}}</dt>
 <dd>
  Is an unsigned long value half that of the FFT size. This generally equates to the number of data values you will have to play with for the visualization.</dd>
 <dt>
  {{domxref("AnalyserNode.minDecibels")}}</dt>
 <dd>
  Is a double value representing the minimum power value in the scaling range for the FFT analysis data, for conversion to unsigned byte values — basically, this specifies the minimum value for the range of results when using <code>getByteFrequencyData()</code>.</dd>
 <dt>
  {{domxref("AnalyserNode.maxDecibels")}}</dt>
 <dd>
  Is a double value representing the maximum power value in the scaling range for the FFT analysis data, for conversion to unsigned byte values — basically, this specifies the maximum value for the range of results when using <code>getByteFrequencyData()</code>.</dd>
 <dt>
  {{domxref("AnalyserNode.smoothingTimeConstant")}}</dt>
 <dd>
  Is a double value representing the averaging constant with the last analysis frame — basically, it makes the transition between values over time smoother.</dd>
</dl>
<h2 id="Methods">Methods</h2>
<p><em>Inherits methods from its parent, {{domxref("AudioNode")}}</em>.</p>
<dl>
 <dt>
  {{domxref("AnalyserNode.getFloatFrequencyData()")}}</dt>
 <dd>
  Copies the current frequency data into a {{domxref("Float32Array")}} array passed into it.</dd>
 <dt>
  {{domxref("AnalyserNode.getByteFrequencyData()")}}</dt>
 <dd>
  Copies the current frequency data into a {{domxref("Uint8Array")}} (unsigned byte array) passed into it.</dd>
 <dt>
  {{domxref("AnalyserNode.getFloatTimeDomainData()")}}</dt>
 <dd>
  Copies the current waveform, or time-domain, data into a {{domxref("Float32Array")}} array passed into it.</dd>
 <dt>
  {{domxref("AnalyserNode.getByteTimeDomainData()")}}</dt>
 <dd>
  Copies the current waveform, or time-domain, data into a {{domxref("Uint8Array")}} (unsigned byte array) passed into it.</dd>
</dl>
<h2 id="Example">Example</h2>
<p>The following example shows basic usage of an {{domxref("AudioContext")}} to create an <code>AnalyserNode</code>, then {{domxref("window.requestAnimationFrame()","requestAnimationFrame")}} and {{htmlelement("canvas")}} to collect time domain data repeatedly and draw an "oscilloscope style" output of the current audio input. For more complete applied examples/information, check out our <a href="https://mdn.github.io/voice-change-o-matic/">Voice-change-O-matic</a> demo (see <a href="https://github.com/mdn/voice-change-o-matic/blob/gh-pages/scripts/app.js#L128-L205">app.js lines 128–205</a> for relevant code).</p>
<pre class="brush: js">
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var analyser = audioCtx.createAnalyser();

  ...

analyser.fftSize = 2048;
var bufferLength = analyser.frequencyBinCount;
var dataArray = new Uint8Array(bufferLength);
analyser.getByteTimeDomainData(dataArray);

// draw an oscilloscope of the current audio source

function draw() {

      drawVisual = requestAnimationFrame(draw);

      analyser.getByteTimeDomainData(dataArray);

      canvasCtx.fillStyle = 'rgb(200, 200, 200)';
      canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);

      canvasCtx.lineWidth = 2;
      canvasCtx.strokeStyle = 'rgb(0, 0, 0)';

      canvasCtx.beginPath();

      var sliceWidth = WIDTH * 1.0 / bufferLength;
      var x = 0;

      for(var i = 0; i &lt; bufferLength; i++) {

        var v = dataArray[i] / 128.0;
        var y = v * HEIGHT/2;

        if(i === 0) {
          canvasCtx.moveTo(x, y);
        } else {
          canvasCtx.lineTo(x, y);
        }

        x += sliceWidth;
      }

      canvasCtx.lineTo(canvas.width, canvas.height/2);
      canvasCtx.stroke();
    }

    draw();</pre>
<h2 id="Specifications">Specifications</h2>
<table class="standard-table">
 <tbody>
  <tr>
   <th scope="col">Specification</th>
   <th scope="col">Status</th>
   <th scope="col">Comment</th>
  </tr>
  <tr>
   <td>{{SpecName('Web Audio API', '#the-analysernode-interface', 'AnalyserNode')}}</td>
   <td>{{Spec2('Web Audio API')}}</td>
   <td>&nbsp;</td>
  </tr>
 </tbody>
</table>
<h2 id="Browser_compatibility">Browser compatibility</h2>
<div>
 {{CompatibilityTable}}</div>
<div id="compat-desktop">
 <table class="compat-table">
  <tbody>
   <tr>
    <th>Feature</th>
    <th>Chrome</th>
    <th>Firefox (Gecko)</th>
    <th>Internet Explorer</th>
    <th>Opera</th>
    <th>Safari (WebKit)</th>
   </tr>
   <tr>
    <td>Basic support</td>
    <td>{{CompatChrome(10.0)}}{{property_prefix("webkit")}}</td>
     <td>{{CompatGeckoDesktop(25.0)}}</td>
    <td>{{CompatNo}}</td>
    <td>15.0{{property_prefix("webkit")}}<br />
     22 (unprefixed)</td>
    <td>6.0{{property_prefix("webkit")}}</td>
   </tr>
  </tbody>
 </table>
</div>
<div id="compat-mobile">
 <table class="compat-table">
  <tbody>
   <tr>
    <th>Feature</th>
    <th>Android</th>
    <th>Firefox Mobile (Gecko)</th>
    <th>Firefox OS</th>
    <th>IE Mobile</th>
    <th>Opera Mobile</th>
    <th>Safari Mobile</th>
    <th>Chrome for Android</th>
   </tr>
   <tr>
    <td>Basic support</td>
    <td>{{CompatUnknown}}</td>
    <td>26.0</td>
    <td>1.2</td>
    <td>{{CompatUnknown}}</td>
    <td>{{CompatUnknown}}</td>
    <td>{{CompatUnknown}}</td>
    <td>33.0</td>
   </tr>
  </tbody>
 </table>
</div>
<h2 id="See_also">See also</h2>
<ul>
 <li><a href="/en-US/docs/Web_Audio_API/Using_Web_Audio_API">Using the Web Audio API</a></li>
</ul>