Reputation: 1111
I'm synthesizing sound through the Web Audio API using various oscillators/filters, and I have time-domain and spectrogram visualizations that run in real time as the oscillators play (similar to here and here).
However, I'd like to create an initial precomputed visualization of the audio network, for a set amount of time, before it's run, so a user can see what the network will sound like before playing it. Is this possible, or is there a way to speed up time so the visualizations can be generated quickly?
Upvotes: 3
Views: 601
Reputation: 1555
Use an OfflineAudioContext; it gives you a PCM buffer back, asynchronously. Compute windowed RMS values out of that (or just use the time-domain data, depending on what you want to do), and draw it on a canvas or whatever.
The OfflineAudioContext lets you run a graph as fast as the machine can, and is a drop-in replacement for the AudioContext, apart from three nodes that can't be used (MediaStreamAudioDestinationNode, MediaStreamAudioSourceNode, and MediaElementAudioSourceNode), because MediaStreams are real-time objects: they make no sense when not rendering in real time.
It goes like this:
var off = new OfflineAudioContext(2 /* stereo */,
                                  44100 /* length in frames */,
                                  44100 /* samplerate */);
/* do your thing: set up your graph as usual */
var osc = off.createOscillator();
osc.connect(off.destination);
osc.start(0);
/* This is called when the graph has rendered one
 * second of audio, here: 44100 frames at 44100 frames per second */
off.oncomplete = function(e) {
  e.renderedBuffer; /* an AudioBuffer that contains the PCM data */
};
/* kick off the rendering */
off.startRendering();
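Computing the windowed RMS from the rendered buffer can be sketched like this (the function name and the window size of 1024 frames are arbitrary choices; pass it one channel of the buffer, e.g. `e.renderedBuffer.getChannelData(0)`):

```javascript
/* Compute one RMS value per window of `windowSize` samples.
 * `samples` is a Float32Array of PCM data in [-1, 1]. */
function computeWindowedRMS(samples, windowSize) {
  var rms = [];
  for (var start = 0; start < samples.length; start += windowSize) {
    var end = Math.min(start + windowSize, samples.length);
    var sum = 0;
    for (var i = start; i < end; i++) {
      sum += samples[i] * samples[i]; /* sum of squares */
    }
    rms.push(Math.sqrt(sum / (end - start)));
  }
  return rms; /* one value per window, ready to draw as bars/lines */
}

var values = computeWindowedRMS(new Float32Array(44100), 1024);
```

Each entry in the result maps to one horizontal step of the visualization, so you can draw the whole preview in a single pass before any audio plays.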
Upvotes: 3