Reputation: 963
I wanted to ask if there is a way to process audio with the Web Audio API in fixed-size chunks and afterward have access to those chunks. The OfflineAudioContext doesn't seem to support that case; it only processes the graph once, for the duration set in the constructor.
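To illustrate what I mean, here's a minimal sketch (the oscillator source and the one-second duration are just placeholders): startRendering() resolves with a single AudioBuffer covering the whole duration, with no per-chunk hook along the way.

```typescript
// Minimal sketch: an OfflineAudioContext renders its graph exactly once and
// resolves with a single AudioBuffer covering the full duration.
async function renderOnce(): Promise<AudioBuffer> {
  const offlineCtx = new OfflineAudioContext({
    numberOfChannels: 1,
    length: 44100,    // one second at the sample rate below
    sampleRate: 44100,
  });

  // Placeholder source; any graph would behave the same way.
  const osc = new OscillatorNode(offlineCtx, { frequency: 440 });
  osc.connect(offlineCtx.destination);
  osc.start();

  // The whole second arrives at once; there is no per-chunk callback.
  return offlineCtx.startRendering();
}

renderOnce().then((buf) => console.log(buf.length)); // 44100 samples in one piece
```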
The use case I had in mind was visualizing live audio. My original idea was to use one live audio context and one offline context. Both would process the same audio graph. The offline graph would run at a different sample rate and only be used to supply rendered buffers for visualization.
I've read that creating a new OfflineAudioContext, as well as starting the rendering, has to be triggered by user input in many browsers.
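The gesture-gated pattern I've seen documented looks roughly like this (a sketch; the `#start` button is hypothetical, and the restriction I'm sure of applies to real-time contexts, which start suspended until a user gesture):

```typescript
// Sketch of starting audio from a user gesture (the #start button is hypothetical).
// Real-time contexts typically begin in the "suspended" state until a gesture.
const ctx = new AudioContext();

const button = document.querySelector<HTMLButtonElement>('#start');
button?.addEventListener('click', async () => {
  await ctx.resume(); // permitted here because we are inside a gesture handler
  // ...build the graph, or kick off any offline rendering, from this point...
});
```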
Upvotes: 2
Views: 204
Reputation: 6048
It's not clear why you need two contexts for visualization. You can use an AnalyserNode connected somewhere in the processing graph of the real-time context to get time-domain or frequency-domain data for your visualization.
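A minimal sketch of that approach (the oscillator source, the fftSize of 2048, and the canvas drawing are all placeholder choices):

```typescript
// Minimal sketch: tap the real-time graph with an AnalyserNode and poll it
// for fixed-size frames of data on each animation frame.
const ctx = new AudioContext();
const analyser = new AnalyserNode(ctx, { fftSize: 2048 });

// Placeholder source -- in practice this might be a MediaElementAudioSourceNode
// or a MediaStreamAudioSourceNode for live input.
const osc = new OscillatorNode(ctx, { frequency: 220 });
osc.connect(analyser);
analyser.connect(ctx.destination);
osc.start();

const timeData = new Float32Array(analyser.fftSize);           // waveform samples
const freqData = new Float32Array(analyser.frequencyBinCount); // dB magnitudes

function draw(): void {
  analyser.getFloatTimeDomainData(timeData); // latest fftSize samples
  analyser.getFloatFrequencyData(freqData);  // current spectrum
  // ...paint timeData / freqData to a canvas here...
  requestAnimationFrame(draw);
}
requestAnimationFrame(draw);
```

Since the AnalyserNode just taps the signal passing through it, this gives you fixed-size frames of the live audio without rendering anything twice.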
Upvotes: 1