ruhig brauner

Reputation: 963

Processing Audio in Chunks with JS (WebAudio API)

I wanted to ask if there is a way to process audio with the WebAudio API in fixed-size chunks and to have access to those chunks afterward. The OfflineAudioContext doesn't seem to support that case; it only processes the graph once, for the duration set in the constructor. The use case I had in mind was visualizing live audio. My original idea was to use one live audio context and one offline context, both processing the same audio graph. The offline graph would run at a different sample rate and only be used to supply rendered buffers for visualization.
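A minimal sketch of the one-shot behaviour I mean (the oscillator is just a placeholder source): the OfflineAudioContext renders its whole graph exactly once into a single AudioBuffer, with the length fixed at construction time.

```js
// OfflineAudioContext renders the graph once into one buffer.
const offlineCtx = new OfflineAudioContext({
  numberOfChannels: 1,
  length: 44100 * 2,   // 2 seconds at 44.1 kHz, fixed in the constructor
  sampleRate: 44100,
});

const osc = offlineCtx.createOscillator();
osc.connect(offlineCtx.destination);
osc.start();

// startRendering() resolves once with the complete buffer; there is no
// obvious way to pull intermediate fixed-size chunks out of it while it runs.
offlineCtx.startRendering().then((renderedBuffer) => {
  console.log('rendered', renderedBuffer.length, 'frames in one pass');
});
```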

I've read that creating a new OfflineAudioContext, as well as starting the rendering, has to be triggered by user input on many browsers.

Upvotes: 2

Views: 204

Answers (1)

Raymond Toy

Reputation: 6048

It's not clear why you need two contexts for visualization. You can connect an AnalyserNode somewhere in the processing graph of the real-time context and read back its time-domain or frequency-domain data, and use that for visualization.
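A rough sketch of what I mean, assuming a microphone as the source (any source node works the same way); the AnalyserNode taps the live graph and exposes waveform and spectrum snapshots on demand:

```js
const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048;

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const source = audioCtx.createMediaStreamSource(stream);
  source.connect(analyser);               // tap the real-time graph
  analyser.connect(audioCtx.destination); // optional: keep the audio audible

  const timeData = new Uint8Array(analyser.fftSize);
  const freqData = new Uint8Array(analyser.frequencyBinCount);

  const draw = () => {
    analyser.getByteTimeDomainData(timeData); // current waveform samples
    analyser.getByteFrequencyData(freqData);  // current magnitude spectrum
    // ...feed timeData / freqData into your visualization here...
    requestAnimationFrame(draw);
  };
  draw();
});
```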

Upvotes: 1
