Reputation: 28157
So, I'm displaying the waveform and FFT of the sound stream coming from the microphone using the Web Audio API.

My question is: how large is the data array available at a given moment? For example, take the getByteTimeDomainData method of AnalyserNode. The docs say it "copies the current waveform, or time-domain, data into a Uint8Array". What exactly is the "current waveform"? The last second of sound input? Does the size of the current waveform data depend on the input frequency?

In other words, if this behaves like a stream interface that receives chunks of data, how large is each chunk? And if we only fetch the time-domain data once every 100ms, do we miss the sound that happened between those calls, or is it buffered until the next getByteTimeDomainData call?
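For context, my setup looks roughly like this (a minimal sketch; the 100ms polling interval and the variable names are just illustrative):

```js
const audioCtx = new AudioContext();

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const source = audioCtx.createMediaStreamSource(stream);
  const analyser = audioCtx.createAnalyser();
  source.connect(analyser);

  // One array for the time-domain snapshot, one for the FFT.
  // frequencyBinCount is always fftSize / 2.
  const waveform = new Uint8Array(analyser.fftSize);
  const spectrum = new Uint8Array(analyser.frequencyBinCount);

  setInterval(() => {
    analyser.getByteTimeDomainData(waveform); // the "current waveform"?
    analyser.getByteFrequencyData(spectrum);
    // ... draw waveform and spectrum ...
  }, 100);
});
```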
Upvotes: 1
Views: 155
Reputation: 13918
You set the size with the fftSize attribute on AnalyserNode. And yes, you will "miss the sound that happened between those times": the analyser only exposes a snapshot of the most recent fftSize samples each time you call it. If you need to see every sample, use a ScriptProcessorNode instead.
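For example (a minimal sketch; the fftSize and buffer size of 2048 are illustrative, and `source` stands for whatever input node you already have):

```js
const analyser = audioCtx.createAnalyser();

// Each snapshot covers exactly fftSize samples of the most recent audio:
// at a 44100 Hz sample rate, 2048 samples is about the last 46 ms.
analyser.fftSize = 2048;
source.connect(analyser);

const snapshot = new Uint8Array(analyser.fftSize);
analyser.getByteTimeDomainData(snapshot); // most recent fftSize samples only

// To receive *every* sample with no gaps, use a ScriptProcessorNode:
// its onaudioprocess callback fires for each consecutive block of audio.
const processor = audioCtx.createScriptProcessor(2048, 1, 1);
processor.onaudioprocess = (e) => {
  const samples = e.inputBuffer.getChannelData(0); // gapless Float32Array
  // ... buffer or process samples here ...
};
source.connect(processor);
processor.connect(audioCtx.destination); // keeps the processor running
```

(ScriptProcessorNode has since been deprecated in favour of AudioWorklet, but the idea is the same: a callback that sees every consecutive block of samples.)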
Upvotes: 2