Reputation: 634
I'm trying to do frequency analysis of an audio file, but I want it to happen without playing the file. I found there is an OfflineAudioContext, which is probably what I need here.
Complete code in this jsfiddle
The Web Audio API is a bit of unexplored territory for me and I'm not completely sure what I'm doing; most of the tutorials focus on realtime audio, which is exactly what I want to avoid.
Inside context.oncomplete, I managed to get some accurate-looking rendered audio buffer data. But when getting data from the FFT, I seem to get a very small set of data, which I'm guessing is just the data from the last sample. I'd rather have this data for every x ms of the audio file that I'm loading. I'd love some ideas on how to get the data in this format:
Basically what I'm expecting is something like:
[
// array with frequency data for every (nth) frequency band for 1st sample,
// array with frequency data for every (nth) frequency band for 2nd sample,
// array with frequency data for every (nth) frequency band for 3rd sample,
…
]
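To make the target shape concrete, here is a minimal sketch in plain JavaScript (`sliceBuffer` is a hypothetical helper, not part of the Web Audio API) that splits a decoded channel buffer into fixed-size frames; each frame would then yield one frequency-data array like the rows above:

```javascript
// Hypothetical helper: split one channel of decoded samples into
// fixed-size analysis frames (one FFT input per frame).
function sliceBuffer(channelData, frameSize) {
  const frames = [];
  for (let i = 0; i + frameSize <= channelData.length; i += frameSize) {
    frames.push(channelData.subarray(i, i + frameSize));
  }
  return frames;
}

// 10 samples split into frames of 4 -> 2 full frames (the tail is dropped)
const frames = sliceBuffer(new Float32Array(10), 4);
console.log(frames.length); // 2
```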
Upvotes: 4
Views: 2961
Reputation: 9857
Assuming that buff is a Float32Array containing a one-channel sound, here is some code I wrote for offline analysis.
The function passed as callback is executed at every step of the analysis; the function passed as onend is called at the end of the process.
function callback(data) {
    callback.count = callback.count || 0;
    console.log(callback.count++, data);
}

fftAnalyser = function (buff, sampleRate, fftSize, callback, onend) {
    // by Zibri (2019)
    var frequencies = [1000, 1200, 1400, 1600]; // for example

    var ctx = new OfflineAudioContext(1, buff.length, sampleRate);
    // note: ctx.sampleRate is read-only; it is set by the constructor above
    ctx.destination.channelCount = 1;

    var processor = ctx.createScriptProcessor(1024, 1, 1);
    processor.connect(ctx.destination);

    var analyser = ctx.createAnalyser();
    analyser.fftSize = fftSize;
    analyser.smoothingTimeConstant = 0.0;
    //analyser.minDecibels = -60;
    //analyser.maxDecibels = -20;

    var source = ctx.createBufferSource();
    source.connect(analyser);
    analyser.connect(processor);

    // map a frequency (in Hz) to the value of its nearest FFT bin
    var frequencyBinValue = (data, f) => {
        const hzPerBin = ctx.sampleRate / (2 * analyser.frequencyBinCount);
        const index = Math.round(f / hzPerBin);
        return data[index] + 1;
    };

    processor.onaudioprocess = function (e) {
        var data = new Uint8Array(analyser.frequencyBinCount);
        analyser.getByteFrequencyData(data);
        var state = frequencies.map((f) => frequencyBinValue(data, f));
        if ("function" == typeof callback)
            callback(state); // invoke the per-step callback, not onend
    };

    var sbuffer = ctx.createBuffer(1, buff.length, ctx.sampleRate);
    sbuffer.getChannelData(0).set(buff);
    source.buffer = sbuffer;
    source.onended = function (e) {
        console.log("Analysis ended.");
        if ("function" == typeof onend)
            onend();
    };

    console.log("Start Analysis.");
    source.start(0);
    ctx.startRendering();
};
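A quick way to try the function is with a synthetic input. The sketch below (`makeSine` is a hypothetical helper) builds a one-channel 1 kHz test tone that could be passed as buff; the fftAnalyser call itself is commented out since it needs a browser's OfflineAudioContext:

```javascript
// Hypothetical helper: generate a mono sine-wave buffer to feed fftAnalyser.
function makeSine(freq, sampleRate, seconds) {
  const buff = new Float32Array(Math.floor(sampleRate * seconds));
  for (let i = 0; i < buff.length; i++) {
    buff[i] = Math.sin(2 * Math.PI * freq * i / sampleRate);
  }
  return buff;
}

const tone = makeSine(1000, 44100, 0.5); // half a second of a 1 kHz tone
// fftAnalyser(tone, 44100, 256, callback, () => console.log("done"));
```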
Upvotes: 3
Reputation: 14464
When you set fftSize on your AnalyserNode, you get fftSize / 2 bins, which is why you're seeing a lot less data than you expect.
Essentially what's happening is that getByteFrequencyData
is only looking at the first 128 samples of your rendered buffer, and the rest simply get ignored.
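The bin arithmetic can be sanity-checked in plain JavaScript (the 44100 Hz sample rate and fftSize of 2048 below are just illustrative values):

```javascript
const sampleRate = 44100;
const fftSize = 2048;
const binCount = fftSize / 2;          // AnalyserNode.frequencyBinCount
const hzPerBin = sampleRate / fftSize; // width of each frequency bin in Hz
const binIndex = f => Math.round(f / hzPerBin); // nearest bin for a frequency
console.log(binCount, hzPerBin.toFixed(2), binIndex(1000)); // 1024 "21.53" 46
```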
Instead, you might want to try using a ScriptProcessorNode with a bufferSize equal to your fftSize. Then, in the ScriptProcessorNode's onaudioprocess event handler, you can grab the buffer and get its FFT. Something like this:
// assumes `analyser` is the AnalyserNode connected between source and processor
var processor = context.createScriptProcessor(fftSize, 1, 1);
source.connect(processor);
processor.onaudioprocess = function (e) {
    var input = e.inputBuffer.getChannelData(0),
        data = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(data);
    // do whatever you want with `data`
};
Upvotes: 2