Reputation: 3
I want to detect played notes and chords using the Web Audio API (using the microphone as an input device). Before I can analyse the data, I need the individual frequencies mapped to their loudness. I started with the following snippet:
const stream = await navigator.mediaDevices.getUserMedia({
  audio: true,
  video: false
});
const context = new AudioContext();
const source = context.createMediaStreamSource(stream);
const analyser = context.createAnalyser();
source.connect(analyser); // without this connection the analyser never receives any audio
const data = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(data); // call repeatedly (e.g. once per animation frame) for fresh data
data now holds values between 0 and 255. The question I have now is: how can I map the frequencies to the loudness values of the data array?
Ideally, I'd like an object like this:
{
  ...
  438: 128,
  439: 200,
  440: 255,
  441: 200,
  ...
}
Thanks for your help.
Upvotes: 0
Views: 290
Reputation: 6048
The value in data[k] corresponds to the frequency k * Nyquist / frequencyBinCount, where Nyquist is one half of the sampling rate, AudioContext.sampleRate.
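In code, a minimal sketch of that mapping (reusing context, analyser, and data from your snippet; Math.round and the object shape are just to match your example):

const nyquist = context.sampleRate / 2;
const loudnessByFrequency = {};
for (let k = 0; k < analyser.frequencyBinCount; k++) {
  // Centre frequency of bin k, in Hz
  const frequency = k * nyquist / analyser.frequencyBinCount;
  loudnessByFrequency[Math.round(frequency)] = data[k];
}

Note that with the default fftSize of 2048 and a 44100 Hz sample rate, the bins are about 21.5 Hz apart, so you won't get the 1 Hz keys from your example object; raising analyser.fftSize (up to 32768) narrows the bins.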
I think that's what you're asking for. If not, please clarify.
Upvotes: 0