Shufi123

Reputation: 233

Getting mp3 file frequency data with js

Is there a way to get an mp3's data (its frequencies) before playing it? I don't need all of it, mind you - it would be enough to get just the frame data for 5 seconds "into the future" (frames that haven't been played yet). I also need this to happen client-side in the browser, not server-side. Is there a way to do this?

Upvotes: 3

Views: 2392

Answers (2)

neryushi

Reputation: 128

You don't have to route the audio data from the Web Audio API to the output. This is the node graph most tutorials use:

    // "source" is assumed to be an <audio> (or <video>) element on the page
    const context = new AudioContext();
    const src = context.createMediaElementSource(source);
    const analyser = context.createAnalyser();
    const listen = context.createGain();

    src.connect(listen);
    listen.connect(analyser);
    analyser.connect(context.destination); // routed to the speakers, so it is audible

If you leave out the connection to context.destination, the analyser still receives the data, but nothing is audible. With code like this:

    const context = new AudioContext();
    const src = context.createMediaElementSource(source);
    const analyser = context.createAnalyser();
    const listen = context.createGain();

    src.connect(listen);
    listen.connect(analyser);
    // no connection to context.destination, so nothing is heard,
    // but the analyser still receives the audio data
you can then play two copies of the same audio file, one audible and one silent copy seeked a few seconds ahead, and read the analysis data from the silent one.
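A minimal sketch of that idea, assuming two `<audio>` elements with the ids `player` and `lookahead` that point to the same file (the ids, the 5 second offset and the 100 ms polling interval are just placeholders for illustration):

    const context = new AudioContext();

    const player = document.getElementById('player');       // audible copy
    const lookahead = document.getElementById('lookahead');  // silent copy

    // audible path: element -> destination
    context.createMediaElementSource(player).connect(context.destination);

    // silent path: element -> analyser, never connected to the destination
    const analyser = context.createAnalyser();
    context.createMediaElementSource(lookahead).connect(analyser);

    lookahead.currentTime = player.currentTime + 5; // 5 seconds "into the future"
    player.play();
    lookahead.play();

    // read the "future" spectrum whenever it is needed
    const bins = new Uint8Array(analyser.frequencyBinCount);
    setInterval(() => {
        analyser.getByteFrequencyData(bins);
        console.log(bins); // frequency data roughly 5 seconds ahead of what is heard
    }, 100);

Keep in mind that autoplay policies usually require a user gesture before the AudioContext starts, and that the two elements can drift apart slightly unless you resync them from time to time.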

Upvotes: 3

chrisguttandin

Reputation: 9066

As you mentioned already, the AnalyserNode that is part of the Web Audio API is meant for real-time usage.

Therefore I would recommend using Meyda. Meyda itself relies on fftjs to compute the spectrum when used in an offline scenario.

Here is a little snippet which logs the amplitude spectrum of every block of 512 samples, without any overlap, for the first 5 seconds of an mp3.

    const Meyda = require('meyda');

    fetch('/my.mp3')
        .then((response) => response.arrayBuffer())
        .then((arrayBuffer) => {
            // It's of course also possible to re-use an existing
            // AudioContext to decode the mp3 instead of creating
            // a new one here.
            const offlineAudioContext = new OfflineAudioContext({
                length: 1,
                sampleRate: 44100
            });

            return offlineAudioContext.decodeAudioData(arrayBuffer);
        })
        .then((audioBuffer) => {
            const signal = new Float32Array(512);

            // walk over the first 5 seconds in non-overlapping blocks of 512 samples
            for (let i = 0; i < audioBuffer.sampleRate * 5; i += 512) {
                audioBuffer.copyFromChannel(signal, 0, i);

                console.log(Meyda.extract('amplitudeSpectrum', signal));
            }
        });

Upvotes: 3
