Reputation: 5098
I would like to use FFmpeg to stream a playlist containing multiple audio files (mainly FLAC and MP3). During playback, I would like FFmpeg to normalize the loudness of the audio signal and to generate a spectrogram and a waveform of the signal separately. The spectrogram and waveform should serve as an audio stream monitor. The final audio stream, spectrogram and waveform outputs will be sent to a browser, which plays the audio stream and continuously renders the spectrogram and waveform "images". I would also like to be able to add audio files to and remove them from the playlist during playback.
As a first step, I would like to achieve the desired result with the ffmpeg command, before I try to write code which does the same programmatically.
(Side note: I've discovered libgroove, which basically does what I want, but I would like to understand the FFmpeg internals and write my own piece of software. The target language is Go, and using either the goav or go-libav library might do the job. However, I might end up writing the code in C and then creating Go language bindings from it, instead of relying on one of the named libraries.)
Here's a little overview:
playlist (input) --> loudnorm --> split --> spectrogram --> separate output
                                    |
                                    +-----> waveform ----> separate output
                                    |
                                    +-----> encode ------> audio stream output
For the loudness normalization, I intend to use the loudnorm filter, which implements the EBU R128 algorithm.
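For reference, a minimal single-pass invocation could look like this (the I/TP/LRA targets are only example values, and a two-pass run with the measured_* options would give more accurate results; the explicit output sample rate is set because loudnorm resamples internally):
ffmpeg -i input.flac \
  -af loudnorm=I=-16:TP=-1.5:LRA=11 \
  -ar 44100 \
  normalized.flac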
For the spectrogram, I intend to use the showspectrum or showspectrumpic filter. Since I want the spectrogram to be "streamable", I'm not really sure how to do this. Maybe there's a way to output segments step-by-step? Or maybe there's a way to output some sort of representation (JSON or any other format) step-by-step?
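As far as I can tell, the non-pic showspectrum filter already produces a continuous video stream, which could be encoded and segmented like any other video. A rough sketch (codec, pixel format and the UDP/MPEG-TS target are just placeholders for whatever the browser side ends up consuming):
ffmpeg -re -i input.flac \
  -filter_complex "[0:a] showspectrum=s=640x518:mode=combined:slide=scroll [v]" \
  -map "[v]" -c:v libx264 -pix_fmt yuv420p \
  -f mpegts udp://127.0.0.1:5678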
For the waveform, I intend to use the showwaves or showwavespic filter. The same applies here as for the spectrogram: the output should be "streamable".
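Again, the non-pic showwaves variant outputs video frames continuously, so a similar sketch should apply (all output parameters are placeholders):
ffmpeg -re -i input.flac \
  -filter_complex "[0:a] showwaves=s=1280x202:mode=line:rate=25 [v]" \
  -map "[v]" -c:v libx264 -pix_fmt yuv420p \
  -f mpegts udp://127.0.0.1:5679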
I'm having a little trouble achieving what I want using the ffmpeg command. Here's what I have so far:
ffmpeg \
-re -i input.flac \
-filter_complex "
[0:a] loudnorm [ln]; \
[ln] asplit [a][b]; \
[a] showspectrumpic=size=640x518:mode=combined [ss]; \
[b] showwavespic=size=1280x202 [sw]
" \
-map '[ln]' -map '[ss]' -map '[sw]' \
-f tee \
-acodec libmp3lame -ab 128k -ac 2 -ar 44100 \
"
[aselect='ln'] rtp://127.0.0.1:1234 | \
[aselect='ss'] ss.png | \
[aselect='sw'] sw.png
"
Currently, I get the following error:
Output with label 'ln' does not exist in any defined filter graph, or was already used elsewhere.
Also, I'm not sure whether aselect is the correct functionality to use. Any hints?
Upvotes: 0
Views: 1211
Reputation: 31
You were pretty close. I don't think aselect is correct; it selects frames to send to the output, rather than streams. Try this instead:
ffmpeg \
-re -i input.flac \
-filter_complex "
[0:a] loudnorm , asplit=3 [a][b][ln];
[a] showspectrumpic=size=640x518:mode=combined [ss];
[b] showwavespic=size=1280x202 [sw]
" \
-map '[ln]' -acodec libmp3lame -ab 128k -ac 2 -ar 44100 -f rtp 'rtp://127.0.0.1:1234' \
-map '[ss]' ss.png \
-map '[sw]' sw.png
Notice that the loudnorm and asplit filters are combined into a filterchain. From the documentation:
A filterchain consists of a sequence of connected filters, each one connected to the previous one in the sequence. A filterchain is represented by a list of ","-separated filter descriptions. A filtergraph consists of a sequence of filterchains. A sequence of filterchains is represented by a list of ";"-separated filterchain descriptions. [1]
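Applied to the command above, the following two forms describe the same processing; the first is a single filterchain of two filters, the second splits it into two chains connected through a labelled pad:
[0:a] loudnorm , asplit=3 [a][b][ln]
[0:a] loudnorm [x]; [x] asplit=3 [a][b][ln]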
Each -map option selects a stream and sends it to the next output file or stream. So in this case the [ln] stream is sent to the rtp output, [ss] is sent to ss.png, and so on.
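For what it's worth, the same pattern works with plain (unfiltered) input streams as well; for example, this sends the same audio stream to two different outputs (the file names are arbitrary):
ffmpeg -i input.flac \
  -map 0:a -c:a libmp3lame out.mp3 \
  -map 0:a -c:a flac out.flac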
Upvotes: 1