Reputation: 934
Currently I am using ffmpeg to generate a video output, from frames in raw format, that is broadcast in OBS.
The problem is that we now want to add an audio track to it; that is, each frame will have both its matrix of pixels and its audio.
I don't know exactly how to make ffmpeg take this composite signal, or what the best methodology is. Currently I pipe the frames to ffmpeg's stdin, using this code:
url = "udp://" + udp_address + ":" + str(udp_port)
command = [
'ffmpeg',
'-loglevel',
'error',
'-re',
'-y',
# Input (input options must come before '-i')
'-f', 'rawvideo',
'-vcodec', 'rawvideo',
'-pix_fmt', 'bgr24',
'-s', str(size[0]) + 'x' + str(size[1]),
'-i', '-',
# Output
'-b:v', '16M',
'-maxrate', '16M',
'-bufsize', '16M',
'-pix_fmt', 'yuv420p',  # the default mpegts encoder does not accept bgr24
'-f', 'mpegts', url
]
proc = subprocess.Popen(command, stdin=subprocess.PIPE)
proc.stdin.write(frame.tobytes())
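One detail worth checking with this stdin approach: ffmpeg's rawvideo demuxer slices stdin into fixed-size frames, so every write must contain exactly one frame's worth of bytes. A quick sketch, assuming a 1280x720 bgr24 frame (the size is a placeholder):

```python
width, height = 1280, 720   # assumed frame size
bytes_per_pixel = 3         # bgr24 packs 3 bytes per pixel
frame_size = width * height * bytes_per_pixel

# Dummy black frame standing in for frame.tobytes() above.
payload = bytes(frame_size)
print(len(payload))  # 2764800
```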
What would be the best methodology to add audio to each frame? Thanks
Upvotes: 6
Views: 7752
Reputation: 133873
Use a named pipe (FIFO). Simplified example for Linux/macOS:
Make named pipes:
mkfifo video
mkfifo audio
Write video and audio to the named pipes. Here ffmpeg itself is used
to generate the video and audio, just for demonstration purposes; your own program would write to the pipes instead.
ffmpeg -y -re -f lavfi -i testsrc2=s=320x240:r=25 -f rawvideo video & ffmpeg -y -re -f lavfi -i sine=r=44100 -f s16le audio
Use the named pipes as inputs for ffmpeg:
ffmpeg -f rawvideo -video_size 320x240 -pixel_format yuv420p -framerate 25 -i video -f s16le -sample_rate 44100 -channels 1 -i audio -t 10 output.mp4
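From Python, the asker's script could drive this two-FIFO setup directly. A minimal sketch, assuming 320x240 bgr24 video at 25 fps and 44100 Hz mono s16le audio (the UDP URL is a placeholder); it creates the pipes, builds the command, and computes the per-frame byte counts, but does not start ffmpeg:

```python
import os
import tempfile

# Assumed parameters -- match them to your real stream.
width, height, fps = 320, 240, 25
sample_rate, channels, bytes_per_sample = 44100, 1, 2  # s16le mono

# One bgr24 frame is width*height*3 bytes; the audio chunk that goes
# with each frame is sample_rate/fps samples.
video_frame_bytes = width * height * 3
audio_samples_per_frame = sample_rate // fps
audio_chunk_bytes = audio_samples_per_frame * channels * bytes_per_sample

# Create the two named pipes in a temporary directory.
fifo_dir = tempfile.mkdtemp()
video_fifo = os.path.join(fifo_dir, "video")
audio_fifo = os.path.join(fifo_dir, "audio")
os.mkfifo(video_fifo)
os.mkfifo(audio_fifo)

# ffmpeg reads raw video from one FIFO and raw audio from the other.
command = [
    "ffmpeg", "-y",
    "-f", "rawvideo", "-pixel_format", "bgr24",
    "-video_size", f"{width}x{height}", "-framerate", str(fps),
    "-i", video_fifo,
    "-f", "s16le", "-sample_rate", str(sample_rate),
    "-channels", str(channels),
    "-i", audio_fifo,
    "-f", "mpegts", "udp://127.0.0.1:1234",
]

# To actually stream: start ffmpeg with subprocess.Popen(command), then
# open each FIFO for writing from its own thread, writing
# video_frame_bytes of pixels and audio_chunk_bytes of samples per
# frame. Two threads matter here: feeding both FIFOs from one thread
# can deadlock, because ffmpeg consumes them concurrently.
print(video_frame_bytes, audio_samples_per_frame, audio_chunk_bytes)
```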
Upvotes: 4
Reputation: 1861
With FFmpeg you can have multiple inputs and then use the -map option to choose which input streams should be used.
For example, if you have one video and one audio fragment, you can use something like this:
ffmpeg -i video.mp4 -i audio.mp3 -map 0:v:0 -map 1:a:0 -c:v copy -c:a copy output.mkv
-map can be used to select the different input streams. In this case:
-map 0:v:0 selects the first video stream from the first input (video.mp4)
-map 1:a:0 selects the first audio stream from the second input (audio.mp3)
To find more information see: https://trac.ffmpeg.org/wiki/Map
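Applied to the question's setup, -map can combine the raw-video stdin pipe with a separate audio input. A hedged sketch in Python; music.mp3, the frame size, and the UDP URL are placeholder assumptions:

```python
# Placeholder parameters for illustration.
width, height, fps = 1280, 720, 25

command = [
    "ffmpeg", "-loglevel", "error", "-re", "-y",
    # Input 0: raw BGR frames on stdin (input options go before -i).
    "-f", "rawvideo", "-pix_fmt", "bgr24",
    "-s", f"{width}x{height}", "-framerate", str(fps),
    "-i", "-",
    # Input 1: an audio file (placeholder name).
    "-i", "music.mp3",
    # Select video from input 0 and audio from input 1.
    "-map", "0:v:0", "-map", "1:a:0",
    "-f", "mpegts", "udp://127.0.0.1:1234",
]
print(" ".join(command))
```

Pass this list to subprocess.Popen with stdin=subprocess.PIPE, as in the question, and keep writing raw frames; ffmpeg muxes the mapped audio alongside them.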
-Devedse
Upvotes: 1