Reputation: 1061
I have an application that records raw LPCM audio data into a buffer. I would like to encapsulate this data in an MPEG transport stream and send that transport stream over UDP to a stream segmenter (per the HTTP Live Streaming specification) on another host.
FFmpeg provides a command-line utility that does this, but with a file as input:
ffmpeg -re -i output.aac -acodec copy -f mpegts udp://127.0.0.1:5555
My first thought was to use the FFmpeg API, particularly the libavformat library. Does libavformat provide a muxer that I could use to encapsulate my LPCM audio in a transport stream, or do I have to implement one from scratch?
I have found this source code https://github.com/FFmpeg/FFmpeg/blob/master/libavformat/mpegts.c but I am not sure if it actually does what I'm looking for.
Thanks for your help,
Upvotes: 1
Views: 6337
Reputation: 742
So, based on your comment that the audio does not necessarily need to stay LPCM inside the TS, you will need to:

- encode the raw LPCM from your buffer to a TS-compatible audio codec such as AAC (libavcodec can do this),
- mux the encoded packets into an MPEG transport stream using libavformat's mpegts muxer, and
- write the muxed output to a udp:// URL pointing at your segmenter.
There is a reasonable example of all this here: https://github.com/rvs/ffmpeg/blob/master/libavformat/output-example.c
As mentioned in the prior answer from szatmary, you could also just pipe the raw audio to ffmpeg, which may be the simplest option.
Upvotes: 1
Reputation: 31140
You can use the TS muxer directly via libavformat. Alternatively, you can pipe the audio to ffmpeg using -i -.
Upvotes: 0