Reputation: 877
I'm using Icecast to stream live audio from internal microphones and want the listener to have as small a latency as possible.
A naive solution would be to simply access http://myhostname:8000/my_mountpoint to get the stream, but the <audio> tag does internal buffering before playing, which results in pretty high latency.
Current solution: I used the ReadableStream API to decode (using decodeAudioData of the Web Audio API) and play chunks of data by routing the decoded audio to an AudioContext destination (the internal speakers). This works and brings the latency down significantly.
Problem: The Streams API, while experimental, should technically work in the latest Chrome, Safari, Opera, and FF (after setting a particular flag). However, I'm having problems with decodeAudioData in every browser except Chrome and Opera. My belief is that FF and Safari cannot decode partial MP3 data, because I usually hear a short activation of the speakers when I start streaming. On Safari, the success callback of decodeAudioData is never called, and FF simply says EncodingError: The given encoding is not supported.
Are there any workarounds if I want to at least get it to work on Safari and FF? Is the decodeAudioData implementation actually different in Chrome and Safari, such that one works on partial MP3 data and the other doesn't?
Upvotes: 1
Views: 6099
Reputation: 9099
As @Brad mentioned, Opus is excellent for low latency. I wrote a low-latency WASM Opus decoder that can be used with Streams, and I started another demo that demonstrates how to play back partial files with decodeAudioData():
Upvotes: 0
Reputation: 163622
I'm using Icecast to stream live audio from internal microphones and want the listener to have as small a latency as possible.
There are a few issues with your setup that prevent the lowest latency possible:
HTTP progressive streaming (the type used by Icecast and similar) runs over TCP, a reliable transport. If a packet is lost, there is some lag while it is re-transmitted before the data is delivered to the browser. This ensures every bit of audio is heard, in order, but it can cause a delay. For low latency, UDP is commonly used instead, so that lost packets are simply skipped over rather than waited for; the listener hears a glitch, but latency stays low.
MP3 isn't a great codec for low latency. There are better codecs, such as Opus, which are more efficient and can generate smaller segments.
As you have seen, clients will buffer more by default, when streaming over HTTP.
However, in ideal conditions, I've streamed over HTTP Progressive with less than 250ms latency in Chrome, with a custom server. I bet you could get similar performance out of Icecast if you turned off the server-side buffering and used a different codec.
However, I'm having problems with decodeAudioData in every browser except Chrome and Opera. My belief is that FF and Safari cannot decode partial MP3 data
Yes, decodeAudioData() is meant for decoding an entire file. You'll have a hard time decoding arbitrarily segmented chunks in a sample-accurate way.
Fortunately, there is a method to do what you want... MediaSource Extensions.
https://developer.mozilla.org/en-US/docs/Web/API/Media_Source_Extensions_API
Basically, you can use that exact same ReadableStream interface and push the data into a SourceBuffer, to eventually be played out of a normal <audio> tag. This does work with normal MP3 audio from Icecast and similar servers, provided you've worked around any cross-origin issues.
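A minimal sketch of that MSE approach, assuming a placeholder mount URL, an existing <audio> element on the page, and that the browser's MediaSource implementation accepts an "audio/mpeg" SourceBuffer (support varies):

```javascript
// Hypothetical Icecast mount point; the server must allow CORS.
const STREAM_URL = "http://myhostname:8000/my_mountpoint";

const audio = document.querySelector("audio");
const mediaSource = new MediaSource();
audio.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener("sourceopen", async () => {
  const sourceBuffer = mediaSource.addSourceBuffer("audio/mpeg");
  const reader = (await fetch(STREAM_URL)).body.getReader();

  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;

    // appendBuffer() is asynchronous; wait for the previous append
    // to finish before pushing the next chunk.
    if (sourceBuffer.updating) {
      await new Promise((resolve) =>
        sourceBuffer.addEventListener("updateend", resolve, { once: true })
      );
    }
    sourceBuffer.appendBuffer(value);
  }
});

// Most browsers require a user gesture before playback starts.
audio.play();
```

Because the `<audio>` element does the decoding and clock management itself, this sidesteps decodeAudioData entirely, which is why it works in browsers that reject partial MP3 chunks.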
Upvotes: 1
Reputation: 2890
Yes, this is an answer, not a comment. Icecast is not designed for such use cases. It was designed for 1-to-n bulk broadcast of data over HTTP in a non-synchronized fashion.
What you are describing sounds like you should really consider something that was designed to be low latency, like WebRTC.
If you think you really should use Icecast, please explain why, because your question suggests otherwise. I'm all for more Icecast use (I'm its maintainer, after all), but its application should make sense.
Upvotes: 4