Reputation: 535
If I use the WebSocket protocol to receive 20ms μ-law samples (for an in-house audio conferencing app), convert them to PCM, and buffer them to absorb jitter (if need be) — is it possible, and if so how, to instruct a browser to play them? Can I enumerate playback devices? How does that work in a sandboxed browser environment?
Using JavaScript with as few plugins as possible. Has anyone around here played with this?
I know you can use WebRTC and SRTP, but I am mainly thinking about composing the audio buffer myself and submitting it for playback.
Upvotes: 3
Views: 3543
Reputation: 1324
For those coming from search engines:
Yes, it is possible. You can fill an AudioBuffer with the audio data
coming from the WebSocket and play it through a createBufferSource() node.
It goes something like this:
```javascript
var context = new (window.AudioContext || window.webkitAudioContext)()
var channels = 1
var sampleRate = 44100
var frames = sampleRate * 3
var buffer = context.createBuffer(channels, frames, sampleRate)
// `data` comes from your WebSocket; first convert it to a Float32Array
buffer.getChannelData(0).set(data)
var source = context.createBufferSource()
source.buffer = buffer
// Then output to the speakers, for example
source.connect(context.destination)
// Don't forget to actually start playback
source.start()
```
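Since the question mentions converting μ-law to PCM first: a minimal G.711 μ-law decoder producing the Float32 samples the snippet above expects might look like this (the function name `ulawToFloat32` is illustrative, not from any library):

```javascript
// Decode 8-bit G.711 mu-law samples into Float32 PCM in [-1, 1].
function ulawToFloat32(bytes) {
  var out = new Float32Array(bytes.length)
  for (var i = 0; i < bytes.length; i++) {
    var u = ~bytes[i] & 0xff            // mu-law bytes are stored complemented
    var sign = u & 0x80
    var exponent = (u >> 4) & 0x07
    var mantissa = u & 0x0f
    // Standard G.711 expansion: ((mantissa * 8 + 132) << exponent) - 132
    var magnitude = (((mantissa << 3) + 0x84) << exponent) - 0x84
    out[i] = (sign ? -magnitude : magnitude) / 32768
  }
  return out
}
```

You would then call something like `buffer.getChannelData(0).set(ulawToFloat32(new Uint8Array(event.data)))` in your WebSocket `onmessage` handler.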
Beware: you may run into performance issues doing this on the UI thread; a worker (or an AudioWorklet) might be better. Also, resampling may be required — μ-law telephony audio is typically 8 kHz, while the AudioContext usually runs at 44.1 or 48 kHz.
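On the resampling point: one option is to create the AudioBuffer with `sampleRate: 8000` (the Web Audio spec requires support for 8000–96000 Hz) and let the browser resample. If you want to do it yourself, a naive linear-interpolation sketch, purely for illustration (real conferencing code would want a proper filter):

```javascript
// Naive linear-interpolation resampler: stretches `input` from
// `fromRate` to `toRate` samples per second.
function resampleLinear(input, fromRate, toRate) {
  var outLen = Math.round(input.length * toRate / fromRate)
  var out = new Float32Array(outLen)
  var step = fromRate / toRate
  for (var i = 0; i < outLen; i++) {
    var pos = i * step
    var i0 = Math.floor(pos)
    var i1 = Math.min(i0 + 1, input.length - 1)
    var frac = pos - i0
    out[i] = input[i0] + (input[i1] - input[i0]) * frac
  }
  return out
}
```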
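As for enumerating playback devices: `navigator.mediaDevices.enumerateDevices()` lists devices with `kind === 'audiooutput'`, though in most browsers the labels are only exposed after the user grants a media permission (e.g. via getUserMedia). A sketch (`listAudioOutputs` is an illustrative name):

```javascript
// Resolve to the list of audio output devices visible to the page.
function listAudioOutputs() {
  return navigator.mediaDevices.enumerateDevices().then(function (devices) {
    return devices.filter(function (d) { return d.kind === 'audiooutput' })
  })
}
```

Actually routing audio to a chosen device is a separate step: `HTMLMediaElement.setSinkId()` is the traditional route, and newer browsers also expose `AudioContext.setSinkId()`, but support varies.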
Upvotes: 4