Raja Mandava

Reputation: 113

webrtc audio device disconnection and reconnection

I have a video call application based on WebRTC. It works as expected. However, while a call is in progress, if I disconnect and reconnect the audio device (mic + speaker), only the speaker part keeps working. The mic part seems to stop working - the other side can't hear me anymore.

Is there any way to inform WebRTC to take audio input again once the audio device is reconnected?

Upvotes: 2

Views: 4090

Answers (1)

jib

Reputation: 42500

Is there any way to inform WebRTC to take audio input again once the audio device is reconnected?

Your question appears simple—the symmetry with speakers is alluring—but once we're dealing with users who have multiple cameras and microphones, it's not that simple: If your user disconnects their bluetooth headset they were using, should you wait for them to reconnect it, or immediately switch to their laptop microphone? If the latter, do you switch back if they reconnect it later? These are application decisions.

The APIs to handle these things are primarily the ended and devicechange events, and the replaceTrack() method. You may also need the deviceId constraint and the enumerateDevices() method to handle multiple devices.
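
For orientation, here is a rough sketch of how enumerateDevices() and the deviceId constraint fit together; the function name and the "pick the first microphone" choice are mine, purely for illustration:

// Illustrative sketch: list audio inputs and request a specific one by deviceId.
// Picking the first microphone is arbitrary; a real app would let the user
// choose or remember a preference.
async function getPreferredMicrophone() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const mics = devices.filter(d => d.kind === "audioinput");
  if (!mics.length) {
    throw new DOMException("No microphone found", "NotFoundError");
  }
  const preferred = mics[0]; // illustrative choice
  return navigator.mediaDevices.getUserMedia({
    audio: {deviceId: {exact: preferred.deviceId}}
  });
}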

However, to keep things simple, let's take the assumptions in your question at face value to explore the APIs:

When the user unplugs their sole microphone (not their camera) mid-call, our job is to resume conversation with it when they reinsert it, without dropping video:

  1. First, we listen to the ended event to learn when our local audio track drops.
  2. When that happens, we listen for a devicechange event to detect re-insertion (of anything).
  3. When that happens, we could check what changed using enumerateDevices(), or simply try getUserMedia again (microphone only this time).
  4. If that succeeds, use await sender.replaceTrack(newAudioTrack) to send our new audio.

In code, that might look like this:

const pc = new RTCPeerConnection(); // configure with your ICE servers as needed
let sender;

(async () => {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({video: true, audio: true});
    pc.addTrack(stream.getVideoTracks()[0], stream);
    sender = pc.addTrack(stream.getAudioTracks()[0], stream);
    // When the microphone is unplugged, the local audio track ends.
    // Start listening for a device to be (re)inserted.
    sender.track.onended = () => navigator.mediaDevices.ondevicechange = tryAgain;
  } catch (e) {
    console.log(e);
  }
})();

async function tryAgain() {
  try {
    // A device was (re)inserted; try to capture audio again.
    const stream = await navigator.mediaDevices.getUserMedia({audio: true});
    // Swap the new audio track into the existing sender without renegotiating.
    await sender.replaceTrack(stream.getAudioTracks()[0]);
    navigator.mediaDevices.ondevicechange = null;
    // Re-arm the handler in case the microphone is unplugged again.
    sender.track.onended = () => navigator.mediaDevices.ondevicechange = tryAgain;
  } catch (e) {
    if (e.name === "NotFoundError") return; // no microphone yet; keep waiting
    console.log(e);
  }
}

// Your usual WebRTC negotiation code goes here

The above is for illustration only. I'm sure there are lots of corner cases to consider.
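
For instance, devicechange fires when any device is added or removed (a camera or headset counts too), so one possible refinement, again only a sketch with a hypothetical helper name, is to check with enumerateDevices() that an audio input is actually present before prompting again:

// Hypothetical variation on tryAgain(): only re-request the microphone
// if an audioinput device is actually present, since devicechange also
// fires for cameras and speakers.
async function tryAgainIfMicPresent() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  if (!devices.some(d => d.kind === "audioinput")) return;
  return tryAgain();
}

// Usage: navigator.mediaDevices.ondevicechange = tryAgainIfMicPresent;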

Upvotes: 4
