Reputation: 36
I am trying to show my laptop's speaker output level in my application. I am new to WebRTC and the Web Audio API, so I wanted to confirm whether this feature is possible. The application is an Electron app with a calling feature, so when the user at the other end of the call speaks, the application should display an output level that varies according to the sound. I have tried using WebRTC and the Web Audio API, and have even followed a sample. I am able to log values, but they change when I speak into the microphone, whereas I need the values of the speaker only, not the microphone.
export class OutputLevelsComponent implements OnInit {

  constructor() { }

  ngOnInit(): void {
    this.getAudioLevel();
  }

  getAudioLevel() {
    try {
      navigator.mediaDevices.enumerateDevices().then(devices => {
        console.log("devices:", devices);
        // For demo purposes, pick a fixed device from the list.
        const constraints = {
          audio: {
            deviceId: devices[3].deviceId
          }
        };
        navigator.mediaDevices.getUserMedia(constraints).then((stream) => {
          console.log("stream test: ", stream);
          this.handleSuccess(stream);
        });
      });
    } catch (e) {
      console.log("error getting media devices: ", e);
    }
  }

  handleSuccess(stream: any) {
    console.log("stream: ", stream);
    const context = new AudioContext();
    // Note: ScriptProcessorNode is deprecated but still widely supported.
    const processor = context.createScriptProcessor(1024, 1, 1);
    const source = context.createMediaStreamSource(stream);
    source.connect(processor);
    // source.connect(context.destination);
    processor.connect(context.destination);

    processor.onaudioprocess = function (e) {
      // Find the peak sample value in this buffer.
      const input = e.inputBuffer.getChannelData(0);
      let max = 0;
      for (let i = 0; i < input.length; i++) {
        max = input[i] > max ? input[i] : max;
      }
      if (max > 0.01) {
        console.log("max: ", max);
      }
    };
  }
}
I have tried the above code, where I use enumerateDevices() and getUserMedia(), which give me the set of devices; for demo purposes I am taking the last device whose kind property is 'audiooutput' and accessing that device's stream.
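For reference, the same peak metering can be written with an AnalyserNode instead of the deprecated ScriptProcessorNode. This is only a sketch: the helper levelFromSamples is my own name, and the wiring at the bottom assumes a browser environment with a stream already obtained.

```typescript
// Peak deviation from silence in a Uint8Array of time-domain samples,
// normalised to 0..1. getByteTimeDomainData() centres silence at 128.
function levelFromSamples(data: Uint8Array): number {
  let max = 0;
  for (let i = 0; i < data.length; i++) {
    const v = Math.abs(data[i] - 128) / 128;
    if (v > max) max = v;
  }
  return max;
}

// Hypothetical browser-only wiring (assumes `stream` is a MediaStream):
//
// const context = new AudioContext();
// const analyser = context.createAnalyser();
// analyser.fftSize = 2048;
// context.createMediaStreamSource(stream).connect(analyser);
// const buf = new Uint8Array(analyser.fftSize);
// function tick() {
//   analyser.getByteTimeDomainData(buf);
//   console.log("level:", levelFromSamples(buf));
//   requestAnimationFrame(tick);
// }
// tick();
```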
Please let me know if this is even possible with the Web Audio API. If not, is there another tool that could help me implement this feature?
Thanks in advance.
Upvotes: 1
Views: 1305
Reputation: 109
The problem is likely linked to the machine you are running on. On macOS, there is no way to capture system audio output from browser APIs, as that would require a signed kernel extension. Potential workarounds are virtual loopback drivers such as BlackHole or Soundflower. On Windows, the code should work fine, though.
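With one of those loopback drivers installed, the app would capture it as an audio *input* device. A minimal sketch of picking it out of enumerateDevices(); matching on the label text is an assumption about how the driver presents itself, and findLoopbackDevice is my own name:

```typescript
// Minimal shape of a MediaDeviceInfo entry, so the matcher is testable
// without a browser.
interface DeviceLike {
  kind: string;
  label: string;
  deviceId: string;
}

// Return the first audio input whose label suggests a loopback driver.
function findLoopbackDevice(devices: DeviceLike[]): DeviceLike | undefined {
  return devices.find(
    d => d.kind === "audioinput" && /blackhole|soundflower/i.test(d.label)
  );
}

// Hypothetical browser usage:
//
// const devices = await navigator.mediaDevices.enumerateDevices();
// const loopback = findLoopbackDevice(devices);
// if (loopback) {
//   const stream = await navigator.mediaDevices.getUserMedia({
//     audio: { deviceId: { exact: loopback.deviceId } }
//   });
// }
```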
Upvotes: 1
Reputation: 9176
You would need to use your handleSuccess() function with the stream that you get from the remote end. That stream is usually exposed as part of the track event.
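A sketch of that wiring. The helper just picks the first associated stream off a track-event-like object so it can be unit-tested; the actual hookup (pc being your RTCPeerConnection) is browser-only and shown as comments, and remoteStreamFrom is my own name:

```typescript
// Minimal shape of the RTCTrackEvent fields we care about.
interface TrackEventLike<S> {
  streams: S[];
}

// First associated remote stream from a track event, if any.
function remoteStreamFrom<S>(ev: TrackEventLike<S>): S | undefined {
  return ev.streams.length > 0 ? ev.streams[0] : undefined;
}

// Hypothetical browser usage, with pc an RTCPeerConnection:
//
// pc.ontrack = (ev) => {
//   const stream = remoteStreamFrom(ev);    // ev.streams[0]
//   if (stream) this.handleSuccess(stream); // meter the far end, not the mic
// };
```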
Upvotes: 0