Reputation: 1
I have combed through the Azure Communication Services docs back and forth and can't find any place where annotations are supported for a group call using the SDK in JavaScript: https://learn.microsoft.com/en-us/azure/communication-services/quickstarts/voice-video-calling/getting-started-with-calling?tabs=uwp&pivots=platform-web
For the moment, I'm having to capture the remote video stream, overlay a canvas on top of it, draw on the canvas with FabricJS, and then return the result as a combined video stream as if it were my own video feed (i.e. as if it came from my webcam). This has a lot of unintended side effects: the stream sent back to the remote participant is heavily compressed, and the aspect ratio / zoom is corrupted so that they only see the very center of the annotated video.
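I suspect part of the center-crop effect comes from drawing the video into the composite canvas using the element's on-screen size rather than its intrinsic resolution. Here is a sketch of the aspect-ratio-preserving ("contain") draw I'm considering; `fitContain` and `drawVideoFrame` are my own helper names, not part of any SDK:

```javascript
// Compute a letterboxed "contain" rectangle for drawing a source of
// (srcW x srcH) into a destination of (dstW x dstH) without cropping.
function fitContain(srcW, srcH, dstW, dstH) {
  const scale = Math.min(dstW / srcW, dstH / srcH);
  const w = srcW * scale;
  const h = srcH * scale;
  return { x: (dstW - w) / 2, y: (dstH - h) / 2, w, h };
}

// Usage inside a draw loop (browser only): use the video's intrinsic
// videoWidth/videoHeight, not the element's CSS clientWidth/clientHeight.
function drawVideoFrame(ctx, videoElement, canvasW, canvasH) {
  const { x, y, w, h } = fitContain(
    videoElement.videoWidth, videoElement.videoHeight, canvasW, canvasH);
  ctx.clearRect(0, 0, canvasW, canvasH);
  ctx.drawImage(videoElement, x, y, w, h);
}
```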
Needless to say, I'm wondering if anyone has worked with this before and knows whether Microsoft has any official way of adding annotations, or whether a better workaround / library exists. Here is the relevant portion of my code:
async annotate() {
    try {
        // Get the video element rendering the remote stream
        const videoElement = this.technicianVideoContainer.querySelector('video');
        if (!videoElement) {
            console.error('No video element found');
            return;
        }
        // Create a container for the annotation canvas, positioned over the video
        const canvasContainer = document.createElement('div');
        canvasContainer.className = 'canvas-container';
        canvasContainer.style.width = `${videoElement.clientWidth}px`;
        canvasContainer.style.height = `${videoElement.clientHeight}px`;
        canvasContainer.style.position = 'absolute';
        canvasContainer.style.left = `${videoElement.offsetLeft}px`;
        canvasContainer.style.top = `${videoElement.offsetTop}px`;
        canvasContainer.style.objectFit = 'none';
        canvasContainer.style.zIndex = '100';
        this.technicianVideoContainer.appendChild(canvasContainer);
        // Create a fabric canvas inside the container, sized to the video element
        this.canvas = new fabric.Canvas(document.createElement('canvas'));
        this.canvas.setWidth(videoElement.clientWidth);
        this.canvas.setHeight(videoElement.clientHeight);
        canvasContainer.appendChild(this.canvas.wrapperEl);
        // Stack the canvas above the video element
        this.canvas.wrapperEl.style.zIndex = '101';
        // Add the video element to the canvas as an image
        const videoImage = new fabric.Image(videoElement, {
            left: 0,
            top: 0,
            angle: 0,
            selectable: false // non-selectable for now; drawing happens later
        });
        this.canvas.add(videoImage);
        // Re-render on an interval so the video frame updates
        // (note: this interval is never cleared, so the id should be stored
        // and cleared when the canvas is destroyed)
        setInterval(() => {
            if (this.canvas) {
                this.canvas.requestRenderAll();
            }
        }, 100);
        await this.toggleDrawingMode();
    } catch (error) {
        console.error('Error annotating video:', error);
    }
}
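Aside: the `setInterval` above re-renders at a fixed 10 Hz regardless of the actual video frame rate. A per-frame scheduler I've been experimenting with, assuming a Chromium-based browser where `HTMLVideoElement.requestVideoFrameCallback` is available (`scheduleFrame` is my own helper name):

```javascript
// Schedule `cb` for the video's next frame, preferring the per-frame
// requestVideoFrameCallback API and falling back to a plain timer.
// Returns which mechanism was used, mainly for debugging.
function scheduleFrame(video, cb, fallbackMs = 33) {
  if (typeof video.requestVideoFrameCallback === 'function') {
    video.requestVideoFrameCallback(() => cb());
    return 'rvfc';
  }
  setTimeout(cb, fallbackMs);
  return 'timer';
}

// Usage: re-render the fabric canvas once per decoded video frame.
// function renderLoop(video, fabricCanvas) {
//   fabricCanvas.requestRenderAll();
//   scheduleFrame(video, () => renderLoop(video, fabricCanvas));
// }
```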
async toggleDrawingMode() {
    if (!this.canvas) {
        console.error('Canvas is not initialized.');
        return;
    }
    this.canvas.isDrawingMode = !this.canvas.isDrawingMode;
    this.canvas.freeDrawingBrush.color = 'red';
    this.canvas.freeDrawingBrush.width = 2;
    if (this.canvas.isDrawingMode) {
        // Enable drawing mode
        this.canvas.on('mouse:down', (options) => {
            if (options && options.e) {
                const pointer = this.canvas.getPointer(options.e);
                const eventOptions = { e: options.e };
                this.canvas.freeDrawingBrush.onMouseDown(pointer, eventOptions);
                this.canvas.renderAll();
            }
        });
        this.canvas.on('mouse:up', async (options) => {
            if (options && options.e) {
                const remoteVideoStream = this.remoteVideoStreams.find(stream => stream.isAvailable);
                if (!remoteVideoStream) {
                    console.error('No available remote video stream found');
                    return;
                }
                const technicianVideoStream = await remoteVideoStream.getMediaStream();
                const videoElement = document.createElement('video');
                videoElement.srcObject = technicianVideoStream;
                videoElement.play();
                // Create a new canvas to combine video and annotations
                const compositeCanvas = document.createElement('canvas');
                compositeCanvas.width = this.canvas.getWidth();
                compositeCanvas.height = this.canvas.getHeight();
                const ctx = compositeCanvas.getContext('2d');
                const drawFrame = () => {
                    ctx.clearRect(0, 0, compositeCanvas.width, compositeCanvas.height);
                    ctx.drawImage(videoElement, 0, 0, compositeCanvas.width, compositeCanvas.height);
                    // Re-render the fabric.js canvas, then draw it onto the composite canvas
                    this.canvas.renderAll();
                    ctx.drawImage(this.canvas.getElement(), 0, 0);
                    requestAnimationFrame(drawFrame);
                };
                videoElement.addEventListener('play', () => {
                    drawFrame();
                });
                // Capture the combined stream at 60 FPS
                const combinedStream = compositeCanvas.captureStream(60);
                const combinedLocalVideoStream = new LocalVideoStream(combinedStream);
                // Start sending the combined video stream in the call
                if (!this.localVideoOn) {
                    await this.call.startVideo(combinedLocalVideoStream);
                    this.localVideoOn = true;
                }
            }
        });
    } else {
        // Disable drawing mode
        this.canvas.off('mouse:down');
        this.canvas.off('mouse:up');
    }
}
subscribeToRemoteVideoStream = async (remoteVideoStream: RemoteVideoStream, remoteParticipantUserId: string) => {
    let renderer = new VideoStreamRenderer(remoteVideoStream);
    let view;
    this.remoteVideoContainer = document.createElement('div');
    this.remoteVideoContainer.className = 'remote-video-container';
    const videoContainerStyle = 'height: 100%; border: solid';
    this.remoteVideoContainer.setAttribute('style', videoContainerStyle);
    let loadingSpinner = document.createElement('div');
    loadingSpinner.className = 'loading-spinner';
    const isUserTechnician = this.videoData.usersAndTokens[0].user.id === remoteParticipantUserId;
    remoteVideoStream.on('isReceivingChanged', async () => {
        try {
            console.log('video changed');
            if (remoteVideoStream.isAvailable) {
                // Tear down any existing annotation canvas and rebuild it
                if (this.canvas) {
                    await this.stopVideo();
                    this.destroyCanvas();
                    await this.annotate();
                }
                this.remoteVideoStreams = this.remoteVideoStreams.concat(remoteVideoStream);
                // Toggle the loading spinner based on whether frames are arriving
                const isReceiving = remoteVideoStream.isReceiving;
                const isLoadingSpinnerActive = this.remoteVideoContainer.contains(loadingSpinner);
                if (!isReceiving && !isLoadingSpinnerActive) {
                    this.remoteVideoContainer.appendChild(loadingSpinner);
                } else if (isReceiving && isLoadingSpinnerActive) {
                    this.remoteVideoContainer.removeChild(loadingSpinner);
                }
                this.remoteVideoOn = true;
            } else {
                this.remoteVideoOn = false;
            }
        } catch (e) {
            console.error(e);
        }
    });
}
The rest of my component / HTML follows the sample code in the link above. Thank you for reading!
I have tried both FabricJS and Konva. FabricJS achieves the result described above: the remote participant receives a compressed, somewhat corrupted annotated video. Konva had a promising start, but despite keeping the code as close to the FabricJS version as possible (swapping in Konva's implementations), I get no video back as the remote participant. Since both of these are less than ideal, I would really like an official way to do this from Microsoft, or a better workaround than 'faking' my device video as the remote video plus annotations.
Upvotes: 0
Views: 116
Reputation: 1308
When using canvas.captureStream(), the video quality is degraded because the canvas API is not optimized for high-quality video streaming. The canvas operates on 2D graphics, and when it is asked to render high-quality video frames it can introduce compression artifacts and aspect-ratio distortions.
Instead of compositing the video and the drawings into one canvas.captureStream() feed, leave the video stream untouched and send the annotations separately. This way the video quality is maintained, and the annotations are overlaid by the remote client. Using the ACS SDK, subscribe to the remote participant's video stream as usual; that unmodified stream is the starting point.
// Overlay a fabric.js canvas on top of the (untouched) remote video element
const videoElement = document.querySelector('video');
const canvas = new fabric.Canvas(document.createElement('canvas'));
canvas.setWidth(videoElement.clientWidth);
canvas.setHeight(videoElement.clientHeight);
document.body.appendChild(canvas.wrapperEl);

// For comparison, the compositing approach from the question:
const compositeCanvas = document.createElement('canvas');
compositeCanvas.width = canvas.getWidth();
compositeCanvas.height = canvas.getHeight();
const combinedStream = compositeCanvas.captureStream(30);

// Instead, send only the annotation coordinates over a data channel:
dataChannel.send(JSON.stringify({ x: pointer.x, y: pointer.y }));
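On the receiving side, those coordinates can be replayed onto a local overlay canvas instead of being baked into the video. A minimal sketch, assuming the sender normalizes coordinates to [0, 1] so they survive different viewer sizes; `toCanvasPoint`, `handleAnnotationMessage`, and the message shape are illustrative assumptions, not part of the ACS SDK:

```javascript
// Convert a normalized annotation point into pixel coordinates for a
// local overlay canvas of the given size.
function toCanvasPoint(norm, canvasW, canvasH) {
  return { x: norm.x * canvasW, y: norm.y * canvasH };
}

// Usage (browser only): draw each received point onto an overlay canvas
// positioned absolutely over the remote <video> element.
function handleAnnotationMessage(json, overlayCtx, canvasW, canvasH) {
  const norm = JSON.parse(json);                     // { x: 0..1, y: 0..1 }
  const p = toCanvasPoint(norm, canvasW, canvasH);
  overlayCtx.fillStyle = 'red';
  overlayCtx.beginPath();
  overlayCtx.arc(p.x, p.y, 2, 0, 2 * Math.PI);       // small red dot per point
  overlayCtx.fill();
}
```

Because only small JSON messages cross the wire, the remote participant keeps the original video quality, and each side can scale the overlay to its own layout.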
As another option, I have used the Kurento media server. It can handle real-time video streams, combine them with annotations server-side, and then send the combined result to all participants.
Upvotes: 0