sçuçu

Reputation: 3070

Is it possible to add a stream as a source to an HTML canvas element, as one can with an HTML video element?

According to MDN:

The HTMLMediaElement interface adds to HTMLElement the properties and methods needed to support basic media-related capabilities that are common to audio and video.

And that HTMLMediaElement.captureStream() can be used with both <video> and <canvas> elements to capture their stream.

Conversely, one can assign a video stream to a <video> element's srcObject, and it will display it. Is the same possible for a <canvas> element?

Is it possible to add a stream as source to an html <canvas> element?

Upvotes: 13

Views: 14225

Answers (2)

Kaiido

Reputation: 136986

No, there is nothing in any of the canvas APIs able to consume a MediaStream.

The canvas APIs work only with raw pixels and contain no decoder of any sort. You must use either JavaScript objects that can do this decoding (e.g. ImageBitmap), or HTMLElements.

So in the case of a MediaStream, currently the only object able to decode its video content is an HTMLVideoElement, which you can then easily draw onto your canvas.
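A minimal sketch of that approach, assuming `stream` is a MediaStream you already hold (e.g. from getUserMedia); `paintFrame` is an illustrative helper name, not a standard API, and the browser-only wiring is guarded so the helper itself stays environment-agnostic:

```javascript
// Illustrative helper: blit the video's current frame onto the canvas,
// resizing the canvas when the video's intrinsic size changes.
function paintFrame(ctx, video) {
  const canvas = ctx.canvas;
  if (canvas.width !== video.videoWidth || canvas.height !== video.videoHeight) {
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
  }
  ctx.drawImage(video, 0, 0);
}

// Browser-only wiring, guarded so the helper above can run anywhere.
if (typeof document !== "undefined") {
  const video = document.createElement("video");
  video.muted = true;           // required for autoplay in most browsers
  // video.srcObject = stream;  // `stream` is your MediaStream
  video.play();
  const ctx = document.querySelector("canvas").getContext("2d");
  (function loop() {
    requestAnimationFrame(loop);
    paintFrame(ctx, video);
  })();
}
```

The resize check matters because a MediaStream's video track can change resolution mid-stream, as Brad's answer below also notes.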


2021 update

The WebCodecs API has made great progress recently and is now mature enough to be worth mentioning as a solution.

This API offers a new interface called VideoFrame, which will soon be part of the CanvasImageSource union, meaning we can use it directly with drawImage, texImage2D, and everywhere else a CanvasImageSource can be used.
The W3C MediaCapture Transform group has developed a MediaStreamTrackProcessor that returns such VideoFrames from a video MediaStreamTrack.

So we now have a more direct way to render a MediaStream to a canvas. It currently works only in Chrome with the #enable-experimental-web-platform-features flag enabled...

if( window.MediaStreamTrackProcessor ) {
  const canvas = document.querySelector("canvas");
  const ctx = canvas.getContext("2d");
  const track = getCanvasTrack(); // MediaStream.getVideoTracks()[0]
  const processor = new MediaStreamTrackProcessor( track );
  const reader = processor.readable.getReader();
  readChunk();
  function readChunk() {
    reader.read().then( ({ done, value }) => {
      if( done ) {
        return; // the track has ended, no more frames to read
      }
      // the MediaStream video can have dynamic size
      if( canvas.width !== value.displayWidth || canvas.height !== value.displayHeight ) {
        canvas.width = value.displayWidth;
        canvas.height = value.displayHeight;
      }
      ctx.clearRect( 0, 0, canvas.width, canvas.height );
      // value is a VideoFrame
      ctx.drawImage( value, 0, 0 );
      value.close(); // close the VideoFrame when we're done with it
      readChunk();
    });
  }
}
else {
  console.error("Your browser doesn't support this API yet");
}

// We can't use getUserMedia in StackSnippets
// So here we use a simple canvas as source
// for our MediaStream.
function getCanvasTrack() {
  // just some noise...
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d");
  const img = new ImageData(300, 150);
  const data = new Uint32Array(img.data.buffer);
  const track = canvas.captureStream().getVideoTracks()[0];

  anim();
  
  return track;
  
  function anim() {
    for( let i=0; i<data.length;i++ ) {
      data[i] = Math.random() * 0xFFFFFF + 0xFF000000;
    }
    ctx.putImageData(img, 0, 0);
    if( track.readyState === "live" ) {
      requestAnimationFrame(anim);
    }
  }
  
}
<canvas></canvas>

Here it is as a Glitch project (source), using the camera as the source.

Upvotes: 11

Brad

Reputation: 163438

@Kaiido is correct in that there isn't any way to do this directly. So, here's what you must do: play the stream in a <video> element (which can stay off-DOM or hidden), then copy its frames to the canvas on every animation frame:

// `video` is a <video> element whose srcObject is your MediaStream,
// and `canvasContext` is the 2D context of your target canvas.
function onFrame() {
  window.requestAnimationFrame(onFrame);
  canvasContext.drawImage(video, 0, 0);
}
onFrame();

A couple of gotchas you're going to run into:

  • Your source video can change resolution mid-stream. This is very common in WebRTC calls where the source may scale the actual pixel resolution due to bandwidth or CPU constraints. One way around this is to check the size of the video every single frame you draw, and scale accordingly on your canvas.
  • This frame loop doesn't run at speed when the tab doesn't have focus. If you're relying on captureStream from this canvas as well, due to throttling policies, it isn't going to work if the tab doesn't have focus.
  • The canvas buffer doesn't update when the tab doesn't have focus, so even if you hack around the timer issue with an audio script node or something, it won't work if you want to use captureStream from the canvas as well.
  • Remember that there is no "genlock" here. For every frame you copy to the canvas, an arbitrary number of frames (possibly zero!) could have passed by on the video. This might not matter for your situation.
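One way to soften the last point, where supported, is video.requestVideoFrameCallback(), which fires once per presented video frame rather than once per display refresh (currently Chromium-based browsers). A sketch with the scheduler injected so the loop logic stays testable; `startDrawLoop` and the returned stop function are illustrative names, not any standard API:

```javascript
// Illustrative loop driver: `scheduleFrame` decides when the next copy
// happens, so the same loop works with rVFC, rAF, or a test harness.
function startDrawLoop(video, ctx, scheduleFrame) {
  let running = true;
  function onFrame() {
    if (!running) return;
    ctx.drawImage(video, 0, 0);
    scheduleFrame(onFrame);
  }
  scheduleFrame(onFrame);
  return () => { running = false; }; // returns a stop() function
}

// Browser usage, guarded so the driver above can run anywhere:
if (typeof document !== "undefined") {
  const video = document.querySelector("video");
  const ctx = document.querySelector("canvas").getContext("2d");
  const schedule = video.requestVideoFrameCallback
    ? (cb) => video.requestVideoFrameCallback(cb)  // once per presented frame
    : (cb) => requestAnimationFrame(cb);           // once per display refresh
  const stop = startDrawLoop(video, ctx, schedule);
  // later: stop();
}
```

Note this does not lift the background-tab throttling limits described above; it only aligns the copy with the video's own frame cadence while the tab is visible.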

Upvotes: 4
