Reputation: 4464
For a scientific application, I need to do live processing of the video stream received from the web cam in JavaScript.
WebRTC makes it simple to display the live stream on a web site, and using a <canvas> also makes it possible to take and process screenshots.
I need to track the brightness of the video image. Therefore, what I need is a stream of a few pixels (their RGB values) over time. It seems very inefficient to copy the <video> to a <canvas> 30 times per second just to have a still image and analyse a few pixels...
Is there any way to access the video content of a MediaStreamTrack more directly?
Upvotes: 2
Views: 2525
Reputation: 42430
Is there any way to access the video content of a MediaStreamTrack more directly?
Not real-time, no. The W3C has discussed adding such an API in workers, but none exists today.
The MediaRecorder API comes close: it can give you blobs of data at some millisecond interval (see start(timeslice)), but it's not real-time.
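For illustration, a minimal sketch of that approach (the 100 ms timeslice is just an example value; note the chunks you receive are encoded video, not raw pixel data):

// Sketch: MediaRecorder with a timeslice delivers encoded chunks roughly
// every 100 ms — close, but not real-time access to frames.
(async () => {
  const stream = await navigator.mediaDevices.getUserMedia({video: true});
  const recorder = new MediaRecorder(stream);
  recorder.ondataavailable = e => console.log('got a chunk of', e.data.size, 'bytes');
  recorder.start(100); // timeslice in milliseconds
})();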
It seems very inefficient...
Modern browsers have background threads to do heavy lifting like downscaling, so I'd caution against premature optimization. Things generally only slow down when pixel data is exposed en masse to main-thread JavaScript. Therefore I'd worry less about your camera resolution than about the size of your canvas.
If you only need a few pixels for brightness, make your canvas real tiny. The overhead should be low. E.g.:
const video = document.createElement('video'); // off-screen video element to hold the stream
const ctx = canvas.getContext('2d');

(async () => {
  video.srcObject = await navigator.mediaDevices.getUserMedia({video: true});
  await new Promise(r => video.onloadedmetadata = r);
  await video.play(); // frames only advance once the video is playing

  requestAnimationFrame(function loop() {
    ctx.drawImage(video, 0, 0, 16, 12); // browser downscales the frame into the tiny canvas
    requestAnimationFrame(loop);
  });
})();
<canvas id="canvas" width="16" height="12"></canvas>
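From there, reading the brightness back is cheap because the canvas is so small. A sketch of one way to do it, reusing the ctx and 16×12 size from the snippet above (the Rec. 709 luma weights are just one common choice):

// Read back all 16×12 pixels and average their luma as a rough brightness value.
const {data} = ctx.getImageData(0, 0, 16, 12);
let sum = 0;
for (let i = 0; i < data.length; i += 4) {
  sum += 0.2126 * data[i] + 0.7152 * data[i + 1] + 0.0722 * data[i + 2];
}
const brightness = sum / (data.length / 4); // average on a 0–255 scale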
If overhead is still a concern, I'd reduce the frame rate.
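For example, one way to sample only every ~200 ms instead of on every animation frame (the interval is arbitrary; video and ctx are reused from above):

// Throttle the draw loop to roughly 5 samples per second.
let last = 0;
requestAnimationFrame(function loop(now) {
  if (now - last >= 200) {
    last = now;
    ctx.drawImage(video, 0, 0, 16, 12);
  }
  requestAnimationFrame(loop);
});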
Upvotes: 3