user23967137

Reputation: 1

How to encode an MP4 file with WebCodecs' VideoEncoder API

I'm trying to make a tool that uses WebCodecs' VideoEncoder API to compress/convert a submitted video file locally. The problem is that the output video is only a fraction of a second long and full of noise (no image). I've used Vanilagy's MP4 Muxer for multiplexing. Here's my code:

import { Muxer, ArrayBufferTarget } from "mp4-muxer";

async function main() {
  const videoFile = files[0];
  const arrayBuffer = await readFileAsArrayBuffer(videoFile);

  let muxer = new Muxer({
    target: new ArrayBufferTarget(),
    video: {
      codec: "avc",
      width: 1280,
      height: 720,
    },
    fastStart: "in-memory",
    firstTimestampBehavior: "offset",
  });

  const frame = new VideoFrame(arrayBuffer, {
    format: "I420",
    codedWidth: 1280,
    codedHeight: 720,
    timestamp: 0,
    duration: 0,
  });

  let encoder = new VideoEncoder({
    output: (chunk, meta) => muxer.addVideoChunk(chunk, meta),
    error: (e) => console.error(e),
  });

  encoder.configure({
    codec: "avc1.64001f",
    width: 1280,
    height: 720,
    bitrate: 2_000_000, // 2 Mbps
    framerate: 25,
  });

  encoder.encode(frame, { keyFrame: true });

  frame.close();

  await encoder.flush();
  muxer.finalize();

  let { buffer } = muxer.target;

  downloadBlob(new Blob([buffer]));
}

Link to REPL so you can test it

I've tried different video codecs and even importing the video from an HTML video element, but then I only got some of the video's frames, not its full length.

I've already seen this thread, but couldn't find a solution there either.

Upvotes: 0

Views: 946

Answers (2)

pedrobroese

Reputation: 56

You have a conceptual problem: you cannot create VideoFrames directly from an ArrayBuffer read from the video file, because that buffer contains the MP4 container and encoded bitstream, not raw pixel data. To generate the VideoFrames you have two options:

First and simplest: as suggested above, you can turn your video file into a video stream and then paint it to a canvas, or even better, use the MediaStreamTrackProcessor API.

Although this is the simplest, there is a big drawback: you won't get a consistent frame rate. This happens because when you use the browser's playback APIs, you leave decoding control to the browser, and its algorithms occasionally skip frames.
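
A minimal sketch of this first option, assuming the file has been loaded into a playing video element named video, and that encoder is the configured VideoEncoder from the question (note that MediaStreamTrackProcessor currently has limited browser support):

// Pull VideoFrames straight from the element's capture stream.
const [track] = video.captureStream().getVideoTracks();
const processor = new MediaStreamTrackProcessor({ track });
const reader = processor.readable.getReader();

while (true) {
  const { done, value: frame } = await reader.read();
  if (done) break; // the stream ends when the track ends
  encoder.encode(frame); // frames arrive with their timestamps already set
  frame.close();
}
await encoder.flush();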

Second and hardest, but it gives you total control: use MP4Box (mp4box.js) to extract the EncodedVideoChunk for each frame and then decode them with a VideoDecoder. By doing so, you'll have access to every frame of the video file as well as its metadata. At this URL you can find a fully functional example: https://w3c.github.io/webcodecs/samples/video-decode-display/
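
A rough sketch of the decode side of that approach, again assuming encoder is the configured VideoEncoder from the question; description (the track's AVCDecoderConfigurationRecord) and chunks (the demuxed EncodedVideoChunks) are placeholders for what mp4box.js would produce, as shown in the linked sample:

const decoder = new VideoDecoder({
  output: (frame) => {
    encoder.encode(frame); // re-encode each decoded frame
    frame.close();
  },
  error: (e) => console.error(e),
});

decoder.configure({
  codec: "avc1.64001f", // must match the source track
  description, // placeholder: AVCDecoderConfigurationRecord from mp4box.js
});

for (const chunk of chunks) decoder.decode(chunk); // placeholder: demuxed chunks
await decoder.flush();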

Upvotes: 0

Konstantin Paulus

Reputation: 2195

As mentioned in this example, Video Processing with WebCodecs, valid input sources for a VideoFrame are:

  • Canvas
  • ImageBitmap
  • MediaStreamTrack

What you can do to achieve the desired result is to paint each individual frame of the input video onto a canvas, which can then be passed as the first argument of the VideoFrame constructor.
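
A minimal sketch of that approach, assuming video is a playing HTMLVideoElement and encoder is the configured VideoEncoder from the question (requestVideoFrameCallback is not supported in every browser):

const canvas = document.createElement("canvas");
canvas.width = 1280;
canvas.height = 720;
const ctx = canvas.getContext("2d");

function grabFrame(now, metadata) {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  const frame = new VideoFrame(canvas, {
    timestamp: metadata.mediaTime * 1_000_000, // VideoFrame timestamps are in microseconds
  });
  encoder.encode(frame);
  frame.close();
  if (!video.ended) video.requestVideoFrameCallback(grabFrame);
}
video.requestVideoFrameCallback(grabFrame);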

Upvotes: 0
