thelearner

Reputation: 1136

Adding watermark to video is extremely slow

I am using AVComposition to render a watermark onto a video. This process takes around 15 seconds, which doesn't seem OK for a 20-second video. My export settings are:

    let exporter = AVAssetExportSession.init(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
    exporter?.outputURL = outputPath
    exporter?.outputFileType = AVFileType.mp4
    exporter?.shouldOptimizeForNetworkUse = true
    exporter?.videoComposition = mainCompositionInst
    DispatchQueue.main.async {
        exporter?.exportAsynchronously(completionHandler: {

            if exporter?.status == AVAssetExportSessionStatus.completed {
                completion(true, exporter)
            }else{
                completion(false, exporter)
            }

        })
    }

This is how I add the watermark:

    //Creating image layer
    let overlayLayer = CALayer()
    let overlayImage: UIImage = image
    overlayLayer.contents = overlayImage.cgImage
    overlayLayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    overlayLayer.contentsGravity = kCAGravityResizeAspectFill
    overlayLayer.masksToBounds = true

    //Creating parent and video layer
    let parentLayer = CALayer()
    let videoLayer = CALayer()
    parentLayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    videoLayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(overlayLayer)

    //Adding those layers to video
    composition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)
}

and this is how I eventually transform my video:

    let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction.init(assetTrack: videoTrack!)
    let videoAssetTrack = videoAsset.tracks(withMediaType: AVMediaType.video)[0]
    var videoAssetOrientation = UIImageOrientation.up
    var isVideoAssetPortrait = false
    let videoTransform = videoAssetTrack.preferredTransform

    if videoTransform.a == 0 && videoTransform.b == 1.0 && videoTransform.c == -1.0 && videoTransform.d == 0 {
        videoAssetOrientation = .right
        isVideoAssetPortrait = true
    }
    if videoTransform.a == 0 && videoTransform.b == -1.0 && videoTransform.c == 1.0 && videoTransform.d == 0 {
        videoAssetOrientation = .left
        isVideoAssetPortrait = true
    }
    if videoTransform.a == 1.0 && videoTransform.b == 0 && videoTransform.c == 0 && videoTransform.d == 1.0 {
        videoAssetOrientation = .up
    }
    if videoTransform.a == -1.0 && videoTransform.b == 0 && videoTransform.c == 0 && videoTransform.d == -1.0 {
        videoAssetOrientation = .down
    }

    videoLayerInstruction.setTransform(videoAssetTrack.preferredTransform, at: kCMTimeZero)

    //Add instructions
    mainInstruction.layerInstructions = [videoLayerInstruction]
    let mainCompositionInst = AVMutableVideoComposition()
    let naturalSize : CGSize!
    if isVideoAssetPortrait {
        naturalSize = CGSize(width: videoAssetTrack.naturalSize.height, height: videoAssetTrack.naturalSize.width)
    } else {
        naturalSize = videoAssetTrack.naturalSize
    }
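
The rest of the composition setup is the usual boilerplate, roughly (typical default values shown here for completeness):

    //Finish configuring the video composition (typical default values)
    mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
    mainCompositionInst.renderSize = naturalSize
    mainCompositionInst.frameDuration = CMTimeMake(1, 30) // 30 fps
    mainCompositionInst.instructions = [mainInstruction]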

So my question is: how can I improve the performance of merging the watermark into my video? 15 seconds is unacceptable for an end user, and since I also need to upload the video over the internet afterwards, the user would end up staring at a loading screen for well over twenty seconds in total.

Upvotes: 1

Views: 1103

Answers (1)

Jake

Reputation: 2216

Per the Apple documentation, try using the class AVAsynchronousCIImageFilteringRequest:

Overview

You use this class when creating a composition for Core Image filtering with the init(asset:applyingCIFiltersWithHandler:) method. In that method call, you provide a block to be called by AVFoundation as it processes each frame of video, and the block’s sole parameter is an AVAsynchronousCIImageFilteringRequest object. Use that object both to retrieve the video frame image to be filtered and to return a filtered image to AVFoundation for display or export. Listing 1 shows an example of applying a filter to an asset.

    let filter = CIFilter(name: "CIGaussianBlur")!
    let composition = AVVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in

        // Clamp to avoid blurring transparent pixels at the image edges
        let source = request.sourceImage.clampedToExtent()
        filter.setValue(source, forKey: kCIInputImageKey)

        // Vary filter parameters based on video timing
        let seconds = CMTimeGetSeconds(request.compositionTime)
        filter.setValue(seconds * 10.0, forKey: kCIInputRadiusKey)

        // Crop the blurred output to the bounds of the original image
        let output = filter.outputImage!.cropped(to: request.sourceImage.extent)

        // Provide the filter output to the composition
        request.finish(with: output, context: nil)
    })
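
For a watermark specifically, the same handler can composite your overlay image over each frame with Core Image instead of applying a blur. A minimal sketch, assuming watermarkImage is your overlay UIImage (the offset values are only illustrative):

    let watermark = CIImage(image: watermarkImage)! // assumed overlay UIImage
    let composition = AVVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in

        // Position the watermark over the frame (offsets are illustrative)
        let positioned = watermark.transformed(by: CGAffineTransform(translationX: 20, y: 20))

        // Composite the watermark over the video frame and keep the frame's bounds
        let output = positioned.composited(over: request.sourceImage)
            .cropped(to: request.sourceImage.extent)

        request.finish(with: output, context: nil)
    })

Because the composition is built straight from the asset this way, the AVVideoCompositionCoreAnimationTool/CALayer setup is not needed for the watermark itself.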

There is a tutorial in Objective-C that may be a good resource as well.

Upvotes: 2
