Blane Townsend

Reputation: 3048

How to overlay one video on another in iOS?

I am trying to crop an already-recorded video into a circle on iOS. How might I go about doing this? I know how I would do it with an AVCaptureSession, but I don't see a way to pass an already-recorded video in as an AVCaptureDevice. Is there a way to crop a video into a circle? I want to overlay it on top of another video, so it also needs a transparent background. Thanks.

Upvotes: 6

Views: 4438

Answers (2)

yoAlex5

Reputation: 34401

Swift Video overlay

You have the following options:

  1. Add an extra overlay (an image or animation) using AVMutableVideoComposition.animationTool. This is essentially post-processing: an extra Core Animation layer is composited on top of your video.

Apply rounded corners to a video:

func applyCornersAsOverlayToComposition(
    composition: AVMutableVideoComposition,
    coverViewFrameSize: CGSize
) {
    // Set up the parent layer that clips the video.
    let parentLayer = CALayer()
    parentLayer.frame = CGRect(origin: .zero, size: coverViewFrameSize)
    
    // Apply the rounded corners.
    parentLayer.masksToBounds = true
    parentLayer.cornerRadius = CollagePresenter.containerViewCornerRadius
    
    let videoLayer = CALayer()
    videoLayer.frame = CGRect(origin: .zero, size: coverViewFrameSize)
    
    // Sublayer order matters: layers added after videoLayer render on top of the video.
    parentLayer.addSublayer(videoLayer)
    
    let animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videoLayer,
        in: parentLayer
    )
    composition.animationTool = animationTool
}
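Note that a video composition whose animationTool is set can only be used for export; assigning it to an AVPlayerItem raises an exception at runtime. A minimal export sketch, assuming `asset`, `videoComposition`, and `outputURL` are supplied by the caller:

```swift
import AVFoundation

// Sketch: export an asset with the animation-tool composition applied.
// `asset`, `videoComposition`, and `outputURL` are placeholders.
func export(asset: AVAsset, videoComposition: AVVideoComposition, to outputURL: URL) {
    guard let exportSession = AVAssetExportSession(
        asset: asset,
        presetName: AVAssetExportPresetHighestQuality
    ) else { return }
    
    exportSession.videoComposition = videoComposition
    exportSession.outputURL = outputURL
    exportSession.outputFileType = .mp4
    
    exportSession.exportAsynchronously {
        switch exportSession.status {
        case .completed:
            print("Exported to \(outputURL)")
        case .failed, .cancelled:
            print("Export failed: \(String(describing: exportSession.error))")
        default:
            break
        }
    }
}
```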
  2. Overlap multiple videos using AVMutableVideoComposition.instructions and AVMutableVideoCompositionLayerInstruction. This lets you transform each video (rotate, translate, scale, ...) and combine several of them:
private func foo(
    videoAsset1: AVURLAsset,
    videoAsset2: AVURLAsset
) {
    let composition = AVMutableComposition()
    
    let trackVideo1 = videoAsset1.tracks(withMediaType: .video)[0]
    let trackVideo2 = videoAsset2.tracks(withMediaType: .video)[0]
    
    let videoTrack1 = composition.addMutableTrack(
        withMediaType: .video,
        preferredTrackID: kCMPersistentTrackID_Invalid
    )!
    
    let videoTrack2 = composition.addMutableTrack(
        withMediaType: .video,
        preferredTrackID: kCMPersistentTrackID_Invalid
    )!
    
    try! videoTrack1.insertTimeRange(
        CMTimeRangeMake(start: CMTime.zero, duration: videoAsset1.duration),
        of: trackVideo1,
        at: CMTime.zero
    )
    
    try! videoTrack2.insertTimeRange(
        CMTimeRangeMake(start: CMTime.zero, duration: videoAsset2.duration),
        of: trackVideo2, // note: the second source track, not the first
        at: CMTime.zero
    )
    
    let transform1 = CGAffineTransform(scaleX: 0.1, y: 0.1)
        .concatenating(CGAffineTransform(rotationAngle: 0))
        .concatenating(CGAffineTransform(translationX: 0, y: 0))
    
    let transform2 = CGAffineTransform(scaleX: 0.2, y: 0.2)
        .concatenating(CGAffineTransform(rotationAngle: 0))
        .concatenating(CGAffineTransform(translationX: 2, y: 2))
    
    let layerInstruction1 = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack1)
    layerInstruction1.setTransform(
        transform1, at: CMTime.zero
    )
    
    let layerInstruction2 = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack2)
    layerInstruction2.setTransform(
        transform2, at: CMTime.zero
    )
    
    let mainInstruction = AVMutableVideoCompositionInstruction()
    mainInstruction.backgroundColor = UIColor.yellow.cgColor
    
    // Use the longer of the two durations (CMTime is Comparable in Swift).
    mainInstruction.timeRange = CMTimeRangeMake(
        start: CMTime.zero,
        duration: max(videoAsset1.duration, videoAsset2.duration)
    )
    
    mainInstruction.layerInstructions = [layerInstruction1, layerInstruction2]
    
    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = CGSize(width: 1920, height: 1080)
    videoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
    
    videoComposition.instructions = [mainInstruction]
}
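The function above builds the composition and video composition but never uses them. Unlike the animation-tool approach, this instruction-based composition can be previewed directly with AVPlayer. A sketch, assuming `composition` and `videoComposition` are returned from the function:

```swift
// Sketch: preview the instruction-based composition with AVPlayer.
let playerItem = AVPlayerItem(asset: composition)
playerItem.videoComposition = videoComposition
let player = AVPlayer(playerItem: playerItem)
// Attach `player` to an AVPlayerLayer or AVPlayerViewController to display it.
player.play()
```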
  3. Overlap multiple videos with custom per-frame instructions using AVMutableVideoComposition.customVideoCompositorClass:
let videoComposition = AVMutableVideoComposition()
videoComposition.renderSize = CGSize(width: 1920, height: 1080)
videoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)

// The custom compositor is invoked once per output frame.
videoComposition.customVideoCompositorClass = CustomCompositor.self
let customOverlayInstruction = CustomOverlayInstruction(
    timeRange: CMTimeRangeMake(
        start: CMTime.zero,
        duration: videoAsset.duration
    ),
    videoEntries: videoEntries
)

videoComposition.instructions = [customOverlayInstruction]
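The snippet assumes a `CustomCompositor` class and a `CustomOverlayInstruction` type, neither of which is shown. A minimal sketch of what the compositor might look like, assuming BGRA pixel buffers and leaving the actual drawing as a placeholder:

```swift
import AVFoundation

// Sketch of the custom compositor assumed above. Requests BGRA buffers;
// the per-frame drawing itself is left as a placeholder comment.
final class CustomCompositor: NSObject, AVVideoCompositing {
    
    let sourcePixelBufferAttributes: [String: Any]? = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ]
    let requiredPixelBufferAttributesForRenderContext: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ]
    
    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
        // React to render-size or pixel-format changes if needed.
    }
    
    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        guard let output = request.renderContext.newPixelBuffer() else {
            request.finish(with: NSError(domain: "CustomCompositor", code: -1))
            return
        }
        // Pull each source frame by track ID and composite it into `output`
        // (Core Graphics, Metal, vImage, ...).
        for trackID in request.sourceTrackIDs {
            if let frame = request.sourceFrame(byTrackID: trackID.int32Value) {
                _ = frame // ... draw `frame` into `output` here ...
            }
        }
        request.finish(withComposedVideoFrame: output)
    }
}
```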

Upvotes: 0

rob mayoff

Reputation: 385970

I guess you want to produce something like this:

[demo of video overlay with oval crop]

You don't want an AVCaptureSession, because you're not capturing video. You want an AVMutableComposition. You need to read the “Editing” section of the AV Foundation Programming Guide. Here's a summary of what you need to do:

  1. Create the AVAsset objects for your videos and wait for them to load their tracks.

  2. Create an AVMutableComposition.

  3. Add a separate AVMutableCompositionTrack to the composition for each of the input videos. Make sure to assign explicit, different track IDs to each track. If you let the system pick, it will use track ID 1 for each and you won't be able to access both later in the compositor.

  4. Create an AVMutableVideoComposition.

  5. Create an AVMutableVideoCompositionInstruction.

  6. For each input video, create an AVMutableVideoCompositionLayerInstruction and explicitly assign the track IDs you used back in step 3.

  7. Set the AVMutableVideoCompositionInstruction's layerInstructions to the two layer instructions you created in step 6.

  8. Set the AVMutableVideoComposition's instructions to the instruction you created in step 5.

  9. Create a class that implements the AVVideoCompositing protocol. Set the customVideoCompositorClass of the video composition (created in step 4) to this custom class (e.g. videoComposition.customVideoCompositorClass = [CustomVideoCompositor class];).

  10. In your custom compositor, get the input pixel buffers from the AVAsynchronousVideoCompositionRequest and use them to draw the composite frame (containing a background video frame overlaid by a circular chunk of the foreground video frame). You can do this however you want. I did it using Core Graphics because that's easy, but you'll probably want to use OpenGL (or maybe Metal) for efficiency in a production app. Be sure to specify kCVPixelBufferOpenGLESCompatibilityKey if you go with OpenGL.

  11. Create an AVAssetExportSession using your composition from step 1.

  12. Set the session's output URL and file type.

  13. Set the session's videoComposition to the video composition from step 4.

  14. Tell the session to exportAsynchronouslyWithCompletionHandler:. It will probably be slow!
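The circular crop in step 10 can be sketched with Core Graphics. This is a sketch, not the answer's actual implementation: it assumes same-sized 32BGRA buffers, and `cgImage(from:)` is a hypothetical helper that converts a CVPixelBuffer to a CGImage (e.g. via CIContext).

```swift
import AVFoundation
import CoreGraphics

// Sketch of step 10: draw the background frame, then clip to a centered
// circle and draw the foreground frame through it. Assumes 32BGRA buffers
// of identical size; `cgImage(from:)` is a hypothetical conversion helper.
func composite(background: CVPixelBuffer, foreground: CVPixelBuffer, into output: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(output, [])
    defer { CVPixelBufferUnlockBaseAddress(output, []) }
    
    let width = CVPixelBufferGetWidth(output)
    let height = CVPixelBufferGetHeight(output)
    guard let context = CGContext(
        data: CVPixelBufferGetBaseAddress(output),
        width: width,
        height: height,
        bitsPerComponent: 8,
        bytesPerRow: CVPixelBufferGetBytesPerRow(output),
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
            | CGBitmapInfo.byteOrder32Little.rawValue
    ) else { return }
    
    // Draw the full background frame.
    let rect = CGRect(x: 0, y: 0, width: width, height: height)
    context.draw(cgImage(from: background), in: rect)
    
    // Clip to a centered circle and draw the foreground through it.
    let diameter = CGFloat(min(width, height)) / 2
    let circle = CGRect(
        x: (CGFloat(width) - diameter) / 2,
        y: (CGFloat(height) - diameter) / 2,
        width: diameter,
        height: diameter
    )
    context.addEllipse(in: circle)
    context.clip()
    context.draw(cgImage(from: foreground), in: rect)
}
```

As the answer notes, Core Graphics is the easy route; a production app would do this drawing in OpenGL or Metal instead.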

You can find my test project in this github repository.

Upvotes: 20
