Reputation: 5234
I'm getting unexpected results when applying CIFilters to an image. The expected result is a black and white image with the color orange blending through the edges. Instead I get a blue-ish image with red blending through the edges. My function is:
func sketch(with ciImage: CIImage) -> CIImage {
    var sourceCore = ciImage

    var convolutionValue_A: CGFloat = -0.0925937220454216
    var convolutionValue_B: CGFloat = -0.4166666567325592
    var convolutionValue_C: CGFloat = -1.8518532514572144
    var convolutionValue_D: CGFloat = 0.23148006200790405
    var convolutionValue_E: CGFloat = 4.5833334922790527
    var convolutionValue_F: CGFloat = 14.166666984558105

    var brightnessVal: CGFloat = 1.1041666269302368
    var contrastVal: CGFloat = 3.0555555820465088

    var weightsArr: [CGFloat] = [
        convolutionValue_A, convolutionValue_A, convolutionValue_B, convolutionValue_B, convolutionValue_B, convolutionValue_A, convolutionValue_A,
        convolutionValue_A, convolutionValue_B, convolutionValue_C, convolutionValue_C, convolutionValue_C, convolutionValue_B, convolutionValue_A,
        convolutionValue_B, convolutionValue_C, convolutionValue_D, convolutionValue_E, convolutionValue_D, convolutionValue_C, convolutionValue_B,
        convolutionValue_B, convolutionValue_C, convolutionValue_E, convolutionValue_F, convolutionValue_E, convolutionValue_C, convolutionValue_B,
        convolutionValue_B, convolutionValue_C, convolutionValue_D, convolutionValue_E, convolutionValue_D, convolutionValue_C, convolutionValue_B,
        convolutionValue_A, convolutionValue_B, convolutionValue_C, convolutionValue_C, convolutionValue_C, convolutionValue_B, convolutionValue_A,
        convolutionValue_A, convolutionValue_A, convolutionValue_B, convolutionValue_B, convolutionValue_B, convolutionValue_A, convolutionValue_A
    ]

    let inputWeights: CIVector = CIVector(values: weightsArr, count: weightsArr.count)

    sourceCore = sourceCore
        .applyingFilter("CIColorControls", parameters: [kCIInputImageKey: sourceCore,
                                                        kCIInputSaturationKey: 0.0,
                                                        kCIInputBrightnessKey: brightnessVal,
                                                        kCIInputContrastKey: contrastVal])

    // transforms image to only show edges in black and white
    sourceCore = sourceCore
        .applyingFilter("CIConvolution7X7", parameters: [kCIInputImageKey: sourceCore,
                                                         kCIInputWeightsKey: inputWeights]).cropped(to: sourceCore.extent)

    // give camera image a black and white Noir effect
    var ciImage = ciImage
        .applyingFilter("CIPhotoEffectNoir", parameters: [kCIInputImageKey: ciImage])

    // make solid color
    let color = CIColor(red: 0.819, green: 0.309, blue: 0.309)
    let colFilter = CIFilter(name: "CIConstantColorGenerator")!
    colFilter.setValue(color, forKey: kCIInputColorKey)
    var solidColor = colFilter.value(forKey: "outputImage") as! CIImage
    solidColor = solidColor.cropped(to: ciImage.extent)

    // color should only be shown through outlines
    // for some reason the input image is blue-ish
    sourceCore = sourceCore
        .applyingFilter("CIBlendWithMask", parameters: [
            kCIInputImageKey: ciImage,               // black and white image
            kCIInputBackgroundImageKey: solidColor,  // solid color
            kCIInputMaskImageKey: sourceCore])       // edge work image

    ciImage = sourceCore
    return ciImage
}
Here are the images every step of the way through the function:
This is what results when I apply the CIColorControls with 0 saturation and the 7x7 convolution. I use this for my mask to attempt to show orange through the black outline areas.
This is the black-and-white image after applying the CIFilter "CIPhotoEffectNoir"; it is used as the input image when blending with the mask and solid color.
The solid, orange color that I use as a background when blending with the mask.
The resulting image after performing CIBlendWithMask that appears incorrect.
First of all, I'd expect the background color to appear only where the outlines are on the mask image. Where the background color does show, it appears red instead of orange, and the input image is blue-ish instead of strictly black and white.
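For context, intermediate CIImage values like the ones shown above can be inspected by rendering them through a CIContext. This is only a minimal debug sketch; the snapshot helper and debugContext names are mine, not part of the original code:

import CoreImage
import UIKit

// Debug-only helper (hypothetical): renders a lazily evaluated CIImage
// into a UIImage so each stage of the chain can be viewed.
private let debugContext = CIContext()

func snapshot(_ image: CIImage) -> UIImage? {
    // createCGImage fails for infinite extents, so crop generator output first.
    guard let cgImage = debugContext.createCGImage(image, from: image.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}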
Upvotes: 2
Views: 1150
Reputation: 5234
Figured it out, although I'm not sure why this works. I'm guessing it's a bug with Apple's tech. After the CIConvolution7X7, I needed to blend that black and white image with a white background using CIColorDodgeBlendMode or CILinearDodgeBlendMode.
Final code:
func sketch(with ciImage: CIImage) -> CIImage {
    var sourceCore = ciImage

    var convolutionValue_A: CGFloat = -0.0925937220454216
    var convolutionValue_B: CGFloat = -0.4166666567325592
    var convolutionValue_C: CGFloat = -1.8518532514572144
    var convolutionValue_D: CGFloat = 0.23148006200790405
    var convolutionValue_E: CGFloat = 4.5833334922790527
    var convolutionValue_F: CGFloat = 14.166666984558105

    var brightnessVal: CGFloat = 1.1041666269302368
    var contrastVal: CGFloat = 3.0555555820465088

    var weightsArr: [CGFloat] = [
        convolutionValue_A, convolutionValue_A, convolutionValue_B, convolutionValue_B, convolutionValue_B, convolutionValue_A, convolutionValue_A,
        convolutionValue_A, convolutionValue_B, convolutionValue_C, convolutionValue_C, convolutionValue_C, convolutionValue_B, convolutionValue_A,
        convolutionValue_B, convolutionValue_C, convolutionValue_D, convolutionValue_E, convolutionValue_D, convolutionValue_C, convolutionValue_B,
        convolutionValue_B, convolutionValue_C, convolutionValue_E, convolutionValue_F, convolutionValue_E, convolutionValue_C, convolutionValue_B,
        convolutionValue_B, convolutionValue_C, convolutionValue_D, convolutionValue_E, convolutionValue_D, convolutionValue_C, convolutionValue_B,
        convolutionValue_A, convolutionValue_B, convolutionValue_C, convolutionValue_C, convolutionValue_C, convolutionValue_B, convolutionValue_A,
        convolutionValue_A, convolutionValue_A, convolutionValue_B, convolutionValue_B, convolutionValue_B, convolutionValue_A, convolutionValue_A
    ]

    let inputWeights: CIVector = CIVector(values: weightsArr, count: weightsArr.count)

    sourceCore = sourceCore
        .applyingFilter("CIColorControls", parameters: [kCIInputImageKey: sourceCore,
                                                        kCIInputSaturationKey: 0.0,
                                                        kCIInputBrightnessKey: brightnessVal,
                                                        kCIInputContrastKey: contrastVal])

    // transforms image to only show edges in black and white
    sourceCore = sourceCore
        .applyingFilter("CIConvolution7X7", parameters: [kCIInputImageKey: sourceCore,
                                                         kCIInputWeightsKey: inputWeights]).cropped(to: sourceCore.extent)

    // CIColorDodgeBlendMode, CILinearDodgeBlendMode
    let whiteCIColor = CIColor(red: 1, green: 1, blue: 1)
    let whiteColor = CIImage(color: whiteCIColor).cropped(to: ciImage.extent)
    sourceCore = sourceCore
        .applyingFilter("CIColorDodgeBlendMode", parameters: [kCIInputImageKey: sourceCore,
                                                              kCIInputBackgroundImageKey: whiteColor])

    // give camera image a black and white Noir effect
    var ciImage = ciImage
        .applyingFilter("CIPhotoEffectNoir", parameters: [kCIInputImageKey: ciImage])

    // make solid color
    let color = CIColor(red: 0.819, green: 0.309, blue: 0.309)
    let colFilter = CIFilter(name: "CIConstantColorGenerator")!
    colFilter.setValue(color, forKey: kCIInputColorKey)
    var solidColor = colFilter.value(forKey: "outputImage") as! CIImage
    solidColor = solidColor.cropped(to: ciImage.extent)

    // color is shown through outlines correctly,
    // and image is black and white
    sourceCore = sourceCore
        .applyingFilter("CIBlendWithMask", parameters: [
            kCIInputImageKey: ciImage,               // black and white image
            kCIInputBackgroundImageKey: solidColor,  // solid color
            kCIInputMaskImageKey: sourceCore])       // edge work image

    ciImage = sourceCore
    return ciImage
}
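For anyone trying this out, here is a minimal usage sketch of the fixed function; the inputUIImage name and the shared CIContext are my own assumptions, not part of the answer:

import CoreImage
import UIKit

// Reuse one context; creating a CIContext per frame is expensive.
let renderContext = CIContext()

// Hypothetical call site: run the sketch filter chain on a UIImage.
func sketchedImage(from inputUIImage: UIImage) -> UIImage? {
    guard let input = CIImage(image: inputUIImage) else { return nil }
    let output = sketch(with: input)
    // Render the lazily evaluated CIImage into a concrete bitmap.
    guard let cgImage = renderContext.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}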
Upvotes: 0
Reputation: 10398
I think the problem is in the final blending step. When you call sourceCore.applyingFilter(...), it automatically sets sourceCore as the inputImage on that filter, and I'm not sure what happens when you then re-define that in the parameters.
You could instead create the filter like this:
let blendFilter = CIFilter(name: "CIBlendWithMask", parameters: [
    kCIInputImageKey: ciImage,
    kCIInputBackgroundImageKey: solidColor,
    kCIInputMaskImageKey: sourceCore
])!
return blendFilter.outputImage!
By the way, you can create a solid-color image much more easily like this:
let solidColor = CIImage(color: color).cropped(to: ciImage.extent)
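Putting both points together, the end of sketch(with:) could instead drop the redundant kCIInputImageKey entries entirely, since the receiver of applyingFilter already becomes the input image. A sketch using the question's variable names (ciImage is the Noir image, sourceCore the edge mask):

// Noir image as input, solid color as background, edge image as mask.
let blended = ciImage.applyingFilter("CIBlendWithMask", parameters: [
    kCIInputBackgroundImageKey: solidColor,
    kCIInputMaskImageKey: sourceCore
])
return blended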
Upvotes: 1