Reputation: 365
I got the TensorFlow example app for iOS from here. My model works fine with this example app for real-time detection, but I'd like to run it on a single image. As far as I can tell, the main call to run the model is:
self.result = self.modelDataHandler?.runModel(onFrame: buffer)
This buffer variable is a CVPixelBuffer; I can obtain one from a video frame using CMSampleBufferGetImageBuffer(), as the example app does. But my app is not working with video frames, so I don't have that option.
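For context, in the example app that happens in the capture callback, roughly like this (a sketch, assuming the standard AVCaptureVideoDataOutputSampleBufferDelegate method):
func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    // Each camera frame arrives as a CMSampleBuffer; its image data is already a CVPixelBuffer.
    guard let frameBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    self.result = self.modelDataHandler?.runModel(onFrame: frameBuffer)
}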
My captured photo is a UIImage, so I tried to convert it to a CVPixelBuffer to use with the code above:
let ciImage: CIImage = CIImage(cgImage: (self.image?.cgImage)!)
let buffer: CVPixelBuffer = self.getBuffer(from: ciImage)!
The getBuffer() function is:
func getBuffer(from image: CIImage) -> CVPixelBuffer? {
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(image.extent.width), Int(image.extent.height),
                                     kCVPixelFormatType_32BGRA, attrs, &pixelBuffer)
    guard status == kCVReturnSuccess else {
        print("Error converting ciImage to CVPixelBuffer")
        return nil
    }
    return pixelBuffer
}
And then run it with:
self.result = self.modelDataHandler?.runModel(onFrame: buffer)
let inferences: [Inference] = self.result!.inferences
let time: Double = self.result!.inferenceTime
As a result I get a time of about 50 to 60 ms, but the inferences array comes back empty. I don't know if my conversion from UIImage to CVPixelBuffer is right, or if there is another error or step that I'm forgetting.
If you have any questions, please ask. Any help would be great! Thanks.
Upvotes: 1
Views: 868
Reputation: 365
I found my problem: my conversion from UIImage to CVPixelBuffer was wrong. No CIImage is needed, and the getBuffer() above only allocates a buffer and never draws the image into it, so the model never sees the photo's pixels. From this question I got the right code to do the conversion.
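For anyone who lands here, the fix looks roughly like the sketch below. This is the usual CGContext-based approach rather than the exact code from that question, and the helper name pixelBuffer(from:) is just illustrative; the key difference from my getBuffer() is the draw call that actually copies the image into the buffer:
func pixelBuffer(from image: UIImage) -> CVPixelBuffer? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_32BGRA, attrs, &pixelBuffer)
    guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    // This draw step is what was missing: without it the buffer is never filled.
    let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                            width: width,
                            height: height,
                            bitsPerComponent: 8,
                            bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                      | CGBitmapInfo.byteOrder32Little.rawValue)
    context?.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return buffer
}
With a buffer produced this way, runModel(onFrame:) can be called exactly as in the snippet above, and 32BGRA should match what the example app's camera feed delivers, so the rest of the pipeline works unchanged.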
Upvotes: 1