Changwei

Reputation: 672

Swift version gets wrong pixel data but the Objective-C version gets the right pixel data from the same image using CGBitmapContextCreate

I can't believe my eyes. The two versions are essentially the same code; I simply converted the Objective-C code to Swift. Yet the Objective-C version always prints the correct pixel values, while the Swift version sometimes prints the correct values and sometimes prints wrong ones.

The Swift rendition:

class ImageProcessor1 {
    class func processImage(image: UIImage) {
        guard let cgImage = image.cgImage else {
            return
        }
        let width = Int(image.size.width)
        let height = Int(image.size.height)
        let bytesPerRow = width * 4
        let imageData = UnsafeMutablePointer<UInt32>.allocate(capacity: width * height)
        let colorSpace = CGColorSpaceCreateDeviceRGB()

        let bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Big.rawValue | CGImageAlphaInfo.premultipliedLast.rawValue
        guard let imageContext = CGContext(data: imageData, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
            return
        }
        imageContext.draw(cgImage, in: CGRect(origin: .zero, size: image.size))
        print("---------data from Swift version----------")
        for i in 0..<width * height {
            print(imageData[i])
        }
    }
}

The Objective-C rendition:

- (UIImage *)processUsingPixels:(UIImage*)inputImage {

  // 1. Get the raw pixels of the image
  UInt32 * inputPixels;

  CGImageRef inputCGImage = [inputImage CGImage];
  NSUInteger inputWidth = CGImageGetWidth(inputCGImage);
  NSUInteger inputHeight = CGImageGetHeight(inputCGImage);

  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

  NSUInteger bytesPerPixel = 4;
  NSUInteger bitsPerComponent = 8;

  NSUInteger inputBytesPerRow = bytesPerPixel * inputWidth;

  inputPixels = (UInt32 *)calloc(inputHeight * inputWidth, sizeof(UInt32));

  CGContextRef context = CGBitmapContextCreate(inputPixels, inputWidth, inputHeight,
                                               bitsPerComponent, inputBytesPerRow, colorSpace,
                                               kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

  CGContextDrawImage(context, CGRectMake(0, 0, inputWidth, inputHeight), inputCGImage);

  NSLog(@"---------data from Objective-C version----------");
  UInt32 *currentPixel = inputPixels;
  for (NSUInteger j = 0; j < inputHeight; j++) {
    for (NSUInteger i = 0; i < inputWidth; i++) {
      UInt32 color = *currentPixel;
      NSLog(@"%u", color);
      currentPixel++;
    }
  }
  return inputImage;
}

Available at https://github.com/tuchangwei/Pixel

If both versions print the same output, please run the code a few more times; the Swift version only fails intermittently.

Upvotes: 1

Views: 213

Answers (1)

Rob

Reputation: 437882

Both your Objective-C and Swift code leak memory. Also, your Swift code is not initializing the allocated memory. When I initialized the memory, I no longer saw any differences:

imageData.initialize(repeating: 0, count: width * height)

FWIW, while allocate doesn't initialize the memory buffer, calloc does:

... The allocated memory is filled with bytes of value zero.
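To see the difference concretely, here is a minimal sketch of giving Swift's allocate(capacity:) the same zero-fill semantics as calloc, and freeing the buffer when done (the function name and the pixel count are illustrative only, not part of the question's code):

```swift
// Sketch: allocate a pixel buffer with calloc-like semantics and free it.
func makeZeroedPixelBuffer(count: Int) -> [UInt32] {
    let buffer = UnsafeMutablePointer<UInt32>.allocate(capacity: count)
    buffer.initialize(repeating: 0, count: count)  // calloc zero-fills; allocate does not
    defer {
        buffer.deinitialize(count: count)
        buffer.deallocate()                        // plug the leak
    }
    // Copy the contents out just to show they are deterministic zeros.
    return Array(UnsafeBufferPointer(start: buffer, count: count))
}
```

In the question's Swift code, adding the same two steps (initialize before drawing, deallocate when finished) would make it behave like the calloc-based Objective-C version.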

But personally, I’d suggest you get out of the business of allocating memory at all and pass nil for the data parameter and then use bindMemory to access that buffer. If you do that, as the documentation says:

Pass NULL if you want this function to allocate memory for the bitmap. This frees you from managing your own memory, which reduces memory leak issues.

Thus, perhaps:

class func processImage(image: UIImage) {
    guard let cgImage = image.cgImage else {
        return
    }
    let width = cgImage.width
    let height = cgImage.height
    let bytesPerRow = width * 4

    let colorSpace = CGColorSpaceCreateDeviceRGB()

    let bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Big.rawValue | CGImageAlphaInfo.premultipliedLast.rawValue
    guard
        let imageContext = CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo),
        let rawPointer = imageContext.data
    else {
        return
    }

    let pixelBuffer = rawPointer.bindMemory(to: UInt32.self, capacity: width * height)

    imageContext.draw(cgImage, in: CGRect(origin: .zero, size: CGSize(width: width, height: height)))
    print("---------data from Swift version----------")
    for i in 0..<width * height {
        print(pixelBuffer[i])
    }
}
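As a side note, if the eventual goal is per-channel values rather than raw 32-bit words, the same context data can be bound to UInt8 instead. With byteOrder32Big and premultipliedLast, the in-memory byte order is R, G, B, A per pixel. A sketch under that assumption (the helper name is mine, not an API):

```swift
// Hypothetical helper: unpack one pixel from a byte buffer laid out
// R, G, B, A per pixel (the layout byteOrder32Big + premultipliedLast yields).
func rgba(at index: Int, in buffer: UnsafePointer<UInt8>) -> (r: UInt8, g: UInt8, b: UInt8, a: UInt8) {
    let base = index * 4
    return (buffer[base], buffer[base + 1], buffer[base + 2], buffer[base + 3])
}
```

With the context above you would bind to bytes instead of words, e.g. `rawPointer.bindMemory(to: UInt8.self, capacity: width * height * 4)`, and then call the helper per pixel index.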

Upvotes: 1
