Reputation: 675
Is there a way to improve the speed/performance of drawing pixel by pixel into a UIView? The current implementation for a 500x500 pixel UIView is terribly slow.
class CustomView: UIView {
    public var context = UIGraphicsGetCurrentContext()
    public var redvalues = [[CGFloat]](repeating: [CGFloat](repeating: 1.0, count: 500), count: 500)
    public var start = 0 {
        didSet {
            self.setNeedsDisplay()
        }
    }

    override func draw(_ rect: CGRect) {
        super.draw(rect)
        context = UIGraphicsGetCurrentContext()
        // One stroked line segment per pixel: 250,000 path operations per redraw.
        for yindex in 0...499 {
            for xindex in 0...499 {
                context?.setStrokeColor(UIColor(red: redvalues[xindex][yindex], green: 0.0, blue: 0.0, alpha: 1.0).cgColor)
                context?.setLineWidth(2)
                context?.beginPath()
                context?.move(to: CGPoint(x: CGFloat(xindex), y: CGFloat(yindex)))
                context?.addLine(to: CGPoint(x: CGFloat(xindex) + 1.0, y: CGFloat(yindex)))
                context?.strokePath()
            }
        }
    }
}
Thank you very much
Upvotes: 4
Views: 1851
Reputation: 1
I took the answer from Manuel and got it working in Swift 5. The main sticking point was clearing the dangling-pointer warning that Xcode 12 now raises.
var image: CGImage?
pixelData.withUnsafeMutableBytes { (rawBufferPtr: UnsafeMutableRawBufferPointer) in
    if let rawPtr = rawBufferPtr.baseAddress {
        let bitmapContext = CGContext(data: rawPtr,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: 4 * width,
                                      space: colorSpace,
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        image = bitmapContext?.makeImage()
    }
}
I did have to move away from the rgba struct approach for front-loading the data and switched to direct UInt32 values derived from the enum's raw values. The 'append' or 'replaceInRange' approach to updating an existing array took hours (my bitmap was LARGE) and ended up exhausting the swap space on my computer.
enum Color: UInt32 { // All 4 bytes long with full opacity
    case red    = 4278190335 // 0xFF0000FF
    case yellow = 4294902015 // 0xFFFF00FF
    case orange = 4291559679 // 0xFFCC00FF
    case pink   = 4290825215 // 0xFFC0CBFF
    case violet = 4001558271 // 0xEE82EEFF
    case purple = 2147516671 // 0x800080FF
    case green  = 16711935   // 0x00FF00FF
    case blue   = 65535      // 0x0000FFFF
}
With this approach I was able to quickly build a Data buffer of that size via:
func prepareColorBlock(c: Color) -> Data {
    var rawData = withUnsafeBytes(of: c.rawValue) { Data($0) }
    rawData.reverse() // Byte order is reversed when defined
    var dataBlock = Data()
    dataBlock.reserveCapacity(400) // 100 pixels x 4 bytes each
    for _ in stride(from: 0, to: 100, by: 1) {
        dataBlock.append(rawData)
    }
    return dataBlock
}
With that I just appended each of these blocks into my mutable Data instance 'pixelData' and we are off. You can tweak how the data is assembled; I just wanted to generate some color bars in a UIImageView to validate the work. For an 800x600 view, it took about 2.3 seconds to generate and render the whole thing.
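For reference, here's a minimal sketch of how those blocks can be assembled into the `pixelData` buffer, reusing the `Color` enum and `prepareColorBlock(c:)` above. The 800x600 size and the horizontal color-bar layout are my own assumptions for illustration, not the original layout:
// Sketch only: fill an 800x600 RGBA buffer with horizontal color bars,
// then let the withUnsafeMutableBytes snippet above turn it into a CGImage.
let width = 800
let height = 600
let colors: [Color] = [.red, .yellow, .orange, .pink, .violet, .purple, .green, .blue]

var pixelData = Data()
pixelData.reserveCapacity(width * height * 4)

for row in 0..<height {
    // Pick the bar color for this row, then fill the row in 100-pixel blocks.
    let color = colors[row * colors.count / height]
    let block = prepareColorBlock(c: color) // 100 pixels = 400 bytes
    for _ in 0..<(width / 100) {
        pixelData.append(block)
    }
}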
Again, hats off to Manuel for pointing me in the right direction.
Upvotes: 0
Reputation: 675
That's how it looks now; are there any further optimizations possible?
public struct rgba {
    var r: UInt8
    var g: UInt8
    var b: UInt8
    var a: UInt8
}

public let imageview = UIImageView()

override func viewDidLoad() {
    super.viewDidLoad()

    let width_input = 500
    let height_input = 500

    let redPixel = rgba(r: 255, g: 0, b: 0, a: 255)
    let greenPixel = rgba(r: 0, g: 255, b: 0, a: 255)
    let bluePixel = rgba(r: 0, g: 0, b: 255, a: 255)

    var pixelData = [rgba](repeating: redPixel, count: Int(width_input * height_input))
    pixelData[1] = greenPixel
    pixelData[3] = bluePixel

    self.view.addSubview(imageview)
    imageview.frame = CGRect(x: 100, y: 100, width: 600, height: 600)
    imageview.image = draw(pixel: pixelData, width: width_input, height: height_input)
}

func draw(pixel: [rgba], width: Int, height: Int) -> UIImage {
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    // Note: passing a Swift array here triggers a dangling-pointer warning in newer
    // compilers; see the Swift 5 answer above for a withUnsafeMutableBytes version.
    let data = UnsafeMutableRawPointer(mutating: pixel)
    let bitmapContext = CGContext(data: data,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 4 * width,
                                  space: colorSpace,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    let image = bitmapContext?.makeImage()
    return UIImage(cgImage: image!)
}
Upvotes: 1
Reputation: 12129
When drawing individual pixels, you can use a bitmap context. A bitmap context takes raw pixel data as an input.
The context copies your raw pixel data so you don't have to use paths, which are likely much slower. You can then get a CGImage by using context.makeImage().
The image can then be used in an image view, which would eliminate the need to redraw the whole thing every frame.
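A rough sketch of that approach, assuming an RGBA8888 buffer (the helper name makeImage(from:width:height:) and the 500x500 size are placeholders for illustration, not a fixed API):
import UIKit

// Sketch only: wrap a raw RGBA pixel buffer in a bitmap context and make a CGImage.
func makeImage(from pixels: inout [UInt8], width: Int, height: Int) -> UIImage? {
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    return pixels.withUnsafeMutableBytes { buffer -> UIImage? in
        guard let base = buffer.baseAddress,
              let context = CGContext(data: base,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: 4 * width,
                                      space: colorSpace,
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue),
              let cgImage = context.makeImage()
        else { return nil }
        return UIImage(cgImage: cgImage)
    }
}

// Usage: a 500x500 all-red image, matching the dimensions in the question.
var pixels = [UInt8](repeating: 0, count: 500 * 500 * 4)
for i in stride(from: 0, to: pixels.count, by: 4) {
    pixels[i] = 255     // R
    pixels[i + 3] = 255 // A
}
let image = makeImage(from: &pixels, width: 500, height: 500)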
If you don't want to manually create a bitmap context, you can use
UIGraphicsBeginImageContext(size)
let context = UIGraphicsGetCurrentContext()
// draw everything into the context
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Then you can use a UIImageView to display the rendered image.
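For example (the solid red fill, the 500x500 size, and the imageView property are just placeholders for whatever you actually draw and display):
// Sketch: render once into an offscreen image context, then display the result.
let size = CGSize(width: 500, height: 500)
UIGraphicsBeginImageContext(size)
if let context = UIGraphicsGetCurrentContext() {
    context.setFillColor(UIColor.red.cgColor)
    context.fill(CGRect(origin: .zero, size: size))
}
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

imageView.image = image // imageView is an existing UIImageView in your view hierarchy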
It is also possible to draw into a CALayer, which does not need to be redrawn every frame but only when resized.
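As a minimal sketch of one variant of this (assigning a pre-rendered CGImage to a layer's contents rather than overriding draw(in:); cgImage and view here stand in for an image produced by context.makeImage() and your host view):
// Sketch: display a pre-rendered CGImage through a CALayer.
let pixelLayer = CALayer()
pixelLayer.frame = CGRect(x: 0, y: 0, width: 500, height: 500)
pixelLayer.contents = cgImage              // cgImage from context.makeImage()
pixelLayer.magnificationFilter = .nearest  // keep hard pixel edges if the layer is scaled up
view.layer.addSublayer(pixelLayer)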
Upvotes: 1