I've been trying to figure out how to convert an array of RGB pixel data to a UIImage in Swift.
I'm keeping the per-pixel RGB data in a simple struct:
public struct PixelData {
    var a: Int
    var r: Int
    var g: Int
    var b: Int
}
I've made my way to the following function, but the resulting image is incorrect:
func imageFromARGB32Bitmap(pixels: [PixelData], width: Int, height: Int) -> UIImage {
    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo: CGBitmapInfo = CGBitmapInfo(CGImageAlphaInfo.PremultipliedFirst.rawValue)
    let bitsPerComponent: Int = 8
    let bitsPerPixel: Int = 32

    assert(pixels.count == Int(width * height))

    var data = pixels // Copy to mutable []
    let providerRef = CGDataProviderCreateWithCFData(
        NSData(bytes: &data, length: data.count * sizeof(PixelData))
    )

    let cgim = CGImageCreate(
        width,
        height,
        bitsPerComponent,
        bitsPerPixel,
        width * Int(sizeof(PixelData)),
        rgbColorSpace,
        bitmapInfo,
        providerRef,
        nil,
        true,
        kCGRenderingIntentDefault
    )
    return UIImage(CGImage: cgim)!
}
Any tips or pointers on how to properly convert an RGB array to a UIImage?
Note: This is a solution for iOS creating a UIImage. For a solution for macOS and NSImage, see this answer.

Your only problem is that the data types in your PixelData structure need to be UInt8. I created a test image in a Playground with the following:
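The original Playground listing is not reproduced above, so here is a minimal sketch of the idea: the struct fields are changed to UInt8 (one byte per channel, so 32 bits per pixel), and a hypothetical red-to-green gradient is used as test data. The call assumes the Swift 4 labeled-argument form of the asker's function.

```swift
import UIKit

// Each channel must be a single byte so the struct's memory layout
// matches the 32-bit ARGB format described to Core Graphics.
public struct PixelData {
    var a: UInt8
    var r: UInt8
    var g: UInt8
    var b: UInt8
}

// Hypothetical test pattern: a horizontal red-to-green gradient.
let width = 255
let height = 255
var pixels = [PixelData]()
pixels.reserveCapacity(width * height)
for _ in 0..<height {
    for x in 0..<width {
        pixels.append(PixelData(a: 255, r: UInt8(x), g: UInt8(255 - x), b: 0))
    }
}

let image = imageFromARGB32Bitmap(pixels: pixels, width: width, height: height)
```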
Update for Swift 4:

I updated imageFromARGB32Bitmap to work with Swift 4. The function now returns a UIImage? and guard is used to return nil if anything goes wrong.
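A sketch of what that Swift 4 version can look like, assuming the UInt8-based PixelData struct is in scope. sizeof(PixelData) becomes MemoryLayout&lt;PixelData&gt;.size, and the deprecated CGDataProviderCreateWithCFData / CGImageCreate calls become failable initializers checked with guard.

```swift
import UIKit

func imageFromARGB32Bitmap(pixels: [PixelData], width: Int, height: Int) -> UIImage? {
    // Bail out instead of crashing if the dimensions don't match the data.
    guard width > 0, height > 0, pixels.count == width * height else { return nil }

    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue)
    let bitsPerComponent = 8
    let bitsPerPixel = 32

    var data = pixels // Copy to mutable []
    guard let providerRef = CGDataProvider(
        data: Data(bytes: &data, count: data.count * MemoryLayout<PixelData>.size) as CFData
    ) else { return nil }

    guard let cgim = CGImage(
        width: width,
        height: height,
        bitsPerComponent: bitsPerComponent,
        bitsPerPixel: bitsPerPixel,
        bytesPerRow: width * MemoryLayout<PixelData>.size,
        space: rgbColorSpace,
        bitmapInfo: bitmapInfo,
        provider: providerRef,
        decode: nil,
        shouldInterpolate: true,
        intent: .defaultIntent
    ) else { return nil }

    return UIImage(cgImage: cgim)
}
```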
Making it a convenience initializer for UIImage:

This function works well as a convenience initializer for UIImage. Here is the implementation, followed by an example of its usage:
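The implementation and usage listings are not included above; the following sketch shows how the same logic can be wrapped in a failable convenience init on UIImage (again assuming the UInt8-based PixelData struct), with a solid-red test image as an illustrative usage example.

```swift
import UIKit

extension UIImage {
    convenience init?(pixels: [PixelData], width: Int, height: Int) {
        guard width > 0, height > 0, pixels.count == width * height else { return nil }
        var data = pixels
        guard let providerRef = CGDataProvider(
            data: Data(bytes: &data, count: data.count * MemoryLayout<PixelData>.size) as CFData
        ) else { return nil }
        guard let cgim = CGImage(
            width: width,
            height: height,
            bitsPerComponent: 8,
            bitsPerPixel: 32,
            bytesPerRow: width * MemoryLayout<PixelData>.size,
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue),
            provider: providerRef,
            decode: nil,
            shouldInterpolate: true,
            intent: .defaultIntent
        ) else { return nil }
        // Delegate to the designated initializer with the finished CGImage.
        self.init(cgImage: cgim)
    }
}

// Usage: a solid red 200×200 image.
let pixel = PixelData(a: 255, r: 255, g: 0, b: 0)
let pixels = [PixelData](repeating: pixel, count: 200 * 200)
let image = UIImage(pixels: pixels, width: 200, height: 200)
```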