How to get UIImage negative colours without changing color space


So, I've followed the advice in this question:

how to give UIImage negative color effect

but when I do the conversion, the color space information is lost and the image reverts to RGB (I want it to stay in gray).

If I NSLog the CGColorSpaceRef before and after the given code, it confirms this.

CGColorSpaceRef before = CGImageGetColorSpace([imageView.image CGImage]);
NSLog(@"%@", (__bridge id)before); // __bridge cast needed to log a CF type under ARC

UIGraphicsBeginImageContextWithOptions(imageView.image.size, YES, imageView.image.scale);
CGContextRef context = UIGraphicsGetCurrentContext();
CGRect bounds = CGRectMake(0, 0, imageView.image.size.width, imageView.image.size.height);

// Draw the original image...
CGContextSetBlendMode(context, kCGBlendModeCopy);
[imageView.image drawInRect:bounds];

// ...then "subtract" it from white with the difference blend mode to invert it
CGContextSetBlendMode(context, kCGBlendModeDifference);
CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
CGContextFillRect(context, bounds);

imageView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

CGColorSpaceRef after = CGImageGetColorSpace([imageView.image CGImage]);
NSLog(@"%@", (__bridge id)after); // now logs an RGB color space

Is there any way to keep the color space information, or, if not, how can I change it back afterwards?

Edit: Reading the documentation for UIGraphicsBeginImageContextWithOptions, I see that it says:

For bitmaps created in iOS 3.2 and later, the drawing environment uses the premultiplied ARGB format to store the bitmap data. If the opaque parameter is YES, the bitmap is treated as fully opaque and its alpha channel is ignored.

So maybe it isn't possible without dropping down to a CGContext? I have found that setting the opaque parameter to YES removes the alpha channel, which is adequate (the TIFF reader I am using cannot process ARGB images). I would still like to end up with a grayscale-only image, though, to reduce the file size.
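By "dropping down to a CGContext" I mean something along these lines: creating my own bitmap context in a grayscale color space and drawing into that instead of the UIGraphics context. This is just an untested sketch of the idea, not something I have working yet:

CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
CGContextRef grayContext = CGBitmapContextCreate(NULL,
                                                 imageView.image.size.width,
                                                 imageView.image.size.height,
                                                 8,  // bits per component
                                                 0,  // bytes per row (0 = calculate automatically)
                                                 graySpace,
                                                 kCGImageAlphaNone); // no alpha channel at all

// ...do the copy/difference drawing into grayContext here instead...

CGImageRef grayImageRef = CGBitmapContextCreateImage(grayContext);
imageView.image = [UIImage imageWithCGImage:grayImageRef];

CGImageRelease(grayImageRef);
CGContextRelease(grayContext);
CGColorSpaceRelease(graySpace);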

1 Answer

Accepted answer, by CHiroshiWard:

The only way I've found to solve this is to add another method to re-convert the image to grayscale after I've inverted it. I added this method:

- (UIImage *)convertImageToGrayScale:(UIImage *)image
{
    // Image rectangle in pixels (size is in points, so multiply by scale
    // to keep full resolution on Retina displays)
    CGFloat width  = image.size.width  * image.scale;
    CGFloat height = image.size.height * image.scale;
    CGRect imageRect = CGRectMake(0, 0, width, height);

    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

    // Create a bitmap context with the image size, the grayscale color space
    // and no alpha channel
    CGContextRef context = CGBitmapContextCreate(NULL, (size_t)width, (size_t)height,
                                                 8, 0, colorSpace, kCGImageAlphaNone);

    // Draw the image into the grayscale context
    CGContextDrawImage(context, imageRect, [image CGImage]);

    // Pull a CGImage back out of the context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);

    // Wrap it in a UIImage, preserving the original scale and orientation
    UIImage *newImage = [UIImage imageWithCGImage:imageRef
                                            scale:image.scale
                                      orientation:image.imageOrientation];

    // Release the color space, context and CGImage
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CGImageRelease(imageRef);

    // Return the new grayscale image
    return newImage;
}
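
For reference, this is roughly how it slots in after the inversion code from the question (just a sketch; inverted is a placeholder name for whatever the difference-blend code produces):

// ...the copy/difference drawing from the question...
UIImage *inverted = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Re-convert to the gray color space (and drop the alpha channel)
imageView.image = [self convertImageToGrayScale:inverted];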

If anyone has any neater methods I'd be happy to hear them!