Best performant way to check if an image is all white?


I'm trying to determine if a drawing currently is all white. The solution I could come up with was to scale down the image, then check pixel by pixel if it's white and return NO as soon as it finds a pixel that is not white.

It works, but I have a gut feeling it could be done in a more performant way. Here's the code:

- (BOOL)imageIsAllWhite:(UIImage *)image {
    CGSize size = CGSizeMake(100.0f, 100.0f);
    UIImageView *imageView = [[UIImageView alloc] initWithImage:[image scaledImageWithSize:size]];


    unsigned char pixel[4 * (int)size.width * (int)size.height];

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    CGContextRef cgContext = CGBitmapContextCreate(
            pixel,
            (size_t)size.width,
            (size_t)size.height,
            8,
            (size_t)(size.width * 4),
            colorSpace,
            kCGImageAlphaPremultipliedLast);

    [imageView.layer renderInContext:cgContext];

    CGContextRelease(cgContext);

    CGColorSpaceRelease(colorSpace);

    for (int i = 0; i < sizeof(pixel); i = i + 4) {
        if(!(pixel[i] == 255 && pixel[i+1] == 255 && pixel[i+2] == 255)) {
            return NO;
        }
    }

    return YES;
}

Any ideas for improvement?


There are 2 answers

Tommy

It feels like there's no speedy route that goes to the GPU and back again, so the answer is really no more interesting than taking a statistical approach and using GCD to ensure multicore utilisation.

In most images, colours are likely to be close to other similar colours. So if one pixel is white, it's more likely that its neighbouring pixel is also white. Therefore a strict linear progression through the pixels is less likely to find a non-white pixel quickly than is sampling points a distance apart, then sampling closer points, and so on. Ideally there'd be some f(x) that took the relevant range of integers as input and returned each of them exactly once, such that the distance between f(x) and f(x+1) is greatest for x = 0 and then decreases monotonically.

If the image is reasonably large, and more so if you can afford to return the result asynchronously, then the cost of dispatching the task to multiple cores is likely to be outweighed by the gain of having multiple cores work on it at once.

You're fixing your image size at 100x100 pixels. I'm going to take a liberty and assume you can move up to 128x128 because it makes the f(x) easy — in that case you can just do a bit reversal.

E.g.

static inline int convolution(int input) {
    // bit reverse a 14-bit number
    return ((input & 0x0001) << 13) |
           ((input & 0x0002) << 11) |
           ((input & 0x0004) << 9) |
           ((input & 0x0008) << 7) |
           ((input & 0x0010) << 5) |
           ((input & 0x0020) << 3) |
           ((input & 0x0040) << 1) |
           ((input & 0x0080) >> 1) |
           ((input & 0x0100) >> 3) |
           ((input & 0x0200) >> 5) |
           ((input & 0x0400) >> 7) |
           ((input & 0x0800) >> 9) |
           ((input & 0x1000) >> 11) |
           ((input & 0x2000) >> 13);
}

... elsewhere ...

__block BOOL hasFoundNonWhite = NO;

const int numberOfPixels = 128 * 128;
const int pixelsPerBatch = 128;
const int numberOfBatches = numberOfPixels / pixelsPerBatch;

dispatch_apply(numberOfBatches, 
               dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), 
               ^(size_t index) {

    if (hasFoundNonWhite) {
        return;
    }

    index *= pixelsPerBatch;
    for (size_t i = index; i < index + pixelsPerBatch; i++) {

        int indexToCheck = convolution(i);
        int arrayIndex = indexToCheck << 2;
        if (!(pixel[arrayIndex] == 255 && pixel[arrayIndex+1] == 255 && pixel[arrayIndex+2] == 255)) {
            hasFoundNonWhite = YES;
            return;
        }
    }
});

return !hasFoundNonWhite;

Addendum: the other knee-jerk thing you'd do when dealing with a vector processing task like this is check the Accelerate framework, likely vDSP. That ends up compiling down to use the vector unit on your CPU. In this case you might reformulate the test as "sum of vector must equal size of vector * 255" (if you can make an assumption about alpha). However there is no integral sum, and converting to float probably isn't worth the cost.

Nimit Parekh

Please try the following code to check whether a UIImage is all white:

- (BOOL) checkIfImage:(UIImage *)someImage {
    CGImageRef image = someImage.CGImage;
    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    unsigned char *imageData = malloc(width * height * 4);
    int bytesPerPixel = 4;
    int bytesPerRow = bytesPerPixel * width;
    int bitsPerComponent = 8;
    CGContextRef imageContext =
    CGBitmapContextCreate(
                          imageData, width, height, bitsPerComponent, bytesPerRow, CGImageGetColorSpace(image),
                          kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big
                          );

    CGContextSetBlendMode(imageContext, kCGBlendModeCopy);
    CGContextDrawImage(imageContext, CGRectMake(0, 0, width, height), image);
    CGContextRelease(imageContext);

    int byteIndex = 0;

    BOOL imageExist = YES;
    for ( ; byteIndex < width * height * 4; byteIndex += 4) {
        unsigned char red   = imageData[byteIndex];
        unsigned char green = imageData[byteIndex + 1];
        unsigned char blue  = imageData[byteIndex + 2];
        unsigned char alpha = imageData[byteIndex + 3];
        if (red != 255 || green != 255 || blue != 255 || alpha != 255) {
            imageExist = NO;
            break;
        }
    }

    free(imageData);
    return imageExist;
}

Calling the function:

UIImage *image = [UIImage imageNamed:@"demo1.png"];
BOOL isImageFlag = [self checkIfImage:image];
if (isImageFlag) {
    NSLog(@"YES it's totally white");
} else {
    NSLog(@"Nope it's not white");
}