Number of sensors and number of pixels


One question about the sampling process when converting an analog image to digital. I understand megapixels and how the megapixel myth has served the digital camera industry well, and I already understand that a bigger sensor is better (that's why a 14MP point-and-shoot can be inferior in quality to, say, a 12MP DSLR, if one with that MP count exists).

I've been reading about how a CCD and CMOS digitize images (say, in a digital camera). The overall concept is quite clear. The only thing that I'm still confused about, although it might be a simple thing, is whether the number of sensors (on a sensor array) translates to the number of pixels in the final image. Is this the case? Meaning, say that the sensor array is 10 x 10 (yes, very small, but just as an example). Does this mean that a 10 x 10 sample is taken as well, which finally translates to a digital image with a 10px x 10px "spatial resolution"? (I'm using quotes as I've read that there are many definitions, even for spatial resolution.)

I thought of asking this question after reading this part in Gonzalez and Woods (please refer to the picture).

[figure from Gonzalez and Woods not reproduced here]

Would appreciate any feedback :D Thanks!


There are 2 answers

Answer from Mark Ransom

Yes, in the trivial and contrived example you've provided, you're correct: the number of photosites on the sensor equals the number of pixels in your final image.

The real world is generally a little more complicated. If you want color for example, each pixel requires one reading for each of the primary colors red, green, and blue. But most sensors only generate one reading per pixel. The color for each reading is determined by a Bayer Filter located over every pixel, and the missing values must be filled in by interpolation - a process known as demosaicing.
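To make the demosaicing step concrete, here is a minimal sketch in Python/NumPy that fills in the missing colour samples by averaging the known neighbours (bilinear-style interpolation). The RGGB layout, the function name, and the wrap-around edge handling are assumptions for illustration, not how any particular camera does it:

```python
import numpy as np

def demosaic_bilinear(raw):
    """Fill in missing colour samples of an RGGB Bayer mosaic by averaging neighbours.

    `raw` is a 2-D array of sensor readings; the result is an H x W x 3 RGB image.
    """
    h, w = raw.shape
    raw = raw.astype(float)
    rgb = np.zeros((h, w, 3))

    # Which colour each photosite actually measured (RGGB layout assumed).
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        known = np.where(mask, raw, 0.0)
        count = mask.astype(float)
        acc = np.zeros_like(known)
        cnt = np.zeros_like(count)
        # 3x3 neighbourhood sums via shifts (edges wrap around -- fine for a sketch).
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                acc += np.roll(np.roll(known, dy, axis=0), dx, axis=1)
                cnt += np.roll(np.roll(count, dy, axis=0), dx, axis=1)
        # Keep measured samples as-is; interpolate only the missing positions.
        rgb[..., c] = np.where(mask, raw, acc / np.maximum(cnt, 1))

    return rgb
```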

A practical sensor will also contain a small number of pixels that don't make it into the final image. Some are masked so no light hits them and are used to measure the black level, and some provide extra information at the edge of the image so the demosaicing process can work properly. If the output of the camera is a JPEG image, the dimensions will typically be a multiple of 16 to match the block size used by JPEG compression.

It is also quite common for the sensor output to be downsampled to produce a final image at a lower resolution. Cameras often provide an option for this, and video recorded with a still-image camera does it automatically.
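A rough illustration of that kind of downsampling, assuming simple 2x2 binning (real cameras may use better resampling filters):

```python
import numpy as np

def downsample_2x2(img):
    """Halve the resolution by averaging each 2x2 block of pixels (simple binning)."""
    h = img.shape[0] - img.shape[0] % 2   # drop an odd trailing row/column
    w = img.shape[1] - img.shape[1] % 2
    img = img[:h, :w].astype(float)
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0
```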

Answer from hiroki

The size of the object in the final image would be:

D = (X/Z) * f / p

where

D: The object size in the image in pixels

X: The size of the object (mm)

Z: The distance to the object (mm)

f: The focal length of the lens (mm)

p: Pixel size of the sensor (mm/pixel) - you can find this in the sensor's spec sheet.
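For example, with made-up numbers just to show the arithmetic: a 100 mm object 2 m away, imaged through a 50 mm lens onto a sensor with 5 µm (0.005 mm) pixels, spans about 500 pixels.

```python
X = 100.0    # object size (mm)
Z = 2000.0   # distance to the object (mm)
f = 50.0     # focal length of the lens (mm)
p = 0.005    # pixel size (mm/pixel), i.e. 5 um

D = (X / Z) * f / p
print(D)     # 500.0 -> the object spans about 500 pixels in the image
```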