It was my understanding that when converting an image from BGR to LAB, the L component was supposed to represent the grayscale component of the image. However, when I convert from BGR to grayscale directly, the values don't match what I expect. For example,
import cv2

# img is a BGR image previously loaded with cv2.imread
img1 = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
print(img1[0][0])
img2 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
print(img2[0][0])
The first pixel in my image in LAB produces [168 133 162] while the second produces 159. I was under the impression that they should be equivalent somehow (which is reinforced by the fact that there is no COLOR_LAB2GRAY constant).
Can someone clarify and explain why this is the case? Is my understanding of LAB incorrect, or am I just misusing something in my code?
If they are indeed different, which is the better one to use? The rest of my application manipulates images in the LAB model, so I am tempted to use the L component as my grayscale baseline, but some areas look lighter than they should, unlike in the BGR2GRAY result. Thoughts?
OpenCV converts BGR to grayscale with a simple weighted sum of the channels:
gray = 0.299*R + 0.587*G + 0.114*B
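As a quick sanity check, here is a minimal sketch (not your code; the pixel value is made up for illustration) comparing OpenCV's COLOR_BGR2GRAY result with that weighted sum for a single pixel; the two should agree up to rounding:

import cv2
import numpy as np

# A single hypothetical pixel, in OpenCV's channel order: B, G, R.
pixel = np.uint8([[[200, 150, 100]]])   # B=200, G=150, R=100

gray_cv = cv2.cvtColor(pixel, cv2.COLOR_BGR2GRAY)[0][0]
gray_manual = 0.299 * 100 + 0.587 * 150 + 0.114 * 200   # 0.299*R + 0.587*G + 0.114*B

print(gray_cv, round(gray_manual))   # both should come out to roughly 141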
The conversion from RGB to the L channel of LAB, however, is different: L is a non-linear function of the luminance, not a simple weighted sum of the channels.
The exact conversion formulas can be found in OpenCV's color conversion documentation for cvtColor.
And the non-linearity of the LAB conversion explains the last part of your question: because L is not the same weighted sum that COLOR_BGR2GRAY uses, some areas will come out lighter in the L channel than in the grayscale image.
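For a concrete feel for that non-linearity, here is a rough sketch of the L computation following the formulas in the OpenCV color-conversion docs (the constants and the 8-bit scaling below are taken from those formulas; treat this as an approximation of OpenCV's fixed-point implementation, not an exact reproduction):

import cv2
import numpy as np

def approx_L_channel(b, g, r):
    """Approximate the 8-bit L value OpenCV produces for one BGR pixel."""
    # Scale to [0, 1] and compute CIE luminance Y (note: different weights
    # than the 0.299/0.587/0.114 used for grayscale).
    R, G, B = r / 255.0, g / 255.0, b / 255.0
    Y = 0.212671 * R + 0.715160 * G + 0.072169 * B

    # Non-linear mapping from Y to L -- the cube root is the key difference.
    if Y > 0.008856:
        L = 116.0 * Y ** (1.0 / 3.0) - 16.0
    else:
        L = 903.3 * Y

    # For 8-bit images OpenCV rescales L from [0, 100] to [0, 255].
    return L * 255.0 / 100.0

pixel = np.uint8([[[200, 150, 100]]])                    # hypothetical B, G, R
print(cv2.cvtColor(pixel, cv2.COLOR_BGR2LAB)[0][0][0])   # OpenCV's L value
print(round(approx_L_channel(200, 150, 100)))            # should be close

Because of the cube root, mid-range intensities map to higher L values than the linear grayscale formula gives, which is why your L-based "grayscale" looks lighter in places (e.g. your 168 vs. 159 for the first pixel).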