I'm using QGraphicsColorizeEffect() in a Python program to colorize a QGraphicsPixmapItem. I'm happy with how it works, as it's meant to highlight that the item has been "selected" by the user, but it made me wonder what the exact logic or math behind the colorization process is.
I understand that, when looking at the colors in the HSV color space, the colorization sets the hue of the target pixel to that of the tint color, but I'm not sure how it affects its saturation and its value.
So for example with a Base Color of (246, 134, 168) and a Tint of (120, 128, 128) the result is (120, 58, 176). How did we get here?
Here is a simple (I think) program that allows some experimenting with QGraphicsColorizeEffect(). Every time it runs it prints the Base color, the Tint and the Final color (in HSV) to the console.
Ideally I'd like a formula or the explanation of the algorithm so that I could predict what the final color would look like based on the Base color and the Tint that's going to be applied to it.
import sys
from PyQt5.QtGui import QColor, QBrush, QPen
from PyQt5.QtCore import Qt, QRect, QPoint, QSize
from PyQt5.QtWidgets import QApplication, QGraphicsView, QGraphicsScene, QGraphicsRectItem, QGraphicsColorizeEffect
app = QApplication(sys.argv)
# Create a scene
scene = QGraphicsScene()
# Create a rectangle item with the specified size and color
rect = QGraphicsRectItem(0, 0, 300, 150)
base_color = (246, 134, 168)
color = QColor.fromHsv(base_color[0], base_color[1], base_color[2])
rect.setBrush(QBrush(color))
rect.setPen(QPen(Qt.NoPen))
print(f"Base Color: {base_color}")
# Add the rectangle to the scene
scene.addItem(rect)
# Create the effect
tint = (120, 128, 128)
effect = QGraphicsColorizeEffect()
effect.setColor(QColor.fromHsv(tint[0], tint[1], tint[2]))
print(f"Tint Color: {tint}")
# Create a view and set the scene
view = QGraphicsView()
view.setScene(scene)
# Add the effect
rect.setGraphicsEffect(effect)
# Check the color in the middle of the rectangle
pixmap = view.viewport().grab(QRect(QPoint(150, 75), QSize(1, 1)))
image = pixmap.toImage()
color = image.pixelColor(0, 0)
color = color.getHsv()
print(f"Final Color: ({color[0]}, {color[1]}, {color[2]})")
# Show the view and start the event loop
view.show()
sys.exit(app.exec_())
I've looked into the documentation, but it doesn't seem to go into great detail on this.
The implementation is not documented, as it's not considered important to the average developer. The API is not fully exposed (there's usually no need for that) and, for the same reason, the implementation is written for optimization rather than "dev usability".
Also, consider that most color transformations are done using the RGB color model: HSV, HSL, etc are alternate color models that are normally intended for different requirements.
The problem in finding out how the colorize effect actually works is that it uses lots of internal functions and private classes; while you can use a smart code browser (like the one provided by woboq), some functions are created and accessed dynamically within the code, making them quite difficult to track down. You can usually access functions and definitions that are publicly available in the API easily enough (such as the basic implementation of a QGraphicsEffect), but finding out what they actually do is quite another story.
First of all, QGraphicsEffect subclasses must implement a draw() function, but graphics effects normally use advanced painting functions that are not part of the public API. After some research, I can tell you how it works:

- the source of the effect is grabbed as a pixmap within the draw() function of the filter;
- a gray scale version of that source is created;
- the effect color is painted over it using CompositionMode_Screen on the painter for that pixmap (more on this later);
- the result is finally drawn using the strength() as opacity for the painter, thus creating the "colorize" effect due to the composition mode.

Now, how can we do this on our own?
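Before getting to the single-color math, here is a rough, QImage based re-creation of the steps above. Note that this is only my own sketch of the procedure (the real filter works on its internal pixmap and is heavily optimized), and the colorize() helper with its per-pixel loop is purely illustrative:

from PyQt5.QtGui import QImage, QPainter, qGray, qRgb

def colorize(source, tint, strength=1.0):
    # 1. create a gray scale copy of the source image
    gray = QImage(source.size(), QImage.Format_ARGB32)
    for y in range(source.height()):
        for x in range(source.width()):
            g = qGray(source.pixel(x, y))
            gray.setPixel(x, y, qRgb(g, g, g))
    # 2. paint the effect color over it with the Screen composition mode
    painter = QPainter(gray)
    painter.setCompositionMode(QPainter.CompositionMode_Screen)
    painter.fillRect(gray.rect(), tint)
    painter.end()
    # 3. draw the colorized image over the original, using the strength as opacity
    result = source.copy()
    painter = QPainter(result)
    painter.setOpacity(strength)
    painter.drawImage(0, 0, gray)
    painter.end()
    return result

That reproduces the effect on a whole image, but since the goal is to predict a single color, we only need the per-channel math.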
Considering your example, we can implement that with a single color, and we need two functions: one that converts a color to gray scale, and one that blends two colors the way the Screen composition does.
The first function is quite simple; there are various ways of doing it (see this related post), but Qt uses this simple formula:
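The original snippet isn't reproduced here, so the following is a Python transcription of the integer formula used by Qt's qGray():

def gray(r, g, b):
    # same integer weights as Qt's qGray(): (r * 11 + g * 16 + b * 5) / 32
    return (r * 11 + g * 16 + b * 5) // 32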
Then the blending is done using CompositionMode_Screen which, as the documentation explains, inverts the source and destination colors and multiplies them. How it actually does that is a bit difficult to find, as compositions are helper functions accessed "by attribute" (I believe); the Screen composition (see the woboq source) boils down to 255 - (255 - source) * (255 - destination) / 255 for each 8-bit channel. Considering the above, we can get more or less the correct result.
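The original code isn't included above either; what follows is a rough transcription of the same idea. The blend() and colorized() names are just my own choices, and gray() is the function defined earlier:

from PyQt5.QtGui import QColor

def blend(source, destination):
    # Screen composition for one 8-bit channel: invert both values,
    # multiply them, then invert the product back
    return 255 - (255 - source) * (255 - destination) // 255

def colorized(base, tint):
    # base and tint are QColors; screen-blend each tint channel over the gray value
    gray_value = gray(*base.getRgb()[:3])
    r, g, b = (blend(c, gray_value) for c in tint.getRgb()[:3])
    return QColor(r, g, b)

base = QColor.fromHsv(246, 134, 168)
tint = QColor.fromHsv(120, 128, 128)
print("Computed color (HSV):", colorized(base, tint).getHsv()[:3])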
Based on your colors, the above gives a result that is not perfect (probably due to rounding issues), but close enough to what you actually sampled from the view.
But there's another issue: the effect also supports the strength property. As said above, Qt handles that by setting the opacity of the painter when drawing the source gray scale. But if we want to compute the color, that's not a valid solution: we want to compute the final color, not sample it after it has been painted.
In order to know the actual final result of the effect, we need to tweak the blend() function a bit so that it also considers the original color: it computes the blended component as before, but then takes the difference between that and the source component, and returns the source plus that difference multiplied by the strength ratio (see the sketch below).
The result is still not perfect in integer values, but quite close to the actual one.
In order to clarify all the above, here is an example that shows how it works, allowing color changes and strength factors, and finally comparing the resulting "colorized" value with the computed one:
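The original interactive example isn't reproduced here; below is a cut-down, non-interactive sketch of the same comparison, reusing the question's colors and an arbitrary strength of 0.75:

import sys
from PyQt5.QtCore import Qt, QRect, QSize
from PyQt5.QtGui import QBrush, QColor, QPen
from PyQt5.QtWidgets import (QApplication, QGraphicsColorizeEffect,
    QGraphicsRectItem, QGraphicsScene, QGraphicsView)

def gray(r, g, b):
    # Qt's qGray() integer formula
    return (r * 11 + g * 16 + b * 5) // 32

def blend(source, destination, original, strength):
    # Screen composition, then interpolation by the strength ratio
    blended = 255 - (255 - source) * (255 - destination) // 255
    return round(original + (blended - original) * strength)

def colorized(base, tint, strength):
    gray_value = gray(*base.getRgb()[:3])
    return QColor(*(blend(c, gray_value, o, strength)
        for c, o in zip(tint.getRgb()[:3], base.getRgb()[:3])))

app = QApplication(sys.argv)

base = QColor.fromHsv(246, 134, 168)
tint = QColor.fromHsv(120, 128, 128)
strength = 0.75

scene = QGraphicsScene()
rect = QGraphicsRectItem(0, 0, 300, 150)
rect.setBrush(QBrush(base))
rect.setPen(QPen(Qt.NoPen))
scene.addItem(rect)

effect = QGraphicsColorizeEffect()
effect.setColor(tint)
effect.setStrength(strength)
rect.setGraphicsEffect(effect)

view = QGraphicsView(scene)
view.show()

# sample the pixel at the center of the viewport (the scene contents are
# centered by default, so this falls inside the rectangle) and compare it
# with the value computed by the functions above
center = view.viewport().rect().center()
sampled = view.viewport().grab(QRect(center, QSize(1, 1))).toImage().pixelColor(0, 0)
computed = colorized(base, tint, strength)
print("Sampled  (HSV):", sampled.getHsv()[:3])
print("Computed (HSV):", computed.getHsv()[:3])

sys.exit(app.exec_())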
Finally, the above obviously doesn't consider the alpha channel of the source or of the effect color: in that case the final resulting color will also depend on what the item is being painted on. Also, remember that the grab function can only consider the Qt context: if you're using transparency, there is absolutely no way to know the exact result unless you can access the OS capabilities; considering that, there's really no point in all this effort, just grab a screenshot and get the pixel.