I am attempting to understand how CIAdditionCompositing works.
As part of my testing, I have created a square mid-gray image:
[image: mid-gray square]

and a square black image:
[image: black square]

When I combined these two square images using a CIAdditionCompositing patch, I expected to see a gray square whose color matched the original mid-gray square exactly (because all color components of the black image have value 0). However, the final result is actually brighter than the original gray image:
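(In other words, for each channel I expected output = background + foreground; assuming a mid-gray of 128 and a black of 0 in 8-bit terms, that is 128 + 0 = 128, so the composite should be identical to the gray input.)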
[image: result — a gray square noticeably brighter than the original]

I don't understand how this result is produced. What am I misunderstanding about how CIAdditionCompositing works?
So here is how I experimented with this. I generated the images using Python (PIL and NumPy) with the code below.
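A minimal sketch of that generation script (the 100×100 size, the mid-gray value of 128, and the file names are assumptions here):

```python
import numpy as np
from PIL import Image

SIZE = (100, 100)

# Mid-gray square: every RGB channel set to 128
gray = np.full((SIZE[0], SIZE[1], 3), 128, dtype=np.uint8)
Image.fromarray(gray).save("gray.png")

# Black square: every RGB channel set to 0
black = np.zeros((SIZE[0], SIZE[1], 3), dtype=np.uint8)
Image.fromarray(black).save("black.png")
```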
Then I wrote some Xcode code to check your filter.
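Roughly like this, in Swift (a sketch, not the exact code; the file paths are placeholders):

```swift
import CoreImage

let bg = CIImage(contentsOf: URL(fileURLWithPath: "gray.png"))!   // background (A)
let fg = CIImage(contentsOf: URL(fileURLWithPath: "black.png"))!  // foreground (B)

// CIAdditionCompositing adds inputImage on top of inputBackgroundImage
let filter = CIFilter(name: "CIAdditionCompositing")!
filter.setValue(fg, forKey: kCIInputImageKey)
filter.setValue(bg, forKey: kCIInputBackgroundImageKey)

// Render the result back out to a PNG in sRGB
let context = CIContext()
try context.writePNGRepresentation(
    of: filter.outputImage!,
    to: URL(fileURLWithPath: "result.png"),
    format: .RGBA8,
    colorSpace: CGColorSpace(name: CGColorSpace.sRGB)!
)
```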
And then I ran the code below in Python to print the pixel values.
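Something like this (again a sketch; since the test squares are flat colors, a single pixel per image is representative):

```python
import numpy as np
from PIL import Image

for name in ("gray.png", "black.png", "result.png"):
    px = np.array(Image.open(name).convert("RGB"))
    # Flat test image, so the corner pixel stands for the whole square
    print(name, px[0, 0])
```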
After that I plotted them all in an Excel sheet. Here is my observation:
Now the delta that I added is nearly equivalent to ROUNDUP((<sum of pixels> - 16)/2 - 1, 0). I say "nearly" because I couldn't work out a 100% exact formula. So if A is the background image and B is the foreground image, then below is the data from Excel. The Excel formula that I used was:

IF(ROUNDUP((D2-16)/2-1,0) < 0, 0, ROUNDUP((D2-16)/2-1,0))

So unfortunately, they do say they use the formula described in
https://keithp.com/~keithp/porterduff/p253-porter.pdf
But the delta function is custom. Also, I believe the formula from that PDF only comes into the picture when the image you use has a custom alpha channel.
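For reference, here is that Excel fit translated into Python (my sketch; it assumes D2 holds the sum A + B of the two input pixel values, as in my sheet):

```python
import math

def predicted_delta(pixel_sum):
    # Excel: IF(ROUNDUP((D2-16)/2-1,0) < 0, 0, ROUNDUP((D2-16)/2-1,0))
    # For the values involved, ROUNDUP(x, 0) behaves like math.ceil(x),
    # and the IF(... < 0, 0, ...) clamp is just max(..., 0).
    return max(math.ceil((pixel_sum - 16) / 2 - 1), 0)

# e.g. mid-gray over black: A + B = 128 + 0
print(predicted_delta(128))  # 55, so the output would be roughly 128 + 55 = 183
```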