Numbers game

Several days back, The Girlfriend found a potted blue lobelia for me, something I’d been intending to get for a while, because they’re blue, and I mean seriously blue – more blue than any flower I’ve seen, more blue than almost anything I’ve seen. Note that this is not the US-native great blue lobelia, or blue cardinal flower, but an African import, Lobelia erinus, of the family Campanulaceae. And one day in passing, I decided to do a couple of frames of it to record that color. But even on the LCD during that 2-second preview, I could see things were off. This is what the camera produced:

blue lobelia Lobelia erinus rendered far too purple in-camera
That’s… not a color match at all, or even close. The flower is by no means purple; it is as pure a blue as one could reasonably expect or define. I had to do a bit of tweaking to get the image close to what the flower actually looks like to our eyes (or at least to mine):

blue lobelia Lobelia erinus image edited to reflect true colors more accurately
Note that this wasn’t a simple color tweak, or an adjustment to saturation in individual channels (like desaturating the Magenta, which I thought should have worked and instead made virtually no difference). Not only did I have to desaturate Magenta almost entirely, I also had to adjust the Hue of the Blue channel by no small margin to get it to look this way – thankfully, this had little effect on the rest of the image, and the result looks pretty damn close to natural.
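
For anyone who prefers to script this kind of correction, here’s a minimal sketch of the same idea in Python with Pillow and NumPy: desaturate the magenta range and nudge its hues back toward blue. The file name, the hue boundaries, and the amounts are assumptions for illustration, not the values I actually used:

```python
# A rough sketch, not the actual edit: desaturate a magenta-ish hue range
# and shift it toward blue. "lobelia.jpg" and the numbers are assumptions.
import numpy as np
from PIL import Image

img = Image.open("lobelia.jpg").convert("HSV")
h, s, v = [np.array(ch, dtype=np.int16) for ch in img.split()]

# Pillow scales hue to 0-255: blue sits near ~170, magenta near ~213.
magenta = (h > 190) & (h < 235)
s[magenta] //= 4                 # heavily desaturate the magenta range
h[magenta] -= 30                 # and rotate those hues back toward blue

channels = [Image.fromarray(ch.astype(np.uint8)) for ch in (h, s, v)]
Image.merge("HSV", channels).convert("RGB").save("lobelia-fixed.jpg")
```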

But this got me curious as to why this occurred to such a large degree, and naturally, how much it was affecting other images. My initial thought was that the sensor had a little too much sensitivity to violet and ultraviolet, the latter invisible to us, and that this was what got captured in the image; I already knew that CMOS sensors can reach a decent distance into the equally-invisible infrared. But no – they capture virtually no UV, and the answer instead appears to be a more complicated and curious aspect of physics and CMOS sensors.

First bit: broken down into a spectrum, the sun emits less blue than green or red, though of course a spectrum is continuous and doesn’t bear the nice distinctions of “blue,” “green,” or “red” that we want to apply to it. Nonetheless, both digital sensors and our eyes break down light into three primary colors in this way (no, yellow isn’t included; that’s a pigment-based thing from mixing paints and dyes). Other colors fill the gaps and might be considered combinations of these three to varying degrees – but again, it’s a spectrum, and it’s our eyes and digital sensors that count them as combinations.
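
As a rough sanity check, Planck’s black-body law for a sun-temperature source can be evaluated at a few representative wavelengths. This sketch assumes ~5800 K, ignores atmospheric absorption, and counts photons rather than energy – since, as the next bit explains, photons are what the sensor actually tallies:

```python
# Relative photon output of a ~5800 K black body (a stand-in for the sun)
# at nominal blue, green, and red wavelengths. Simplified: no atmosphere.
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # Planck const, light speed, Boltzmann const (SI)
T = 5800.0                                 # approximate solar surface temperature, K

def photon_radiance(wl):
    """Relative photon flux per unit wavelength at wavelength wl (meters)."""
    energy_radiance = (2 * h * c**2 / wl**5) / (math.exp(h * c / (wl * k * T)) - 1)
    return energy_radiance * wl / (h * c)  # divide by the per-photon energy

blue, green, red = (photon_radiance(wl) for wl in (450e-9, 550e-9, 650e-9))
print(f"photon flux, blue : green : red = {blue/red:.2f} : {green/red:.2f} : 1.00")
# prints roughly 0.78 : 0.96 : 1.00 -- fewer blue photons to begin with
```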

Second bit: CMOS sensors, used in most commercial digital cameras, count photons in each of these three primary colors. But the shorter wavelength of blue means that blue carries more energy per photon – to use a brief analogy, for the same amount of light energy, it hits fewer times yet harder. A CMOS sensor, though, is only counting the hits, not how hard they land, and so blue isn’t getting counted evenly.
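
The arithmetic behind that analogy is just E = hc/λ; a quick sketch of the numbers, using nominal wavelengths for each color:

```python
# Energy per photon, E = h*c/wavelength: a 450 nm blue photon carries
# roughly 44% more energy than a 650 nm red one, so the same light energy
# arrives as correspondingly fewer blue photons to be counted.
h, c = 6.626e-34, 2.998e8   # Planck's constant (J*s), speed of light (m/s)

for name, wavelength in [("blue", 450e-9), ("green", 550e-9), ("red", 650e-9)]:
    energy = h * c / wavelength          # joules per photon
    photons = 1e-6 / energy              # photons in one microjoule of light
    print(f"{name:5s}: {energy:.2e} J/photon, {photons:.2e} photons/uJ")
```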

Then there’s part three: the way CMOS sensors are made, blue has a tendency to scatter a little before reaching the sensor itself, so it gets reduced even more. Chances are, the software that interpolates the sensor output makes some adjustments for this, but anything that isn’t captured/measured by the sensor in the first place can’t be restored by a software boost – amplifying a weak channel amplifies its noise right along with it.
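
A toy simulation makes that last point; the capture-efficiency numbers here are invented for illustration, not real sensor specs:

```python
# Toy illustration (assumed numbers): boosting a weak, photon-noise-limited
# channel in software restores its brightness but not its signal-to-noise.
import numpy as np

rng = np.random.default_rng(42)
incoming = 1000                    # photons aimed at each pixel
blue_qe, green_qe = 0.25, 0.50     # assumed capture efficiencies, blue vs. green

blue = rng.poisson(incoming * blue_qe, 100_000)    # photon (shot) noise included
green = rng.poisson(incoming * green_qe, 100_000)

boosted = blue * (green_qe / blue_qe)   # software gain to match green's brightness
for name, ch in [("green", green), ("blue, boosted", boosted)]:
    print(f"{name}: mean={ch.mean():.0f}, SNR={ch.mean() / ch.std():.1f}")
# both channels end up with mean ~500, but the boosted blue's SNR stays lower
```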

So, just now, I decided to go into the individual color channels and see how they looked – they are below in order of Red, Green, and Blue:

unaltered image of blue lobelia Lobelia erinus separated into primary RGB channels
As one would expect, the Blue channel is very bright in the flowers themselves, which is as it should be. But the flowers also have a distinctive presence in Green and especially Red, which isn’t as it should be – or at least, not to my expectations. I’ve broken down images of red flowers into separate channels before, and Red is of course bright while Green and Blue drop almost to black in the channel rendition – like below.

image of hibiscus blossom with separated RGB channels
[Note, too, that the Blue channel is often the blotchiest and least detailed within most images when broken down in this way, probably due to that photon count vs. energy bias.]
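If you want to try this breakdown on your own images, it takes only a few lines in Python with Pillow; the file name here is just a stand-in:

```python
# Split a photo into its R, G, B channels, saving each as a grayscale
# image like the breakdowns above. "lobelia.jpg" is a hypothetical name.
from PIL import Image

img = Image.open("lobelia.jpg").convert("RGB")
for name, channel in zip(("red", "green", "blue"), img.split()):
    channel.save(f"lobelia-{name}.png")   # each channel comes out mode "L"
```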

Now, is there a way to fix this? No – not without a new sensor/camera that is probably very expensive, and quite frankly, the impact is trivial; this is the first circumstance where it became really noticeable, and since I shoot nearly all the time in Daylight white balance, I tend to tweak images that need it anyway. It’s easy to get bogged down in pursuit of some definition of “accurate,” but it’s ultimately pointless; between the shortcomings of dynamic range in both sensors and monitors, and the subjectivity of individual perception (is the blue I see the same as the blue you see?), there’s no reasonable way to define “accurate” anyway. In the situations that call for it, I’ll fix it in post.

* * *

Information sources for this post:

Why are sensors less sensitive to blue light?

Why is the blue channel the noisiest?
