Chasing megapixels

Source: Hubblesite.org

Some time back, I’d started a post on this subject, partially in response to a thread somewhere, but when I took too long to finish it I realized it was, in webby terms, no longer current, and simply let it go. But after another prompting, this time from The Straight Dope, I decided it’s worth pursuing anyway, and may provide a little insight into the whole digital photography thing.

As mentioned previously, many people are shocked when they hear that some space probe or lander currently making news has a measly two megapixel digital camera in it – “What the hell? I’ve had a twelve megapixel camera for six years!”… closely followed by various speculations on what’s wrong with NASA and so on. And in like vein, many people seem to think that more megapixels equals better camera. As a measuring stick, however, there’s a tremendous amount wrong with this.

Let’s start with resolution. The general idea is that more megapixels means better detail and a bigger print. Now, space agencies aren’t in the business of making prints – they’re after details, and it’s not megapixels they need, but resolving power. This is chiefly up to the lens, and as a basic guideline, any digital sensor should resolve slightly more line pairs per millimeter (the unit resolving power is actually measured in) than the lens itself, so the limitation comes from the lens alone. This is a subject we’re going to come back to shortly.
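
To put a rough number on that guideline: a sensor can’t resolve a line pair with fewer than two pixels, so its limit falls straight out of the pixel pitch. The figures below (a 4-micron pitch and a 60 lp/mm lens) are illustrative assumptions, not measurements from any particular camera.

```python
def sensor_lp_per_mm(pixel_pitch_um):
    """Nyquist-limited resolving power of a sensor: one line pair needs two pixels."""
    return 1000.0 / (2.0 * pixel_pitch_um)

lens_lp_per_mm = 60.0                # assumed figure for a decent consumer lens
print(sensor_lp_per_mm(4.0))         # 125.0 lp/mm for an assumed 4-micron pixel pitch
# The sensor comfortably out-resolves the lens, so the lens sets the limit,
# which is the arrangement described above.
```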

Second, we cannot confuse the typical photographer’s camera, intended to capture an entire scene at once, with a geological or astronomical survey camera, which may take numerous images while orbiting or intercepting any given subject. A 2Mp sensor that takes hundreds of frames during a close pass can produce a stitched composite image of humongous proportions. So it’s not really the size of the sensor, it’s how it’s used.
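
As a back-of-the-envelope illustration of that point (the frame size, overlap, and frame count below are assumptions picked for the arithmetic, not the plan of any actual mission):

```python
frame_w, frame_h = 1600, 1200        # one ~2 Mp frame
overlap = 0.15                       # assumed 15% overlap between neighboring frames
cols, rows = 20, 10                  # 200 frames gathered over successive passes

mosaic_w = int(frame_w * (1 + (cols - 1) * (1 - overlap)))
mosaic_h = int(frame_h * (1 + (rows - 1) * (1 - overlap)))
print(mosaic_w, mosaic_h, round(mosaic_w * mosaic_h / 1e6))   # ~27440 x 10380, roughly 285 Mp
```

Two hundred frames from a ‘measly’ 2Mp sensor stitch into something no consumer camera comes close to in a single shot.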

Also note that consumer cameras, especially SLRs, are largely built around shared standards, originally using many of the same lenses as their film counterparts and usually following some particular frame standard. In fact, many of the limitations of today’s cameras come from the mirror box, a big open space between lens and sensor that houses the reflex mirror, which sends the view up to the eyepiece or, during exposure, flips out of the way. Lenses must be designed to accommodate this space in front of the sensor, but nothing of the sort is needed for any space probe. Without this design constraint, the lens can sit as close to the sensor as necessary, and can potentially be much sharper. There is certainly less attenuation of the incoming light and less chromatic aberration. We’re coming back to that too.

Plus, it all depends on the actual size of the sensor itself. A 2Mp sensor that’s only 5mm square packs its pixels far more densely than the sensors in most cameras on the market. The megapixel count really only tells how many points are captured at a time, and nothing at all about resolution.
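
For a rough sense of scale (the 24Mp full-frame figures are an assumed consumer comparison, not any specific model):

```python
import math

def pixel_pitch_um(megapixels, width_mm, height_mm):
    """Approximate pixel pitch, assuming square pixels filling the sensor."""
    pixels_per_mm = math.sqrt(megapixels * 1e6 / (width_mm * height_mm))
    return 1000.0 / pixels_per_mm

print(pixel_pitch_um(2, 5, 5))       # ~3.5 microns: the small 2 Mp chip from the example
print(pixel_pitch_um(24, 36, 24))    # ~6.0 microns: an assumed 24 Mp full-frame sensor
# The tiny chip samples the image roughly twice as finely per millimeter.
```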

Now let’s talk about the limitations of remote photography. Every digital sensor means mass that not only has to be boosted out of earth’s gravity, but must be maneuvered for any given task – orbit, rotation, and even driving across the landscape of Mars. So, the smaller the better. Then there are the power requirements, since the more megapixels, the more power drained for each image, and this has to be weighed against the other power demands within the probe/lander, and the ability to obtain more from solar cells or internal generators. Again, lean is good here. And let’s not forget that every image is worthless until it makes it back to earth, which means each little bit of information has to be transmitted a very long way, again at the cost of power.
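
To make the transmission cost concrete (the bit depth and downlink rate here are illustrative assumptions, not figures from any real probe):

```python
pixels = 2_000_000            # one 2 Mp frame
bits_per_pixel = 12           # assumed raw bit depth
downlink_bps = 500_000        # assumed downlink rate, bits per second

frame_bits = pixels * bits_per_pixel
print(frame_bits / downlink_bps)    # 48.0 seconds of transmitter time per frame
# Quadruple the megapixels and every frame costs four times as much
# power-hungry transmitter time, before any compression.
```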

The sensor does not stand alone, of course, but needs circuitry that supports its functions, with its own operating parameters. The computer you’re using to read this probably has plenty of hardware added just to handle the waste heat from the processors, trading power efficiency for features, but the circuitry on a probe has to be far leaner, and may well dictate which sensor is going to be used in the first place. And for something like orbital surveys, it also needs to maintain a certain processing speed to prevent gaps, or the necessity of multiple passes.

Add in the operating environment, which may be extremely hostile to simple electronics and require a certain amount of shielding, as well as temperature regulation. Most digital sensors have some susceptibility to noise, or false readings due to stray electrical signals, heat, and simple failures within the sensor. While we may be able to spot these easily in our own images, having them turn up in space pics can be a bad thing, so the housing of the sensor itself must be optimal for its operation, and some sensors may be much better suited to such environments than others.

Which brings us to another resolving issue. The range of ‘light’ (electromagnetic radiation) available to be captured is vast, far in excess of any sensor’s capabilities, but some sensors are still going to handle particular parts of it better than others. In some conditions, sensitivity to extremely low light is paramount, while in others, the handling of contrast may be important, or heightened sensitivity to infra-red. It doesn’t really matter that the sensor is ‘only’ 2Mp if it’s the only one that will obtain the image needed.

Now that we have this pile of factors in mind, let’s consider that planning the construction of such a probe/lander takes years, with reams of careful calculations about the fuel needed to boost it all, insertion times, power distribution, and so on. Once a design is pinned down, changes are almost always a very bad thing, because they’ll have ripple effects throughout the process. Even if a brand new sensor promises to perform better than the original design, incorporating it will require changes to hundreds of different aspects (including the budget.) So, by consumer standards the sensors may seem woefully outdated, but in terms of space exploration, what’s important is actually getting the thing into space and having it perform its function. Feelings of consumer inadequacy play no part at all (and to be frank, shouldn’t play a part in anyone’s decisions…)

We’re not done yet. As mentioned both in that previous post and in the Straight Dope column, digital sensors don’t capture color at all; they just capture light intensity – call it greyscale if you like. Consumer digital sensors have little colored overlays that filter each pixel for a particular color, with a choice of three – each pixel is fixed to red, green, or blue. This is actually one of the major issues with digital color rendition, which most people ignore for the sake of digital’s ‘convenience’ but some have recognized as fairly restrictive. The true color resolution of any consumer camera is only a third (or even less) of the megapixel count, because each pixel in the final image must have a value for red, green, and blue, and those values must come from at least three different pixels on the sensor. So the colors are interpolated, calculated as the likely ‘true’ color based on the values from those three. Worse, to get to the monitor in front of you they are then split back into three separate colors, giving another hit to resolution.
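
A minimal sketch of what that filtering and interpolation amounts to, using a toy fill-in rather than the far more sophisticated demosaicing real cameras perform:

```python
import numpy as np

raw = np.random.randint(0, 4096, (4, 4)).astype(float)   # monochrome sensor readings

pattern = np.array([['R', 'G'], ['G', 'B']])              # an RGGB Bayer tile
bayer = np.tile(pattern, (2, 2))                          # which color each photosite measured

rgb = np.zeros((4, 4, 3))
for ch, name in enumerate('RGB'):
    mask = bayer == name
    rgb[..., ch] = np.where(mask, raw, raw[mask].mean())  # guess the two unmeasured channels

print(rgb.shape)   # (4, 4, 3): full color, but two of every pixel's three values are estimates
```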

The cameras on probes (and landers, to the best of my knowledge) don’t suffer from this. The sensors remain monochromatic, and a filter assembly between lens and sensor can be switched at will, allowing not just the three fixed colors that we consumers are stuck with, but a very wide range including infra-red and ultra-violet (more likely narrow bands within those broad categories,) and even filters specific to hydrogen and oxygen emissions. Every pixel is put to work, and the resulting images are transmitted separately for recombining, as needed, back on earth. So that little 2Mp camera can likely produce twenty times the info of any consumer hotsy-totsy model, regardless of megapixel count.
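
A sketch of that ground-side recombination, with random arrays standing in for the separately filtered exposures:

```python
import numpy as np

# placeholder frames standing in for separate exposures through a filter wheel
exposures = {name: np.random.rand(1200, 1600)
             for name in ('red', 'green', 'blue', 'near_ir', 'h_alpha')}

# back on earth, any three filtered frames can be stacked as R, G, B channels
composite = np.dstack([exposures['red'], exposures['green'], exposures['blue']])
print(composite.shape)   # (1200, 1600, 3): every pixel in every channel was actually measured
```

Swap the narrow-band frames in as channels instead and you get the false-color composites the space agencies are famous for.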

No, not done yet. Everyone knows lenses bend light, but few realize that they bend different wavelengths by different amounts, a little something called chromatic aberration, often resulting in color fringing. Consumer lenses, especially the expensive ones, have lots of little workarounds like multiple coatings to filter out the worst effects, and various elements added to the glass to try to control everything at the same time; again, the demand of capturing the best picture all at once means a lot has to be done to overcome the routine physics of light. And when shooting with infra-red film, focusing actually requires readjusting the focal distance once the filter is in place. Ah, but with the multiple filters of probe cameras, the focus can be adjusted to the optimum for each filter! This means that every color can be the sharpest the lens can produce, rather than the averaged-out fuzziness most consumers end up with.

So let’s come back down to earth and discuss how this affects your own photography. First off, megapixel count is meaningless unless you’re getting the optimum resolution from your lens, and if you bought a 16Mp camera with a ‘kit’ lens, you’re simply wasting loads of pixels because the lens cannot resolve finely enough to use them – and by extension, eating up loads of memory to store fuzziness. Plus, there are so many other factors affecting the quality of your images, including the aforementioned chromatic aberration, as well as lens distortion, light falloff, and even color cast. Your sensor may be very susceptible to noise or render contrast poorly, it may be coupled to firmware that doesn’t interpolate well or does terrible white-balancing (another bit of post-processing,) and as hinted at above, it can put a drain on your batteries and memory that shortens your effective shooting time.

Then, there’s the end result to consider. I actually do a pretty good number of prints from my images, both film and digital, and have never shot with anything more than 6Mp. I rarely see anyone who has printed their work out larger than 11 x 14 inches, but even so, let’s think about this. Ignoring the lens considerations, most prints are best viewed all at once, so as the display size goes up, so does the typical viewing distance – commensurately, the ability to make out fine detail drops. Only a very small percentage of images can stand up to getting in close to see tiny details, which is the only place where megapixel count is going to matter… and those that aim for such displays stitch together multiple images, as often as not ;-). Such images have to be near perfect in focus, depth-of-field, exposure, contrast, and so on, which means having a lot more going for you than megapixels.
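
For the curious, the viewing-distance arithmetic works out roughly like this, taking one arcminute as the finest detail a typical eye resolves (the print sizes and distances are just examples):

```python
import math

def needed_megapixels(print_w_in, print_h_in, viewing_distance_in):
    """Pixels needed so detail sits at the ~1 arcminute limit of typical eyesight."""
    ppi = 1.0 / (viewing_distance_in * math.tan(math.radians(1.0 / 60.0)))
    return (print_w_in * ppi) * (print_h_in * ppi) / 1e6

print(needed_megapixels(14, 11, 15))   # ~8 Mp for an 11 x 14 print examined up close
print(needed_megapixels(30, 20, 60))   # ~2 Mp for a poster viewed from across a room
```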

And finally, the old saw: the camera doesn’t take the pics, it’s just a tool that reflects the skill of the photographer. Even more importantly, technical perfection isn’t what makes a good photo; concentrate on content, composition, emotion, appeal, balance, framing… you get the idea. Don’t get me wrong – resolution can certainly help an image stand out, but that’s not actually determined by megapixels, which mean nothing until you’ve got lots of other things under control first.