35 years ago, Viking 1 shakes hands with Mars

On this date in 1976, the NASA Viking 1 lander touched down on the surface of Mars – the first fully successful landing on that planet. The US space program, until that time dealing largely with the moon missions, satellites, and Skylab, had now extended its reach phenomenally.

Now, I’m going to put a damper on nationalism in the interests of accuracy for a moment. The Soviet Union had crashed a Venera probe on Venus just a wee bit earlier – like a decade earlier. If it makes you feel better, they’d lost communication before it reached the surface, so the mission returned no data, but credit where it’s due. A fully functioning Venera surface mission succeeded in 1970, still well ahead of Viking.

One of the things that distinguished Viking, however, was its ability to transmit high-resolution images back to NASA, and in turn (because NASA shares its images as a matter of policy) to the rest of the world. Now you get to hear some of this from a photographer’s viewpoint – lucky you!

Your digital camera renders color by having a teeny little bit of colored filter over each individual “pixel” sensor, laid out in a standard repeating arrangement (the Bayer pattern), much like the colored sub-pixels of a TV screen. Digital sensors can only read light intensity (or brightness, if you prefer), not color, so each pixel has to be dedicated to a particular color by its filter. What this means, however, is that the red pixels represent only 1/4 the resolution of the camera – blue gets another quarter, and green, which our eyes are most sensitive to, the remaining half. The camera software, once the image is captured, interpolates the full color of each pixel in the finished image by comparing the intensities of neighboring pixels of each color against one another, then adjusting them to try and represent “true” color. Of course, it matters a bit just what color filter sits over each pixel in the first place, and what image settings the user has chosen – high contrast, more saturated colors, and so on. The long and short of it is, there’s no particular way to tell what the most accurate rendition of an image is.
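To put a little meat on that, here’s a minimal sketch of the interpolation idea (demosaicing), assuming the common RGGB Bayer layout – real camera firmware uses far smarter, edge-aware algorithms, so treat this as the concept rather than the practice:

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_naive(raw):
    """Crude neighbor-averaging demosaic of an RGGB Bayer mosaic.

    raw: 2-D array of photosite intensities, each seen through one
    color filter. Returns an H x W x 3 RGB image. A stand-in for the
    real bilinear or edge-aware methods cameras actually use.
    """
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)

    # RGGB layout: red covers 1/4 of the photosites, green 1/2,
    # blue 1/4 -- hence the reduced per-color resolution.
    masks[0::2, 0::2, 0] = True   # red
    masks[0::2, 1::2, 1] = True   # green
    masks[1::2, 0::2, 1] = True   # green
    masks[1::2, 1::2, 2] = True   # blue

    kernel = np.ones((3, 3))
    for c in range(3):
        known = np.where(masks[..., c], raw, 0.0)
        # Average whatever same-color neighbors fall in each 3x3 window...
        weight = convolve(masks[..., c].astype(float), kernel, mode="mirror")
        rgb[..., c] = convolve(known, kernel, mode="mirror") / weight
        # ...but keep the directly measured values exact.
        rgb[..., c] = np.where(masks[..., c], raw, rgb[..., c])
    return rgb
```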

Most astronomical cameras do not have color ability built into the sensor; instead, they have color filters in the optical path that can be switched at will, and creating a full-color image takes at least three separate exposures that are transmitted separately back to NASA. But there can also be additional filters – for more colors, infrared, ultraviolet, and so on – giving the camera a lot of versatility. It’s not going to get updated next year, after all. A small aside, too: developing a satellite, probe, or lander requires coordinating every component – its weight, its power needs, its ability to transmit data back, and so on. Because of this, new developments and upgraded hardware rarely make their way into a space vehicle under construction, so the digital sensors aboard often lag far behind the abilities of current consumer cameras, and would typically be considered seriously obsolete in comparison.
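Conceptually, assembling those separate filtered exposures into one color picture is nothing more than stacking channels. A bare-bones sketch, with invented filenames and placeholder gains standing in for real calibration values:

```python
import numpy as np

# Hypothetical example: three monochrome exposures taken through
# red, green, and blue filters (filenames invented for illustration).
red   = np.load("exposure_red.npy")     # each is a 2-D intensity array
green = np.load("exposure_green.npy")
blue  = np.load("exposure_blue.npy")

# Stack into an H x W x 3 color image. The per-channel gains stand in
# for calibration factors that balance each filter's transmission.
gains = np.array([1.0, 1.0, 1.0])       # placeholder values
color = np.stack([red, green, blue], axis=-1) * gains
color = np.clip(color / color.max(), 0.0, 1.0)  # normalize for display
```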

Image courtesy of redOrbit.com
Personally, I’ve spent no small amount of time scanning slides and trying to adjust them to appear the same on the monitor as they do on the light table. And I live on the planet where those images were taken! When the atmosphere is entirely different and vital information about the environment is determined by what’s showing in the images, calculating the most accurate color rendition becomes a matter of no small debate, and the application of some pretty advanced science. Seen here at the base of the antenna arm, the lander carries several color targets within visible range of the cameras, which provide known baseline colors (and resolutions) to help calibrate the resulting images. In a way.
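To give a feel for what “calibrating against the targets” can mean in practice, one simple approach is to fit a 3×3 correction matrix by least squares, mapping what the camera measured for each patch to what the patch is known to read under reference lighting. All the patch values below are invented for illustration:

```python
import numpy as np

# Invented data: camera-measured RGB for each calibration patch (rows),
# alongside the RGB those patches should have under reference lighting.
measured = np.array([[0.82, 0.31, 0.24],
                     [0.35, 0.74, 0.30],
                     [0.28, 0.33, 0.79],
                     [0.90, 0.88, 0.85]])
reference = np.array([[1.00, 0.20, 0.15],
                      [0.20, 0.95, 0.20],
                      [0.15, 0.20, 1.00],
                      [0.95, 0.95, 0.95]])

# Least-squares fit of a 3x3 matrix M such that measured @ M ~ reference.
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

def color_correct(image):
    """Apply the fitted correction to an H x W x 3 image."""
    return np.clip(image @ M, 0.0, 1.0)
```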

The color register on another planet should be different, since the sunlight is filtered through a different mix of gases in the atmosphere. As I mentioned earlier, this color cast is something you may not want to correct for, most especially if you’re trying to see what the planet is actually like. But it does present some issues in figuring out how to interpret the color channels of the images. On Earth, the lander may be white (at noon, anyway), but on Mars it probably never gets brighter than pale apricot, and during a dust storm, who knows what color the light turns? Figuring this out is what makes the technicians’ jobs interesting.
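The simplest version of that choice looks something like the sketch below, assuming you’ve measured a target patch that would be white on Earth: scale each channel so the patch comes out neutral – or deliberately don’t, when the cast itself is the information you’re after.

```python
import numpy as np

def white_balance(image, patch_rgb, correct_cast=True):
    """Scale channels so a known-white patch reads neutral gray.

    patch_rgb: measured RGB of a target that is white under Earth
    daylight. With correct_cast=False the image is left untouched --
    sometimes the alien color cast is exactly what you want to keep.
    """
    if not correct_cast:
        return image
    gains = patch_rgb.mean() / patch_rgb  # per-channel scale factors
    return np.clip(image * gains, 0.0, 1.0)
```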

At the time, one of the prime questions Viking was supposed to answer regarded the possibility of life. Nobody expected to find little green men, but the speculation about ice and organic compounds made it clear that the possibility was real. The landscape of Mars also gave distinct indications that, some time in the past, Mars had featured a more hospitable atmosphere, meaning that traces of past life might also exist. Of the four different methods the lander possessed to try and answer these questions, three came up negative, and one positive. Later analyses, with a better understanding of the nature of the atmosphere, still haven’t actually resolved the question; the possibility is still there, and we may have found evidence of it – not anything really compelling like bacterial traces or microorganisms, but the chemical aftereffects. In fact, Viking may actually have damaged the very traces it was trying to detect.

This is part of the challenge of investigating things such a vast distance away. The tests have to be planned well in advance and built into the lander, and obtaining new samples usually takes another mission. The sudden insight – “What if we tried this?” – requires a decade of planning and a few million dollars to implement, or a way to deduce or infer the answer from existing data. Impatient people don’t get assigned to work on planetary probes.

Today and tomorrow, by the way, mark another anniversary, but I’ll refer you back to an earlier post for that one. If you’re confused, bear in mind that the mission was run on Coordinated Universal Time (UTC, essentially Greenwich time) and the delay between landing and EVA meant Armstrong actually stepped out the next morning ;-)