The road’s longer than it looks

Okay, let’s do a variation of an exercise I perform with my students sometimes. I’ll preface this by saying that this isn’t a trick question, nor is there a ‘right’ answer. It’s simple: look at this image and tell me what you can glean from it.

Well, it’s a stream or pond, with a rock at the edge or perhaps in the middle. The air appears reasonably still, from the smoothness of the water, and the water is certainly shallow. There’s a fairly modern watch sitting on a rock. It’s bright sunlight, around midday, and this is supported by the time shown on the watch. It’s probably sometime around the spring or summer months, judging from the foliage visible – and this is supported by the date shown on the watch (let’s assume you can actually read the “TU 5-1” it displays.) Which would probably indicate northern hemisphere, since May is spring there – but maybe not, since it might be the European format and indicate January 5 instead, so this might actually be the southern hemisphere.

The watch itself has some mystery behind it. It’s not natural, so it appears to have been placed or lost here. It is clean, and still displaying a time, indicating that this might have happened recently, but there’s no one visible, supporting the idea that the watch might have been lost. It is a man’s style wristwatch, electronic LCD, battery-powered, probably (judging from the buttons and display) with several options other than simply timekeeping. Anything else that you’d like to add?

What about mood? Does the image evoke any particular emotion, like mellow feelings over a pleasant day, or maybe commiseration over a lost watch, or just curiosity over how such a scene might have come about? Are the memories of any smells or sounds stirred up, or thoughts about fishing or exploring, or recollections of camping or salamander chasing? Can you almost feel the cool water around your ankles, and the tricky footing beneath your feet? Or perhaps there’s even some feelings of distaste over an environment that’s almost impossible to stay clean within, with no ready-made meals and no entertainment.

The student exercise is about metaphors, and what portions of an image tell the viewer (and thus, how a photographer can use this to their advantage.) Now instead, imagine what a computer program could tell us about this image.

Well, virtually none of that (“virtually,” heh! I kill me). The image is simply a collection of pixels of certain colors. There is nothing three-dimensional about it, and in fact, it vanishes without the input of electricity. To actually determine anything specific like “watch” and “water,” a significant amount of programming would have to be done to differentiate all of the myriad ways such things could appear. Determining even if the image is level would require more than a method of finding a horizon, since none actually appears, so any program would have to include some algorithm to average out the waterlines against the rocks. Even just a method of picking out the metal construction of the watch would require a comparison of the lighter, less-saturated portions of the image (what we call “reflections”) against the remainder of the image, probably linked to uniformity in shape so the program wouldn’t mistake reflections from the water as being ‘metallic.’
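To make the ‘reflection’ idea concrete, here’s a minimal sketch in Python of the kind of pixel test such a program might start with: flag pixels that are bright but desaturated. The thresholds and the toy three-pixel ‘image’ are entirely made up for illustration, and a real detector would still need the shape-uniformity check described above to avoid mistaking water glints for metal.

```python
import colorsys

def looks_metallic(rgb, min_value=0.7, max_saturation=0.25):
    """Classify a pixel as a possible specular reflection:
    bright (high value) but desaturated (washed-out color).
    The thresholds here are illustrative, not tuned on real images."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return v >= min_value and s <= max_saturation

# A toy "image": a steel-grey highlight, a green leaf, a brown rock
pixels = [(230, 232, 235), (40, 160, 50), (120, 90, 60)]
flags = [looks_metallic(p) for p in pixels]
print(flags)  # → [True, False, False]
```

Only the first pixel passes, which is the whole point: the program isn’t seeing ‘metal,’ just a brightness/saturation rule that someone had to choose for it.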

Is it possible for a program to interpret the time from the watch? Yes, provided it can also handle the oblique distortion of a watch seen at an angle, or upside down or sideways, rather than almost face-on as here. To render this as a time, however, the program would also need supporting details to recognize the shape of a watch. Had the lighting and colors indicated that the time was closer to sunset, any computer would need another algorithm even to detect the anachronism, much less recognize that the watch was probably set to the wrong time. It would have to be told (programmed) to recognize that an incorrect watch is infinitely more likely than the sun abruptly setting at quarter to two.
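Even the one genuinely tractable sub-step – decoding a seven-segment LCD digit once its lit segments have somehow been found – takes explicit programming. A minimal sketch, with everything upstream (locating the watch, correcting the oblique distortion, detecting the segments) assumed away:

```python
# Standard seven-segment labeling: segments a-f run clockwise from
# the top bar, with g as the middle bar.
SEGMENTS_TO_DIGIT = {
    frozenset("abcdef"): "0",
    frozenset("bc"): "1",
    frozenset("abdeg"): "2",
    frozenset("abcdg"): "3",
    frozenset("bcfg"): "4",
    frozenset("acdfg"): "5",
    frozenset("acdefg"): "6",
    frozenset("abc"): "7",
    frozenset("abcdefg"): "8",
    frozenset("abcdfg"): "9",
}

def decode_display(lit_segments):
    """Turn per-digit sets of lit segments into a digit string.
    Unrecognized combinations become '?' rather than a guess."""
    return "".join(SEGMENTS_TO_DIGIT.get(frozenset(s), "?")
                   for s in lit_segments)

# Quarter to two, as the watch in the photo might show it
print(decode_display(["bc", "bcfg", "acdfg"]))  # → 145
```

Note that the easy part is a lookup table; the table encodes nothing about what a ‘watch’ is, and nothing here would notice that 1:45 contradicts a sunset sky.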

I’m betting your eye jumped to the watch quickly when you first saw the image, and this is partially because of contrast, but more due to pattern recognition – it’s why you likely noted the time on the watch but paid little attention to the foliage or the lichen, which are also high in contrast, but lacking the strong patterns of the watch. How many different kinds of plant life are visible? It’s a low number, but I’m betting you still have to go back and count, since this aspect isn’t something we typically look for. We attach varying levels of importance to different factors within the image, so any computer program to interpret such images would either lack such biases, or need to have them specifically delineated.
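That need to have biases ‘specifically delineated’ can be made concrete: a program ranks regions of an image only by whatever weights someone hands it. A toy sketch, with entirely arbitrary region scores and weights invented for illustration:

```python
def salience(contrast, pattern_strength, weights=(0.3, 0.7)):
    """Toy attention score. A program has no innate preferences,
    so the relative importance of contrast vs. pattern regularity
    must be spelled out explicitly -- these weights are arbitrary."""
    w_contrast, w_pattern = weights
    return w_contrast * contrast + w_pattern * pattern_strength

# Made-up (contrast, pattern) scores for three regions of the photo
regions = {"watch": (0.8, 0.9), "lichen": (0.7, 0.2), "foliage": (0.6, 0.3)}
ranked = sorted(regions, key=lambda r: salience(*regions[r]), reverse=True)
print(ranked)  # → ['watch', 'foliage', 'lichen']
```

Swap the two weights and the lichen suddenly ‘matters’ more than the watch – the bias lives in the numbers, not in anything the program understands.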

And then, there’s the emotional aspect. Whatever feelings you may have gotten from the image are a result of your past experience, and the connections you made at those times. They are a product not just of sensory input, but your personal evaluation of whether, for instance, wading in a stream is a good or bad thing. You can almost hear the sounds, feel the mud, and smell the water because your brain makes connections of all of these inputs in a common manner, so seeing a two-dimensional representation of a stream automatically connects to the other elements. And your emotions are guides towards learning, and better experiences. They exist because they served a survival purpose, giving our ancestors an edge over those that didn’t possess them.

That’s the underlying message here: everything that we can interpret from such images is the product of millions of years of selective pressure, filtered through an organism that has a remarkable network of interacting neurons to build memories and connotations – and is building even more right now. I could tell you what kind of plant produces those bright green leaves you see (well, okay, I honestly can’t, but let’s pretend for a moment,) and you may retain this info for a long time – you might even remember it every time you think of a stream or river. Yet any program written to interpret this image will gain nothing new from my explanations unless it is either reprogrammed to include the info, or has an elaborate mechanism of receiving, interpreting, and connecting information. You know, through experience, that some plants grow only near water, and may wonder if what you see here qualifies, but even that information would have to be input into any program.

Even if we had the ability to create a computer, a robot if you will, that could walk into a river, feel the water, hear the sounds, and put these together into a coherent whole that permitted both extrapolations and comparisons to similar experiences later on, there remains the curious aspect of what weight it could give these. What would have to be included to have it react to the water being cold? We do this because cold harms our circulation or even damages our cells, so there’s good reason to dislike cold water – but our discomfort is automatic, not reasoned. We could see the rocky bottom and know the footing is treacherous, potentially resulting in injury or even just wet clothes that we wouldn’t be able to change out of for a while, but a constructed mechanism would have different standards of danger or ‘discomfort.’

We could simply skip the mobile part altogether, and concentrate on electronic brains – this has long been a goal, ever since calculating machines were first refined. Yet what defines a ‘brain,’ for our purposes? And what could we use it for? The military, for instance, may derive some benefit from an autonomous device that can deal with difficult and dangerous situations, without putting a human in harm’s way. Which sounds very good, until we consider the implications of a device, perhaps bearing weapons, that has no empathy or fear of consequences. Given a situation that we’re unfamiliar with, we humans can extrapolate on the spot, but for any machine that faces unique situations, either the responses have to be programmed in, or the ability to extrapolate does. Not to mention a failsafe mechanism of ‘fear’ regarding a wrong decision, lest we deal with a highly dangerous machine or one that simply stalls out in the vacuum of input it was made to deal with. Keep in mind that police officers have to determine levels of threat from any given situation, with occasional gross errors – imagine an autonomous machine trying to do the same in an area filled with civilians.

Electronic brains are also something that space exploration might benefit from. Forget Asimov’s ‘humaniform’ robots – what we’re most likely to aim for are planetary probes that can cope with unforeseen conditions, without having to wait out the speed-of-light lag in communicating with operators on Earth. This might be very useful in detecting a collision with a bit of space debris without external help, or shutting down a delicate sensor during a gamma ray burst – but such abilities might be much easier to program in directly, saving the time and effort of developing a ‘thinking’ brain that has some form of self-preservation instinct. What about spotting a curious geological structure that deserves more attention during a planet flyby? What criteria do we use to define “curious?” Planetary geologists would certainly define this more usefully than I, so it’s not even a brain that would be useful, but a brain with the right experience.

This isn’t just an interesting exercise. There really are people who claim that we’ll surmount all of these issues to produce not only a working brain, but one that will surpass humans in ability – and that this will occur ‘soon.’ Most of their support for this claim comes from something called “Moore’s Law,” an observation of trends in computing power – essentially, that the density of silicon ‘gates’ in microprocessors has been rising exponentially for years. From this, they conclude that it will continue to rise, at the same rate even, which ignores two important factors. The first is that extrapolations of this kind cannot be considered ‘law,’ just guesses at trends, and plenty of things can influence a trend (experience with housing and other economic bubbles shows us where counting on trends can sometimes end up). The second is the very real limits of physics, electrical resistance, and flattening signal-to-noise ratios, which mean that we would need to develop entirely new technology to progress beyond a certain point – with no guarantee that such development would fit into the projected timeline.
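The extrapolation itself is trivial to write down, which is part of the problem – the formula encodes an assumed fixed doubling period and nothing else, no physics, no economics. A sketch (the starting count and period are illustrative round numbers, not historical data):

```python
def moore_projection(base_count, years, doubling_period=2.0):
    """Naive exponential extrapolation of transistor count,
    assuming a doubling period that never changes -- exactly the
    trend-following the text cautions against."""
    return base_count * 2 ** (years / doubling_period)

# Doubling every two years turns a 10-million-transistor chip
# into roughly ten billion transistors in twenty years
print(round(moore_projection(10_000_000, 20)))  # → 10240000000
```

The math will happily project a thousandfold increase over any span you ask for; whether gate sizes, heat, and noise floors cooperate is a question the formula cannot see.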

Worse, the functions of microprocessors aren’t even close to the functions of brains – in structure, operation, charge, or method of programming; even though the comparison has been made for decades in popular media, the analogy is weak and facile. The chemical functionality of neurons does not translate well to the electrical resistance aspect of silicon chips. And to be blunt, we have only tantalizing hints of how the brain functions anyway, and what aspect of its development sets it apart from other species’. We still struggle with epilepsy, autism, schizophrenia, and countless other mental issues precisely because we don’t really know what causes them.

So let’s say that we’re not looking to emulate the human brain itself, but trying instead to build a machine of high intelligence in whatever way possible. This leads to a concept often referred to as the ‘technological singularity,’ differentiating it from a gravitational singularity whence it plagiarized its name. A gravitational singularity is a theoretical state where gravity collapses matter below its standard atomic size and departs from the standard behaviors of space-time – black holes are the typical example, and they possess an ‘event horizon’ where matter can only pass one way. A technological ‘singularity’ borrows the event horizon idea to refer to the point where machines surpass human intelligence. Is this more likely than an electronic human brain, and somewhere in the near future? The possibility exists, if one considers ‘intelligence’ to be defined as the collection of facts and the ability to interconnect them plausibly – yet, what constitutes ‘plausibly’? When we examine what it is we would expect any such device to accomplish, we have to define every last goal within. Let’s say that we want it to solve a food shortage. There might be many ‘plausible’ options, such as killing off enough people so there no longer remains a shortage – this is a simple application of math, after all. Or food might be reduced to bare essentials of protein and fiber and such, making it remarkably efficient and completely joyless. It’s true that this might result in population decline by itself – I’d kill myself if I could no longer have ribs – but the key lesson in such exercises is that the solutions themselves aren’t necessarily going to be what we want. In order to be functional, or compelling enough to be implemented, they will have to be solutions from a human perspective.

As mentioned several times before, we have drives to figure out how things work, and to explore. These are survival traits bred into us over thousands of generations. Moreover, we start life with virtually nothing, building our intelligence through the near-constant input of information, with numerous functions to emphasize connections and associations. The same functions are what lead us to the inductive and creative leaps that define so many of our advances in science. Nobody, as yet, has been able to duplicate the processes that led people like Maxwell, Einstein, and Feynman towards their remarkable discoveries – yet people like Ray Kurzweil think that we can create a circuit-based intelligence? We’re still finding the similarities among the differences between human and chimpanzee brains, seeing birds that can use tools, and wondering how much of our personalities is defined by genetics. I feel obligated to point out that numerous predictions were made for the year 2000, almost none of which have come to pass. Even something that demands no actual intelligence, like speech recognition software (something that Kurzweil himself has worked on,) has been in development for decades with astoundingly poor progress.

The question also remains as to what we could do with a hyperintelligent computer. Solve some of the more pressing world problems? Such solutions could not revolve around mere logistics, because those aren’t the cause of many of our problems now – we’re hampered by politics, tribalism, competition, and even selfish emotions. Most of our issues are solely because we’re human, and even if some machine could produce a solution, you’d have to get people to actually implement it. Hell, we can stretch our limited resources, right now, by the very well-known solution of using less energy – getting right on that, aren’t we? No, we’re arguing over how much and how and when and if it’s justified by its cost and “Hey, I need my truck!”

Kurzweil has provided another argument for such machines, however: that we could download our memories and brain functions into them and, in essence, live forever. Yet there are so many issues with this that it becomes hard to believe anyone with any knowledge of cognitive function could take this seriously. Human memory is not like a recording – it changes constantly due to sensory input and new associations, and in too many cases contains nothing but imagination. Worse, we already know what happens with simple factors like sensory deprivation, or sleep deprivation – we start going psychotic. So even if we could actually read memories, and store them, and did have an artificial brain capable of not just storage, but cognition, the ‘intelligence’ that it produced would almost certainly be useless (not to mention creating a frothing mob of activists eager to shut it down.)

Even those who think we can create artificial intelligence by going the same route that nature took, starting with some basic artificial neurons and selecting for best functionality, are unlikely to achieve much of anything. Our development as a species, in fact every species’ development, was and still is shaped by the conditions and demands of the environment. When we think about solving the limited resource question by killing excess population, we balk at this precisely because we have social instincts that have developed over millions of years, almost certainly because the competition with other species that could either outrun us or eat us required an edge – thus, cooperation. We have standards of beauty because our successful reproduction was enhanced by certain traits revolving around health, stamina, a birth canal that could pass our offsprings’ oversized heads, and so on. And as indicated above, some of the things that we’d like to improve in our world are directly caused by the functions that provided for survival, that are (in some cases) now outmoded within our new cultures. Even the random mutations that DNA undergoes contribute, unpredictably. Everything interconnects, and like the philosophical pondering of what could have happened if some ancestor had turned left instead of right, a subtle difference in conditions during the development of brains might lead in wildly disparate directions. Any attempt at an evolved artificial intelligence is virtually guaranteed to produce nothing that we would even recognize.

The same applies to any concepts of extra-terrestrial intelligence as well. The environmental conditions would be so dissimilar that the very definition of ‘intelligence’ becomes meaningless. Even communication is highly likely to be impossible, since we would share no similar abstract concepts, no desires, no emotions, and not even any basic needs. Like concepts such as ‘consciousness,’ intelligence is an arbitrary distinction that mostly serves to feed our own ego, but lacks any pertinent definition when we consider finding it in any other species, or creating it artificially.

This is what’s so fascinating about natural selection, since it produces remarkably unique traits from some of the simplest ‘rules,’ forging a path that could have led in any direction. Our minds are in some ways quite impressive, like when we realize that our species alone can contemplate subatomic processes and galaxy formation. And in other ways they’re quite inept, constantly hampered by petty demands and filters, trashing our cognitive functionality with emotional sidetracking about whether we’re cool enough, or if our success should be measured by having a sun room. One begins to wonder if we really should try to duplicate this – or whether eliminating what we consider our ‘imperfections’ would stop us from being human.
