Final answers aren’t

Over at EvolutionBlog and Why Evolution Is True, Drs. Rosenhouse and Coyne have taken down the same philosophical question posed by Dr. Elliott Sober, to wit: Can science establish that genetic mutations are not caused by god?

It is questions like this that have fueled my abiding dislike of philosophy, since a tremendous amount of time has been spent on a question that is totally backward. Aside from the basic idiocy of attempting to prove a negative, something no PhD of anything should commit (much less base an entire lecture on,) there is also the issue that one can replace the word “god” with anything at all and not change the question in the slightest. The question doesn’t have any meaning unless we assume that ‘god’ has specific and defined traits, up to and including a particular intention in causing mutations, an explanation of why it would choose such a feeble way of evoking change, and a reason why this has any bearing on knowledge whatsoever.

Let’s put it this way: If we asked whether atomic decay (‘nuclear radiation’) can be ruled out as a cause of mutations, we at least know decay exists and has certain properties, and answering this question might tell us not to worry about exposure in certain circumstances. But ‘god’ doesn’t even have a clear definition nor any evidence of existence – what the question implies is that there is a possibility of such existence in the very lack of absolute surety, an impossibly tenuous avenue towards belief. And so, the voluminous discussions about scientific knowledge are subverted because the entire question isn’t about knowledge, but emotional supplication. Any and all concepts of deities are cultural constructs, in most cases claimed to be openly and distinctly outside of empirical demonstration (that’s what ‘supernatural’ means,) so science is not even supposed to have any input into the question in the first place. But even proposing, for the sake of argument, that there simply exists a being as yet beyond detection, what would make us insert such a concept into genetic behavior, or anything else for that matter? We could propose the same thing to explain dark energy, but what does that do for us?

Moreover, you would think that someone who actually makes their living with philosophy would tumble to the fact that ‘god’ is a catch-all term for a plethora of remarkably personal properties – does the question refer to the christian god, or that of the Kalahari Bushmen? It would be nice if there were only two choices, wouldn’t it? It might have demonstrated some real thought had already been applied, anyway. One might argue that only the christian god is intended, which raises the question of how several hundred others were ruled out (something that not one philosopher, theologian, or devotee that I have ever encountered has answered); alternately, one might say that the term “god” is applied generically to any and all theology, which in essence departs from the realm of science since it has changed the nature of the question into an abstraction – one might as well ask if ‘happiness’ can be proven to have no effect on mutation.

I said that the question was backwards, and in the realm of science, it is; biologists routinely ask questions more along the lines of, “What causes genetic mutations?” – you’ll notice that there isn’t any bias towards a particular answer in there, but instead an honest inquiry to gain knowledge. Instead of assuming a cultural posit, science relies on what evidence we can find to suggest the existence of anything. True enough, sometimes a temporary speculation is entertained – “I wonder if it’s affected by endocrine levels?” – but such things serve to provide avenues of specific research guided by known properties, something that cannot possibly be applied with an abstract term such as ‘god.’ And therein lies the trap that Sober hoped to spring when he outright said that science operates to rule out god. Yet, god is ruled outside science in the first place, according to most definitions of such, but ignoring that, how do you rule out something so vague? Is it being ruled out when it does not have any measurable effect in the first place, or has it never been ruled in? Can I accuse science of ruling out Darkwing Duck as a possibility? I can, apparently, if I’d wasted my life thinking that philosophy gives value to every inane question anyone raises.

What Sober probably wanted to imply was that, without a specific answer, “god” should be inserted as a possibility, a default answer in the face of uncertainty. Yet, we have a long history of how little use that’s been, from disease to weather to geothermal activity, where ‘god’ not only turned out to be wrong as an answer, it provided nothing of any use anyway. This is already well recognized in the fallacy called ‘god of the gaps,’ which relegates a deity’s possible influence to the ever-smaller areas of mystery within our knowledge base. But worse, it is a non-answer, a dead-end in inquiry. If we knew what a god actually was and how it operated, we might have some use to which this could be put – praying for specific mutations, for instance – but god is instead a mystery beyond our reach. I feel obligated to note that this very trait was provided by theologians as the reason why god has no evidence or dependable responses and is indistinguishable from random events that can be explained without the need of divine intervention. The nature of science, however, does not take “we don’t know” as an answer or a stopping point, but as a challenge instead, which is the most damning factor against the compatibility of science and religion.

Part of human nature is to seek answers, which has worked pretty well so far. Interestingly, every answer that honest inquiry provides, that science provides, leads to yet another question or three – while at the same time providing applicable traits that we can put to use. Religion is entirely different. While frequently credited with providing answers in and of itself, religion serves instead to halt inquiry and constantly hide behind a claim that we are not allowed to see beyond a certain point, and its answers explain nothing. Religion did not provide us with the idea of genetic mutation itself; science did, and it served to explain how natural selection could shape so many different species over long periods of time, fitting perfectly with both the similarity of genetic makeup of every species on earth, and the curious progression of traits among fossil species. It bears noting that most concepts of gods are provided by creation legends that science, including genetics, has already trashed resoundingly. Trying to save a tiny vestige of such legends by glomming it onto functional science like some kind of parasite is evidence only of pathetic desperation, not honest inquiry.

Even if we found some fantastic, deliberate force within those mysteries still open to us, this cannot change the fact that every creation legend from every culture on the planet has been shown to be bollocks. Should we choose to call this force “god,” it will never be the god that any individual has envisioned, and its properties will remain to be determined. The chances are very great, given the long and detailed history that we already have, that our human desires and emotions are not going to be a prime concern of such a force – in other words, cosmic daddy is way too farfetched for serious consideration. It’s about time we grew up, stopped trying to find ridiculous ways to maintain emotional crutches, and faced what we can learn with eagerness and pragmatism.

And when we ask questions, let’s first try to determine that they’re useful, and not just self-indulgent horseshit.

A year goes by fast

Last year about this time, I published a post about my little friends the fishing spiders, whom I call ‘friends’ not because we hang out and hammer down Pepsi together, but because my first photo sale featured one as a subject. Lately, a few have been making themselves obvious, clearly begging to be featured again, so who am I to crush their little spirits? And I say with all honesty, it’s not that I’m avoiding bunny rabbits and ducklings, it’s that I simply have not seen anything cute at all. But still, I know some people don’t want to be greeted with spiders all the time, so I’m including the detailed pics below the break.

A little over a week ago, while staging the photo for the previous post, I espied something that can occasionally be found at the edges of ponds and streams that have plenty of reeds, seen to the left: the molted exoskeleton of a fishing spider. Spiders, and most insects, shed their ‘skins’ as they grow larger, splitting the chitin and squeezing out backwards, and then usually hiding for a while since their new exoskeleton is soft, leaving them much more vulnerable to predators. The translucent molt is left attached to whatever surface was handy, usually mistaken for a dead insect, but it’s instead a clue to be watching for the former owner nearby. When I sat down to take this image, I soon spotted the culprit hiding in the tall grasses. With a stick, I carefully flushed him out, whereupon he panicked and scampered for cover practically underneath me, but then froze and held perfectly still for some really tight closeups.

The road’s longer than it looks

Okay, let’s do a variation of an exercise I perform with my students sometimes. I’ll preface this by saying, this isn’t a trick question, nor is there a ‘right’ answer. It’s simple: look at this image and tell me what you can glean from it.

Well, it’s a stream or pond, with a rock at the edge or perhaps in the middle. The air appears reasonably still, from the smoothness of the water, and the water is certainly shallow. There’s a fairly modern watch sitting on a rock. It’s bright sunlight, around midday, and this is supported by the time shown on the watch. It’s probably sometime around the spring or summer months, judging from the foliage visible – and this is supported by the date shown on the watch (let’s assume you can actually read the “TU 5-1” it displays.) Which would probably indicate northern hemisphere, since May is spring there – but maybe not, since it might be the European format and indicate January 5 instead, so this might actually be the southern hemisphere.

The watch itself has some mystery behind it. It’s not natural, so it appears to have been placed or lost here. It is clean, and still displaying a time, indicating that this might have happened recently, but there’s no one visible, supporting the idea that the watch might have been lost. It is a man’s style wristwatch, electronic LCD, battery-powered, probably (judging from the buttons and display) with several options other than simply timekeeping. Anything else that you’d like to add?

What about mood? Does the image evoke any particular emotion, like mellow feelings over a pleasant day, or maybe commiseration over a lost watch, or just curiosity over how such a scene might have come about? Are the memories of any smells or sounds stirred up, or thoughts about fishing or exploring, or recollections of camping or salamander chasing? Can you almost feel the cool water around your ankles, and the tricky footing beneath your feet? Or perhaps there’s even some feelings of distaste over an environment that’s almost impossible to stay clean within, with no ready-made meals and no entertainment.

The student exercise is about metaphors, and what portions of an image tell the viewer (and thus, how a photographer can use this to their advantage.) Now instead, imagine what a computer program could tell us about this image.

Well, virtually none of that (“virtually,” heh! I kill me). The image is simply a collection of pixels of certain colors. There is nothing three-dimensional about it, and in fact, it vanishes without the input of electricity. To actually determine anything specific like “watch” and “water,” a significant amount of programming would have to be done to differentiate all of the myriad ways such things could appear. Determining even if the image is level would require more than a method of finding a horizon, since none actually appears, so any program would have to include some algorithm to average out the waterlines against the rocks. Even just a method of picking out the metal construction of the watch would require a comparison of the lighter, less-saturated portions of the image (what we call “reflections”) against the remainder of the image, probably linked to uniformity in shape so the program wouldn’t mistake reflections from the water as being ‘metallic.’
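Just to put a number on how unglamorous this is, here’s a minimal sketch (in Python, with thresholds invented purely for illustration) of that one small step – flagging the bright, weakly-saturated pixels that might be metallic reflections:

import numpy as np

def specular_mask(rgb, brightness_min=0.8, saturation_max=0.15):
    """Flag pixels that are bright but weakly colored -- a crude stand-in for
    the 'lighter, less-saturated' reflections described above.
    rgb: float array of shape (H, W, 3), values scaled to the range 0-1."""
    value = rgb.max(axis=2)                         # brightness, as in HSV
    chroma = value - rgb.min(axis=2)                # spread between the channels
    saturation = np.where(value > 0, chroma / np.maximum(value, 1e-6), 0.0)
    return (value >= brightness_min) & (saturation <= saturation_max)

# And even this will happily call the glare on the water 'metal' too, which is
# why the paragraph above mentions checking the shape of whatever gets flagged.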

Is it possible for a program to interpret the time from the watch? Yes, provided it can also handle the oblique distortion of a watch at an angle rather than almost face-on as seen here, or upside down or sideways. To render this as a time, however, it would also need supporting details to recognize the shape of a watch. Had the lighting and colors indicated that the time was closer to sunset, any computer would need another algorithm to even detect the anachronism, much less recognize that the watch was probably set for the wrong time. It would have to be told (programmed) to recognize that a watch being incorrect is infinitely more likely than the sun abruptly setting at quarter to two.
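As a sketch of just the ‘oblique distortion’ step, in Python with the OpenCV library – and assuming some earlier stage has already located the four corners of the watch face, which is itself the hard part – flattening the display back out looks something like this; reading the digits, and deciding whether to believe them, is another matter entirely:

import cv2
import numpy as np

def rectify_watch_face(image, corners, size=200):
    """Warp an obliquely-viewed watch face into a square, face-on crop,
    ready for whatever digit-recognition step comes next.
    corners: four (x, y) points in the order top-left, top-right,
    bottom-right, bottom-left -- supplied by some earlier detection stage."""
    src = np.float32(corners)
    dst = np.float32([[0, 0], [size, 0], [size, size], [0, size]])
    matrix = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, matrix, (size, size))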

I’m betting your eye jumped to the watch quickly when you first saw the image, and this is partially because of contrast, but more due to pattern recognition – it’s why you likely noted the time on the watch but paid little attention to the foliage or the lichen, which are also high in contrast, but lacking the strong patterns of the watch. How many different kinds of plant life are visible? It’s a low number, but I’m betting you still have to go back and count, since this aspect isn’t something we typically look for. We attach varying levels of importance to different factors within the image, so any computer program to interpret such images would either be lacking such biases, or need to have them specifically delineated.

And then, there’s the emotional aspect. Whatever feelings you may have gotten from the image are a result of your past experience, and the connections you made at those times. They are a product not just of sensory input, but your personal evaluation of whether, for instance, wading in a stream is a good or bad thing. You can almost hear the sounds, feel the mud, and smell the water because your brain makes connections of all of these inputs in a common manner, so seeing a two-dimensional representation of a stream automatically connects to the other elements. And your emotions are guides towards learning, and better experiences. They exist because they served a survival purpose, giving our ancestors an edge over those that didn’t possess them.

That’s the underlying message here: everything that we can interpret from such images is the product of millions of years of selective pressure, filtered through an organism that has a remarkable network of interacting neurons to build memories and connotations – and is building even more right now. I could tell you what kind of plant produces those bright green leaves you see (well, okay, I honestly can’t, but let’s pretend for a moment,) and you may retain this info for a long time – you might even remember it every time you think of a stream or river. Yet any program written to interpret this image will gain nothing new from my explanations unless it is either reprogrammed to include the info, or has an elaborate mechanism of receiving, interpreting, and connecting information. You know, through experience, that some plants grow only near water, and may wonder if what you see here qualifies, but even that information would have to be input into any program.

Even if we had the ability to create a computer, a robot if you will, that could walk into a river, feel the water, hear the sounds, and put these together into a coherent whole that permitted both extrapolations and comparisons to similar experiences later on, there remains the curious aspect of what weight it could give these. What would have to be included to have it react to the water being cold? We do this because it harms our circulation or even damages our cells, so there’s good reason to dislike cold water – but our discomfort is automatic, not reasoned. We could see the rocky bottom and know the footing is treacherous, potentially resulting in injury or even just wet clothes that we wouldn’t be able to change out of for a while, but a constructed mechanism would have different standards of danger or ‘discomfort.’

We could simply skip the mobile part altogether, and concentrate on electronic brains – this has long been a goal ever since calculating machines have been refined. Yet what defines a ‘brain,’ for our purposes? And what could we use it for? The military, for instance, may derive some benefit from an autonomous device that can deal with difficult and dangerous situations, without putting a human in harm’s way. Which sounds very good, until we consider the implications of a device, perhaps bearing weapons, that has no empathy or fear of consequences. Given a situation that we’re unfamiliar with, we humans can extrapolate on the spot, but such computations have to be either programmed in, or the ability to extrapolate programmed in, to any machine that faces unique situations. Not to mention, a failsafe mechanism of ‘fear’ regarding a wrong decision, lest we deal with a highly dangerous machine or one that simply stalls out in the vacuum of input it was made to deal with. Keep in mind that police officers have to determine levels of threat from any given situation, with occasional gross errors – imagine an autonomous machine trying to do the same in an area filled with civilians.

Electronic brains are also something that space exploration might benefit from. Forget the ‘humaniform’ robots of Asimov’s – what we’re most likely to aim for are planetary probes that can cope with unforeseen conditions, without having to wait for the speed-of-light lag in communicating with operators on Earth. This might be very useful in detecting a collision with a bit of space debris without external help, or shutting down a delicate sensor during a gamma ray burst – but are such abilities much easier to program in, saving the time and effort to develop a ‘thinking’ brain that has some form of self-preservation instinct? What about spotting a curious geological structure that deserves more attention during a planet flyby? What criteria do we use to define “curious?” Planetary geologists would certainly define this more usefully than I, so it’s not even a brain that would be useful, but a brain with the right experience.

This isn’t just an interesting exercise. There really are people who claim that we’ll surmount all of these issues to not only produce a working brain, but one that will surpass humans in ability – and that this will occur ‘soon.’ Most of their support for this claim comes from something called “Moore’s Law,” an examination of trends in computing power – essentially, the density of silicon ‘gates’ in microprocessors has been rising exponentially for years. From this, they conclude that it will continue to rise, at the same rate even, which ignores two important factors. The first is that extrapolations of this kind cannot be considered ‘law,’ just guesses at trends, and plenty of things can influence a trend (experience with the dot-com and housing bubbles shows us where counting on trends can sometimes end up). The second factor is the very real limits of physics, electrical resistance, and flattening signal-to-noise ratios, which means that we would need to develop entirely new technology to progress beyond a certain point – with no guarantee that such development would arrive within the projected timeline.
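Strip away the name and the entire ‘law’ is nothing but a toy extrapolation, something like the sketch below (the numbers are purely illustrative) – and the result hinges entirely on the assumption that the curve keeps going unchanged:

def extrapolate_density(current, doubling_years, years_ahead):
    """Naive Moore's-Law projection: assume density keeps doubling
    every 'doubling_years' years, forever."""
    return current * 2 ** (years_ahead / doubling_years)

# The answer is exquisitely sensitive to the assumed trend:
print(extrapolate_density(1.0, 2, 20))   # ~1024x denser in twenty years
print(extrapolate_density(1.0, 3, 20))   # ~102x if the doubling merely slows to three years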

Worse, the functions of microprocessors aren’t even close to the functions of brains, in structure, operation, charge, or method of programming; even while the comparison has been made for decades in popular media, the analogy is weak and facile. The chemical functionality of neurons does not translate well to the electrical resistance aspect of silicon chips. And to be blunt, we have only tantalizing hints of how the brain functions anyway, and what aspect of its development sets it apart from other species’. We still struggle with epilepsy, autism, schizophrenia, and countless other mental issues precisely because we don’t really know what causes them.

So let’s say that we’re not looking to emulate the human brain itself, but trying instead to build a machine of high intelligence in whatever way possible. This leads to a concept often referred to as the ‘technological singularity,’ differentiating it from a gravitational singularity whence it plagiarized its name. A gravitational singularity is a theoretical state where gravity collapses matter below its standard atomic size and departs from the standard behaviors of space-time – black holes are the typical example, and they possess an ‘event horizon’ where matter can only pass one way. A technological ‘singularity’ borrows the event horizon idea to refer to the point where machines surpass human intelligence. Is this more likely than an electronic human brain, and somewhere in the near future? The possibility exists, if one considers ‘intelligence’ to be defined as the collection of facts and the ability to interconnect them plausibly – yet, what constitutes ‘plausibly’? When we examine what it is we would expect any such device to accomplish, we have to define every last goal within. Let’s say that we want it to solve a food shortage. There might be many ‘plausible’ options, such as killing off enough people so there no longer remains a shortage – this is a simple application of math, after all. Or food might be reduced to bare essentials of protein and fiber and such, making it remarkably efficient and completely joyless. It’s true that this might result in population decline by itself – I’d kill myself if I could no longer have ribs – but the key lesson in such exercises is that the solutions themselves aren’t necessarily going to be what we want. In order to be functional, or compelling enough to be implemented, they will have to be solutions from a human perspective.

As mentioned several times before, we have drives to figure out how things work, and to explore. These are survival traits bred into us over thousands of generations. Moreover, we start life with virtually nothing, building our intelligence through the near-constant input of information with numerous functions to emphasize connections and associations. The same functions are what lead us to the inductive and creative leaps that define so many of our advances in science. Nobody, as yet, has been able to duplicate the processes that led people like Maxwell, Einstein, and Feynman towards their remarkable discoveries – yet, people like Ray Kurzweil think that we can create a circuit-based intelligence? We’re still finding the similarities among the differences between human and chimpanzee brains, seeing birds that can use tools, and wondering how much of our personalities is defined by genetics. I feel obligated to point out that numerous predictions were made for the year 2000, almost none of which have come to pass. Even something that demands no actual intelligence, like speech recognition software (something that Kurzweil himself has worked on,) has been in development for decades with astoundingly poor progress.

The question also remains as to what we could do with a hyperintelligent computer. Solve some of the more pressing world problems? Such solutions could not revolve around mere logistics, because those aren’t the cause of many of our problems now – we’re hampered by politics, tribalism, competition, and even selfish emotions. Most of our issues are solely because we’re human, and even if some machine could produce a solution, you’d have to get people to actually implement it. Hell, we can stretch our limited resources, right now, by the very well-known solution of using less energy – getting right on that, aren’t we? No, we’re arguing over how much and how and when and if it’s justified by its cost and “Hey, I need my truck!”

Kurzweil has provided another argument for such machines, however: that we could download our memories and brain functions into them and, in essence, live forever. Yet there are so many issues with this that it becomes hard to believe anyone with any knowledge of cognitive function could take this seriously. Human memory is not like a recording – it changes constantly due to sensory input and new associations, and in too many cases contains nothing but imagination. Worse, we already know what happens with simple factors like sensory deprivation, or sleep deprivation – we start going psychotic. So even if we could actually read memories, and store them, and did have an artificial brain capable of not just storage, but cognition, the ‘intelligence’ that it produced would almost certainly be useless (not to mention creating a frothing mob of activists eager to shut it down.)

Even those who think we can create artificial intelligence by going the same route that nature took, starting with some basic artificial neurons and selecting for best functionality, are unlikely to achieve much of anything. Our development as a species, in fact every species’ development, was and still is shaped by the conditions and demands of the environment. When we think about solving the limited resource question by killing excess population, we balk at this precisely because we have social instincts that have developed over millions of years, almost certainly because the competition with other species that could either outrun us or eat us required an edge – thus, cooperation. We have standards of beauty because our successful reproduction was enhanced by certain traits revolving around health, stamina, a birth canal that could pass our offsprings’ oversized heads, and so on. And as indicated above, some of the things that we’d like to improve in our world are directly caused by the functions that provided for survival, that are (in some cases) now outmoded within our new cultures. Even the random mutations that DNA undergoes contribute, unpredictably. Everything interconnects, and like the philosophical pondering of what could have happened if some ancestor had turned left instead of right, a subtle difference in conditions during the development of brains might lead in wildly disparate directions. Any attempt at an evolved artificial intelligence is virtually guaranteed to produce nothing that we would even recognize.
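For what it’s worth, the whole ‘evolve it and see’ scheme reduces to a loop like the toy sketch below (in Python, and emphatically not anyone’s actual research code). The loop itself is trivial; every meaningful property of the result is dictated by whatever fitness function – whatever stand-in for an environment – we happen to supply, which is exactly the point above:

import random

def evolve(fitness, genome_len=10, pop_size=50, generations=200, mutation=0.1):
    """Bare-bones selection loop: score, keep the best half, mutate copies.
    Everything interesting lives in 'fitness' -- change the scoring and an
    entirely different kind of 'mind' falls out."""
    population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        offspring = [[gene + random.gauss(0, mutation) for gene in random.choice(survivors)]
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)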

The same applies for any concepts of extra-terrestrial intelligence as well. The environmental conditions would be so dissimilar that the very definition of ‘intelligence’ becomes meaningless. Even communication is highly likely to be impossible, since we would share no similar abstract concepts, no common desires or emotions, and not even the same basic needs. Like concepts such as ‘consciousness,’ intelligence is an arbitrary distinction that mostly serves to feed our own ego, but lacks any pertinent definition when we consider finding it in any other species, or creating it artificially.

This is what’s so fascinating about natural selection, since it produces remarkably unique traits from some of the simplest ‘rules,’ forging a path that could have led in any direction. Our minds are in some ways quite impressive, like when we realize that our species alone can contemplate subatomic processes and galaxy formation. And in other ways they’re quite inept, constantly hampered by petty demands and filters, trashing our cognitive functionality with emotional sidetracking about whether we’re cool enough, or if our success should be measured by having a sun room. One begins to wonder if we really should try to duplicate this – or whether eliminating what we consider our ‘imperfections’ would stop us from being human.

One good reason

Did I mention that, to be a nature photographer, you had to get up early? No one ever looks back on their life and says, “I wish I spent more time in bed.”

Okay, wait, that’s probably a tad inaccurate. It likely happens a few hundred thousand times daily. That doesn’t make it a bad proverb, though.

Okay, yes it does. But ignoring all that, if you want to get interesting nature photos, get your lazy ass out of bed anyway. And be aware that the sun moves very quickly when you’re counting on backlighting, and may simply stop throwing light through your chosen subject even as you’re trying to focus. Seriously, I had to abandon another, even more photogenic leaf as it dropped into shadow again. But I guess I can’t complain.

Okay, yes I can, and frequently do. Cuss a lot while shooting, too (and, for that matter, at all other times.) Maybe I should quit here…

Pride

Yes, I know this appears to be a crass copy of the pose seen here, but unless that otter is actually eating a vole, I’d hazard that the evidence leans towards coincidence.

I had earlier spotted the same species of jumping spider as this one, atop a log and showing off its vivid rust-colored abdomen, but it was so shy that I never got remotely close enough to photograph it – the same can be said for some vivid green tiger beetles. But this one held still quite cooperatively, and it wasn’t until I was looking at the magnified image in the viewfinder that I tumbled to why. Jumping spiders are just like kids in this regard: give them a treat and they’ll cooperate for a while.

I went down to the river specifically to stage a shot for a post, which will be coming shortly, but got several images unrelated to that topic, so there’s at least one other post coming from the trip too. And if you think this one’s creepy, you ain’t seen nothing yet. This one’s cute in comparison.

… and part two

There are actually two themes I’m continuing here. The first is the limits of our knowledge, which is a “half-empty” perspective; there’s a better way of expressing it, which we’ll get to in a moment. The second theme being continued here is special efforts made by scientists to communicate their work to the general public. The previous example (last post) was an individual contribution, though also connected to the student exercises linked to earlier at the MultipleOrganisms.net site. This one is aimed directly at public consumption, and does a remarkable job in a very short space of time.

It’s very likely that you’ve heard of the Large Hadron Collider (or LHC) at CERN, possibly because of the vapid concerns over it destroying the earth that gained far more media attention than was warranted. It’s also likely that you have no idea what it is that they’re trying to do, or that you know it has something to do with the ‘Higgs Boson’ but aren’t sure exactly what. If so, this short video animation will almost certainly help:

[The Higgs Boson Explained from PHD Comics on Vimeo].

As far as I’m concerned, this is a very effective presentation. Nothing fancy or flashy needed – just a good narrator and some visual assistance.

The underlying message is interesting, too – this is a realm of science that is wide open for surprises and new discoveries, and it highlights how much we still have yet to learn. In the past century, we explored nearly all of our planet’s surface and turned our eyes to the stars, reaching farther and farther out – but another faction of explorers started reaching farther and farther inwards, delving into realms that continue to get even smaller. The very word “quantum” is a reference to the smallest possible amount that something could be reduced to. The first written concepts of this considered everything to be made of five perfect geometric shapes – this was a few thousand years ago. Much later on, we figured out that everything was made of atoms, a word that means something that cannot be divided or reduced further. The name stuck, the supposed property didn’t, as we discovered the bits that atoms are made of. And while doing all this, we narrowed down the four basic forces which govern all matter – so far, anyway.

It’s fairly common knowledge now that quantum physics has rules all its own, surprisingly different from standard physics, and it’s been a huge field of study. At the subatomic level, matter doesn’t act as it does ‘normally,’ and we still don’t know why, nor how particles that behave one way form a collective atom that behaves another. There is at least one fundamental law governing this, probably more, and it’s very likely that once we find out about it all, there will be numerous new applications in materials, communication, and potentially even travel and energy.

It’s very easy to ask questions about how or why this is important, especially in the face of more immediate concerns locally or worldwide. Yet, roughly a century ago when some of the most astounding findings of both particle physics and astronomy were made, there were countless immediate concerns too, like The War To End All Wars and anarchists in the US. They’re long past now, but the science remains. We have a serious problem with repeating history, yet knowledge moves forward constantly, and the LHC stands a good chance of being the location where another leap occurs. There’s a lot still to be discovered, and for those who favor the ideas of exploration and learning, it really is pretty damn cool.

*      *      *      *

Thanks to Cosmic Variance for the initial introduction to the video, and PhD Comics for their great efforts to communicate these things effectively.

There isn’t always a complete answer, part one…

For those of you who have been hanging on the edge of your seat, checking thrice daily to see if I’ve offered an update, I apologize for keeping you in suspense. Actually, no I don’t – suspense is good for you, and anxiety strengthens the heart. Well known fact.

Anyway, I mentioned trying to follow up on the attack snail, and I did; in my online searches I came across the name, repeatedly actually, of Kathryn E. Perez, Ph.D., who has published a fair amount about land snails. She had also done postdoctoral work at two of the nearby universities, Duke and UNC, so it seemed likely that she was directly familiar with the species in the area. I dropped her an e-mail and got a prompt response – yet, not a definitive answer. Here’s how that goes sometimes:

First, although I took several direct measurements of the snail while I had it (guided by a PDF on snail identification) and got lots of images of my subject, I didn’t pay attention to the umbilicus area. Snail shells form in a spiral, of course, but they may do a flat spiral, or they may ‘stack up’ a bit into a cone, which can leave an empty space on the ‘underside’ of the spiral. The umbilicus is that hollow around the axis on which the spiral coils, and while I examined the top side in detail, I simply never thought to take note of the underside, which would have narrowed down the species choices a bit. The other aspect that would have given more clues was the lip of the aperture, the opening of the shell itself. In this case, I got a few measurements and examined it closely, but the snail wasn’t cooperating, and simply refused to retract fully enough to leave the aperture unobstructed. What I have is a tentative identification of Neohelix albolabris, with a possibility of it being either Mesodon thyroidus, Mesodon zaletus, or Allogona profunda. These are all members of the Polygyridae family, so at least I’d gotten that correct, even if I copied a typo when relating it in the initial post.

As for the burning sensation when I contacted it? Dr. Perez confirmed that many snails have such defenses, also including yucky-tasting mucus (I know that shocks most of us who imagine snails to be succulent and fruity,) but it appears to be unknown whether this species in particular sports such a defense. In fact, judging from the dearth of information I found on my own, the topic hasn’t been the subject of much study. I don’t feel bad about not finding this, since the mention of the chemical composition of snail mucus that Dr. Perez forwarded me was buried in a scientific paper.

I mentioned this before in the Amateur Naturalist series of posts, but we’re still finding out a lot of details about species as we go – biology and taxonomy are not as well-explored as we might believe. Among the smaller and more prolific members of the animal kingdom, there are such large numbers and subtle divisions that biologists are still slogging through them all, so it’s possible to come up against questions where the answer either isn’t known, or is still kind of vague. Which means that if my finger turns mauve and drops off tomorrow, I may be the catalyst for a new avenue of research, possibly resulting in a toxic snail snot being named after me. So there’s that to look forward to.

Dr. Perez provided more info than expected, especially now as colleges approach final exams and the workload gets heinous, so I’ll take the opportunity to thank her once again, publicly. There is often a disconnect between the ‘scientific community’ and information readily available to the public, even in this age of electronic publication; working scientists often don’t have the time or funding to create general education works, and most papers are too specific and dry to attract a serious consumer market. I’ve had very good luck contacting universities with questions, but am always a little circumspect, since the people within these departments have their own work to do, often quite a bit. This is also coupled with the fact that many people specialize in a narrow field, and finding one that knows your topic may take some searching. So while I don’t want to encourage anyone to immediately contact their local universities with all questions, and will stress that numerous answers are available online with a bit of effort, sometimes this is still a worthwhile avenue of information.

I’ll use the idea of special efforts among working scientists to educate the general public as a springboard for the next post, which is unrelated enough that I decided not to cram both topics into just one ;-)

That’s 154 to you and me

The Cat's Eye Nebula: A Dying Star Creates a Sculpture of Gas and Dust
Source: Hubblesite.org


On this date 22 years ago, the Hubble Space Telescope was borne into space on Shuttle Discovery, the one that recently did its last flyby over DC (well, okay, it had help) before delivery to the Udvar-Hazy Center. The Hubble will be retired soon, and while this is viewed with some disappointment by everyone who has even a faint interest in astronomy, it’s not like anyone can complain. The images alone have been stunning, revealing a universe that is fascinating in its complexity and variety – but this is a bit of a mixed blessing, too. I’m not alone in wondering how breathtaking it would be to travel to some of these cosmic locations like the Cat’s Eye Nebula (NGC 6543) above, diving through its diaphanous bubbles like a stormchaser circling the eye of a hurricane, but let’s face it – we’re virtually guaranteed never to be able to do something of this sort. The distances are just too vast [you are required by law to use the word “vast” when talking about space], the energy and time required far beyond the reach of our human efforts. And we are restricted to one vantage point as well, save for three-dimensional renditions by clever programmers. Yet, we also have to temper this with the knowledge that getting too close to some of these distant neighbors would be, as they say, “bad.” We’re not getting these light shows at this distance because of a laser in a smoky disco.

Yet, being the source of pretty pictures is the superficial way to look at Hubble, like judging someone by their shoes. We have obtained a tremendous amount of information from these optical observations as well, such as refining the measurements that led to the concept of “dark energy.” In a nutshell: after the initial acceleration of all the mass in the universe from a very small point, gravity should have been slowing things down, dragging its metaphorical feet against the coasting bike of space-time (no, I’ll never be asked to write popular science articles.) Instead, the expansion of the universe is accelerating, and something must be feeding energy into this. I could have continued the space-time bike simile by comparing it to going downhill, but that acceleration is caused by gravity and I’m now confusing the hell out of even myself. Let’s let someone else do this (autoplay video at that link – I wish people would stop doing crap like that.)
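For anyone who wants to see the actual bookkeeping, the standard textbook form is the Friedmann acceleration equation (nothing specific to the Hubble results here; a is the scale factor of the universe, ρ the density, and p the pressure):

\[ \frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3} \]

With ordinary matter and radiation, the first term is negative and the expansion should be slowing; an accelerating expansion demands either the Λ term or some component with strongly negative pressure – which is the slot that “dark energy” fills.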

Hubble has also contributed a lot to our knowledge of planetary formation. The photos that I highlight in this post disproved a prediction by astronomers that planetary discs would typically remain hidden from our view by surrounding dust clouds. Hubble has even imaged a planet itself around another star, something that is remarkably hard to accomplish:


There’s a little bit of trivia that is worth knowing, if you’ll permit me to return to the idea of Hubble as a camera (just try and stop me!) The bare truth is, every camera, every method that we have of producing images from light, fudges things a bit. Film emulsions contain crystals of silver compounds that change their nature when exposed to light, and digital sensors generate a difference in electrical charge. But neither of these can determine the difference between wavelengths except in a very broad range, mostly what we call visible light – in other words, they cannot differentiate color. To accomplish this, they must filter light through substances that permit only specific wavelengths; in film, that’s the emulsion itself, a colored gel in which the crystals are suspended, and in digital, it’s a mosaic of tiny colored filters over top of the sensor. It’s no different for the Hubble Space Telescope, which has colored filters that can be interchanged over its own digital sensors. Every color image from Hubble is a composite of several strictly monochrome images sent back to earth, edited to reintroduce the color, and in most cases enhanced to increase the contrasts between them. A typical computer display does not even remotely approach the range of light and color that our eyes can see, so to provide a better idea of the subtle differences within any photographic target of the HST, the images must be altered. It’s no different than any image I produce myself and put here on the site. This article from Sky & Telescope magazine, used with permission by Hubblesite.org, explains it in more detail.
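As a rough illustration of that last step – a sketch only, in Python; the real pipeline assigns particular filter bandpasses to display colors, often deliberately ‘false’ ones, and uses far fancier stretching – combining three aligned monochrome frames into one color image amounts to something like:

import numpy as np

def composite_rgb(red_frame, green_frame, blue_frame, stretch=True):
    """Stack three monochrome exposures (each taken through a different filter)
    into a single color image, roughly the way the Hubble composites are built.
    Each frame: a 2-D array of raw intensities, already aligned."""
    channels = []
    for frame in (red_frame, green_frame, blue_frame):
        f = frame.astype(float)
        if stretch:  # contrast-stretch each channel to span the displayable range
            f = (f - f.min()) / max(f.max() - f.min(), 1e-9)
        channels.append(f)
    return np.dstack(channels)  # shape (H, W, 3), ready for display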

And finally, I refer you back to this post from two years ago, which contains the video made from the Ultra Deep Field photos, simply because it’s one of the coolest animations ever made. Yeah, you might have seen it already – so? Watch it again. It’s a great dose of perspective, in both directions. While it is easy to feel insignificant in comparison to the unfathomable distances involved, there’s the other side of the coin: we figured out how to actually see this. Damn clever little apes, aren’t we?

But then, I guess we would think that…

Back atcha

Last year, I did a post on macro photography that featured some detail pics of a Giant Water Bug, also called an Electric Light Bug but better known by the scientific name Belostoma flumineum. This post totally rocked the internet, and by that I mean, was just another post on just another blog, probably read by five people. My definition of “going viral” seems to be, “really really small and not moving.”

Yet, it garnered the attention of a couple of biology students who were doing a project on the species, and they asked permission to use the images therein. I’m virtually always cool with that, since it wasn’t for profit, was a good cause, and proper attribution was given. I’ve just been notified that their project website is now online, so in return, I’ll send you over there. It’s a nice collection of information on the species, certainly more than I usually impart, and if most websites were as clean and well-organized as theirs, there would be far less strife in the world. I also want to note that this is a portion of the larger site devoted to student projects from the University of Wisconsin-La Crosse, known as MultipleOrganisms.net (that’s organisms, don’t get excited,) also worth the visit.

I have to add in a small note: When I remarked about the snail that might have attacked me with acid a few days ago, I had spent a fair amount of time doing internet research on snail species, eventually finding the name of someone who seemed to know quite a bit about snails. I set her name aside to contact as a side project, and have now realized that she’s a biology professor at the same university, even linked on that MultipleOrganisms site. Small world, but now I’m obligated to follow this up. I’ll let you know what I find.

And good luck with the project, guys!

Good morning!


I thought I was pretty fortunate to discover a few tiny praying mantises on the azalea bushes out front yesterday, until I went out this morning right after sunrise when the dew still hadn’t cleared…

If you look closely at the top pic, you’ll see a large dewdrop adhering right between the mantis’ eyes. Which means, if you look at the image to the left, that forward bump by the antennae isn’t the other eye on the far side, but that dewdrop again.

My model here is about 20mm long (less than an inch.) These were taken with the Vivitar bellows and the Vivitar 135mm f/2.8, Metz 40MZ-3i strobe on-camera direct (top) and off-camera above subject with Lumiquest Big Bounce diffuser (bottom). Oh, and a Canon 300D/Digital Rebel – yes, the first one. Now do you think you really need the latest and bestest? In fact, everything used today except for the tripod was bought used – and the tripod’s fourteen years old…
