Silly caterpillars

PlayingInTheSprinkler
I know, I know, that’s not a caterpillar. Earlier today Yesterday [I have to stop doing these so late at night, or start ignoring the midnight change] I had checked on the green lynx spider young’uns, which are surprisingly still around, kinda. The ones on the dog fennel have largely dispersed, as have most of the orphans on the rosemary. But there remains a close-knit family on the butterfly bush, still sheltered under the wing (okay, bad metaphor) of the mother, who’s looking pretty emaciated right now. They’re certainly considering leaving home – right as I focused on one, it abruptly vanished from the viewfinder, having successfully cast a line into the wind and ballooned off. Unfortunately, like a few weeks back, it transported itself all the way to my right sleeve, effectively negating any pics and sending it into panic mode since the terrain spelled ‘inhospitable’ to its little brain. I gathered it up and redeposited it back with the others, which possibly allowed a confusing cause-and-effect pattern to form, if spider minds even work in that way. But while doing some shots, I misted the web to make it stand out better for some experiments, sending a great number of spiderlings, and momma herself, into a frenzy – not of fear or the desire for shelter, as you might expect, but to collect the drops before they evaporated, since there have been too many dry days recently.

But all that’s not getting us any closer to the title. On returning from that intrepid excursion six meters out the front door, I spied a curious patch on the wall under the porch light, recognizing it as a cluster of tiny eggs.

SilveryEggs
These are a measured 0.6mm across, and the entire patch can be hidden under the tip of your little finger. Like most insect eggs and many chrysalises, the shells are clear, and the coloration is all coming from the occupants. The few pale ones that can be seen are eggs that failed to develop, while the darker ones are the soon-to-emerge… somethings. At higher resolution, the details are a little curious:

IckyDetail
I figured the dark spots were heads, and they certainly seemed like caterpillars to me, but the stripes or hairs seemed off, because I didn’t think any caterpillars were born with them. But on stepping out tonight for other reasons, I spotted the first emergent, who confirmed that I was wrong (as had a friend of mine):

Puberty
Yep, that’s quite a ‘do it’s sporting, as it poses near its recently-vacated microcondo (with rotten privacy.) Moreover, you can see the breaches in several other eggs indicating that siblings are soon to follow. Last one, or few in this case, out really are rotten eggs…

I can’t help but think this is lousy timing, since most vegetation is going into dormancy for the winter and we’ve already had several frosts, stomping with finality on the desperate attempts of the basil and morning glories to keep going. It was pretty chilly as I got the shots of the newborn, and despite the lovely weather today we’ve got a chance of snow tomorrow. While those are some impressive follicles on my caterpillar friend, I’m suspicious of their ability to serve effectively as a winter coat.

FingerOfDoom
Yet, nature has a pretty good grasp on things, and the mother who deposited these eggs was following a behavioral plan dictated by thousands of years of selection – unless, of course, something went wrong. Of 138 eggs (yes, I counted them, but only in the image, where I could Photoshop a colored dot to hold my place,) four are visibly non-viable, and who knows how many more genetic defects might be present, or were present in the previous generation? It remains possible that the mother who laid these was a bit funny in the head. However, I’m betting that this is business as usual and these little wrigglies are well-equipped to deal with the conditions. The newborn is visible in this pic too, just right of center, so as you can imagine from that, following their progress once they leave the distinctive surrounds of the egg cluster would be next to impossible. While I might find larger caterpillars later on, there would be little chance of determining whether they were these hatchlings or not, and even keeping a few in a terrarium would require finding the right food for them, which would be only a wild guess on my part. I’ll assume nature’s got a handle on it.

The Street Cafe at the End of the Universe

Just a throwaway post for the time being – there are other things in the works but they’re not done yet.

Many years ago, a friend and I tried an idle challenge to create a soundtrack for a (then nonexistent) movie based on a book we both knew – the idea was to create the soundtrack, then play it and let the other guess the book from the progression of music. This was before the internet, when CDs were still considered pretty cutting-edge so music stores had much larger cassette sections (with ankylosaurs roaming between the bins – one had to be alert in those days.) So I had to work from my personal music collection, much smaller than it is now of course.

The book I chose was Hitchhiker’s Guide to the Galaxy, and for a closing credits theme, I selected the song below, largely because the last lines of the book have the characters heading off to grab a bite at the Restaurant at the End of the Universe, later to become the title of the sequel. Also because it’s a nice mellow end-credit song.

Almost immediately, I was struck by how remarkably well the song really fit, but of course you need to be familiar with the sequel first to know this. The Restaurant at the End of the Universe is not so-called because of its physical location, but its temporal one – you had to time-travel to the point where the universe was boiling off and dissipating into non-existence, whereupon you could sit at a table and watch this occurring through the big dome overhead, and later the entire restaurant would be pulled backwards in time a few hours and do it all over again – the ultimate dinner show, as it were. Ignore all the problems with paradoxes and trouser legs and how big the place would have to be to accommodate a few trillion years of potential customers, because Douglas Adams did and he’s worth taking cues from. But literally, this is a place where there’s no tomorrow.

Which is where the song comes in:
Street Café – Icehouse

It’s better if you listen to it without any visuals at least once, which is why I uploaded the MP3 file rather than sending you to the video. The song is Street Cafe by the band Icehouse, off the album Primitive Man and later Great Southern Land (and yes, dating from the 80s, but nearly all my music does.)

It’s almost disturbing how many of the lyrics fit so well into the whole idea of the end of the universe, if one allows a little poetic license. I toyed with the idea that Icehouse might have done this intentionally, but I’m more inclined to think it’s just a cool coincidence. The producers of the movie really missed the boat when they failed to use this song on their own soundtrack, but that’s what they get for failing to check with me first.

And if you went to the video and thought it seemed a bit familiar in style, almost derivative in fact, that’s because they have the same director: Russell Mulcahy, also known for Highlander. In this case, I guess there could be more than one…

Seeing is believing. But not necessarily true

Perspective, in the usage of considering some topic from a different standpoint than originally, is a great thing, and something I play with a lot on this blog. In the usage of how things appear to us visually, based on our position, it’s a useful thing to play with in photography as well. But sometimes, it’s hard to override our mental perspective to recognize the visual one.

Advancing
Crepuscular rays are the beams of light, typically from the sun, that emerge from a break in the clouds to throw a spotlight effect someplace – we can generally make them out because of the humidity and dust in the air being illuminated. My example seen here is almost the opposite, being a shadow thrown by the cloud blocking the sunlight, but essentially the same effect. Taken at sunrise, there needed to be several conditions in place to see this. The first is a high-altitude layer of clouds forming a screen across the sky. The second is the lower cumulus cloud, the puffy one in the image, which is hundreds or perhaps thousands of meters lower than the screen layer. And of course the sun had to be very low on the horizon so that its light would not only be thrown upwards against the underside of the screen layer, but also blocked by the puffy cloud.

We tend to throw around phrases like the sun being “within the clouds” or such, going with what it appears to be rather than the reality of it being 150 million kilometers away from the planet, “above” everything. So here the sun wasn’t even below the clouds; rather, the planet and the clouds immediately above my head as I took this were aligned within a very narrow set of acceptable angles, letting the sun’s light through a narrow gap off to the side, changing as the planet rotated and the sun “rose.” In fact, while writing this I can’t help but think of a quote from Douglas Adams, in Life, the Universe and Everything: “Several billion trillion tons of superhot exploding hydrogen nuclei rose slowly above the horizon and managed to look small, cold and slightly damp.”

But even the apparent spreading of crepuscular rays, or in this case a crepuscular shadow, is an illusion akin to the convergence of the road edges or train tracks as they trundle off into the distance, since the edges are parallel. Or very nearly. The sun is a hell of a lot larger than the Earth, and all parts of its surface are emitting light. So while light from the center of its face is throwing a shadow in one direction, light from the left edge is throwing a shadow in another direction, and so is the right, and every point in between all of them, resulting in a shadow with a dark central portion that gets narrower with distance, but diffuse edges that get broader. It looks like this:
Shadowclash
… except that, because of the very limited range of it we can see and the enormous distance of the light source from the Earth, we can essentially call the shadow edges parallel. While the angle will reduce with distance, the sun will never become smaller than the Earth and so the edges will not change in nature. Crepuscular rays appear so distinctly to be spreading only because the “wider” part is a lot closer to us than the origin at the cloud or mountain pass or whatever. You’ve undoubtedly seen more pronounced examples than what I’ve used here; just keep in mind that they are not really broadening, merely uniform stripes with (more or less) parallel edges.
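For scale, here’s a back-of-the-envelope sketch (Python, with a made-up one-kilometer cloud) of how slowly that dark core tapers away and the fuzzy edges broaden – both governed by the sun’s roughly half-degree apparent diameter:

```python
import math

SUN_ANGULAR_DIAMETER = math.radians(0.53)  # the sun's apparent size from Earth

def umbra_length(obstacle_width_m):
    # The fully dark core of the shadow tapers at the sun's angular
    # diameter, vanishing entirely at about width / tan(0.53 degrees).
    return obstacle_width_m / math.tan(SUN_ANGULAR_DIAMETER)

def penumbra_width(obstacle_width_m, distance_m):
    # Meanwhile the diffuse edges broaden at the same angle, so the
    # total shadow (dark core plus fuzz) slowly grows with distance.
    return obstacle_width_m + distance_m * math.tan(SUN_ANGULAR_DIAMETER)
```

A 1,000-meter cloud keeps a dark core for a hundred-odd kilometers, and even ten kilometers downrange the whole shadow has widened by less than a tenth – which is why, over the distances we can actually see, the edges are as good as parallel.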

But I used the top image for a reason, and that is, I also turned the other way and got a shot of the shadow disappearing over the opposite horizon, which illustrates the illusory nature of the phenomenon (and is also, for trivia’s sake, called anticrepuscular rays.) Now, the edges seem to be converging back down, as if thrown by another light source coming from the opposite direction:
Receding
If you think the fingers of shadows flanking it are that edge diffusion effect, remember that the light source is behind you now, and they should be going the other way. Instead, they are additional shadows thrown by other bumps on the cumulus cloud, becoming visible because the screen layer is curving downwards around the planet. Thinking two-dimensionally in this case is only going to be misleading.

This is all interesting by itself (well, I think so anyway,) but it also parallels situations I see all too often. When dealing with topics such as paranormal experiences, home medical remedies, or even gambling formulas, personal experience is one of the most prominent arguments in support; the trump card is, “I know what I saw” (or felt, or heard, or experienced in whatever way,) the undeniable firsthand evidence. Yet, there is a difference between the raw input of our senses and our interpretation of it – often colored by things we’ve heard from other sources. “Look! An alien spaceship!” Well, no – it’s a light in the night sky, at an unknown distance and moving, perhaps, at an unknown velocity, because it’s not possible to determine speed without knowing distance. The ISS, crossing overhead at nearly 30,000 kilometers per hour, appears to be magnitudes slower than a news helicopter that cannot top 225 kilometers per hour. Mars, Jupiter, and Sirius all appear the same size to us, despite their vast differences in mass and distance. And of course, when someone is told they’re in a haunted location and is inclined to believe it, every odd sound, every shadow seen peripherally, is simply more positive evidence. Most amusing is that when someone who isn’t inclined to believe finds nothing out of the ordinary in the same experiences, they’re often accused of being dismissive or closed-minded – apparently you can be biased against evidence, but somehow not for it.
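The ISS/helicopter comparison is just a ratio, sketched here in Python with assumed (but plausible) numbers for altitude and range:

```python
import math

def angular_speed_deg_per_s(true_speed_m_s, distance_m):
    # What the eye actually judges is angular speed across the sky -
    # the ratio of true speed to distance, not true speed itself.
    return math.degrees(true_speed_m_s / distance_m)

# ISS: about 7,660 m/s, roughly 420 km up when passing directly overhead
iss = angular_speed_deg_per_s(7660, 420_000)
# News helicopter: about 62 m/s (~225 km/h), at maybe 500 m away
heli = angular_speed_deg_per_s(62, 500)
```

With those figures the helicopter sweeps roughly 7 degrees of sky per second against the station’s 1, crossing about seven times faster in appearance despite moving over a hundred times slower in fact.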

Do these images make a case against paranormal phenomena? Does any optical illusion demonstrate that someone has misinterpreted their experience? Of course not. Yet, it does show that what we think we see and what’s really there are often different, and the true nature of having an open mind is not belief, but accepting that things are not always as our initial impressions portray.

I have my reasons

While this is a bit of pointless personal information that really isn’t going to change anything, I feel the need to make the statement in the face of rather obsessive popularity among the public at large (it is a blog, after all,) and so: I really don’t like cell phones.

Some of this is personal, I admit it. And some of it is because I tend to look at things critically, especially those things that become popular – I learned long ago not to trust the “judgment” of others, and in fact to be highly suspicious of it. But let’s look at it from the simple standpoint of reasons why I should get a fancy cell phone.

Emergency use – The last time I actually needed a phone in an emergency that would not have been available as a land line or nearby business was in 2001. No, seriously. Imagine the money I saved on monthly fees in the intervening 12 years. Sure, this is one anecdote and mileage varies, but let’s be real: no one uses their phone primarily for emergencies.

Business use – This was the reasoning behind making the damn things popular in the first place; apparently a lot of people were either too stupid to hire someone competent to take calls while they themselves were away from the office, or believed they were the only ones competent. Again, let’s be real. But this is the only reason I actually have a cell phone myself, a cheapass little pay-as-you-go jobby that costs me, literally, $20 every three months because I never use all the minutes on it. It’s there for students, for either of us to let the other know we may be held up or lost or something. It serves that purpose maybe three times a year. Everything else goes through the land line, which sits by my desk and computer and is handy for scheduling, looking up details, mapping, and all that rot.

Text messages – Quite possibly the stupidest thing our society has ever gotten involved in. Part of the reason it became popular was that it cost so much to make voice calls, but you know? The day I look to teen girls to help me judge the value of something is the day after a full-frontal lobotomy. Nearly two hundred years after inventing the telegraph, we reinvented it with pretty colors and thought this was cool.

Now, there are uses for messages that are not voice calls; that’s what e-mail is for. Same desk as above. And it has a better interface, takes place on a full-sized keyboard where even my rotten typing style pulls off 40 words a minute, and handles attachments of any kind. Not to mention it does not interrupt me when I don’t want it to.

To be “connected” – One of those things where its appeal comes from how cool it sounds. I’m plenty connected as it is, thanks all the same. I have my own website and blog, spend no small amount of time on others, and do most of my business through such means. I might have had one client that I lost because I didn’t reply to them right away; nobody else was so anxious (and to be honest, I’m not sure I’d want someone like that for a client anyway.)

True, I don’t Tweet, Facebook, or any other social dipfuckery, and take my time when writing posts because I’m serious about it – I couldn’t care less about the minutia of anyone else’s life and don’t imagine they need to hear mine (if you thought this wasn’t an interesting topic you already stopped reading long ago.) I had a Facebook account some time back, that I created when the place I worked for needed to know what social networking would do for their donations. The answer: not a fucking thing, and the account bored me so badly I ditched it. Even from the standpoint of a nature photographer, I already have a site, and it’s possible to “follow” me here.

Somebody told me that I would have gotten more from Facebook if I’d had the right Friends; considering he was one of my Friends, another photographer to boot, and not contributing much, I could only smile…

They’re so handy to have! You can look up anything, get directions, blah blah blah – Horseshit. First off, I’m actually capable of planning, since I was born decades before fancy phones existed. I know how to read a map. I am not flighty or ill-focused, so the number of times I’m “out” and change my mind about what I’m doing is almost nil. I have attempted to use someone’s smutphone for exactly this kind of connectedness, twice. Once there was no signal to be had (in the middle of Raleigh, for fuck’s sake!) and in the other case, the interface was so clumsy that I couldn’t actually locate the information I wanted, and seriously degraded my mood in the attempt.

We’ve reached a curiously stupid point in our lives, where phones are now too big to carry comfortably and yet still too small to use in the manner that drove their sizes up in the first place. My fingers are average size, and yet too big for most of the touch-screen uses I’ve attempted to put them to, and especially too big to try and type words on a fake keyboard. I don’t quite get the appeal of smacking the screen two or three times to get it to register the touch that is its sole point in existing. And maybe I’m getting old, but I don’t find looking at anything on a screen the size of a cigarette pack to be useful in any way. And my desktop computer still feeds a measly 17″ monitor…

I can’t buy the convenience factor when I see people constantly grappling with, fumbling, and dropping their handy toys; by a significant margin, most touch screens that I’ve seen have been cracked, and I’ve watched a few phones shatter when dropped in public – boy, wasn’t that handy? Worth spending another $400 on to get those important “LOL! 2 TRU!” messages, I’m sure. And yes, I’ve been recruited in the search for phones numerous times, and have heard the dead battery/lost charger mantra more times than I can count – somehow, the marvelous uses of these phones did not merit the extra effort of basic care…

The cell phone I carry with reluctance is yet another thing taking up space, formerly in my pockets until one day, when sprawled on the ground doing macro work, 911 called me back – I had apparently dialed them by pressing the phone against the ground. Notable about this is that I had the keypad lock on, requiring a one-second push of the * key before the phone could even operate. So now the stupid thing rides in a belt case to protect the keyboard, making me one of those people, but then again, imagine how much money I’d be out if I’d been carrying a fragile touch-screen phone instead.

You can count on them – The old emergency use, “I can call from anywhere” schtick. First off, I have no use for this in 99% of the cases. When I’m away from my desk, I’m busy, and not inclined to get into a phone conversation. Even the land line has call-waiting shut off because, quite simply, if I’m talking to you, it’s rude to dump your conversation in favor of anyone else. I’m crazy that way. Second, of the various places I’ve gone which might actually be dangerous enough to warrant having a phone handy, most of them don’t have any signal anyway – one of them is two kilometers from the house. And let’s consider the conditions where this would be useful, which are, a) too injured to walk out, but b) not so injured that I’d be unconscious, and c) managed not to damage the damn thing in doing so, and finally d) someplace where no one ever goes. That’s a pretty narrow set of circumstances, voided with e) letting someone know where I’ll be and for how long.

They’re handy when you’re bored – Let’s be real, this accounts for most of the uses, but that’s not me. It’s rare that I’m in a situation where I cannot wander outside to see what kind of bugs are about, or just people-watch, but when it does occur, I usually have the presence of mind to bring a book. They run less than ten bucks, actually have substance, can be dropped with minimal damage, and don’t need recharging. Few bright colors and no bonus points for finishing a chapter, true, but you can buy those little gold stars your teacher used to give out if it helps.

And then there are the explicit reasons not to get one. Like, they’re simply the most annoying things to talk on, with rotten connectivity. Just ten years ago we watched television commercials touting the remarkable clarity of telephone providers’ (land) services; now we see maps of where you simply cannot get any. I have honestly had to ask my boss who they were, multiple times, because of the iPhone’s abysmal sound quality, compressed horribly to allow for all the other stupid shit to use the same bandwidth, and I miss almost half of the words one friend speaks because of the same thing. I have this little criterion, call me demanding: if it’s a phone, and sold as a phone, and even has phone in the name, then it should work, above everything else, as a phone.

Then, my dog, the photos! Look, I’m glad insecure teen girls can get piccies of themselves in the mirror without having to juggle a heavy camera one-handed, but using these godawful piece-of-shit phone cameras to attempt anything else is simply ludicrous. It’s astounding to come from a mere decade ago, when the resolving power of lenses was a major discussion point in photography, to images that resemble those from the Polaroid cameras that people were so quick to discard at garage sales. There really needs to be another term; we already have camera, which means a device for taking photos, and so we need, I dunno, perhaps a gossipa, for getting snapshots of drunk friends and bad concerts, something that will be deleted within a year and never serve a real purpose at any time. I have a strong stomach, I even like roller coasters, but I want a law requiring something like, “Warning! This video clip shot on a cell phone by someone who couldn’t hold steady to save their miserable life. May induce nausea, heart arrhythmia, epileptic fits, and extreme irritation. You won’t see a damn thing anyway, so don’t waste your time.”

The most interesting aspect of connectivity is how much of a lie it is. I send e-mails to people with smutphones, and rarely get a reply, and never a detailed one. I never leave voicemails anymore unless it’s business-related, because no one ever listens to them (yes, this might just be me – I’ll work on my people skills.) But most especially, I see nothing positive about masturbating with a toy in public. We’re beginning to have new social rules to deal with people so obsessed with these things that they cannot interact usefully, and need signs posted in places of business to remind total fucking morons (there are apparently a lot of them) to put the phone away before trying to use their services. In what brain-damaged way is this supposed to be beneficial, to us or anyone else? What about connecting with the people right smack in front of you, or does this not count?

And of course, this says nothing of the very real, and very distinct, danger of death – death, for fuck’s sake – from using the goddamn things when operating vehicles. I already know how distracting even a conversation in the car can be; I see no reason whatsoever to contribute to that, or to believe that I’m different in some way (the same argument used by drunk drivers, with the same results.)

I imagine there could be at least a few people reading who consider me a Luddite, or a technophobe, or something kneejerky like that, never realizing that they’re reading this on a website that I maintain myself, featuring regular posts of digital images and a few scripts running here and there. I just have a particular outlook: technology is not good or bad, it’s just intended to serve a purpose. And for me, the purpose isn’t to do what everyone else does, or obtain the latest gadgets, or make excuses for expensive toys. If the functionality isn’t there, or if it costs a hell of a lot more than its value, then it ain’t happening on my dime. People can consider that weird if they like.

Anyway, I feel better now.

Just because, part 12

Colorized
Just a pic from today that I liked. I took a few minutes to check out how the colors were advancing, but the thin overcast conditions weren’t going to bring them out very well. In this case, the muted light seemed appropriate for the subject, letting the greys come through. I kind of like the way the main plant came out in the crisp tones of well-done B&W work, while the rest of the image has soft, kind-of pastel coloration – a faux mixed-media photograph.

If I was artistic in any way, I could go on for two more paragraphs on the spiritual or aesthetic meaning of this, but I’m more visceral, myself (well, okay, in my photography, anyway.) I’ll let the viewer derive their own impressions from it.

Happy Halloween, again

Yes, I have to do this:

L.Mactans-1
This is actually the first specimen I’ve found this year, which was a slight frustration since there was a particular image that I wanted and couldn’t find a model. Just in case it wasn’t immediately apparent, this is a southern black widow, Latrodectus mactans, distinguished from the northern variant (we have both here) by the lack of markings on the upper abdomen, and the joined halves of the hourglass. Kindly note the reflections from two of the eyes.

L.Mactans-2
Black Widower
The image above is the belly of course, not quite the classic hourglass but close enough to get the job done. The markings work better than you might imagine, since widows usually sit belly-up in the web so the marking is obvious. My main subject is a female, identifiable by the size, shape, markings and, if all that wasn’t enough, the pedipalps. The males, seen in an archive image at left, are radically different, and I suspect remain unidentified in most encounters. They’re significantly smaller, and their chelicerae are inadequate to penetrate skin in most cases, so the threat is virtually nonexistent. The threat from the adult females is pretty low too, since they purposefully avoid contact and areas of human activity. To get bitten, you pretty much have to pin one down without going so far as to kill it, which is a rather narrow range of conditions for an accidental encounter. Much as I crawl around in prime environments and actually handle spiders, I may never have been bitten – the one illness that I originally attributed to a black widow bite is now highly questionable. This is true of many suspected bites, since a large variety of things can produce the same wide range of symptoms that spider bites are reputed to have – there are no key symptoms that point directly to spiders, and entomologists maintain that if you didn’t actually see the culprit, the chances are low that whatever reaction you’re having came from a spider.

L.Mactans-3
I like this shot for the ominous quality (well, even more ominous, okay?) – yes, the eyes are making an appearance again. Black widows are not the easiest subjects to work with – aside from the obvious reasons not to provoke a bite, they’re very shy and prefer to find shelter, and can be hard to convince to hold still, especially so if there’s a particular angle that you’re after. I can also vouch for the fact that when they dangle on a webline from the branch you’re using, it looks just as evil as is often depicted in movies and cartoons – lowering slowly with apparent menace, legs spread and gently stroking the air seeking purchase. I was unable to capture this because it was so brief and I couldn’t switch zoom settings fast enough. I had been smart, knowing this was going to happen, and suspended the branch and clamp rig in a shallow pan of water, which she encountered while trying to escape, sending her back up to the branch; widows cannot scamper across the water like many arachnids can.

But the shot I was really after was this:

L.Mactans-4
There’s a face only a mother could love. I was hoping to get a peek at the chelicerae, seeing as how they’re the business end of an infamous species, but not one angle I tried gave me a decent view. If you go back to the image at top, recognize that the abdomen is about the size of a large pea, so the cephalothorax is pretty small overall, and the chelicerae are thus described by the biological term, “eentsy.”

Since you’ve been examining these pics closely, you might be thinking that she’s missing a leg, but that’s only the way the legs are positioned in the top pic. True enough, a high number of arachnids that I find are missing a leg or two. Life can be tough for spiders, which is enough to make you have some sympathy for them, isn’t it?

No, huh?

So, why should we bother?

In a previous post detailing the difficulties and uncertainties of tracing our hominid ancestors, I kind of led up to a question, expressed now in this post: Why should we bother? It’s a lot of effort to determine something that happened in the past, which is highly unlikely to have much effect on anything going on right now. We are what we are, and it doesn’t matter too much what came before, does it?

A similar question is often asked when the topic of NASA’s budget comes up, or when some avenue of expensive scientific research ends without a breakthrough in technology or knowledge. Couldn’t this money have been better spent somewhere else? Are the questionable rewards worth the time and effort expended on them?

While in itself this is a different perspective, stepping back and looking at the big picture as it were, it also comes from its own narrow perspective, in that it reflects the basic concept of capitalism: everything must be regarded as an investment, capable of producing positive returns, or it shouldn’t be pursued. The two flaws in this are obvious, once someone pauses to think about them. The first is that we only hold this view when it’s able to be used as leverage in an argument, because the vast majority of human activity can hardly be considered the pursuit of investment returns. The second is the simple economic problem that money spent cannot always result in greater money received, unless we simply manufacture more money, which isn’t the best financial structure.

The investment of time and effort is another matter, since we can’t expect to get more time, or more energy or whatever, in return, so the payout has to be something else. In the cases outlined above, that payout is knowledge, or at least that’s the goal – failing to find the cure for cancer means the time was wasted, right?

If we could always predict what we were going to find with any research, we’d have accomplished the stunning goal of achieving omniscience; it’s safe to rule this out for the foreseeable future, heh! But there’s knowledge gained even when goals or predictions don’t reach fruition, because at the very least we know not to bother with that avenue again in the future. And this is assuming that nothing else is learned along the way, which is rarely the case. Serendipitous developments do indeed occur, as do refinements in technique that improve our abilities or efficiency from there on in.

If we go back to paleontology and the search for hominid history, we find that through these efforts we’ve improved our abilities to extract and preserve fossils, establish accurate dates, evaluate anatomy through skeletal remains, and have even contributed to our knowledge of climate change and geologic processes. In the past century, science went from a collection of separate disciplines to an interconnected understanding of the physical world as a whole, and knowledge gained in any discipline impacts many others.

Let me provide an example. Many hominid finds are quite small – bone fragments, or an incomplete skull. Let’s say we have a new find consisting of just a big toe bone and a tooth. The tooth, through shape, size, and thickness of enamel, tells us the owner’s diet, development, nutrition, and age. We can even get an impression of how big the owner actually was, without any other bones. The diet can provide a clue as to what conditions were in that location at the rough time the owner lived, since they’re going to eat what’s available. And the minerals within can even tie in with geology, since they’re absorbed from the foods, which in turn get them from the soil. The toe, meanwhile, will have a shape specific to its articulation, indicating an upright running stance or perhaps a tree-climber, which then tells us whether the owner ran a lot to gather their food, indicating hunting on the savannah, or still lived at least part of the time in trees, indicating a significant arboreal climate (it’s unlikely that one individual will live in one tree in the middle of a field – organisms require populations, and tree-dwelling serves the purpose of protection, where one tree isn’t going to cut it. Moreover, it’s probably a temperate to hot climate to maintain foliage all year long, since empty branches also don’t cut it.) And if there’s anything else available to confirm time periods, such as geologic strata, then another data point in radiometric dating can be determined. Radiometric dating determines the age of organic remains or rock formations through the steady decay of trace radioactive elements within, but in any given time period the concentrations of these can vary, due to geologic or atmospheric processes. With a couple of confirmed data points we have the ability to judge when such variations have occurred and improve the accuracy of our dating methods.
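The math behind radiometric dating is just exponential decay, and it’s simple enough to sketch. Here’s a minimal illustration in Python – the isotope, half-life, and remaining fraction below are illustrative numbers, not tied to any particular find:

```python
import math

def radiometric_age(parent_remaining_fraction, half_life_years):
    """Estimate age from the fraction of the parent isotope still present.

    Decay follows N(t) = N0 * (1/2)**(t / half_life); solving for t gives
    t = half_life * log2(N0 / N(t)).
    """
    return half_life_years * math.log2(1.0 / parent_remaining_fraction)

# Illustrative example: potassium-40 has a half-life of roughly 1.25 billion
# years. A sample retaining 25% of its original K-40 has sat through two
# half-lives, so it's about 2.5 billion years old.
age = radiometric_age(0.25, 1.25e9)
```

The uncertainty mentioned above enters through the starting concentration N0, which is why independent data points (like known geologic strata) help calibrate the method.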

And then there’s the simple curiosity factor. We are a species that explores, investigates, and learns – it’s behavior that evolved in us. While I will be the first to argue that blindly indulging our emotional prods isn’t the wisest of ideas, we cannot deny that the vast majority of improvements we’ve made to our lives come from this simple trait. Our curiosity over radioactivity a century ago has impacted medical diagnoses and treatments, power generation, cosmology, and yes, even dating fossil finds. Studying both fossil types and geologic strata eventually led to plate tectonics, and thus understanding how and when earthquakes can occur. It would have been easy to say, back when the theory of continental drift was first proposed, “Yeah, but so what? Does knowing the continents were in different positions actually do anything for us?” It would have been impossible to predict then that it would be useful in establishing dangerous areas to build a city, or tell us why Australia and Madagascar have such different species.

Which brings us back to the investment idea. Some things are a gamble, but the jackpots in science are dividends paid in perpetuity – we never stop using what we’ve learned, even when, like Newton’s Laws of Motion, we refine them with new knowledge later on. There’s no way to put a value on those returns, except to say that we’ve exceeded the investments many times over. I’m the first to say that genetic variation is pretty cool, fueling the development of traits through selection, but the development of curiosity was a gold mine, allowing us to actively change our environment rather than waiting for all other forms of genetic change to adapt us to it instead.

The knowledge of our past, as limited as it is now, is even responsible for a heightened perspective on our place in the ecosystem. We’re no longer the chosen ones, the sole special sapient species destined by god or whatever; we’re one survivor among many forms, all the others of which vanished after only a short time on the planet. We have traits that helped us survive, selected out by the algorithm of population numbers, in many cases functional more for our past than our present. And we know that to have a future, we must learn from the past, and recognize that nothing is guaranteed – we can vanish like the others, like the vast percentage of species that once inhabited this planet. Avoiding that fate is another thing that cannot have a price put on it.

Plus, it’s just so damn cool! I’m typing this as the distant descendant of a species that made simple stone tools and communicated through gestures and singsong vocalizations (something else we found while examining remains.) Our vocabulary is the product of a brain with stunning abilities to make connections, inferences, and emotional impressions; that directly leads to expanded knowledge, because we can communicate what we’ve learned. Can you imagine where this will lead in another million years?

A matter of timing

Progressing
I’ve been watching the autumn colors developing slowly, wondering what this year’s conditions are going to be like. The pursuit of “peak colors” is a routine activity for anyone who chases landscape images, and some photographers and painters are quite dedicated to it, ensuring that they’re in a prime location in time to see the best displays. I’m not one of those photographers, and will likely not be traveling anyplace other than locally when the peak arrives. This area isn’t too fascinating for landscape shots, so I find what I can.

A quick note about the clouds above. First off, they’re blown out, exposed so highly that most of the detail is gone. Many photographers will tell you to avoid this, and it’s true, you usually want more shapes and contours to be visible. Yet, if I hadn’t pointed it out, I’m not sure how many people would even have noticed, since the clouds are just part of the setting, not the main subject, and it’s still quite apparent what they are. I’m not excusing my mistake; I’m just saying that worrying too much about minor details is often blowing things out of proportion, and if I had exposed for optimum cloud appearance, the trees would have looked considerably darker.

Now, when you have clouds like this, move quickly – they generally represent changing conditions and will not be around long, nor will they remain in nice locations within the frame. Twenty minutes after getting this shot the sky had hazed over entirely from the climbing humidity, the blue was gone, and the light muted. I’ve done my share of waiting on clouds, and it’s safe to say the payoff is wildly sporadic. Many days you just give up and move on.

LastOne
The oak tree in the back yard ended up dropping most of its acorns before the leaves became very attractive, and because all but one branch are well enough above my head to make framing decent images difficult, I got nearly nothing out of it this year. When I found this one remaining acorn, I dodged around a bit to frame it against the sky, unable to get a position that put better light on the nut itself. When the broad vistas aren’t really up to snuff, you can still go in close for selective bits of color, or isolated subjects, and I’ve done a lot of work on those skills (notice that I did not say I was adept at it or anything.) You can produce nice seasonal nature images even in the middle of a city, if you’re choosy about what’s in the frame and don’t think that every image has to be a wide view.

Remember that you can make the light work for you too. Here the backlighting brings out the color of the leaves, which you can see were going towards the rattier end of their appearance. This time of year, the light often comes through gaps in the foliage, and may selectively highlight or backlight something interesting, so keep watching for the little tableau to appear. The sun’s moving across the sky, faster than we often think, and the conditions will change – a little patience can pay off, but it also means don’t hesitate when you see what you want, because it may also vanish quickly.

The fall of the acorns this year has been an experience, because some of them are coming from quite high up and the limbs overhang the house. From time to time a sudden clack! announces the impact on the roof, or a ricochet from the back porch against the storm door – this is a little annoying late at night. I’ve managed to escape being struck while out in the back yard chasing photos, but in two cases it was a near-miss thing. And I can’t walk barefoot out there anymore – I’m not that masochistic.

Acorns
I wanted an image to communicate how many acorns had littered the yard – seriously, the squirrels are going to die of obesity-related illnesses – but found the straight-down perspective to be a little boring, so I laid on my side and went for a different angle. Again, this could have been taken anywhere – the entire frame could be hidden under a book, but the proximity of the acorns is enough to indicate that there’s a lot of them.

Notice that these are different light conditions; bright direct light in this circumstance would have produced much more contrast and likely worked against the mood. A basic guideline for using light is to seek the low contrast subjects when the light is high contrast, and vice versa, though in this case both the subject and the light were low contrast. Most especially, when tackling very colorful subjects like fields of flowers, go with the softer light, which often means waiting for a passing cloud or hazy conditions. You can cheat sometimes and actually shoot within your own shadow. This technique can also help when trying to photograph subjects under the surface of water, very much so on days when the sky is hazy; direct light penetrates water well, but diffuse light just reflects from the surface, since it comes from all directions and it’s hard to choose an angle that doesn’t bounce it back into the lens. Shooting under a shadow helps a lot, but keep in mind that the reduction of light means that shutter speed might become an issue – moving ripples may also blur the subject.

Ripples are a nice compositional element to work with, by the way. They can produce very surreal effects from the reflections, and a nice texture when they’re frozen in time by the shutter. Always pay close attention to the water in your image, because what it’s reflecting is as much an element as whatever your main focal point is.

HeavyLeaf
While the light conditions were bright this day, my shooting location was in deep shade. The water, once I’d shifted around a bit to my liking, reflected the blue sky, clouds, and the overhanging tree limbs. The surface tension curves around the edges of the leaf were a serendipitous discovery, throwing a patch of high contrast into an otherwise low-contrast image. I took several frames in the few seconds before the water carried the leaf away from the sky colors, and each one is different because of the ripples from the falling leaves – it’s a good idea not only to take a few frames, since we can’t know just how the ripples will appear until we see the images, but also to wait for different splashes. A nearby ripple can throw accentuating curves around a chosen focal point, or alternately might create some clashing lines, so don’t be afraid to throw a lot of frames at a subject you like.

So, get out there and play!

You can call me Al

Over at the New York Times, Carl Zimmer has an article on the difficulties of pinning down hominid species, which illustrates an interesting perspective in biology, but is unfortunately a little too brief. There are a couple of factors at play, and no easy way to resolve them.

The very first thing to bear in mind is that ‘species’ is an arbitrary distinction in many ways. The word was born to differentiate, say, chipmunks and bandicoots, or penguins and ostriches – okay, those are bad examples because differentiating them doesn’t take a lot of effort, but the premise is, these are distinctly different groups of animals that cannot interbreed. If they could mate and produce another critter, then they were the same species. Simple.

Until we got to hybrids. Horses and donkeys are different species, but they can still produce offspring, which is where we get mules (and you always thought it was a plant in Missouri.) Mules, however, are sterile, which is what distinguishes a hybrid, so the definition of species changed to mean that the offspring must be able to make viable babies too, otherwise the two original were separate species. Things remained okay for a while.

But then there’s the ancestral bit – Darwin and Wallace came along and had to ruin the party for everyone else by pointing out that all species were different in the past. We’d known about dinosaurs and such before then, but now came the theory that, for instance, Neanderthals were actually older direct relatives of us Homo sapiens (this was a possibility considered for a while, don’t get ahead of the narrative.) What it meant was that we needed some way to distinguish present-day versions from older ancestors, even if there was a direct lineage, and for convenience’s sake an ancestor with enough differences from the living version could be considered a separate species.

This is the first bit of fun, which produces countless issues the world over every day. There is no point where we can say that two adult Homo erectus produced the first Homo sapiens – it doesn’t work that way, and this illustrates the fudge in ‘species’ that has been accepted because, seriously, what else could we do? We do not have fossil remains that span the whole history of hominids, or anything else for that matter; what we have are spots in the past where we’ve found skeletons with different structures – enough like humans to be considered more closely related than chimps or whatever, but different enough not to match modern humans. And, a key point, we’re not really sure whether there is a direct lineage, or if they’re some family line that split away and went extinct later on.

In the middle of all this came a new factor: populations of existing species that could apparently interbreed, but never do. DNA isn’t much help here, for two simple reasons. The first is, there’s genetic variation in every individual of any given species, no matter how confident we feel about the demarcation – so it’s almost as if we could say that every offspring is a different species from their parents, if we went by genes. That’s not really a help, and of course makes ‘species’ well nigh meaningless. Second, an organism’s complete DNA sequence, what’s called a genome, is ridiculously long, often billions of base-pairs (paired nucleotide bases,) so it’s not this routine task to look at samples from any two individuals and classify them as ‘same’ or ‘different’ even if we make another arbitrary distinction – say, everything that has this much unchanged DNA sequence is the same species. There aren’t any spots that never change. While we can define human from chimp in that manner, because the long divergence from the period when the two species had just split has produced dependably diverse stretches, when we’re talking about trying to tell apart two insects that appear identical, it’s both too ridiculously complicated and too vague to be useful. So we just take the bugs’ word for it and assume, if they don’t want to get it on, then they know something we don’t, and ‘species’ was changed again to accept this distinction as well.
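To see why any similarity cutoff is a judgment call, here’s a toy sketch – hypothetical ten-letter sequences, nothing like real genome comparison, which also has to handle insertions, deletions, and rearrangements:

```python
def fraction_identical(seq_a, seq_b):
    """Fraction of positions where two equal-length DNA sequences match."""
    if len(seq_a) != len(seq_b):
        raise ValueError("this toy comparison assumes equal lengths")
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b)
    return matches / len(seq_a)

def same_species(seq_a, seq_b, threshold=0.95):
    """An arbitrary cutoff; the point is that any threshold is a choice,
    not something the DNA itself hands us."""
    return fraction_identical(seq_a, seq_b) >= threshold

# One mismatch in ten positions: 90% identity, which this arbitrary
# threshold would call 'different species'.
identity = fraction_identical("ACGTACGTAC", "ACGTACGTAA")
```

Nudge the threshold a little and two ‘species’ merge into one, which is precisely the problem with trying to draw the line from sequence alone.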

When it comes to fossil remains, what we typically have is one sample from one area in one specific point in time, and very often not even a complete example of that, since nature isn’t very accommodating in keeping corpses complete and undamaged. But as Zimmer points out, we have a huge variability in Homo sapiens right now, to say nothing of species like dogs or horses. Finding two skeletons with radically different structures does not mean we’re dealing with two different species – we could just be dealing with a Newfoundland and a Yorkshire terrier. And then of course, there’s the possibility that we’re even seeing a rare mutation, such as dwarfism or gigantism, and if the skeleton is incomplete then there’s the opportunity for even more examples such as microcephaly. Finding a set of bones that does not match any samples we’ve already found does not mean we have a new species.

If, for instance, the fossils come from a time period far enough removed from any others, they’ll get their own name on the simple fact that there’s nothing that stops natural selection, so over enough generations the species is going to be different. But there are periods when several different species of hominid were concurrent; there was more than one proto-human existing at the same time, like Homo sapiens and Homo neanderthalensis. The other side of the coin is the case with the “Hobbit,” Flores man, or Homo floresiensis – pick whichever name you like. The multiple examples of these remains, existing at the same time as Homo sapiens, have traits indicating descent from H. habilis or erectus, along with traits of much older Australopithecines. This presents numerous possibilities: reverse adaptation, very early isolation with convergence (the simultaneous development of similar traits among unrelated species,) a population of genetic anomalies while still H. erectus, examples of island dwarfism… the debate goes on.

So the species names we affix to fossils are arbitrary in many ways, and do indeed change as new remains are discovered, or even new ways to examine existing fossils. One of the developments of all this uncertainty is cladistics, which classifies fossil remains not by individuals so much, but by the development of body-type variations (phenotypes.) 400 million years ago, all vertebrate fossils were solely of fish, but 375 million years ago, lobe-finned fish appeared in the record; somewhere between those two times, the beginnings of the limbs that would allow walking on land appeared. It doesn’t matter which, exactly, species developed it, or indeed how many – it’s just enough to know the bracket when it occurred.
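That bracketing logic is simple enough to sketch as a tiny routine: given dated fossil observations that either show a trait or lack it, report the window in which the trait must have appeared. The dates below are the ones from the fish example, in millions of years ago; the simplifying assumption (loudly flagged) is that every sample is from one lineage and the trait, once evolved, sticks around:

```python
def trait_bracket(observations):
    """Given (age_mya, has_trait) pairs, return (oldest_without, oldest_with):
    the trait appeared somewhere in that window.

    Simplifying assumption: all observations sample one lineage, and the
    trait persists once it evolves -- real cladistics is far messier.
    """
    with_trait = [age for age, has in observations if has]
    without_trait = [age for age, has in observations if not has]
    if not with_trait or not without_trait:
        return None  # can't bracket with one-sided evidence
    # Youngest trait-free fossil and oldest trait-bearing fossil, in Mya.
    return (min(without_trait), max(with_trait))

# Fish-only fossils at 400 Mya, lobe-finned fish at 375 Mya: limb
# precursors arose somewhere in that 25-million-year window.
bracket = trait_bracket([(400, False), (375, True)])
```

More finds from inside the window shrink the bracket, which is exactly why every new fossil is useful even when it isn’t a new species.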

With the hominids, it’s a little more complicated, because they split off from the other primates in Africa and migrated to the rest of the world, but it’s not clear if this happened once, or in waves, and how many hominid variations existed at any time, nor where they went. Fossils are rare things; the conditions that make a skeleton fossilize rather than simply break down in the soil or sunlight, and then preserve it for millions of years, are very specific and don’t happen too often.

And then there are the geological conditions to make finds even possible. Sediment builds up over time, or erodes away. If it erodes away through the fossil remains, wave goodbye – we won’t be finding them unless we’re there exactly when it happens. Otherwise, we have to dig through millions of years of accumulated soil, perhaps now metamorphosed into rock, to get to even the right layer. You can’t just dig a hole, go deep enough, and find hominid remains. Pull up any map, close your eyes, and pick a spot. Did you hit a cemetery? If not, then this displays the difficulty with random digging (and we pile bodies all together neatly, unlike our distant ancestors.) So the trick is to find someplace where rock from the right timeframe is being naturally exposed, and search in the broad fields available therein. This is why the Great Rift Valley in Africa, where Olduvai Gorge resides, is the site of so many finds. I’ve taken advantage of this myself by picking through the crumbled shale exposed when a glacier cut a huge scar through central New York.

Yet, we can’t do this for all time periods to present, because the conditions that preserve remains are rare. The fossils that I was finding existed in one layer, presumably when mudslides or river delta deposits buried dead animals in low-oxygen conditions, preventing bacteria from breaking things down. Earlier, and later on, the mudslides weren’t happening, or if they were, the remains still decayed completely. So, if we wanted to track the migration of the hominids from Africa, we couldn’t just go five hundred kilometers away from a known find and see how long ago more hominid remains can be found; the geology might be all wrong, and random digging is highly unlikely to produce anything.

So, we end up with tiny samples in time, pinpricks in both location and history where we have some particular phenotype. If it falls between the time periods for Homo habilis and Homo erectus, and has a body plan that seems halfway between the two, then we can tentatively say it demonstrates an intermediate species. Often, what we have are some traits of this one, and some traits of that one, but few if any traits that split the difference. It also needs to be noted that evolution does not necessarily proceed in a straight line, as if there’s a goal; selection depends on conditions and environment, which may change, especially for a species migrating across multiple continents. So traits may wander, develop at different rates, and so on. Now, imagine if we found a partial skeleton: portions of the hips, one upper leg and knee, a big toe, maybe some shoulder bones. Everything about the legs seems to indicate an upright walking stance. Then, from a time period 0.2 million years later and further south, we have a species with toe bones that indicate tree-climbers. Are they of different descent? A local environmental adaptation of the same lineage? A genetic fluke?

We don’t know. And this is true of so much of paleontology – we can only surmise based on limited evidence, and try to find more evidence that supports or destroys the hypothesis. For the time being, we give the fossils a name, knowing that this might change later on. That’s how it goes. [The example given above, by the way, is pretty much the case with Australopithecus afarensis and Au. africanus, though I admit the time separation might be different.]

What this seems to imply is that our impression of human development could be entirely wrong, something that creationists gleefully latch onto since the evidence for their own ideas is nonexistent (yes, any flaw in modern science is reason to deny it in its entirety, but flaws in scriptural accounts are ignored wholesale – pathetic, isn’t it?) But uncertainty as to how such finds fit together does not change the fact that finds of such body structures tell us selection is a very real thing. No hominids of any kind have been found, or ever will be found, in 30 million-year-old strata. No Homo sapiens will be found alongside Au. afarensis. While we may not know which, if any, given fossil species from 2 million years ago is our direct ancestor, we are extremely comfortable in saying that the body plans in evidence at those times fit exactly into expectations of development from the common ancestor with the great apes. And of course, any competing theory also must explain why we have these remains, and why they progress in shape just like natural selection predicts.

What it all comes down to, however, is not considering any given fossil species to be as definitive as modern species – there’s just not enough information. There are species that are 99% likely to have gone extinct, like Paranthropus boisei – the skull structure was radically different from nearly everything else found, and nothing since shows evidence of developing from there towards Homo sapiens, or indeed any later species. But most other species are not so well determined, and lineage remains nothing but speculation right now – we cannot say which evolved from which, or which (if any) is our direct ancestor. This illustration shows the time periods that we have determined so far, but implying any particular path from it is nothing more than guesswork. Perhaps one day, we’ll have enough evidence to be pretty confident, and in the meantime any one of these may disappear, absorbed into another species due to more information, or we may even split off another line based on closer examination of existing fossils, to say nothing of any new finds. It’s an interesting, but undeniably contentious, field of study.

Coming up: Why do we bother with trying to find out?
