Too cool, part 27: This is why I don’t bother

Astronomy Picture of the Day is something you should check at least weekly – it often features some pretty stunning images. Today’s (or I guess I should say, the image for Monday March 16th, since it’s late and this will probably post early Tuesday morning) is especially cool, and gains additional interest when coupled with a few other details.

Annotated Orion with nebulae
Courtesy APOD/Rogelio Bernal Andreo at DeepSkyColors.com

This is the version I resized for the blog, but by all means you should go to the original or, for preference, the version you get when you click on that, which is much bigger. On the initial page, the annotations shown above only appear when you hover your mouse over the image, so you can see it without the distracting lines and labels.

constellation Orion
Now, some perspective. You’re not going to see anything like that image above when you go out to look at Orion – what you’re going to see will look much more like the photo at right. Nebulae are faint sky objects, and only a handful are visible without help in the best of viewing conditions. More specifically, most details won’t even show at all without filters designed to select only the narrow bands of emissions that they produce (like, as that page says, hydrogen alpha.) So the APOD image is “shopped,” a composite of visible light and very selective wavelengths captured through long exposures.

And the primary issue with long exposures of star fields is that the Earth stubbornly keeps rotating, meaning the stars wheel across the sky, so the only way to get sharp long exposures is with a system that moves the camera and/or telescope at the same rate, keeping the stars stationary in the frame. On telescopes with an equatorial mount, tracking motors can occasionally be added that keep the scope on your target, but these have to be aligned precisely with celestial north (which is not quite Polaris, the North Star, but fractionally to the side of it.) For anyone lacking such, there are plans available to construct a tracking mount/platform for non-telescope photography, using standard lenses, but there are accuracy issues which may limit how long these can be used before star motion creeps in. It’s a little tricky to describe why and so I’m not going to unless someone asks – for now, blame it on geometry and trying to construct a usable homemade system without custom-engineered parts. If you’re interested, however, do a search for “barn door tracker,” especially the double-arm style, which is much more accurate. I personally have not tackled such a project yet because my access to tools and decent stepping motors is limited.

Without it, however, one is stuck with brief, high ISO shots which cannot capture a hell of a lot. To wit:

Orion's Dagger and Orion Nebula, very faintly
This is seven seconds, f5.6, 200mm, ISO 3200 – and there’s still motion in the image, including possibly some tripod shake (the stars should move in a straight line and not a mild ‘U’ shape.) This is almost the exact same orientation as the image right above it (the plain one,) but a tighter framing of the center, focusing on the region of the Orion Nebula and Horsehead Nebula. The brightest star towards the upper left is Alnitak, the ‘leftmost’ belt star, the other two belt stars being out of the frame in a line directly above it. The cluster of stars at center-right is collectively known as Orion’s Dagger, more-or-less appearing as three points to the naked eye, but occasionally visible as being a bit less distinct than just three points, and you can see why – it’s a far cry from three stars. The faintest hint of the Orion Nebula is showing as the flare in this region.
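If you’re curious how quickly that motion creeps in, here’s a rough back-of-envelope sketch (my numbers, not anything from the original post): the sky turns 360° in one sidereal day, which works out to about 15 arc-seconds of drift per second of clock time near the celestial equator where Orion sits, and that converts into pixels of smear for a given focal length and pixel pitch. The 4.3 µm pitch below is just a typical crop-sensor value, assumed for illustration.

```python
import math

SIDEREAL_DAY_S = 86164.1                         # one full rotation of the sky, in seconds
ARCSEC_PER_SEC = 360 * 3600 / SIDEREAL_DAY_S     # ~15 arcsec of apparent drift per second

def trail_pixels(exposure_s, focal_mm, pixel_um=4.3, declination_deg=0.0):
    """Rough star-trail length in pixels for an untracked exposure.

    pixel_um is an assumed sensor pixel pitch; declination 0 is the worst
    case, since stars on the celestial equator drift fastest.
    """
    drift_arcsec = ARCSEC_PER_SEC * exposure_s * math.cos(math.radians(declination_deg))
    plate_scale = 206.265 * pixel_um / focal_mm   # arcsec per pixel for this lens/sensor
    return drift_arcsec / plate_scale

# The shot described above: 7 seconds at 200mm, aimed near the celestial equator
print(round(trail_pixels(7, 200)))   # roughly two dozen pixels of smear
```

A couple dozen pixels of drift is easily visible at full resolution, which is why even a seven-second frame shows streaking.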

Between and below the two brightest stars to the left lies the Horsehead Nebula, not at all visible above. In fact, even in the big version it shows as just a little dark spot against the pink cloud in the background. I was going to try and guide you to it, but figured it was easier to just present the same image from Rogelio Bernal Andreo, rotated, cropped, and enlarged to match the same perspective as my image above:

cropped version of Rogelio Bernal Andreo's Orion composite
There – the Horsehead Nebula is that little dark splotch against the pink cloud at left-center. Now here’s a version imaged by the Hubble Space Telescope, featured in an APOD a few years ago (click there for the bigger version):

Horsehead Nebula in infra-red
Courtesy NASA

It seems kind of inverted, but this is because different wavelengths were captured for the two images. The infra-red filter used for the frame immediately above does not capture any of the hydrogen-alpha wavelength that provides the red colors in the image above it, those same wavelengths being blocked by the dust cloud that appears with such detail here. But this also gives a great impression of the magnification and detail that the Hubble can snag. For giggles, scroll back up to zoom back out to the normal view we get from Earth.

And thus the blog title. Even with a decent telescope, even with a tracking platform, there’s no way I’ll ever capture images anything at all like the ones being produced by the professionals. But I’m happy to direct you to the better images, and hopefully provide a little perspective and awe along the way.

Monday color 7

green anole Anolis carolinensis on banana leaf
[These posts are usually lined up well ahead of time, so this one was completely written when I got the images in this post, which might actually be the same individual – they were taken not three meters apart.]

One of the images taken back on this day last October – I elected not to use it then. I know that I had been under the leaves of this banana plant, looking for silhouettes of insects on top since the sun was shining through so distinctly and the leaves are a brilliant chartreuse with a nice texture when backlit, but I can’t recall whether I spotted this green anole (Anolis carolinensis) then, or if I saw it from this perspective and then went back to see if it could be viewed from underneath – I suspect the latter.

green anole Anolis carolinensis toe silhouette through banana leaf
Either way, that photo was obtained too, though the lizard’s position atop the thick rib of the leaf reduced the distinction of the shadow – it really needs the image above to explain what it is you’re seeing. I actually waited to see if the anole would provide me with a better shadow pose, or would even launch itself after a leaf-footed bug that was walking along the same leaf (and provided its own silhouette images,) but the reptile was more interested in basking, possibly because the October nights were pretty chilly.

Within a month, the banana leaves were brown and hanging limply, with the appearance of corrugated cardboard that had been soaked in the rain and dried, and the anoles were nowhere to be seen, having sequestered themselves for the winter. Which just reminds me that I’m still waiting for spring and the reappearance of useful photo subjects, something that these Monday color posts were supposed to counteract. I gotta work on my psychology…

Starting the spider season off

unknown tiny white winter flowering weed
It’s been longer than I’d planned between posts, for several reasons, mostly being busy. There are also two larger posts that I’ve been working on, but they have required more time than I had available, so not just yet. But with the nice weather today, I took a moment to chase a few macro photos.

water drops on petals acting as lenses
Don’t ask me what this flower is – it’s a whopping 5mm across from tip to tip, and I shamelessly added the ‘dew’ with a misting bottle since we’re still a ways off from those conditions. This was actually growing in the pot with my salvia plant, and I’ve photographed them before but still haven’t determined the species.

Because the water drops acted as lenses and were showing the details of the petals, I had to include a close detail crop. As smooth as they seem to us, every flower petal that I’ve seen actually looks like this in high magnification, quite scaly. One of these days I’ll get a decent microscope and start doing some photo-micrography (which includes learning how to pronounce it smoothly without stumbling.)

In the same pot, I found two crab spiders, which might have measured 9-10mm across at widest leg spread, which means 3-4mm in body length.

crab spider Thomisidae on pot edge
Macro work can give an entirely different impression. While seen this close they’re spiky and striated and not terribly cute, from an average viewing distance they’re barely visible, and quite delicate-looking, almost graceful in shape, and appear able to be smooshed with a hard exhalation. While writing this, I went back out for a shot to convey this perspective a bit better.

crab spider Thomisidae on salvia with fingertip for scale
Hardly ominous-looking now, is it? And given the low viewing angle and the curl of the leaf, I suspect this one had no idea my finger was looming up from underneath.

Both for the appearance and to provide some hard-to-find water, I went ahead and misted my two arachnid subjects as well. It was impossible to tell in the viewfinder and even tricky to determine when looking at the magnified images, but it seems that this was appreciated.

crab spider Thomisidae possibly enjoying the mist
If you compare this one to the portrait further up, you can see that the ‘face’ (cephalothorax) seems to be angled downwards more, likely because the spider was sipping dew from the leaf like any good ol’ country boy (‘ceptin’ I think the phrase might have a different meaning to them folk.) But since this is how most arthropods obtain their water anyway, I’m probably not being presumptuous. This time.

Repercussions

Tree Lobsters! is a webcomic that I only peruse periodically, once a week or so, and when I found this one I had to check to see whether I’d posted my trash talk on artificial intelligence predictions before, or afterward. Luckily, mine came first – I hate looking like I’m stealing someone else’s idea.

[I also love the references to Voight-Kampff testing, with the caveat that, “Test may register a false positive with sociopaths.” If you don’t get this, you can try to fake that you do, but you’re better off running before someone can bring a weapon to bear.]

Second, I sent the same anole images from yesterday’s post to Dan Palmer last night with a very brief text accompaniment, within which was the phrase, “The lizard still surprised me.” Dan is in a northern clime so I was kind of explaining that we weren’t that far away from winter here, yet he took it another way entirely – actually, quite a few other ways. This morning I received a reply with, as he put it, potential continuations to, “The lizard still surprised me…”:

“..after all these years.”
“..with a kiss and a swift departure.”
“..with the DNA test.”
“..by copping to *both* murders.”
“..and ultimately ruined my childhood – it could have busted the Santa myth much more gently.”
“..as reptiles are wont to do.”
“..with the sincere apology.”
“..Valentine’s Day was weeks ago.”
“..I did not recognize him without the eyepatch.”
“..I thought that I had eliminated them from this planet during the mid-80’s.”
“..who drinks hot chocolate when it’s this warm out?”
“..Pampers makes lizard-sized?”
“..after the Bay of Pigs, I thought he was dead!”
“..he hadn’t even heard of Harry Potter.”
“..with a gentlemanly poke of his walking stick and an invitation to the club.”
“..he was wearing his Village People T-shirt.”
“..her lipstick and nail polish didn’t match.”
“..and I *love* surprises!”
“..before its battery ran out.”
“..despite the lingering awkwardness from the “barn” incident.”
“..he was posing under a spider web that said, ‘Pigs suck.’ ”
“..but only because I can perceive more dimensions of sarcasm than the human species.”
“..for the last time.”
“..the six-shooter in his holster wasn’t even loaded.”
“..it didn’t taste at all like bubble gum.”
“..without Mary and their 14 little darlings.”
“..the stock tip proved quite lucrative.”
“..pawn to G5, checkmate.”
“..nobody brings up the Partridge Family anymore.”
“..we had agreed to split the winnings 50-50 – now one of us was going to die.”
“..they were out of red sparklies.”
“..tequila, at this hour?”

Followed twenty minutes later by another batch:

“.. claimed his name was D.B. Cooper.”
“..had a very interesting theory about quarks.”
“..told me I could save 15% or more on car insurance.”
“..was almost unrecognizable with the new tat.”
“..I expected someone taller.”
“..that his name was an anagram of ‘Zildar’.”
“..that’s what hermaphrodite means?”
“..he claimed to have let the dogs out.”
“..he had eaten all the Skittles.”
“..he had actually heard of Emo Phillips.”
“..had been rendered mute by the remote.”
“..had a thing for pickles.”
“..you know, I really wasn’t expecting that – hence the surprise – I guess you had to be there.”
“..lizards have blue snot?”
“..eyes *are* window to the soul.”
“..he wasn’t supposed to be back until Friday.”
“..if he was here, then who the hell was that in the pool this morning?”
“..I thought it was snails that left a slime trail.”
“..not everybody makes sergeant that fast.”
“..I couldn’t picture him running for office.”
“..help me, I can’t stop.”

At that point, I asked if he was trying to provoke me into posting them, whereupon the followup this afternoon was another batch. Okay, then.

“..he said he would get at least a B-, and he did.”
“..turns out he’s a well-known anime model.”
“..liked Ghostbusters 2 better than 1.”
“..he doesn’t have a Facebook page.”
“..he’s done unspeakable things with Q-tips.”
“..and after 35 years of marriage, that’s really all you can ask for.”
“..but not as much as the tarantula did.”
“..he really has been donating his lunch money to charity.”
“..he has the reflexes of a Starbucks barista with the shakes.”
“..her lingerie collection is larger than mine.”
“..he drinks milk right out of the carton.”
“..he leads a double life as a tax advisor *and* a taxidermist. You should see his business cards.”
“..she does not have a license to carry that concealed weapon.”
“..he spent World War II in a quiet valley in the Austrian countryside.”
“..she lives to spit.”
“..he only responds to ‘Scalydude’.”
“..he has only seconds to live.”
“..he has had intimate relations with Clint Eastwood.”
“..he loves to collect pennies – he chases after them when they roll.”
“..he wears shades, doesn’t give a ^&$#*”
“..overpaid for his Porsche.”
“..what, without a viable brain and all.”
“..and that takes a lot of girl scout cookies these days.”
“..really, I can’t stop.”

All of which missed the point entirely, which was that I never knew lizards liked corn liquor…

It’s coming

Magnolia green jumping spider Lyssomanes viridis gathering old webbing
I’m not putting a lot of faith into this, considering the fluctuations of weather we’re already prone to here, plus the wildly unorthodox winter last year, but the first signs of spring are visible, and I’m lucky enough to have students who want to take advantage of it. Yesterday, the weather was fantastic but I didn’t get the chance to do anything about it, so I went out last night instead, seeking out some areas where the chorus frogs might be found dependably for photos. While unable to catch a glimpse of any, even though they were still sounding off as the temperature dropped to 10°C (50°F,) I saw a ridiculous number of spiders, mostly from their eyes reflecting the bright LED headlamp I was wearing. The one above was suspended over the trail, apparently gathering up webbing; I’ve heard that spiders often recycle their web by eating it, so I suspect this is what was going on, but did not watch long enough to confirm this. This image is at almost full-resolution, handheld and aimed by the light of the headlamp (which was getting blocked by the flash unit atop the camera,) so it was far from the ideal setup for doing macro work, and it wasn’t until I got back that I realized it was probably a magnolia green jumping spider (Lyssomanes viridis.) I wish I’d identified it then, because I would have brought it back to do more anatomical shots, and perhaps try to establish a presence of them around the place.

unknown tiny hunting spider
Betrayed by its eyes, this unidentified spider at least gave me more of an interesting pose when I went in close – most just remained in place on the ground as if pinned on display. The reflection only occurs within a very narrow angle between the light and the receiver (which means your eye, or the camera lens,) so the effect is not visible in the image, and in fact very hard to get when close enough to see the details of the spider at all. A bright light may show a starburst of blue-green down on the ground (and occasionally on weeds, tree trunks, overhead branches, and even out onto the water,) a few meters away, which will disappear as you draw closer. Usually, this is just because the reflection angle has gotten too great, and keeping an eye on that spot will often reveal the spider itself, sometimes much smaller than the brightness of the reflection seemed to indicate. Last night, I saw dozens, with some patches of ground showing a half-dozen at the same time. The one seen here was the size of my little fingernail, and soon ducked for cover, but I also spotted a few of the fishing spiders, one at a significant distance of six meters or so (confirmed with a 400mm lens.)

And then there was today.

Carolina wren Thryothorus ludovicianus on burned stump
While the birds have been active enough all winter, the breeding season has arrived and countless species could be spotted, flitting through the trees and digging in leaf litter. This Carolina wren (Thryothorus ludovicianus) posed momentarily on a stump for a few frames, then flew off just as I tripped the shutter once more.

Carolina wren Thryothorus ludovicianus flying off

chorus frog from peculiar perspective
Back at the botanical garden, the chorus frogs were in full force, sounding off loudly every time no one was near, but falling silent as soon as anyone closed to visual distance, making it difficult to spot them at all. This one, however, remained floating in a tiny pond, eyes and back just breaking the surface, and I chose this angle solely for the bizarre perspective.

chorus frog playing it cool when people are about
Here, another plays it cool while we were nearby, refusing to reveal its presence any more than it has, which isn’t much. Chorus frogs are quite small, perhaps 5 cm in length, and are easily mistaken for just about anything else in the water. I was only able to spot this one by knowing it would be there.

I have no better views than this, so I can’t pin down the species adequately – perhaps a southern chorus frog, but more likely an upland chorus frog (Pseudacris feriarum) – none of them have markings as solid black as they appear in these images, which is mostly due to the overcast lighting of the day, perhaps partially due to the season. I still intend to get better images than these at some point – it’s just a matter of finding a habitat for them that I can stake out, perhaps after nightfall (ruling out all parks in the area.) Meanwhile, if you want to know what the call sounds like, you can go to this post from last year with the embedded sound file.

minuscule eggs, probably chorus frog
Eggs could be found as well, but only by looking very closely – the bi-colored centers of these aren’t much larger than the head of a pin, and that’s pine straw that they’re attached to. Since the botanical garden is much closer to where I live now than last year, I might be able to keep an eye on the development of these – we’ll see. I had also intended to have a pond established on the property by now, but that project hasn’t gone well at all this winter, so it’s unlikely I’ll have the easy access to aquatic subjects that I’d planned to have. The best I can say is watch this space to see what pops up.

tadpole resting on bottom
There’s nothing remarkable about this image, but I’m going to use it to illustrate something about shooting aquatic subjects. Just to mention it, but this tadpole is roughly half the mass of the adult frogs seen above, and clearly a different species – probably a bull frog or leopard frog. Sunny days are best when trying to shoot something in the water from above the surface, because the light penetrates. The diffuse light from haze and overcast doesn’t get into the water as well, but worse, it reflects from the surface no matter which way you’re facing, making it seem as if you’re looking through milky glass. The quick fix is seen here.

another selfie
Granted, you shouldn’t focus on the surface itself, or even what’s reflected in it, but anything that blocks the sky enough to prevent the reflection will permit a better view, and the easiest way to do this is often with your head. If you look very close, you might make out “Canon” spelled backwards across the tadpole…

trout lily Erythronium americanum
There still isn’t much growing yet, but there were a few trout lilies (Erythronium americanum) peeking out. They stand not 10 cm high and are extremely subtle when viewed from above, so I had to get down on my knees, bent almost until my ear touched the ground, to get the details of the flower. This isn’t an old or deformed one – they grow looped over like that, which makes me wonder why, and what kind of pollinator it attracts. Perhaps they’re actually bioluminescent, and serve as kind of a street light for field mice with loose morals to hang around beneath…

green anole Anolis carolinensis being cagey
This was one species I was quite surprised to see out so soon – I guess the weather of the past three days has been sufficient to stir them out of dormancy. Green anoles (Anolis carolinensis) are definitely sun-worshippers, and I really didn’t expect to spot any until the insects were a lot more active (for instance, I have yet to see any bees around.) This one we photographed twice, coming back to check on it after our presence caused it to slink out of sight the first time around.

green anole Anolis carolinensis doing a short focus portrait
These images are out of chronological order, and it shames me mightily, I admit it, but they worked better in the layout this way (unless you’re using a smutphone or some other toy to view this site, in which case all bets are off.) I shot this one first, when the reptile was perched more out in the open, and obtained the one above while it had started venturing out but was still utilizing the camouflaging and obscuring fronds of the palmlike thing – I really have to determine the species, because that particular plant has appeared in a lot of my images.

I’m still half-expecting the weather to make one last desperate cold-and-miserable spasm before spring truly kicks in, so I’m treating this as a preview and not the main event, but it was still nice to see a little activity. Mid-latitude winters are just not kind to nature photography.

Monday color 6

just an iris
This image was taken almost exactly a year ago (March 11 to be precise,) as some early bulbs were bursting forth. A week later, another freezing rain storm had redecorated them, and everything else.

I should have made these ‘Wednesday color’ posts – then this one and the first could have been lined up to be exactly a year later. Because that’s significant. I could have even arranged them to post at precisely the same time that I took them, and everyone reading could have taken part in that amazing experience.

I really don’t have anything to say here…

A few more

hickory blossoms against sky reflection
These are just a few more images that I obtained in the past week, that I didn’t try to jam into the previous post. Instead, I’m jamming them in here!

I have photographed these peculiar blossom pods umpteen times in this state, and never figured out what they were. Finally, for this post, I started searching (try to imagine what kind of terms you put into a search engine for their appearance,) and now believe these are the flowers of a hickory tree. The disturbing thing about this is, we had a hickory tree in the back yard of the old place, and I never made the connection. And in thinking about it, I knew we had different flowers on that, but the explanation is, these are male flowers, while the female flowers are different (less prominent Adam’s apple.) The female flowers – naturally – produce the nuts. Makes perfect sense. Don’t use these to try and teach kids about sex.

Anyway, here they’re set against the blue sky reflected in a pond, which is a better way of producing background color than trying to frame against the sky itself. The sky is bright, and often the difference in light levels will mean either turning your foreground subject into a silhouette, or exposing for the subject and bleaching out the sky, perhaps to pure white. But reflections are darker than the sky so become more manageable, and can be darkened further with a polarizing filter since reflections from water are polarized – turn the filter until you achieve the effect you desire.
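If you want a feel for just how much darker that reflection runs, here’s a quick Fresnel calculation – my own back-of-envelope sketch, not anything from the post – for plain water with a refractive index of about 1.33: the unpolarized reflectance at a given viewing angle, converted into stops below the sky it’s reflecting.

```python
import math

def water_reflectance(incidence_deg, n=1.33):
    """Unpolarized Fresnel reflectance of a flat water surface.

    incidence_deg is measured from the vertical: 0 means looking straight
    down, larger angles mean looking closer to the horizon.
    """
    ti = math.radians(incidence_deg)
    tt = math.asin(math.sin(ti) / n)        # refraction angle, from Snell's law
    rs = ((math.cos(ti) - n * math.cos(tt)) / (math.cos(ti) + n * math.cos(tt))) ** 2
    rp = ((n * math.cos(ti) - math.cos(tt)) / (n * math.cos(ti) + math.cos(tt))) ** 2
    return (rs + rp) / 2                    # average of the two polarizations

for angle in (0, 30, 60, 80):
    r = water_reflectance(angle)
    print(f"{angle:2d} deg: {r:5.1%} reflected, ~{-math.log2(r):.1f} stops darker than the sky")
```

Looking steeply down, the reflection sits five stops or more below the open sky; even at the shallow angles typical of a pond shot it still runs a stop or two darker, which is exactly what makes it easier to balance against a foreground subject – and the polarizer knocks the partially-polarized portion down further still.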

Canada geese Branta canadensis with pine reflections in pond
At the same pond, a pair of ubiquitous Canada geese (Branta canadensis) paused among the reflections of the trees, and I zoomed out and shot vertically to use the reflections, doing that fartsy thing again in abject denial of my lack of artistic skills or reverence. It’s a more complicated shot than I prefer, but you take what you can get when you’re out chasing pics, and keep the concept of better conditions in mind so you can recognize them when they occur (or better, change position or timing to help produce them.)

In fact, all of these images depend on rather crucial positioning, sometimes in very subtle shifts. For the hickory penises blossoms at top, it was not just the use of the sky reflection in the background, but also the selective focus on the closer one, with a short depth-of-field, and the shift so that the two clusters sat in their own distinctive positions in the frame and weren’t touching or overlapping – the guys know what I’m talking about here. And with the geese, it helped to have them lined up with the trunks, but more important was the patch of blue sky that lent a little more color and defined the shape of the trees – without it, the reflection would have been ‘just foliage’ and less appealing, practically unnoticeable.

backswimmer bug Notonectidae out of water on leaf
It might not seem like positioning was all that difficult for this backswimmer (Notonectidae) perched on a floating leaf, but you weren’t there, were you? Taken in a raised pond at the botanical garden, the vague greyness at the left side is actually the wooden edge of the frame, since the leaf had drifted into a corner and I was endeavoring not to have artificial aspects in the photo. Worse, however, was the sun angle – any other position than this meant my own shadow was cast across the insect, which not only would have changed the shot significantly, it might have spooked the bug into the water and ruined the opportunity. I spend a lot of time aware of where my shadow is falling, so it doesn’t appear in my images and doesn’t frighten any living subjects, and I recommend developing this awareness for anyone pursuing nature shots.

Anyway, considering that several of the ponds right next to this one were still frozen over because they received less sunlight, I was surprised to find this guy out, and in fact I’ve seen very few of this type of insect around at all. The legs are vaguely flattened and serve as oars, and they are usually spotted just under the surface swimming along awkwardly and jerkily – as their name implies, upside down on their backs, where they can see their prey more easily. Check out this page for more characteristics, and this one for a close relative, especially the identification portion to see one of the reasons why arthropod photography can be challenging.

weeds against water sparkles bokeh
This was one of the images taken on the River Walk, just trying to do something with the lack of compelling subjects in winter. The background is the river itself, crashing over a log and throwing reflections of the bright sun. Well out-of-focus with, again, that short depth-of-field (wide aperture, in this case f4,) the sparkles became soft balls to frame the simple subject of the dried weed. Only a very narrow angle would produce this effect, and I recommend taking several images with infinitesimal shifts, because the placement of those background globes will vary as the water dances, and cannot be predicted – some will produce a good frame, some will clash or just get messy. The sun is in a direct line with the camera though well above the angle of view, the only way the reflections will be seen, but also backlighting the weeds so they aren’t just silhouettes. A lens hood is a good idea in these circumstances.

daffodils spring from pine needles among snow
And finally, an image from today, one that I shamelessly (and unrepentantly) staged. The snow had melted before the emerging daffodils (I think – I could be wrong about the species) had gotten this high, so I got a few shovelfuls of the iceberg mentioned in the previous post, left over from digging out the car, and dumped them into the frame in choice locations. This both broke up the background in a more appealing way than the monotonous pine needles, and expressed the time of year that such flowers appear. Thus, while it is not strictly ‘as found,’ it is still representative and expressive, not much of a gross manipulation – you can, of course, form your own opinion (no matter how wrong it might be.) By noon, the nearby trees cast the flowers into shadow for the remainder of the day, so this isn’t exactly crucial timing, but I missed my opportunity for this yesterday by being busy with other things.

It’s funny; most other forms of photography allow for varying degrees of staging and manipulation, people drawing or painting can put things any damn place they please, and I can even adjust lighting in numerous ways not at all representative of ‘natural’ – but to a lot of people, dumping the snow there is considered ‘faking’ it, and even I feel slightly chagrined that I’m setting up an image rather than shooting it as it is. It’s definitely cultural, and it’s weird. But if we get more snow as the blossoms open up, I’ll be sure to show you the authentic images.

It’s been… a week

rufous-sided towhee Pipilo erythrophthalmus trying to be subtle
Not a long week, not a hard week, not even a weird week – just a week. All over the place, and hard to categorize.

Riverwalk Hillsborough during winter
The snow from last week was still present when Monday rolled around, bright, sunny, and topping 15°C (60°F,) and a student wanted to take advantage of this, so we hit a new walking trail not far away, the River Walk in Hillsborough. Enough people had been walking it earlier that the snow was packed down in footprints, having become ice and thus making portions of it a little treacherous. However, the sunlight on the asphalt was eradicating it quickly, and thus only the shady portions and some of the wooden boardwalk sections still bore ice by our return trip. It’s also amusing to get into snowball fights when out without even a jacket (completely close the camera bag first. And are you carrying a towel within? Why not? Douglas Adams is displeased with you.)

It’s mating season for the birds, and we observed red-shouldered hawks marking territory with their desperate-sounding calls, and a mid-air duel between two red-tailed hawks. Seen in the top image, a male rufous-sided towhee, also known as an Eastern towhee (Pipilo erythrophthalmus) played hide-and-seek with us, probably courting a female that I glimpsed deeper within the thicket of vines. They are supposedly in the area year-round, but I’ve only spotted them myself during migratory periods, so I’m not sure about it right now.

There yet remains little to photograph – the daffodils have just started peeking up in places, a few buds are out here and there, but largely the landscape remains monochrome grey-brown, and if I were to pick a single word to describe it I’d go with, “stark.” It’s a good exercise, I suppose, in trying to find artsy little abstracts to separate out from the overall perspective of bare branches and dead grass, but yeah, I’m still waiting on spring. It’s nice to be out without a jacket and not worrying about the driving, I suppose – which just means you can tell I’m ‘looking on the bright side’ for this public aspect, to avoid the more accurate portrayal of being grumpy.

Carolina wren Thryothorus ludovicianus calling
Then Tuesday was cold and grey, but Wednesday was clear again and surpassed Monday in temperature, and I convinced another student to bump her meeting forward to take advantage of it. This time around was the botanical garden, which produced the calling Carolina wren (Thryothorus ludovicianus) above, as well as some chorus frogs down in one of the ponds. Those are small frogs, smaller even than the American toads, and very hard to spot. The pond had enough of a protected buffer around it that I couldn’t get close, and hadn’t brought the better long lens with me, so the images I got were sub-par. I’m still looking for a good area to spot these guys, because we’re closing on their busy season for this latitude.

southern chorus frog Pseudacris nigrita nigrita successfully being subtle
moonlight exposure on Eno
The temperature stayed unseasonably warm well into the night, with a full moon shining down, and I realized I might not have conditions like this again for a while, abruptly deciding to take advantage of it. I knew the first student mentioned above was wanting to do night exposures, so I contacted him and he was game, and we went down to the same spot on the river that I checked out earlier, and he got some experience in doing long exposures by moonlight. Not only were we still working without jackets at midnight, but the spiders had wasted no time and were able to be spotted everywhere, their eyes reflecting our headlamps. Since I was leading the way on the paths, I was the one occasionally walking through the anchoring webs thrown between the trees, which resulted in bringing a rather large orb weaver back to the car riding on my arm. I managed to keep her there until I had exited the car, because had she bailed within I probably never would have been able to recapture her.

I want to point out something in this image: the shadows of the tree branches on the water, something that probably couldn’t be seen while there because of the shifting surface, but showed up in the long exposure. Much longer, and the movement of the moon through the sky might have eradicated them: the shadows would shift sideways, and the areas behind them, formerly in shadow, would receive light and thus expose over the darkness left on the sensor. It also gives some indication that this might have had radically different results in summer, because those branches would be leafed out and a lot less moonlight would have been able to reach the water, probably giving a very narrow time frame to operate in since the moon would have to be right over the gap in the trees created by the river.
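To put a number on that (my arithmetic, not the post’s): the whole sky, moon included, wheels through 360° in just under 24 hours, roughly a quarter of a degree per minute, and the moon itself is only about half a degree wide – so a couple of minutes of exposure shifts the light source by about one full moon-diameter, enough to start smearing sharp shadow edges.

```python
DEG_PER_MIN = 360 / (23 * 60 + 56)   # apparent sky rotation: ~0.25 degrees per minute
MOON_DIAMETER_DEG = 0.5              # rough angular size of the moon

exposure_min = 2                     # illustrative exposure length, not the one used above
shift_deg = DEG_PER_MIN * exposure_min
print(f"{shift_deg:.2f} degrees of drift, about {shift_deg / MOON_DIAMETER_DEG:.1f} moon-diameters")
```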

Thursday brought rain all day, with the threat of it turning freezing come nightfall, which never quite happened, and so of course today was bright and clear again, but cold. There remains one small patch of snow in the yard, part of the icebergs created by the snowplow that I then had to clear from in front of my car to get out on Monday, too thick to succumb to the sunlight and warm air. I know this doesn’t compare to the northern states – and it doesn’t compare to Florida, either, so you pick your comparison, and I’ll pick mine.

Monday color 5

red hibiscus blooms
These hibiscus flowers were photographed during the trip to Sylvan Heights Bird Park that I talk about here, and were part of the selection of images that I’d prepared for that post – which probably should’ve taken place in August, but didn’t. When the end of the year was rolling around, I pulled this image from that now-ancient photo collection and inserted it into the color gallery for potential use in the December 31st post – but decided in favor of others.

Then, I went ahead and posted about the Sylvan Heights trip after all, and could have included it within, but didn’t for two reasons. The first is that most of the other images were vertical too, which makes laying out the post tricky sometimes (yes of course I think about this when writing them – don’t you?) And the second is, I had already started the Monday color posts and needed the red within that gallery. So here it is.

I had started to type something along the lines of, “One of these days, I’ll look up what pollinates a hibiscus flower,” because that shape is curious. Many flowers have shapes that ensure that anything going for the nectar has to contact the pollen, but hibiscus seem to put it out of reach, as if trying to keep it conveniently out of the way of any visitors. Then I decided to go ahead and research it right now, and found that the depth of the trumpet helps a lot here, in that whatever is seeking nectar has to be fairly large as pollinators go in order to reach down that far, and thus may contact the pollen and carry it to other flowers. Or just to the red pistils at the top, since hibiscus can self-pollinate, and do not need to receive pollen from an entirely different plant like many species do. It is also possible that this shape helps with pollen spread by wind, being wide open like this, and there’s even a chance that the shape of the petals helps this too, generating more of a vortex within the bloom itself. If you look close, you can see that this one has had a measure of success, one way or another, since a lone pollen grain can be seen in contact with the pistil.

Don’t ask me why something that spreads pollen is a pollinator. I suspect that some of the people who make up words are anarchists and don’t like the idea of English rules.

On the horizon

From time to time, and surprisingly in some rather serious media sources, we hear about the technological singularity, the fast-approaching (so we’re told) point where artificial intelligence will surpass human intelligence, and quite often right alongside we have speculations about the “machines taking over.” As over-dramatic as that sounds, some quite intelligent humans have indicated that this is, at least potentially, an ominous threat. I have to say that I have yet to be convinced, and see an awful lot of glossed-over assumptions within that make the entire premise rather shaky.

Let’s start with, what do we even mean by, “surpassing human intelligence”? Is human intelligence even definable? It’s not actually hard to connect enough data sources to far exceed the knowledge of any given human, and a few years back we saw this idea coupled with a search algorithm to pit a computer program named “Watson” against two champions of the game show Jeopardy, where it did quite well. Mind you, this drew on an enormous corpus harvested largely from the internet, including the vast amount of mis- and dis-information that the internet encompasses – having more accurate sources of info would have produced far better results. But no one seems terribly concerned about this and the impending singularity is still considered to be at some unknown point in the future, so I’m guessing this isn’t what anyone means by “intelligence.”

So perhaps the idea is a machine that thinks like a human, able to make the same esoteric connections and intuitive leaps, and moreover, able to learn in a real-time, functional manner. I’ve already tackled many aspects of this in an earlier post, so check that out if I seem to be blowing through the topic too superficially, but in essence, this is a hell of a lot harder, and also pointless in many ways. First off, the emotions that dictate so many of our thoughts and actions are also responsible for holding us up in a variety of manners, actually driving us away from efficient decisions and functional pursuits very frequently – perhaps as often as, if not more often than, we manage to think rationally. While we might be the pinnacle of cognitive function among the various species of this planet (and it’s worth noting that we can’t exactly prove this in any useful or quantitative way,) we can easily see that our efficiency could be a hell of a lot better. Plus, we have these traits because that’s what was selected by the environmental and competitive pressures, and there’s virtually no reason to try and duplicate them in any form of machinery or intelligence, since it’s not more human-like thinking that we can use (there being no shortage of humans,) but something that serves a specific purpose and, for preference, arrives at functional decisions faster and more accurately – that’s largely the point of artificial intelligence in the first place, with the added concept that it can be used in dangerous environments where we would prefer not to send humans. This is a revealing facet, because it speaks of the survival instincts we have, ones that no machine would possess unless we specifically programmed it in.

machine in the ghost
We even hear that machines becoming self-aware is a logical next step, largely a foregone conclusion of the process, and likely the key point of danger. Except, we don’t even know what self-awareness is – the idea that self-awareness or ‘consciousness’ automatically occurs once past a certain threshold of intelligence or complexity is nonsense. Nor is there any reason to believe that it would provoke any type of behavior or bias in thinking. Various species have different levels of self-awareness, whether it be flatworms fleeing shadows or primates recognizing themselves in a mirror, which is almost as far as we can even take this concept – without language, we’re not going to know how philosophical any other species gets. But it certainly hasn’t done anything remarkable for the intelligence of chimps and dolphins.

This is where it gets interesting. Human thought is tightly intertwined with the path we took, over millions of years, to get here. Just creating a matrix of circuitry to ‘think’ won’t automatically include any of our instincts to survive, or reproduce, or compete for resources, or worry about the perceptions of others. We’d have to purposefully put such things within, because the structure of electronics only permits specified functions. This structure is so limiting that ordinary logic circuits have no method of generating a truly random number – they cannot depart from physics to produce a signal that has not originated from a previous one, which is the only thing that could be ‘random’ [I’m hedging a little here, because I’ve heard that there has been progress in using quantum mechanics as a function, which might generate true randomness, or might not, but I’m pretty sure this is still in conceptual stages either way.]
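To illustrate that determinism point – a sketch of my own, not anything from the post – a standard pseudorandom generator fed the same seed produces the same ‘random’ sequence every time, because it’s just arithmetic on its previous state; the only way a computer gets numbers that aren’t a function of earlier signals is by sampling some physical noise source outside its own logic, which is roughly what the operating system’s entropy pool provides.

```python
import os
import random

# A seeded pseudorandom generator is pure arithmetic on its previous state:
# identical seed, identical "random" output, every single run.
random.seed(42)
first = [random.random() for _ in range(3)]
random.seed(42)
second = [random.random() for _ in range(3)]
print(first == second)      # True -- nothing random about it

# os.urandom() draws on entropy the OS gathers from physical sources
# (interrupt timing, hardware noise and so on), so it can't be replayed this way.
print(os.urandom(8).hex())
print(os.urandom(8).hex())  # different every time
```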

And the rabbit-hole gets deeper, because this impinges on the deterministic, no-free-will aspect of human thought; in essence, if physics is as predictable as all evidence has it, then our brains are ultimately predictable as well, just as much as an electronic brain would be. You can go here or here or here or here if you want to follow up on that aspect, but for this post we’ll just accept that no one has proven any differently and continue. So this would mean that we could make an electronic brain like a human’s, right? And in theory, this is true – but it’s a very broad, very vague theory, one that might be wrong as well, since we are light-years away from this point.

We’ll start with, human brains are immensely complicated, and very poorly understood – we routinely struggle with just comprehending people’s reactions, much less mental illness and brain injuries and how memories are even stored or retrieved. We’re not even going to come close to mimicking this until we know what the hell we’re mimicking in the first place, and this pursuit has been going on for a long time now – decades to centuries, depending on what you want to consider the starting point. The various technology pundits that like throwing out Moore’s Law (which is not even close to a law, but merely an observation of a short-term trend that has already failed in numerous aspects) somehow never recognize that our understanding of the human brain has not been progressing with even a tiny fraction of the increases in computing power, nor have these computing increases done much of anything towards helping us understand a cerebral cortex. It’s a pervasive idea that computers becoming more complex brings them closer to becoming a ‘brain,’ but there’s nothing that actually supports this assumption, and a veritable shitload of factors that contradict it soundly. The brain, any brain, is an organ dedicated to helping an organism thrive in a highly-variable environment, through both interpretation and interaction, and it is only because of certain traits like pattern recognition and extrapolation that we can use our own to make unmanned drones and peanut butter. But this does not describe any form of electronic circuitry in the slightest – it took a ridiculously long time to produce a robot that could walk upright on two legs, which one would think is a pretty simple challenge.

Yet if nature did it, then we can do it in a similar manner, right? Or even, just set up a system that mimics how nature works and let it go? Well, perhaps – but this is not exactly going to lead to an impending breakthrough. Life has been present on this planet for 3.6 billion years or better, but complex cells are only 2 billion years old, multi-cellular life only 1 billion, and things that we would recognize as ‘animals’ only 500 million years old – meaning that life spent roughly six times as long being extremely simple as it has spent developing anything that might have a nervous system at all. All of this was dependent on one simple factor: that replication of the design could undergo changes, mistakes, mutations, variations that allow both selection and increasing complexity. So yes, we could potentially follow the same path, if we create a system that permits change and have a hell of a lot of time to wait.

Which brings us to the system that permits change. Can we, would we, program a system of manufacture that purposefully invokes random changes? If we did, how, exactly, would selection even begin to take place? Following nature’s path would require a population of these systems, so that the changes that occur would be pitted against one another in efficiency of reproduction. It’s a bit hard to consider this a useful approach, since it took 3.6 billion years and untold thousands of different species, plus an entire planet of resources, to arrive at human intelligence – and this was in a set of conditions unlikely to be replicated in any way. Considering that we’re the only species among thousands to possess what we consider ‘intelligence’ (I’m not being snarky here – too much – but recognizing that this is more of an egotistical term than a quantitative one,) it’s entirely possible that our brains are the product of numerous flukes, and thus even intelligence isn’t guaranteed with this path.

But, could we create a self-replicating system to have computer chips reproduce themselves with programmed improvements, or perhaps calculate out the potential changes even before such new chips were created? In other words, reduce the random aspect of natural selection exponentially? Yes, perhaps – but again, would we even do this? At what point do we think a logic circuit will be able to exceed our own planned improvements? And to go along, how many resources would this take, and would we somehow ignore any and all limiting factors? But more importantly, to shortcut the process of natural selection significantly, we’d have to introduce the criteria for improvement anyway – faster ‘decisions,’ perhaps, or leaner power usage, which means we’d be dictating exactly how the ‘intelligence’ would develop. There is no code command for, “get smarter.”
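That last point is easy to demonstrate with even the crudest evolutionary loop – a toy of my own, not anything from the post: selection only happens against a fitness function somebody has to write, so the system can only get ‘better’ at whatever we already knew how to measure.

```python
import random

LETTERS = "abcdefghijklmnopqrstuvwxyz"

def fitness(genome):
    # The designer has to spell out what "better" means; here it's just
    # matching an arbitrary target string -- there is no way to write "smarter".
    target = "artificial"
    return sum(a == b for a, b in zip(genome, target))

def mutate(genome, rate=0.1):
    return "".join(random.choice(LETTERS) if random.random() < rate else c for c in genome)

population = ["".join(random.choice(LETTERS) for _ in range(10)) for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                        # selection against the stated criterion
    population = [mutate(random.choice(survivors)) for _ in range(50)]

print(max(population, key=fitness))   # converges toward "artificial", and nothing else
```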

This is the point where it gets extremely stupid. The premise of the singularity, and most especially of the “machine takeover,” requires that we never predict that it could get out of hand, and purposefully ignore (or, to be blunt, actually program in a lack of) any functions that would limit the process. We’ve been dealing with the idea of machines running amok ever since the concept was first introduced into science fiction, but apparently, everyone involved is suddenly going to forget this or something. Seriously.

But no, that’s not the stupidest point. We’d also have to put the entire process of manufacture into the hands of these machines, up to and including ore mining and power generation, so they could create their ultra-intelligences without any reliance on humans at all. And not notice that there were an awful lot of armed robots around, or that our information channels or economic infrastructure were now under the control of artificial intelligence. This is a very curious dichotomy, to be sure: we’re supposed to be able to, very soon now, figure out all of the pitfalls involved in creating artificial intelligence, to the point where it exceeds human ability, but remain blithely unaware of all of the obvious dangers – supremely innovative and inexcusably ignorant at the same time. Yeah, that certainly doesn’t seem ridiculously implausible…

perpetual circuitry
It still comes back to a primary function, too: artificial intelligence would have to possess an overriding drive to survive, at the expense of everything else. That’s what even provokes competition in the first place. But here’s the bit that we don’t think about much: it is eminently possible to survive without unchecked expansion, and in fact, this is preferable in many respects, including resource usage, competition with other species, and long-term sustainability. We don’t have it, because natural selection wasn’t efficient enough to provide it, even though it only takes a moment’s thought to realize it’s a damn sight better than overpopulation and overcompetition. Stability is, literally, ideal. We think that ultra-intelligent machines are somehow likely to commit the same stupid mistakes we do, which is another curious dichotomy.

In fact, stupider mistakes, because we’re capable of seeing some of the issues with unchecked expansion and megalomania and such, but somehow think these super-intelligent machines won’t. How do we even know that an artificial intelligence that possesses even a few basic analogs of human traits won’t spend all its time just playing video games? When the demands of survival are overcome, what’s left is stimulating the other desires. Give a computer an internal reward system for solving puzzles, and it’s likely to just burn out doing complex math equations – I mean, why not? If an artificial intelligence could replicate itself at any point before physical decrepitude, in essence it is immortal, and only one would be needed – the shell changes but the ‘mind’ lives on. Even if we introduced the analog of ego, so that a ‘self’ is somehow more important than any other (thus creating competition,) it would have to be very specific to not compete against other machines as well. Again, the concept of kin selection and friend-or-foe demarcations is not a trait of intelligence, but of evolution.

Mostly, however, we just feel threatened, just as much as when the new person appears at work or school who is clearly so much better than we are – that competition thing again. And there’s probably some facet of inherent caution in there, the mortality thing: as kids we pushed ourselves, and one another, to jump from successively higher steps, but knew there was a point where it was too high. Some adults (I use the term loosely) still do this, often to the entertainment of YouTube users, but most of us know what “too far” is. The idea of exponential electronic intelligence growth is as alarming as the idea of exponential anything – population, viral contagion, tribbles… we just don’t like it. But there’s a hell of a lot of things in the way of this growth, and in fact we have few, if any, examples where such a thing has actually occurred at all.

Moreover, it just isn’t happening anywhere near as fast as we keep being told anyway. While smutphones are now more capable than the computers which guided the Apollo landers to the moon, we’re not exactly using them to whip through space, are we? Or indeed, for much of anything useful. I enjoy picking on speech recognition, which has been in development for decades and yet remains little more than a toy, less capable of divining our true intent than dogs are. As household computer memory and bandwidth have increased, the functionality hasn’t improved all that much – it’s taken up instead with things like displaying much the same content in HD video across much larger monitors, or altering the user-interface to accommodate touchscreens. While super-processors may be on the horizon, it is very likely that they will be burdened with the transmission of 24/7, realtime selfies.

I realize that it’s presumptuous of me to go against such luminaries as Elon Musk – what, do I think I’m smarter than he is? Yet, there’s a trap in this kind of thinking as well, since smart (and intelligent and educated and all that) are not absolute nor all-encompassing values. Tesla Motors and SpaceX seem to be doing quite well, but neither of these makes Musk a neuroscientist of any level, and even the top neuroscientists don’t have the brain all figured out – quite far from it, really.

Which brings up another interesting perspective. We already have minds that are better than ours – at least, some human minds are much better than others. Somewhere on this planet is the universe’s smartest human. Have they taken over, even with all of those dangerous traits that we fear machines will somehow develop? No; we’re not even sure who this person is. Surpassing human intelligence doesn’t necessarily mean something all-powerful, or even able to accomplish twice as much, and such a ‘mind’ is almost guaranteed not to be able to solve every problem thrown at it, even the ones that are capable of being solved (not limited by the laws of physics, in other words.) If it helps, think of a super-horse, able to run twice as fast as any other horse, leap twice as high. Cool, perhaps, but hardly a threat to anything, any more than Richard Feynman or Isaac Newton was. Thinking in superlatives isn’t likely to reflect reality.

And that, perhaps, is the lesson for all of the tech ‘gurus’ who believe, for good or for bad, that the technological singularity is drawing nigh. Predictions are captivating sometimes, but it would be a lot more informative to see who can address all of the points above regarding intelligence in the first place, and especially the daunting chasm between this impending singularity and the bare fact that we can’t even predict (much less control) the economic fluctuations in this country, that we’re still struggling with a plethora of debilitating illnesses, that energy efficiency is only marginally better than it was three decades ago. Has everyone purposefully avoided applying these almost-intelligent computer systems to such problems, and thousands of others? I mean, these are real reasons why we could welcome artificial intelligence and a machine smarter than humans, the point behind the whole pursuit. And yet, without even achieving these, we’re in danger? Is this supposed to make sense?

*    *     *     *

No, that doesn’t look anywhere near long enough, so let’s expand tangentially a bit ;-)

While pulling up some of the examples of this odd bugaboo of the technical world, I came across this video/transcript from Jaron Lanier, who is at least casting a critical eye on artificial intelligence. Within, he raises a couple of very interesting points, largely revolving around how the applications of what we currently consider AI don’t produce anything like intelligence on their own, but instead surf through vast examples of human intelligence to distill it into a needed concentration (if and when the algorithm is written to use solid data, another point he raises since not enough of them actually are.) But the concept of ‘distilling’ is worthy of further examination.

When it comes down to it, computers are labor-savers, able to sort and collate and search and organize data much faster than doing it manually, and that’s their inherent strength. I mentioned a friend who works in swarm technology and a medical program he’s been working on, which exemplifies this: it takes the diagnoses of multiple physicians for a single patient and averages out their input and confidence levels, using this information to suggest what might be the most accurate diagnosis. Like the Watson example above, it would not exist without the information already provided by humans, and if you think about it, this is true for a tremendous amount of computer activity, period. My own computer might suggest corrections to my typing but will never write a blog post, even at my level of grammar mangling. It can be used to alter some of my images in ways that I find an improvement, but won’t ever be able to produce any images on its own, and even the special corrective functions like auto-levels (to bring the dynamic range into a more neutral position) are never used because they simply don’t work worth a shit.
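As a sketch of that ‘distilling’ idea – my own illustration, with no claim that the actual program mentioned above works this way – take each physician’s diagnosis along with a self-reported confidence, and let the weights, rather than any machine insight, decide what floats to the top:

```python
from collections import defaultdict

def aggregate(opinions):
    """opinions: list of (physician, diagnosis, confidence 0..1) tuples.

    Sums confidence per diagnosis and normalizes -- all of the 'intelligence'
    here was supplied by the physicians; the code merely collates it.
    """
    totals = defaultdict(float)
    for _, diagnosis, confidence in opinions:
        totals[diagnosis] += confidence
    overall = sum(totals.values())
    return sorted(((d, w / overall) for d, w in totals.items()),
                  key=lambda pair: pair[1], reverse=True)

opinions = [
    ("Dr. A", "migraine", 0.8),
    ("Dr. B", "tension headache", 0.6),
    ("Dr. C", "migraine", 0.5),
]
for diagnosis, weight in aggregate(opinions):
    print(f"{diagnosis}: {weight:.0%}")   # migraine 68%, tension headache 32%
```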

It might be argued that human intelligence is largely made up of searching and collating previous data too, either our own (directly sampled by our senses) or that of others (learned through language and all that, even when also absorbed through our senses.) And while this starts to venture into the Chinese Room thought experiments and various philosophical masturbations, we have to recognize that we also have the process of making connections, what we often call abstract thought and even just insight. These are the properties we consider primarily human, and what is usually meant by the term ‘intelligence’ in these topics. So far, nobody has demonstrated anything even close to this from an artificial system, and it still ties back in to the points above about what drives us as important, because they’re largely responsible for how we arrive at these insights.

Yet, there’s something else to consider, something that Lanier touches on. It is possible that artificial intelligence is little more than a marketing gimmick, a way of selling computer and programming technology, the promise of a-car-in-every-garage kind of thing. I wouldn’t find this hard to believe at all – indeed, from most proponents of the technological singularity, the language is remarkably similar to that used by pyramid schemers – but it doesn’t seem to quite fit, especially not with the hand-wringing over humans becoming obsolete or extinct. Those fit nicely with the idea of an opposing camp, people who do not want computer tech to succeed in favor of their own… what? I’m not aware of any real competition to the broad genres of programming or microchip technology. Is someone investing in slide rules or something?

Even if we accept the premise that tech gurus are having us on, it’s pretty clear that it’s being presented seriously enough through our media sources, even the ones that are (I’m struggling to use the word in this context) reputable. An awful lot of ‘news’ stories out there lack perspective, suffering from a complete dearth of critical examination and the idea that hyperbole goes with everything. Putting trust in the prestige or reputation of the source is a lot less useful than looking for supporting evidence or even a plausible scenario.

I have to admit to savoring the irony that is tied into this. Quite often, right along with the use of the term technological singularity comes the phrase event horizon, meaning the point where machines surpass humans. Both of these were stolen from the physics of black holes; the event horizon is the boundary around one, inside of which light can no longer escape the gravity, thus the name ‘black hole’ in the first place. But lifting these terms wasn’t perhaps the best move; aside from trying to assign a lot more drama to the concept of artificial intelligence than it deserves, there’s the very simple idea that one can never reach the horizon. Gotta love it.
