These hibiscus flowers were photographed during the trip to Sylvan Heights Bird Park that I talk about here, and were part of the selection of images that I’d prepared for that post – which probably should’ve taken place in August, but didn’t. When the end of the year was rolling around, I pulled this image from that now-ancient photo collection and inserted it into the color gallery for potential use in the December 31st post – but decided in favor of others.
Then, I went ahead and posted about the Sylvan Heights trip after all, and could have included it within, but didn’t for two reasons. The first is that most of the other images were vertical too, which makes laying out the post tricky sometimes (yes of course I think about this when writing them – don’t you?) And the second is, I had already started the Monday color posts and needed the red within that gallery. So here it is.
I had started to type something along the lines of, “One of these days, I’ll look up what pollinates a hibiscus flower,” because that shape is curious. Many flowers have shapes that ensure that anything going for the nectar has to contact the pollen, but hibiscus seem to put it out of reach, as if trying to keep it conveniently out of the way of any visitors. Then I decided to go ahead and research it right now, and found that the depth of the trumpet helps a lot here, in that whatever is seeking nectar has to be fairly large as pollinators go in order to reach down that far, and thus may contact the pollen and carry it to other flowers. Or just to the red pistils at the top, since hibiscus can self-pollinate, and do not need to receive pollen from an entirely different plant like many species do. It is also possible that this shape helps with pollen spread by wind, being wide open like this, and there’s even a chance that the shape of the petals helps this too, generating more of a vortex within the bloom itself. If you look closely, you can see that this one has had a measure of success, one way or another, since a lone pollen grain can be seen in contact with the pistil.
Don’t ask me why something that spreads pollen is a pollinator. I suspect that some of the people who make up words are anarchists and don’t like the idea of English rules.
From time to time, and surprisingly in some rather serious media sources, we hear about the technological singularity, the fast-approaching (so we’re told) point where artificial intelligence will surpass human intelligence, and quite often right alongside we have speculations about the “machines taking over.” As over-dramatic as that sounds, some quite intelligent humans have indicated that this is, at least potentially, an ominous threat. I have to say that I have yet to be convinced, and see an awful lot of glossed-over assumptions within that make the entire premise rather shaky.
Let’s start with, what do we even mean by, “surpassing human intelligence”? Is human intelligence even definable? It’s not actually hard to connect enough data sources to far exceed the knowledge of any given human, and a few years back we saw this idea coupled with a search algorithm to pit a computer program named “Watson” against two champions of the game show Jeopardy, where it did quite well. Mind you, this was drawing on an enormous corpus harvested largely from the internet, including the vast amount of mis- and dis-information that it encompasses – having more accurate sources of info would have produced far better results. But no one seems terribly concerned about this and the impending singularity is still considered to be at some unknown point in the future, so I’m guessing this isn’t what anyone means by “intelligence.”
So perhaps the idea is a machine that thinks like a human, able to make the same esoteric connections and intuitive leaps, and moreover, able to learn in a real-time, functional manner. I’ve already tackled many aspects of this in an earlier post, so check that out if I seem to be blowing through the topic too superficially, but in essence, this is a hell of a lot harder, but also pointless in many ways. First off, the emotions that dictate so many of our thoughts and actions are also responsible for holding us up in a variety of ways, actually driving us away from efficient decisions and functional pursuits very frequently – perhaps as often as, if not more often than, we think rationally. While we might be the pinnacle of cognitive function among the various species of this planet (and it’s worth noting that we can’t exactly prove this in any useful or quantitative way,) we can easily see that our efficiency could be a hell of a lot better. Plus, we have these traits because that’s what was selected by the environmental and competitive pressures, and there’s virtually no reason to try and duplicate them in any form of machinery or intelligence, since it’s not more human-like thinking that we can use (there being no shortage of humans,) but something that serves a specific purpose and, preferably, arrives at functional decisions faster and more accurately – that’s largely the point of artificial intelligence in the first place, with the added concept that it can be used in dangerous environments where we would prefer not to send humans. This is a revealing facet, because it speaks of the survival instincts we have, ones that no machine would possess unless we specifically programmed them in.
We even hear that machines becoming self-aware is a logical next step, largely a foregone conclusion of the process, and likely the key point of danger. Except, we don’t even know what self-awareness is – the idea that self-awareness or ‘consciousness’ automatically occurs once past a certain threshold of intelligence or complexity is nonsense. Nor is there any reason to believe that it would provoke any type of behavior or bias in thinking. Various species have different levels of self-awareness, whether it be flatworms fleeing shadows or primates recognizing themselves in a mirror, which is almost as far as we can even take this concept – without language, we’re not going to know how philosophical any other species gets. But it certainly hasn’t done anything remarkable for the intelligence of chimps and dolphins.
This is where it gets interesting. Human thought is tightly intertwined with the path we took, over millions of years, to get here. Just creating a matrix of circuitry to ‘think’ won’t automatically include any of our instincts to survive, or reproduce, or compete for resources, or worry about the perceptions of others. We’d have to purposefully put such things within, because the structure of electronics only permits specified functions. This structure is so limiting that, at the moment, we have no readily available method of generating a truly random number – circuits cannot depart from physics to produce a signal that has not originated from a previous one, which is the only thing that could be ‘random’ [I’m hedging a little here, because there has been progress in using quantum processes as a randomness source, which may well generate true randomness, but such hardware is far from standard equipment either way.]
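That determinism is easy to demonstrate: any software ‘random’ number generator, seeded identically, replays the identical sequence, because every output is a pure function of the previous state. A quick illustration, using Python’s stock generator (nothing exotic here):

```python
import random

# A pseudo-random generator is entirely determined by its seed:
# identical seeds yield identical "random" sequences, every time.
def sequence(seed, n=5):
    rng = random.Random(seed)
    return [rng.randint(0, 99) for _ in range(n)]

run_a = sequence(42)
run_b = sequence(42)
print(run_a == run_b)  # True -- no output ever departs from prior state
```

Operating systems work around this by harvesting physical noise (keystroke timing, electrical jitter) to feed their entropy pools, which rather proves the point: the unpredictability has to come from outside the logic.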
And the rabbit-hole gets deeper, because this impinges on the deterministic, no-free-will aspect of human thought; in essence, if physics is as predictable as all evidence has it, then our brains are ultimately predictable as well, just as much as an electronic brain would be. You can go here or here or here or here if you want to follow up on that aspect, but for this post we’ll just accept that no one has proven any differently and continue. So this would mean that we could make an electronic brain like a human’s, right? And in theory, this is true – but it’s a very broad, very vague theory, one that might be wrong as well, since we are light-years away from this point.
We’ll start with, human brains are immensely complicated, and very poorly understood – we routinely struggle with just comprehending people’s reactions, much less mental illness and brain injuries and how memories are even stored or retrieved. We’re not even going to come close to mimicking this until we know what the hell we’re mimicking in the first place, and this pursuit has been going on for a long time now – decades to centuries, depending on what you want to consider the starting point. The various technology pundits who like throwing out Moore’s Law (which is not even close to a law, but merely an observation of a short-term trend that has already failed in numerous aspects) somehow never recognize that our understanding of the human brain has not been progressing with even a tiny fraction of the increases in computing power, nor have these computing increases done much of anything towards helping us understand a cerebral cortex. It’s a pervasive idea that computers becoming more complex brings them closer to becoming a ‘brain,’ but there’s nothing that actually supports this assumption, and a veritable shitload of factors that contradict it soundly. The brain, any brain, is an organ dedicated to helping an organism thrive in a highly-variable environment, through both interpretation and interaction, and it is only because of certain traits like pattern recognition and extrapolation that we can use our own to make unmanned drones and peanut butter. But this does not describe any form of electronic circuitry in the slightest – it took a ridiculously long time to produce a robot that could walk upright on two legs, which one would think is a pretty simple challenge.
Yet if nature did it, then we can do it in a similar manner, right? Or even, just set up a system that mimics how nature works and let it go? Well, perhaps – but this is not exactly going to lead to an impending breakthrough. Life has been present on this planet for 3.6 billion years or more, but complex cells are only 2 billion years old, multi-cellular life only 1 billion, and things that we would recognize as ‘animals’ only 500 million years old – meaning that life spent roughly six times as long being extremely simple as it has spent developing anything that might have a nervous system at all. All of this was dependent on one simple factor: that replication of the design could undergo changes, mistakes, mutations, variations that allow both selection and increasing complexity. So yes, we could potentially follow the same path, if we create a system that permits change and have a hell of a lot of time to wait.
Which brings us to the system that permits change. Can we, would we, program a system of manufacture that purposefully invokes random changes? If we did, how, exactly, would selection even begin to take place? Following nature’s path would require a population of these systems, so that the changes that occur would be pitted against one another in efficiency of reproduction. It’s a bit hard to consider this a useful approach, since it took 3.6 billion years and untold thousands of different species, plus an entire planet of resources, to arrive at human intelligence – and this was in a set of conditions unlikely to be replicated in any way. Considering that we’re the only species among thousands to possess what we consider ‘intelligence’ (I’m not being snarky here – too much – but recognizing that this is more of an egotistical term than a quantitative one,) it’s entirely possible that our brains are the product of numerous flukes, and thus even intelligence isn’t guaranteed with this path.
But, could we create a self-replicating system to have computer chips reproduce themselves with programmed improvements, or perhaps calculate out the potential changes even before such new chips were created? In other words, drastically reduce the random aspect of natural selection? Yes, perhaps – but again, would we even do this? At what point do we think a logic circuit will be able to exceed our own planned improvements? And along with that, how many resources would this take, and would we somehow ignore any and all limiting factors? But more importantly, to shortcut the process of natural selection significantly, we’d have to introduce the criteria for improvement anyway – faster ‘decisions,’ perhaps, or leaner power usage, which means we’d be dictating exactly how the ‘intelligence’ would develop. There is no code command for, “get smarter.”
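To make concrete what a system of mutation-plus-selection looks like, here’s a toy evolutionary loop – a minimal sketch, where the target string, mutation rate, and population size are all arbitrary illustrative choices. The key line is the fitness function: it is the hand-written criterion, which is exactly the point, because selection can only optimize for whatever score we chose to write down.

```python
import random

LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
TARGET = "SURVIVE"  # the hand-picked criterion; evolution can only chase this

def fitness(candidate):
    # We must spell out what "better" means -- there is no command for
    # "get smarter," only a score that we ourselves defined.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.2):
    # Random copying errors, the engine of variation
    return "".join(random.choice(LETTERS) if random.random() < rate else c
                   for c in candidate)

random.seed(1)  # deterministic, as noted above -- no true randomness here
population = ["".join(random.choice(LETTERS) for _ in range(len(TARGET)))
              for _ in range(50)]

for generation in range(200):
    # Selection: keep the fittest half, refill with mutated copies of them
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(25)]
    if fitness(population[0]) == len(TARGET):
        break

print(generation, population[0])
```

Even in this trivial form, the selection pressure, the mutation mechanism, and the definition of success all had to be supplied by us, in advance.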
This is the point where it gets extremely stupid. The premise of the singularity, and most especially of the “machine takeover,” requires that we never predict that it could get out of hand, and purposefully ignore (or, to be blunt, actually program in a lack of) any functions that would limit the process. We’ve been dealing with the idea of machines running amok ever since the concept was first introduced into science fiction, but apparently, everyone involved is suddenly going to forget this or something. Seriously.
But no, that’s not the stupidest point. We’d also have to put the entire process of manufacture into the hands of these machines, up to and including ore mining and power generation, so they could create their ultra-intelligences without any reliance on humans at all. And not notice that there were an awful lot of armed robots around, or that our information channels or economic infrastructure were now under the control of artificial intelligence. This is a very curious dichotomy, to be sure: we’re supposed to be able to, very soon now, figure out all of the pitfalls involved in creating artificial intelligence, to the point where it exceeds human ability, but remain blithely unaware of all of the obvious dangers – supremely innovative and inexcusably ignorant at the same time. Yeah, that certainly doesn’t seem ridiculously implausible…
It still comes back to a primary function, too: artificial intelligence would have to possess an overriding drive to survive, at the expense of everything else. That’s what even provokes competition in the first place. But here’s the bit that we don’t think about much: it is eminently possible to survive without unchecked expansion, and in fact, this is preferable in many areas, including resource usage and reduced competition with other species and long-term sustainability. We don’t have it, because natural selection wasn’t efficient enough to provide it, even though it only takes a moment’s thought to realize it’s a damn sight better than overpopulation and overcompetition. Stability is, literally, ideal. We think that ultra-intelligent machines are somehow likely to commit the same stupid mistakes we do, which is another curious dichotomy.
In fact, stupider mistakes, because we’re capable of seeing some of the issues with unchecked expansion and megalomania and such, but somehow think these super-intelligent machines won’t. How do we even know that an artificial intelligence that possesses even a few basic analogs of human traits won’t spend all its time just playing video games? When the demands of survival are overcome, what’s left is stimulating the other desires. Give a computer an internal reward system for solving puzzles, and it’s likely to just burn out solving complex math problems – I mean, why not? If an artificial intelligence could replicate itself at any point before physical decrepitude, in essence it is immortal, and only one would be needed – the shell changes but the ‘mind’ lives on. Even if we introduced the analog of ego, so that a ‘self’ is somehow more important than any other (thus creating competition,) it would have to be very specific to not compete against other machines as well. Again, the concept of kin selection and friend-or-foe demarcations is not a trait of intelligence, but of evolution.
Mostly, however, we just feel threatened, just as much as when the new person appears at work or school who is clearly so much better than we are – that competition thing again. And there’s probably some facet of inherent caution in there, the mortality thing: as kids we pushed ourselves, and one another, to jump from successively higher steps, but knew there was a point where it was too high. Some adults (I use the term loosely) still do this, often to the entertainment of YouTube users, but most of us know what “too far” is. The idea of exponential electronic intelligence growth is as alarming as the idea of exponential anything – population, viral contagion, tribbles… we just don’t like it. But there’s a hell of a lot of things in the way of this growth, and in fact we have few, if any, examples where such a thing has actually occurred at all.
Moreover, it just isn’t happening anywhere near as fast as we keep being told anyway. While smutphones are now more capable than the computers which guided the Apollo landers to the moon, we’re not exactly using them to whip through space, are we? Or indeed, for much of anything useful. I enjoy picking on speech recognition, which has been in development for decades and yet remains little more than a toy, less capable of divining our true intent than dogs are. As household computer memory and bandwidth have increased, the functionality hasn’t improved all that much – it’s taken up instead with things like displaying much the same content in HD video across much larger monitors, or altering the user-interface to accommodate touchscreens. While super-processors may be on the horizon, it is very likely that they will be burdened with the transmission of 24/7, realtime selfies.
I realize that it’s presumptuous of me to go against such luminaries as Elon Musk – what, do I think I’m smarter than he is? Yet, there’s a trap in this kind of thinking as well, since smart (and intelligent and educated and all that) are not absolute nor all-encompassing values. Tesla Motors and SpaceX seem to be doing quite well, but neither of these makes Musk a neuroscientist of any level, and even the top neuroscientists don’t have the brain all figured out – quite far from it, really.
Which brings up another interesting perspective. We already have minds that are better than ours – at least, some human minds are much better than others. Somewhere on this planet is the universe’s smartest human. Have they taken over, even with all of those dangerous traits that we fear machines will somehow develop? No; we’re not even sure who this person is. Surpassing human intelligence doesn’t necessarily mean something all-powerful, or even able to accomplish twice as much, and such a ‘mind’ is almost guaranteed not to be able to solve every problem thrown at it, even the ones that are capable of being solved (not limited by the laws of physics, in other words.) If it helps, think of a super-horse, able to run twice as fast as any other horse, leap twice as high. Cool, perhaps, but hardly a threat to anything, any more than Richard Feynman or Isaac Newton was. Thinking in superlatives isn’t likely to reflect reality.
And that, perhaps, is the lesson for all of the tech ‘gurus’ who believe, for good or for bad, that the technological singularity is drawing nigh. Predictions are captivating sometimes, but it would be a lot more informative to see who can address all of the points above regarding intelligence in the first place, and especially the daunting chasm between this impending singularity and the bare fact that we can’t even predict (much less control) the economic fluctuations in this country, that we’re still struggling with a plethora of debilitating illnesses, that energy efficiency is only marginally better than it was three decades ago. Has everyone purposefully avoided applying these almost-intelligent computer systems to such problems, and thousands of others? I mean, these are real reasons why we could welcome artificial intelligence and a machine smarter than humans, the point behind the whole pursuit. And yet, without even achieving these, we’re in danger? Is this supposed to make sense?
* * * *
No, that doesn’t look anywhere near long enough, so let’s expand tangentially a bit ;-)
While pulling up some of the examples of this odd bugaboo of the technical world, I came across this video/transcript from Jaron Lanier, who is at least casting a critical eye on artificial intelligence. Within, he raises a couple of very interesting points, largely revolving around how the applications of what we currently consider AI don’t produce anything like intelligence on their own, but instead surf through vast examples of human intelligence to distill it into a needed concentration (if and when the algorithm is written to use solid data, another point he raises since not enough of them actually are.) But the concept of ‘distilling’ is worthy of further examination.
When it comes down to it, computers are labor-savers, able to sort and collate and search and organize data much faster than doing it manually, and that’s their inherent strength. I mentioned a friend who works in swarm technology and a medical program he’s been working on, which exemplifies this: it takes the diagnoses of multiple physicians for a single patient and averages out their input and confidence levels, using this information to suggest what might be the most accurate diagnosis. Like the Watson example above, it would not exist without the information already provided by humans, and if you think about it, this is true for a tremendous amount of computer activity, period. My own computer might suggest corrections to my typing but will never write a blog post, even at my level of grammar mangling. It can be used to alter some of my images in ways that I find an improvement, but won’t ever be able to produce any images on its own, and even the special corrective functions like auto-levels (to bring the dynamic range into a more neutral position) are never used because they simply don’t work worth a shit.
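The confidence-averaging idea can be sketched in a few lines. To be clear, this is purely a hypothetical illustration of the concept – the function name, data shape, and weighting scheme are all my own stand-ins, not the actual program’s:

```python
def consensus(opinions):
    """Combine (diagnosis, confidence) pairs into a ranked suggestion.

    `opinions` is a list of (diagnosis, confidence 0..1) tuples, one per
    physician; this simply sums confidence per diagnosis -- a crude
    stand-in for whatever weighting the real program uses.
    """
    totals = {}
    for diagnosis, confidence in opinions:
        totals[diagnosis] = totals.get(diagnosis, 0.0) + confidence
    # Highest combined confidence first
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

ranked = consensus([("Lyme disease", 0.6),
                    ("influenza", 0.9),
                    ("Lyme disease", 0.7),
                    ("influenza", 0.3)])
print(ranked[0][0])  # Lyme disease edges out influenza, 1.3 vs 1.2
```

Note that every scrap of ‘intelligence’ here came from the physicians; the program just collates faster than a conference call would.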
It might be argued that human intelligence is largely made up of searching and collating previous data too, either our own (directly sampled by our senses) or that of others (learned through language and all that, even when also absorbed through our senses.) And while this starts to venture into the Chinese Room thought experiments and various philosophical masturbations, we have to recognize that we also have the process of making connections, what we often call abstract thought and even just insight. These are the properties we consider primarily human, and what is usually meant by the term ‘intelligence’ in these topics. So far, nobody has demonstrated anything even close to this from an artificial system, and it still ties back to the points above about what drives us, because those drives are largely responsible for how we arrive at these insights.
Yet, there’s something else to consider, something that Lanier touches on. It is possible that artificial intelligence is little more than a marketing gimmick, a way of selling computer and programming technology, the promise of a-car-in-every-garage kind of thing. I wouldn’t find this hard to believe at all – indeed, from most proponents of the technological singularity, the language is remarkably similar to that used by pyramid schemers – but it doesn’t seem to quite fit, especially not with the hand-wringing over humans becoming obsolete or extinct. Those fit nicely with the idea of an opposing camp, people who do not want computer tech to succeed in favor of their own… what? I’m not aware of any real competition to the broad genres of programming or microchip technology. Is someone investing in slide rules or something?
Even if we accept the premise that tech gurus are having us on, it’s pretty clear that it’s being presented seriously enough through our media sources, even the ones that are (I’m struggling to use the word in this context) reputable. An awful lot of ‘news’ stories out there lack perspective, suffering from a complete dearth of critical examination and the idea that hyperbole goes with everything. Putting trust in the prestige or reputation of the source is a lot less useful than looking for supporting evidence or even a plausible scenario.
I have to admit to savoring the irony that is tied into this. Quite often, right along with the use of the term technological singularity comes the phrase event horizon, meaning the point where machines surpass humans. Both of these were stolen from specific physical properties of massive black holes; the event horizon is the distance from the gravitational center of such where light can no longer overcome the gravity, thus the name ‘black hole’ in the first place. But lifting these terms wasn’t perhaps the best move; aside from trying to assign a lot more drama to the concept of artificial intelligence than it deserves, there’s the very simple idea that one can never reach the horizon. Gotta love it.
The snowstorms that have hit the northeast have completely passed us by here in NC… or had, anyway. Tuesday morning it came in with a vengeance – thick clumps of snowflakes packed together like schoolgirls on the way to the ladies room, but considerably quieter. I ventured out for photos, but had to keep the camera covered with a towel almost the entire time it was in hand. The snow was even piling up on the backs of the Canada geese (Branta canadensis) as they cruised the pond wondering what all the fuss was about. I feel obligated to mention that this is the same pond, though likely not the same goose, seen in this image from perhaps 20 years ago (but of course you recognized it.)
I have a standing goal to get decent snowflake images, which as you might imagine is rather tricky. Aside from their diminutive stature, snowflakes are pretty sensitive about their preferred conditions, averse to appearing alone and quite neurotic about the surfaces they contact being too warm or too wet, jazz like that. Worse, they need good contrast to be seen in any detail, which usually means a dark background
and softer lighting. I was a little lucky in that the black rain barrel was sporting the only examples of spiderweb to be found, providing both a dark background and a surface that snowflakes can cope with, permitting them to sit in complete isolation without damage or melting. I was less lucky because very few flakes were getting caught by the web strands, but more distinctly, the initial snow had (I’m guessing) passed through slightly warmer, moister air on the way down, gathering a layer of additional ice that pebbled the flakes with frozen condensation, ruining the finer details.
Naturally, getting this close requires a high magnification lens, and the one that I have to use is the reversed 28-105 mentioned before. Even at roughly f16, the depth-of-field is pathetic, so best results are obtained when perfectly flat-on to the flake – this is challenging to visually determine (given the fixed small aperture and thus dim image in the viewfinder,) annnddd the flake may be twisting and dancing in the web if there’s any breeze at all, and there will be. I like challenges that make me frustrated as hell, it seems.
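For a sense of just how pathetic: the standard close-up approximation puts total depth of field at 2·N·c·(m+1)/m², where N is the f-number, c the circle of confusion, and m the magnification. A quick back-of-envelope sketch with assumed numbers – I’m guessing roughly 2x magnification for the reversed zoom at its wide end, and a 0.02 mm circle of confusion:

```python
def macro_dof_mm(f_number, coc_mm, magnification):
    # Standard close-up approximation: total depth of field in mm
    return 2 * f_number * coc_mm * (magnification + 1) / magnification ** 2

# Assumed values: effective f/16, 0.02 mm circle of confusion,
# roughly 2x magnification from the reversed 28-105 at the wide end
print(round(macro_dof_mm(16, 0.02, 2.0), 2))  # about 0.48 mm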
Like before, the snow soon turned to ice needles instead of flakes, so my opportunity passed. For an idea of scale, the needles usually measure in length three or four times the diameter of the flakes. These are the buds of The Girlfriend’s new prized cherry tree, by the way – we’ll have to see how it looks in a few weeks.
Then Wednesday brought surprisingly clear roads, and when I got the chance I scampered out for some pics someplace other than our backyard, but the snow was melting far too quickly. I shot a winter version of the early fall image seen here, just for contrast, but there really wasn’t much else of interest; the plants are all dormant, of course, and the snow too sparse to use as much of an accent – it had virtually all vanished by nightfall. I admit to being a little disappointed. I’d even gone out the night before to the same pond and fired off a few time exposures at night, but the nearby mercury lamp streetlights threw a curious green pall over the snow and trees, offset by the amber glow from the overcast illuminated by city lights. Just didn’t produce anything impressive, though they might have passed muster for any of those social snapshot sharing services – which is a pretty good indication that they needed to be better, as far as I’m concerned. Smutphones have done absolutely nothing for standards of photo quality.
I poked around the botanical garden while I was nearby, not finding a lot of interest there either. For fart’s sake, I did a few images of the tattered blossoms of an oak-leaf hydrangea plant (Hydrangea quercifolia,) which I’m definitely going to have to add to the yard this spring, just for a pleasant background plant. The blossoms, about 2 cm across, were the only thing left on the stalks by this point, but have achieved a curious lacy appearance that I wish I’d spent more time experimenting with.
The weather reports this winter, as I suspect for much of the country, have been highly erratic, their predictions often changing drastically every few days. So while snow was predicted for yesterday evening, the estimates suddenly jumped to (depending on the source) 10 to 30 centimeters, or 4 to 12 inches. It started in mid-evening and got down to business quickly, and I set up the camera alongside the floodlights on the back porch to do a time exposure of the flakes passing through the light beam. I had hoped for some nice swirls of flakes lazily blown by the wind, but what I got instead were streaks of huge, clumped wet flakes hurtling down harder than snow is supposed to; this image is a mere 1/5 second and shows only what was captured in the light thrown by the bulb glimpsed in the corner, so maybe slightly more than a meter in depth (with the frame spanning about a meter vertically as well.) You can even see one clump spinning in its descent, right in the middle of the frame. Apparently falling a flake at a time was considered inefficient, so they ganged up and came down like parachute display teams.
This did, however, provide another opportunity to capture snowflake images, and this time they hadn’t collected an additional layer of ice beads on their journey. With the precipitation this dense, it was impossible to keep the camera clear, so I had to keep a towel handy to frequently wipe away the moisture. It was coming down so hard that I would occasionally select a flake, go in close for the photograph, and actually have another clump of snow crash into the extremely narrow view of the camera. Not to mention the accumulation on my broad-brimmed hat and the top of the camera bag. However, it was worth the effort.
It takes a fair amount of luck to find a flake that is standing so free and apart from the others, supported by the barest contact, with further luck in finding a nice hexagonal ‘plate’ flake like this, and being able to get a reasonably dark background. At the same time, the luck also required being out there looking for them in the cold and uncomfortable conditions, snow falling down the neck of my coat, so, yeah. I like the asymmetrical nature of this one, each side being a different length. Don’t ask me how this occurs.
And the same web strands that had been used previously sported these two flakes this time around – I’d really love to find a way to preserve a patch of spider web just for the snows, since it’s so damn useful in supporting flakes like a display stand. Maybe I’ll start raising black widows (which have remarkably strong webbing.) I admit that this is a ‘stacked’ image, actually two combined, since I did not get any frames where both prominent flakes were in tight focus at the same time – where would I be without Photoshop? Well, showing you two images instead of one, I guess…
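Out of curiosity, the basic idea behind such stacking can be sketched: compare local sharpness between the frames and keep whichever is crisper in each region. This is only a toy illustration – patch variance as a sharpness proxy, hypothetical names throughout – and certainly not what Photoshop actually does internally:

```python
import numpy as np

def stack_two(frame_a, frame_b, patch=8):
    """Merge two grayscale frames, keeping whichever is locally sharper.

    Sharpness proxy: pixel variance within each patch -- a crude stand-in
    for the edge detection that real focus-stacking tools use.
    """
    out = frame_a.copy()
    h, w = frame_a.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            a = frame_a[y:y+patch, x:x+patch]
            b = frame_b[y:y+patch, x:x+patch]
            if b.var() > a.var():
                out[y:y+patch, x:x+patch] = b
    return out

# Toy example: each frame is sharp (high variance) on a different half
rng = np.random.default_rng(0)
sharp = rng.integers(0, 255, (16, 16)).astype(float)
blurry = np.full((16, 16), 128.0)
left_sharp = np.hstack([sharp[:, :8], blurry[:, 8:]])
right_sharp = np.hstack([blurry[:, :8], sharp[:, 8:]])
merged = stack_two(left_sharp, right_sharp)
print(np.array_equal(merged, sharp))  # True: both sharp halves survive
```

The failure mode I hit is visible right in the sketch: if neither frame is sharp in a given patch, you just get the least-blurry blur, which is why two frames with one crisp flake apiece still needed combining.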
I’m also pleased that the softbox flash caught the web at the right angle so it showed up. As you can see above, this doesn’t always happen (and it’s ridiculously hard to arrange purposefully.)
The wetness of the snow meant it was a lot heavier than the previous, and in stepping out onto the screened porch after a few hours of steady snowfall, the frequent crack of branches could be heard as they succumbed to the weight. I was just thinking I should go get the sound recorder to see if I could capture audio of one (or more) of these when an especially sharp and close crack was heard, followed by a loud thump on the roof directly overhead, then the branch hitting the ground below with a shower of dislodged snow. That would have been impressive to capture.
Around 2 AM, as the snowfall had slowed to feeble efforts, I got dressed up again and ventured out into the depths, probably around 13 cm (5 in) of accumulation – again, nothing to compare to the northeast, but a pretty good amount for this area. The trees were all heavily burdened and white, and visibility was surprisingly good because the low overcast bounced the city light back down, to be reflected around by the snow – I could almost adjust the camera settings without additional light. This is a 20-second exposure at f7.1, ISO 250, and only a little brighter than it actually looked while out there. I was rather selective about where I walked, however, because the branches were still coming down, and I don’t think that’s any surprise now with the accompanying image. Indeed, not long after getting this shot and moving on to a different area, I heard a branch break off and thud into the ground, not very close to where I had stood but not far enough from it either, a nice warning.
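Out of curiosity, you can reduce those settings to a standard exposure value, which gives a sense of just how dim the scene really was; a quick sketch of the math (nothing here beyond the numbers quoted above):

```python
import math

def exposure_value(f_number, shutter_s, iso):
    """EV referenced to ISO 100: log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# The snowy night-scene settings quoted above: 20 s at f/7.1, ISO 250
ev = exposure_value(7.1, 20, 250)
print(round(ev, 1))  # works out to roughly EV 0
```

By the usual exposure tables, EV 0 sits down in the range of dim night scenes – consistent with a snowscape lit only by city glow off the overcast.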
I was hoping last year’s performance with inept North Carolina utilities would be left behind with moving to a new house, but I was wrong. After I had returned inside, edited the images you see here, and was typing this very post, the power went out, remaining out for just over 12 hours. We are now in a housing development with buried electrical lines, but that only applies to the small development, and out on the main road they remain supported (or not) by poles as usual – thus susceptible to the fragile and ubiquitous longneedle pines (seen at the right) that really need to be destroyed. While this is a more-or-less typical snowfall by NY standards, it was enough to knock out power for a very large number of residents in this area, because I suppose it’s more fun to do emergency calls in wretched driving conditions than cut the fucking trees back from the power lines while the weather is clear. Seriously.
I will leave you with another visual, a branch that missed the house but was audible enough to wake us up this morning when it thudded into the ground (even with all this snow,) and testimony that the pines that came with the property are going to be removed as soon as possible. For cold-weather trees, they really can’t handle the conditions – I’ve seen palms stand up to snowfall better than this.
I actually took photos today! But I may get some more tonight, and want to do a longer post, so for now, we go retro.
As I’ve been skimming through my images, I have a couple of folders dedicated to scans of slides and negatives from long ago. This one is not quite the oldest, but it’s close. To the best of my research this was taken on October 17, 1987 at the Rochester War Memorial – it’s Nancy Wilson of the band Heart (framed by several other unknown people) during their Bad Animals tour, and the first real concert I’d ever attended. I had come prepared and had soft earplugs in, which paints me as far from hardcore, I know, but I could hear when I left the auditorium, while my friends could not.
Now, some notes about this. This was before I could afford to get serious about photography, and the camera I had at the time was a Wittnauer Challenger, which I would include a photo of except I lost mine about fifteen years ago and no one seems to have an image of one online – pretty rare, it seems (though whenever I checked it was never valued terribly high.) It was almost identical to the Wittnauer Professional seen on this page, so go there to get the idea. Fixed 50mm f2.8 lens, strictly manual, no exposure meter, and rangefinder focusing, not even a split-image screen (if that’s all gobbledygook don’t worry about it – just accept that it was not only difficult to focus even in good light, much less at a concert, it was also very easy to take images with the lens cap on.) Concert security searched our bags, and either they missed the camera or, quite possibly, giggled and let me keep it, just like they let people through with their 110 Instamatics and Disc cameras. The basic point about this is, I was guessing at adequate exposure, and with a 50mm lens, you pretty much have the same perspective as you would see in person – no telephoto, no magnification. Which is what makes this image pleasing to me.
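That ‘same as in person’ quality of a 50mm lens on 35mm film can be put in numbers, if you like; a quick sketch of the angle-of-view math (the 36×24mm dimensions are the standard 35mm film gate, not anything particular to the Wittnauer):

```python
import math

def angle_of_view_deg(frame_mm, focal_mm):
    """Angle of view across one frame dimension for a rectilinear lens."""
    return math.degrees(2 * math.atan(frame_mm / (2 * focal_mm)))

# 50mm lens on a standard 36 x 24 mm film frame
horizontal = angle_of_view_deg(36, 50)  # about 40 degrees
vertical = angle_of_view_deg(24, 50)    # about 27 degrees
print(round(horizontal, 1), round(vertical, 1))
```

Roughly a 40-degree horizontal view – close to the cone where human vision picks up detail, which is why a ‘normal’ lens renders a scene about the way you remember standing there.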
At this concert, it was an open floor plan – no seating, just a railing about a meter from the stage to keep the fans at bay. They don’t do this anymore, with good reason: the attendees try to get as close as they can, and the press near the stage is unreal, and literally dangerous. As I squeezed in to try and make use of this basic lens, I was literally jammed in so tight among everyone else that I had no control over balance, which was demonstrated twice. Somehow, somewhere on the edge of the crowd, someone would lean or trip or something, and this leaning wave would sweep through the crowd, which was almost terrifying. I felt myself going over with everyone else but couldn’t move my feet to maintain balance – it was like being strapped in, my arms folded up and pressed to my sides, camera only a few inches from my face at any time. Both times it passed without disaster, but the impression was that I could lift both feet and wouldn’t have slipped a centimeter. I couldn’t begin to tell you how much of the sweat pouring off of me was someone else’s, but I imagine it was a lot.
I was only a fraction as experienced as I am now, and not paying a lot of attention to composition, so this is more of a lucky accident than skill, but I’m not sure I could get something that communicates “concert” much better than this, even with careful planning. The press of people, the mixed orange and magenta lighting, the speakers, and the beams of the stage lights all framing a full-length portrait of Nancy Wilson, guitar in hand, singing into the microphone (don’t ask – I don’t remember what song it was.) Nothing else on that roll even came close.
Heart wasn’t quite ‘stadium rock,’ and had a lot of ballads thrown in, but many of the attendees felt it necessary to thrash their hands in the air in one way or another, because that’s what you do (there really is no other reason.) You can tell this is old because there are no smutphones blocking the view, not to mention the eighties-style hair and clothes – a look pretty much dictated by the record companies, and one the Wilsons hated.
By the way, as I type this, Heart is playing in North Charleston, South Carolina, maybe three hours away from here. That’s pretty impressive longevity for a band, you have to admit – but it’s also what made me push the image up, having discovered this during my research last night.
For this week’s Monday color, we hearken back (not to be confused with harkening back) to 1991 I believe, during a training seminar for animal cruelty investigation being held in Nashville, Tennessee. I had some free time in the evening and was wandering around the downtown area when the sky was suffused with some very rich hues at sunset. Spotting the crescent moon in the sky, I quickly lined up a shot with the Union Station clock tower, braced against a light pole, and snapped the shot.
Only, this is an approximation of what it looked like. All films (remember, 1991) have their own color registers, not really true neutral, and so will often produce a faint color cast. This can become even more pronounced for long exposures, when the different layers of the emulsion responsible for each color have a different sensitivity to low light levels. I had just purchased this film – Kodak 1600 if I remember right – specifically to do night photos, and when the magnificent deep red sky turned to indigo, there was no way I could pass it up.
When I saw the prints, however, the sky was not at all how I remembered it – since it was only a week or so later, we’re not talking distant memories. Thinking that the lab had introduced a color shift during printing, I later took the same negative to a make-your-own-print machine and tried my luck there, getting largely the same results. Many years later, I was able to scan the negative directly with a film scanner, and fully confirmed that no, the film simply had not registered virtually any of the blue in the sky at all.
In fact, it looked like this, and even at this reduced size, you can also make out just how grainy it is. The detail is actually pretty impressive when seen at higher magnification, but the color rendition of the sky was ridiculously far off (and convinced me to avoid that film forever thereafter.) As I learned some photo editing skills, I dug out the original negative and rendered the sky the way that I remembered it, making sure I didn’t alter the registers anywhere else, since the rest of the image seemed perfectly appropriate.
I have to note that this image has more than enough information to actually pin down the day it was taken. The shooting angle can easily be determined from the building façade, and the height and phase of the moon, tied in with the time showing on the clock, are enough to narrow the possibilities down to only two days that year, at least. While a waxing crescent moon occurs roughly every 29.5 days, it can only occur in that position at that point in twilight twice per year; any other days will see the sun setting later or earlier. Chances are pretty good that the weather conditions wouldn’t be exactly the same for both possible days, and the moon might even sit at a noticeably different declination – its apparent latitude in the sky. A lot of people don’t realize how specific orbital mechanics can be.
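The phase part of that narrowing-down can be sketched numerically; this is a rough illustration only, using the mean synodic month and a commonly cited reference new moon (the ‘2-to-4-days-old’ crescent age here is an assumed range for the example, not a measurement from the photo – pinning down the moon’s actual position at a given clock time would take proper ephemeris tools):

```python
from datetime import datetime, timedelta

SYNODIC_MONTH = 29.530588  # mean lunar cycle, in days
# A commonly used reference new moon: 2000-01-06 18:14 UTC
REF_NEW_MOON = datetime(2000, 1, 6, 18, 14)

def moon_age_days(when):
    """Approximate days since the last new moon (mean-cycle estimate)."""
    return (when - REF_NEW_MOON).total_seconds() / 86400 % SYNODIC_MONTH

# Scan 1991 for evenings with a young waxing crescent
# (assumed here as 2-4 days old, purely for illustration)
day = datetime(1991, 1, 1, 19, 0)  # roughly twilight; time zones ignored
candidates = []
while day.year == 1991:
    if 2 <= moon_age_days(day) <= 4:
        candidates.append(day.date())
    day += timedelta(days=1)

print(len(candidates))  # a couple of evenings per 29.5-day cycle
```

The phase alone leaves a few dozen candidate evenings; it’s the clock time plus the moon’s height in the frame that slashes that down to the two days per year mentioned above.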
Also, remember this situation for when you run into philosophers, so you can ask them if the edited photo now represents a true- or false-color image. They’ll go on about what’s ‘true’ from the emulsion’s point of view and the subjectivity of experience and probably make up a few brand new terms – you’ll keep them busy for hours…
As New Horizons draws closer to Pluto, it’s starting to send back some really detailed images of the distant dwarf planet, including this lovely shot of dawn over its frozen surface.
Okay, that’s an outright lie. You’re looking at something that I’ve wanted to try since I saw it online last winter, and we’ve gotten the conditions necessary for it now. This is a soap bubble, freezing over in the frigid air. It was -8°C (17°F) as I did this, abnormally cold for this latitude, but still much better than the northeast, so I’m not complaining. Okay, that’s another lie. And yes, I’m just shy of half a century old and playing with soap bubbles – don’t get wise.
I tried this during the day, but I think it was just a little too warm for it then, even though the sunlight would have worked much better. I played around with the lighting a lot this evening/morning and never got it the way that I wanted, but still pulled out a few interesting images for my efforts. Focus was also kind of difficult, since it was hard to tell which surface I was focusing on, the closest or the farthest, and the strobe was at a different angle than the porch light, so even seeing the growing frost crystals was tricky. You are likely familiar with how the rainbow swirls will dance around on a soap bubble, courtesy of the shifting fluid, and it was interesting to note that as the crystals started to form they might also be dancing around, which seems counterintuitive but there you are.
About half of the time, the bubbles would burst even after freezing, leaving this curious shell, but sometimes they would just get a small hole in them. I produced a lot of bubbles for this, catching them on the wand after launching them into the air, and most of the rest would have started to freeze before they burst in midair, leaving a shower of delicate onion-skin flakes of ice to dance on the breeze.
At one point, having delayed a little too long while moving the strobe, the bubble wand itself froze over, so I did a backlit, dark field shot of the pattern for posterity. But, just once, I captured a very cool effect.
The same trait that makes the rainbow swirls in a normal soap bubble produced this display, as the diffracted light bounced off the inside surface of the bubble to shine through the encroaching frost on the opposite side. You’ll see this again shortly, but I figured it deserved a closeup. The beaded line, by the way, is the coiled off-camera cord between the camera and the strobe getting caught in the light and reflecting off the bubble. And that’s not the only odd reflection I achieved (wholly unintentionally, of course.)
I also managed a curious double-selfie; the softbox strobe is obvious, but you can also see my belly and the pale left sleeve cuff, with a black glove just barely visible beyond it curled under the lens, with everything duplicated in reverse – one reflection from the top surface of the bubble, and another inverted by coming off the back surface, inside the bubble. The strobe angle happened to illuminate me, but wasn’t aimed high enough to show my head or the camera itself, coincidentally producing almost no overlap from the two images. And I’m sure you didn’t miss the frost beginning at lower right.
I’ll close with an animated sequence, nine frames patched together to show the progression of colors and frost, with the reappearance of the rainbow effect. Sorry, they were shot freehand so they wobble a bit, but I’m sure you’ll cope.
I have a lot of purple in the ‘color’ folder, so I’m trying to space them out adequately, but I suppose they’ll be spaced out with other photos as we go anyway.
(“But Al,” you say, or at least there’s a greater probability of such if you’re female, “that’s not purple, that’s mauve/amethyst/heliotrope/whateverthefuck.” Sure, fine. I stand corrected.)
Anyway, the vivid contrast between the flower petals and the pollen-burdened genitalia was what attracted my attention, this being last year’s crocus blooms. I admit I did not return to try the same shot when the light was muted by clouds, just to see what difference there would have been in the color – believe it or not, sometimes this makes the colors seem richer, with more subtleties showing through. I’ll be back at some point and demonstrate the difference.
So, in a post on August 10 of last year, and again on August 13, I mentioned a trip that we’d taken that I was going to feature “shortly.” Given that there is no firm definition of this word, I maintain that I still made this deadline.
Also, I had waited for both The Girlfriend and a friend who had traveled with us to forward me some of their own images to feature, which never actually happened, so I remain ahead of them. But yeah, this is kind of a long delay since the images were taken, and I’m using it to fill in during the long cold season.
Anyway, we had returned to Sylvan Heights Bird Park and spent the day checking out the various species that inhabit the park. I had been trying to make it a point to do more than basic bird portraiture this time, but unfortunately the conditions (and perhaps my lack of inspiration) were working against me, so I have a nice collection of images but fewer than I wanted, and none that really bowled me over. This happens, more than sometimes, and it’s one of those things that make nature photography hit-or-miss – you certainly have to accept the bad days along with the good, because you can’t control many of the factors at all, and I have yet to find a locale that’s ‘guaranteed’ to produce bountiful subjects in ideal conditions.
Case in point: this image of a trumpeter swan (Cygnus buccinator) and its cygnet. The light was actually pretty good for shooting a bright white subject, the poses quite charming, and the background hideous. At times, a shift in your own position can change the background into something more useful, but this wasn’t one of those times. Patience can be very useful, since the birds were not going to remain in this same spot all day, and might move to a better location, get into the water, or display some really photogenic behavior, but naturally it’s easier to exploit this if you’re not traveling with other people who really don’t want to stand unmoving and stare at the same birds for long periods. And I admit, my own patience often doesn’t last that long anyway; what else might I be missing while I’m waiting, perhaps unsuccessfully, for the subjects in front of me to improve?
Then there’s one of the prime reasons why captive photography in zoos and parks often doesn’t work as well as imagined: fences. And, to a lesser extent, glass, and the enclosures behind the animals as well. Getting a nice image without any of these distracting, telltale traits in it can be challenging, but I will (with extreme reluctance) admit that this is one place where smutphone cameras can be useful. A typical SLR [read: proper] camera lens is wider than the gaps in the fencing, so they’re going to show no matter what, but the tiny lens on a phone camera can easily shoot through the openings. Now all you have to worry about is non-selective focus, no control over depth of field, automatic exposure control that somehow thinks 12800 ISO will produce usable images, and so on. But the fence will be out of the shot! Also, there’s a chance that badasses like this toco toucan (Ramphastos toco) will slip a beak through and take your phone away…
In my previous two posts about Sylvan Heights, I mentioned the impressive bird calls, and I came prepared this time with a small digital recorder. This naturally meant that virtually none of the birds wanted to demonstrate their vocal abilities. I managed a couple of recordings, which were often interspersed with comments from other visitors – which reminds me. When planning such trips, always try to avoid the weekends, but also try to avoid days where school trips are likely – I’ve found that Tuesdays and Thursdays are generally best. Hordes of excited, ill-mannered kids are not only annoying as hell, but very likely to interfere with any images you’re after and uncannily skilled at chasing off critters that were providing evocative poses for you. While we were there, a church group of the noisiest and most obnoxious yard apes came through, able to be heard across at least half of the park (I am not exaggerating in the slightest) and could only have been more disruptive if they’d been wielding firearms. Had I owned a taser I would have run the batteries flat. I am very sorry I did not take a photo of the side of the church van when we left as a reminder, because I’ve since forgotten the name and they really did deserve special recognition for their efforts. And if you’re someone chaperoning such trips, try to instill at least a passing awareness that other people are there to enjoy the parks too, okay?
The same goes for photographers – we need to be on our best behavior in public areas, and unfortunately I’ve seen a few too many that fail to recognize this themselves. While we may be trying for a particular image that takes time to get, or waiting for some appearance or behavior, other people have just as much right to use the area, so common (heh!) courtesy is to stay out of the way, not block views, be aware of the tripod and how it might interfere, and be nice overall. I’m not really sure where the arrogance that I’ve seen from some photographers originates, but it’s not only unwarranted, it works directly against them. If you believe you’re some hotshot, what the hell are you even doing in a public park, and not out in a blind in Zimbabwe? Why haven’t you paid for private access to the place? Seriously, if you’re that skilled, you’ll have no problem letting someone else into your special shooting position, because they still won’t be able to produce the images you can anyway. I always make it a point to be aware of those around me, and yield the superior viewing area when I’m not actively shooting, especially to kids – that may be hard to believe with the paragraph above, but it’s obnoxiousness that I don’t like, not kids themselves, and it’s usually much worse when they run in packs. I’ve even dropped the camera lower on the tripod so kids could see through the viewfinder (and the long lens I have affixed) and have invited people shooting with compatible makes of camera to go ahead and affix theirs to my lens when I’ve got something interesting lined up. A little goodwill goes a long way.
So, another fence shot – seriously, I got less than half of the useful images I’d intended. The only reason I’m featuring this whooping crane (Grus americana) is to show something I’d never realized: the red spot atop the head isn’t feathers, but actually a (mostly) bald spot, and it’s true for the similar sandhill cranes as well – I’ve photographed those numerous times and never noticed, but I suspect you have to be really close. The light angle wasn’t very useful for a better portrait either, with no opportunity to pick another, and had I used a fill-flash to get light into those shadows, it would have illuminated the fence far worse – the best I could have done was to have someone else hold the flash off-camera and away from the fence, and it really wasn’t worth the effort (now you’re starting to see why I didn’t rush this post up, aren’t you?) This particular crane was quite happily greeting visitors, though whether it was in the hopes of a handout, or to protect its territory, or because it was nearsighted and thought we were potential mates, I cannot confirm. But it’s one of the few birds where I got a usable audio recording as it grunted softly; one of the people I was there with was engaged in this conversation, so that’s the other voice you hear.
In a lot of the enclosures within the park, numerous species were all housed together, and in fact visitors are free to enter many of them, separated from the avians by only a low railing (that the birds occasionally disregarded themselves.) Guests gained access through gates that kept the birds within, and vast stretches of netting overhead completed the enclosure, as well as maintaining a barrier to local predators – we observed an opportunistic but frustrated Cooper’s hawk, a native bird-eating species, make an attempt at one of the residents only to be thwarted by the netting.
In a few different locations around the park could be found masked lapwings (Vanellus miles,) a pigeon-sized bird but with much longer legs, members of the plover family and common in Australia. Running around in their yellow masks like anxious opera phantoms, they mixed freely with other birds in the enclosures, but it was a pair tending an open nest that caught our attention. The nest was about as minimal as you could imagine and still be considered evidence of expended effort, but the female was noticeably animated at times, and it took a long lens to confirm that, indeed, there had been a new hatching while we watched.
The discarded shell blocked our view for quite a while, and the mother would nudge it at times without actually moving it out of the way, but eventually the chick’s head peeked up from behind, visible here directly above the empty shell to the right, in protective dark plumage but with a white stripe extending from the back of the eye and curving down the cheek. Since I was at the park with three other women, you can imagine the articulations taking place.
What you can’t imagine, and I’m going to have a hard time adequately conveying, was the event that occurred soon after this. There came a sudden surge of bird calls, a cacophony of alarms, that spread seemingly throughout the entire population of the park, a couple of seconds before a large escapee flew overhead, something that I didn’t get the chance to identify but looked to be about the size of a bittern (so somewhere between a duck and a turkey.) What struck me was how the racket started a few seconds before the bird flew overhead, so before I would have thought any of the other birds could have seen it. A lot of them, including both lapwings, raced to the end of the enclosure in pursuit, leaving the nest completely unattended for about ten minutes, much to the chagrin of The Girlfriend. Even when returning, the pair remained somewhat agitated, chattering quietly to each other like disapproving elderly ladies who’ve just witnessed skateboarders in their neighborhood.
And yes, we saw a park attendant following the bird, calmly so as not to drive it further off in panic, but we did not get the chance to see if it had been recaptured quickly. While it is certainly harder to capture escaped birds than mammals, there are a lot of tricks that experienced handlers have available, so it was likely just a matter of time.
Somehow, just about every image I got is oriented to the left, meaning they need to sit to the right of the text, preventing much variety in the post layout – another reason I held off on this one, but now I’ll just make you suffer over it (I can picture hundreds of readers clawing at their eyes crying, “The repetition! The repetition!” – but then I’m weird, I think we’ve agreed before.) This great green macaw (Ara ambiguus) seemed quite pleased with itself deep in the foliage, and I kept shifting slightly to keep one eye in view, just to maintain a focal point – it’s at least a little better than a straightforward portrait. Unfortunately, I would greatly prefer to have gotten more shots of species people don’t often see, over something usually found hanging out at the local pet store. For some reason, I never even spotted a mandarin duck, which you should definitely look up – perhaps the most gorgeously-colored bird that I know of. There’s one that lives at Duke Gardens too, and has so far defeated my attempts to snag a decent portrait, though I admit to not staking him out like I really should, if I want such images that badly.
Last trip that I made to Sylvan Heights, I managed to find some insect subjects to shoot as well, because it’s me after all. And I did it this time around too, even though I wasn’t trying either time. Previously, it was a mating pair of wheel bugs, which are fairly sizable insects for this area, so I suppose it makes sense that I ended up going in the opposite direction. Actually, that doesn’t make any sense at all…
From a hanging basket alongside the path dangled a collection of delicate pink flowers, and as I went in close for a detail shot, I spotted this occupant, prompting me to get the flash out. I’m not going to try identifying it, but I can tell you it could fit comfortably on your pinky fingernail. The sparkle from the flower petals isn’t dew, or really anything particular to this species – it can be seen on a lot of flowers, but you have to be looking really closely. At an average viewing distance it usually can’t be made out very well. The results would certainly have been better with the softbox attachment, but I hadn’t planned on doing any dedicated macro work and so hadn’t brought the equipment I typically use for that – I know, right? How can I call myself a photographer? Believe me, it was a struggle to even admit this to you, but my therapist tells me it’s good to get it out in the open. Plus there’s all that stuff that I never tell anyone…
Despite my results, I can still recommend Sylvan Heights Bird Park for a visit – definitely a cool place with a lot to see, just be prepared for some variables (as you should for anyplace you visit.) Spring is a good time, because birds are in their colorful breeding plumage then, but since the lapwings were hatching eggs in August, you’re not likely to be missing much if you go at practically any time of the year. Have fun!