But is it really?

We once again find ourselves at Freethinkers Day, and I’ve featured it a couple of times before, so you can also check those out for my perspective and recommendations. Frankly, I’m more in favor of calling it “Thinker’s Day” instead, or perhaps, “Are You Sure? Day,” because this is the part that really needs encouragement, as I’ve been reminded recently.

Something that is rampant, especially within this country, is mistaking an opinion, especially a strong and voiced one, for a reasoned opinion; we assume, far too often, that if someone is firmly, even emotionally behind some conclusion or viewpoint, they’ve done the legwork and have good reason to be – even that they have exercised good reason. But the percentage of times that this is resolutely not the case is staggering, and I suspect there’s a far greater correlation between someone being non-committal, or only mildly in favor of a particular viewpoint, and actually having thought it through. Because, let’s face it: the world is not full of absolutes, easy distinctions, or binary choices; it’s full of huge grey areas, and our reliance on absolutes, binary decisions, and polar opposites is reflected in almost no area of real life. Forget good and bad, or good and evil, forget right or wrong, forget every “this or that” choice you might believe exists, because they simply don’t. Everything is a spectrum, and every worthwhile decision depends on careful comparison and aiming for the greatest benefit or the least harm, preferably a combination of the two. Easy answers are immediately suspect, and likely ignore countless real-life factors; clear demarcations, when examined, are likely far fuzzier and more indistinct than ever suspected.

The firm opinion is usually (not always – again, grey areas) the realm of a strictly emotional response, settled on because the holder likes some particular aspect of it, and often has no critical review behind it. I’m no sociologist, but I strongly suspect that many, perhaps most, decisions are made this way, with confirmation bias coming into play when finding the ‘rational’ justification for them afterward. “I hate cheesecake,” may start with tasting a particularly bad example, and then gets followed with, “It’s fattening,” “It has too much sugar,” “It exploits cows and dairy,” and, “It’s really a pie.” A frivolous example perhaps, but you can likely spot any number of firm opinions that people hold that began in exactly this manner.

Something to remember, too, is that emotions are mere guides to survival, simple internal influences that pushed us forward better than the alternatives at the time; they are not exact, they are not dependable, and they are not useful or even accurate in every circumstance. Just like every other animal, we have them solely because they worked just enough to make it through the process of natural selection, but nothing in that process actually produces perfection. Think of deer, which have a habit of standing still in circumstances of questionable danger so as not to attract attention by noise or movement. This works just enough of the time to make it through the selection meatgrinder, but fails when it comes to the semi-truck bearing down. And as much as we’d like to believe we’re something special and totally unlike every other species on the planet, we spent the vast majority of our development dealing with the same kinds of dangers and the necessity of quick decisions, and that’s what our emotions reflect. That’s all they reflect. While there are certainly circumstances where they work just ducky – we wouldn’t have them if they didn’t – this in no way should be taken to mean that they’re dependable or reliable. And the very aspect that we believe sets us apart from all the other animals, our reasoning and rational brain, is what needs to be exercised to get us safely past those circumstances where the emotions fail.

Another trait that traps us continually, and gets exploited all the time by those who understand it, is conformity. We feel the need to fit in, to be socially accepted by those around us, because our ancestors survived in tribes, villages, and communities rather than as individuals. But this also means we very easily fall for being just as stupid as those around us, mistaking this internal drive for the assumption that they all know what they’re doing, and, “this many people can’t be wrong.” Yet here’s an easy, quick decision for you, one that really is dependable: Yes. Yes, they can. History bears this out repeatedly.

So that’s my suggestion of an exercise, for today at least but really, embracing critical-thinking means we do this as often as possible: Stop and ask ourselves, But is it really? Did I find the best conclusion, or simply the one that struck me as ‘Right’ at the time? Take a moment or three to think about whether or not some conclusion, some opinion, some decision, can really be supported rationally. And the much harder part, to seek out the other side of the coin (or again, not settle for such a binary framing,) and argue against ourselves, the devil’s advocate position, to see if our viewpoint still holds up. It can be tricky, for sure.

I’m not suggesting being in a perpetual, existential state of questioning everything, because then we cannot make decisions at all – merely that we seek out those circumstances where we settled on some viewpoint or conclusion too quickly, without due consideration. The question is, how many can we actually find?

Here’s another thing to remember that goes hand-in-hand with it all: It’s okay that we change our mind. It’s commendable, actually; this is growth, this is improvement, this is increasing intelligence, this is becoming a bigger person, and it’s even (much as this seems like a counterpoint) an aspect of humility. Why stagnate perpetually with a decision or viewpoint that obviously could be better, in favor of… what, exactly? Not admitting that we’re wrong is not the same as not being wrong, and is often the exact opposite. Own up to it, get on top of it, and move forward. And confidently, forthrightly, admit to it as needed – more people need to see this happening to understand that it really is okay, because, goddamn, do we have far too many people that cannot.

And while we’re at it, enjoy a piece of cheesecake.

Without even trying

Today is National Grouch Day (no foolin’), and in the past I’ve posted a list of ways to participate, to which I will refer you, since I’m almost certain that you haven’t yet tried them all; coming up with new ideas that are not variations of the same things is a great way to get me irritable. Yes, that fits the bill, but then you don’t get to participate too, and merely observing the holiday in others is exceptionally poor spirit.

However, I have found that, inadvertently, I stumbled upon ways to celebrate it anyway. The first was simply suggesting to The Manatee that he download GIMP to do some simple edits on images, which led him into a long and frustrating process of trying to find a version compatible with his macOS version, or alternatively, seeing what it would take to upgrade his OS – like me, he does not pursue the upgrade schedule that manufacturers suggest (read: try to force upon users) and ran afoul of GIMP no longer listing old enough versions through normal channels. All to just rescale (and maybe crop) some photos that I could’ve/should’ve done anyway.

Meanwhile, I celebrated it personally by once again tackling an issue that’s been bugging me – granted, this was ahead of schedule and not today, but it’s the frustration that counts, right? That saga, already in draft form this morning, lies below:

I have a semi-custom computer, a ‘mini-tower’ except it’s not all that mini, which has all of the extras that I use regularly: multiple hard drives for backups and countless projects, memory card reader, programmed gaming mouse with custom functions, and so on. And one of the things that’s been bugging me, for a long time now, has been the external speakers.

I don’t use them that often (mostly headphones,) but there are times when it’s seriously handy to have them. And at some point after switching over to Ubuntu Studio as my operating system, I realized that the left speaker wasn’t functioning. And this began a seriously long saga that I’m now going to relate as a lesson.

The first thing was, the headphones worked fine in both channels/ears, so it wasn’t a mono/stereo setting. And when I backed the plug partway out of the jack on the back of the unit, I got sound from the left speaker. So far so good. Then I started playing.

I went into everything about Ubuntu and sound management that I could, and nothing fixed it. I tried switching speaker sets, and got the same result. I ordered another sound card (since I’d been using the onboard, built-in one) and tried that out. Same result. Had to be something in the sound drivers.

Now, Linux and Ubuntu are notoriously bad about handling sound, for unknown reasons, and still ship a legacy sound server called PulseAudio that is widely recognized as shit – why Ubuntu Studio (which is specifically aimed towards music, video, and image editing) installed this damn thing, I’ll never know. The replacement for this software package is PipeWire, and I’d installed it, but never got it operational. Still, I had to suspect that PulseAudio was doing something that it shouldn’t.

So I spent a couple of days (off and on, of course) trying to get PipeWire properly installed and operational, with conflicting instructions available. Several times when I thought I had it, I ran a check and PulseAudio was still the default sound server. After much fucking around, I finally got PipeWire working and PulseAudio off.

And yet, still no left speaker. So I disconnected the speaker set and connected them instead to another computer, and lo and behold, the left speaker failed there too. Now, bear in mind that, right near the very start of all this playing around, I’d backed the plug out and gotten the left speaker working, albeit from the right channel, but working. Fine, okay, pulled out the original speaker set and tested those on the other computer. They worked fine in both channels.

Plugged those back into my computer, and no left channel. Somewhere in the past, I’d removed the new sound card since it hadn’t seemed to be the solution, and two sound cards were confusing Ubuntu (especially since the headphones were going through the front port, and thus the onboard card.) Nonetheless, I dug it back out, installed it, and disabled the onboard sound card. And now both channels were working fine!

The lesson here is that, after finding the first speaker set worked if I backed the plug out, I switched them with another set, without running full diagnostics in order. Both the left speaker connection on the onboard sound card and the left channel on the second set of speakers were bad – I just dodged the step that would have revealed this by swapping things at the wrong times.

The takeaway: Do things one step at a time, in order, repetitively if necessary, to be sure of your diagnosis. In part, I swapped speakers at the wrong time (a working set for a non-working set,) and in part, I wasn’t thinking of the coincidence of both the sound card port and one of the speaker sets having the exact same failure point.
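The lesson above can be sketched as a toy model (all names and values here are hypothetical, purely to illustrate the logic): sound only comes out when every link in the chain works, so two simultaneous faults can mask each other unless you test every source/speaker pairing, changing one thing at a time.

```python
# Toy fault-isolation model (hypothetical names/values): the left channel
# only produces sound when BOTH the source's left output AND the speaker
# set's left channel are good.

def left_channel_works(source_ok: bool, speakers_ok: bool) -> bool:
    """Every link in the chain has to be good for sound to come out."""
    return source_ok and speakers_ok

# The situation in the saga: the onboard card's left output AND one
# speaker set's left channel were both bad, mimicking a single fault.
sources = {"onboard card": False, "PCI sound card": True}
speakers = {"speaker set A": True, "speaker set B": False}

# Systematic diagnosis: hold one part fixed, vary the other, and test
# every combination -- never swap two suspects at once.
for src_name, src_ok in sources.items():
    for spk_name, spk_ok in speakers.items():
        status = "sound" if left_channel_works(src_ok, spk_ok) else "SILENT"
        print(f"{src_name} + {spk_name}: {status}")
```

Swapping the card and the speakers at the same time just moves you between two silent cells of the grid and tells you nothing; only working through all four pairings reveals that there were two independent faults.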

Now, in doing this, the headphone port on the front stopped making good contact, after much fussing a few months back to replace it too, so now more replacement parts are on order. Worse, that front panel comes from the onboard sound card, and I think it has to since it’s custom-wired and all that shit. I have had extremely bad luck with the onboard cards on this motherboard (the video, the network, and now the sound card have all been bypassed by using PCI cards because they’ve all been undependable.) So I’m not thrilled about having to re-enable it to have working headphones. I’ll be looking into avoiding this somehow – unfortunately, the speaker set that I’m using, while featuring a desktop power switch and volume control, does not have a headphone port built into it like some.

But some better news: Curious about just where the second speaker set was dropping the left channel/speaker, I popped it open and did some basic diagnostics, in meticulous order this time. The desktop left and right speakers plug into the main, ‘woofer’ portion that sits under the desk via a standard 3.5mm stereo plug, so that’s where I started. Doing the same ‘backing out’ trick, I did indeed get sound from the left speaker, so the speaker itself wasn’t blown, but this could mean a fault in either the connection or the internal electronics (since backing it out only forces the left speaker onto the right-channel connection instead; had that side gone bad too, it would have been even harder to diagnose) – the latter would be a component issue and likely not fixable with my current knowledge. So I checked the continuity on the socket itself on the back of the woofer unit, and it all seemed fine. Then I split open the plug on the desktop speakers, which in typical manner had been assembled with the rubber shielding cast over the whole thing, meaning it had to be cut away – but I have several replacement parts of this type already; mostly, what I needed to know was which color wire went to which terminal on the replacement plug, which simply cutting the wires wouldn’t tell me. Once determined, I soldered the wires to the replacement plug and bam!, all speakers working correctly now.

This unit has a headphone port right on the remote power/volume switch that extends to the desktop, but The Girlfriend claimed that set so she wouldn’t have to keep plugging and unplugging her headphones into her own mini-tower – and I don’t blame her, because the manufacturer of that one made the port almost impossible to see for the sake of a sleeker case, the looks of a computer being far more important than its mere functionality (something that Apple embraced enthusiastically.) So I still have to re-solder a new headphone socket into my own mini-tower, when it arrives (I’ve got five on order, partially in case of another failure, mostly because it was cheap enough to up the number.) If you ask me nicely enough, I’ll be back to relate how that went!

TL;DR: No way. It’s National Grouch Day – go back and read that all from the beginning, you slack-ass.

Ahhh, that’s better!

Did the rounds tonight, counting how many juvenile Carolina anoles (Anolis carolinensis) I could find, and actually reached fifty this time, in fact, fifty-one. That one is shown below, doing its best to avoid being counted:

[Photo: the fifty-first juvenile Carolina anole (Anolis carolinensis) spotted for the evening, sandwiched between two camellia (family Theaceae) leaves.]
While certainly not an overriding, driving desire, I felt compelled not only to break the previous record, but to reach the nice round number of fifty – and then started to (once again) ponder why. I haven’t done a semi-philosophical post in a while and I’m overdue, so strap in. And of course recognize that I haven’t the faintest education in any of this, so take it as it should be taken.

First off, I suspect there are a couple of motivations taking place in here. I’m convinced that we, humans, have a ‘puzzle drive,’ a desire to figure out why things are the way they are, to solve the puzzles, to reach a solution, and this is not only an integral part of how we became ‘intelligent,’ it even manifests itself in trivial manners because it’s so strong; this is why we play games and set nonsense challenges for ourselves and so on. It’s not hard to see where such a trait could spur us forward in our knowledge, and also easy to see that no other species seems to possess it anywhere near the extent that we do. But regardless of whether this exists or not, we do set arbitrary goals for ourselves, and my desire to reach 50 is an obvious aspect of this.

Then we ask, Why 50? Why not 47, or 53? Well, we can say, “round number,” and have done with it, but why do we even have the concept of a round number? So okay, we probably have the reliance on a base-10 numbering system because we have ten fingers, though the ancient Babylonians (I believe it was) counted in base 60, treating 6, 12, and 60 as more ideal, ’round’ numbers, which is why we have 360 degrees in a circle and the timekeeping method that we do, so culture plays a part at least. And we appear to have an inherent grasp of multiples; 50 is “a hand of two-hands,” or five times ten, so it’s ‘complete’ to a certain degree. And yes, had I been even approaching a count of one-hundred, you know damn well I would have been pushing hard to make that goal – I was already checking some esoteric regions of the yard for more hidden anoles, so I probably would have been circling every potential plant that I could find and wandering down into the swampy areas if I was nearing a hundred (not that this would have been likely to help much, since anoles aren’t really swamp-dwellers.)
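One plausible reason 12, 60, and 360 made such appealing units, by the way, is that they divide evenly so many more ways than their neighbors – a property easy to check with a throwaway sketch (nothing rigorous here, just counting divisors):

```python
# Count the divisors of a few numbers: 'rounder'-feeling numbers like
# 12, 60, and 360 split evenly far more ways than their neighbors,
# which is handy for clocks, calendars, and circles.

def divisors(n: int) -> list[int]:
    return [d for d in range(1, n + 1) if n % d == 0]

for n in (47, 50, 53, 12, 60, 360):
    print(f"{n}: {len(divisors(n))} divisors -> {divisors(n)}")
# 47 and 53 have only 2 divisors each; 50 has 6; 12 has 6;
# 60 has 12; 360 has a whopping 24.
```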

Moreover, I can admit that once I reached fifty, there was a mental aspect of, Made it – everything after this is just gravy. Fifty-one is obviously better, but it wasn’t as neat as fifty. There’s definitely an aspect of round numbers and more pleasing goals that overrides the mere, “But this is a higher number,” part of our brains (I don’t think I’m alone in this, anyway.)

I can throw out things like ‘even’ (divisible by two,) and ‘multiples’ (divisible by other numbers, four or five or ten,) but all I’d be doing is referring back to some inherent desire to meet these, not explaining why there’s such a desire. And I’m not going to accomplish it now either, though I suspect there’s a certain mathematical portion of our brains, one that recognizes numbers and assigns more or less importance to them; there’s likely a connection between odd meaning “not divisible by two,” and odd meaning, “peculiar.” [Yes, I know any number is divisible by two, but then we get into ‘whole’ numbers and the meaning of that, and start contemplating cutting anoles in half…]

We can go more fundamental and simply examine counting, and knowledge of numbers larger than a handful (no pun intended, but that word probably originated in that manner anyway.) I was pondering this as I was watching the wood duck broods on the pond; one had an initial brood of 12, which dropped to 11 and then, I believe, to 10, before we lost track of them as the young got older and more independent. Except, studies by biologists and other sciencey people indicate that ducks cannot actually count that high, so was the mother even aware that her ducklings were dwindling in number? This is assuming, of course, that she wasn’t telling them apart in different manners, by appearance or sound or whatever; she could easily know who was in her brood without being able to assign a number to them. Even so, plenty of animals might have useful reasons to count above three or four (the semi-universal limits reached by most animals that can count at all,) and yet, can’t. Meanwhile, humans can count far better (so we believe) than anything else. Why is that? It’s not like our broods ever exceed three, and what else did we need to count, back during our development as bipedal, tribal hunter-gatherers?

The first thing that comes to my mind is, counting and agriculture are pretty intertwined – I’m not sure you can have one without the other, to know when to plant and how long to wait and how long before you will have the season to plant again. There’s a little bit of a ‘chicken or egg?’ thing going on here, wondering which came first or if they necessarily had to develop at nearly the same time. There are also strong indications that numbers were the precursors to writing, simply to keep track of important figures, and there are efforts to determine how to use numbers to establish communication with intelligent extra-terrestrials, if and when we encounter them, because we can establish numbers through universal constants like atomic weights/abundances. It’s just going from there that becomes a bit tricky (understatement of the year.)

We can wonder about the ability to count to high numbers, and how it might have come about, what portions of the brain had to develop in what ways. There are numerous instances of people who lack certain fundamental emotions, or even the ability to feel pain, but before I’d started this post, I’d never heard of any instances where someone couldn’t actually count, or could only manage the smaller increments of many of the other animals. It would truly suck to have such a handicap, but maybe this is only a matter of perspective, since apparently there is at least one South American tribe that not only has no concept of any distinctions beyond one and two, they may be unable to develop it as well; this might be something that has to be instilled at a very young age, and we only think in multiples (‘a hand of two-hands,’ as mentioned above) simply because that’s how we’ve been taught to conceive of numbers. If you think about it, we can look at a small group of objects and say, “There are five treefrogs on the window,” but it doesn’t take many more than that before we can’t say without counting them, or at least having them in an order that makes multiples apparent, neat rows or whatever. So maybe we can only count in numbers of numbers? Meanwhile, the tribe is functioning just fine without higher numbers, except in encounters with other tribes that can use them, it seems.

I am also reminded of a simple experiment that Richard Feynman pursued, as he so often did. He was practicing timing himself accurately by counting in his head, and found that he could do it while reading, but not while conversing. Meanwhile, his friend had no difficulty with counting silently while conversing, but not while reading. Eventually, they determined that Feynman counted internally by hearing the sounds of the numbers as if reading them aloud, while his friend saw them as if watching a digital display. So how inherent is this, or is it entirely dependent on how and when we’re taught?

There’s also the idea of favorite numbers, which are not hard to explain: they usually have some significance in their connection to our lives or memories, like birthdays. But then, there have been large-scale tests showing that some numbers are far more favored than others, though I don’t know if these tests have been run among different cultures or not. And curiously, while I have a faint preference for the day of the month of my birthday, the month itself is not favored in the same way, and I couldn’t begin to tell you why. Maybe my parents lied to me all those years ago…

But yeah, food for thought, to disguise the fact that I’m still posting about anoles, for fuck’s sake…

Thinking like that

Because I’m a supporter of George Hrab’s Geologic podcast, I receive his weekly newsletter, and the one from July 5th [yes, this sat in editing limbo for a bit] contained an article on superstition that he’d written for the James Randi Educational Foundation back in 2008. I’d done a post myself on superstition two years after that, without having seen his, so I dug it out to compare it as I read his own. Mine was much shorter and more ‘clinical’ (you know, like how that sketchy tattoo parlor is ‘clinical’) while his delved more into the social and empirical implications – I’d link to it here but I don’t believe it exists online anymore. Anyway, what I’m going to cover now is an extension of his own thoughts, so credit to George Hrab for most of this.

Basic premise of my post: superstition seems (to me) to stem primarily from three things: searching for meaning and cause, finding patterns, and confirmation bias. As for the first, while we endeavor to find the cause of any particular event or phenomenon, we too often view this through the filter of our human social instincts, where we reward good behavior and punish bad to keep the tribe strong, and then assume that events at large also follow this reward-and-punishment system, even when there’s no reason to believe so. When something bad happens to us, we’re too inclined to think that we deserved it somehow.

The second part, pattern recognition, is simply our tendency to seek patterns in what we experience, which was likely a decent survival trait and certainly assisted in our quest for knowledge – yet we can find patterns where no such pattern actually exists. And that brings us to confirmation bias, the practice of noticing the events or circumstances that support our beliefs while ignoring or downplaying those that fail to.

Together, this gives us the ability to wonder about things like droughts, then consider that they might be related to when the peppers were harvested, and then start to believe that harvesting peppers leads to a drought (reinforced by not harvesting the peppers and finding that it rained soon afterward – even when it was bound to rain at some point anyway.) In short, various human traits combine to support superstition. Something that I failed to note in that post: assisting with all of this is the tendency to seek agency, to believe that events or occurrences are the product of intention rather than simply happenstance, which does a lot to support gods and all that. This last one isn’t always an aspect of superstition, but it appears a lot all the same.
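The pepper-harvest reinforcement is worth a quick back-of-the-envelope check (the 30% daily figure is purely an assumed stand-in): if rain falls on some fraction of days regardless of what anyone does, then rain arriving ‘soon after’ skipping the harvest is nearly guaranteed, no causation required.

```python
# If any given day has (say) a 30% chance of rain, independent of
# anything we do, how likely is rain within N days of our 'ritual'?
p_rain = 0.30  # assumed daily probability, purely illustrative

for days in (1, 3, 5, 7):
    p_within = 1 - (1 - p_rain) ** days  # P(at least one rainy day in N)
    print(f"P(rain within {days} days) = {p_within:.2f}")
# -> 0.30, 0.66, 0.83, 0.92
```

With odds like those, ‘it rained within a week of not harvesting’ confirms nothing at all, which is exactly why confirmation bias finds it so persuasive.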

Hrab goes into the advances of science and human achievement when superstition was ignored in favor of actual testing and experiments and consideration of other causes or answers, and this is the part that I’m highlighting now. It’s easy to believe that ancient cultures, when viewing the rare occurrences of eclipses, assigned them portentous or supernatural causes, and there are some limited accounts of this, though written records of older cultures are too sparse to give an accurate idea of how widespread such beliefs might have been. And while total solar eclipses tend to be few and far between, not usually able to be witnessed multiple times in anyone’s life span, partial solar and lunar eclipses are another matter. Someone, sometime, recognized that these occurred only at the times of new or full moons and began piecing together the patterns, ignoring whatever cultural beliefs existed about the special or supernatural causes and starting to find that they were only due to orbital mechanics. The Antikythera mechanism indicates that this occurred, at least in Greece, 2,200 or more years ago, since the patterns and timing were understood well enough to build a geared device that would predict future eclipses, as well as other celestial events.

Medicine is another excellent example. The various illnesses that befell humankind were given untold thousands of explanations and supposed treatments, some vaguely on the right track, some so far afield that we consider our ancestors to have been irretrievably stupid, but gradually, people began to notice the patterns, and to recognize the indicators and counter-indicators. Germ theory, easily the greatest advancement of medicine in our history, promoted the idea of organisms far too small to see as the primary culprit in numerous diseases, and while we can understand how difficult it was to get this idea across to enough people, the concept was obviously quite sound.

We might see such occurrences as evidence of past ignorance, a time when the populace was much more superstitious than today, but we need to recognize that a lot of such advancements occurred quite recently, and it wasn’t very long ago at all that exorcisms were practiced for erratic behavior; indeed, they still are.

The point here, however, is that a very large percentage of the advancements that we’ve made, as a culture, as a species, originated from people that didn’t accept the pat answer, that didn’t believe in the idea that ‘if enough others believe it, it must be true,’ that dared to question not just common knowledge, not just authority, but even their own senses. The ones that said, “Shouldn’t we expect B to happen if A is really true?” The ones that required a demonstrable and measurable body of evidence before they accepted an answer. They had to deny that tendency towards superstition, the ‘gut feelings’ and the internal prods towards accepting certain answers (or even just relying on other people as a guide,) to actually make progress.

It’s also easy to take the wrong message from this. The key isn’t, “Buck the trends and rebel against common knowledge,” which can be done for literally anything (as countless people do,) but instead, “Build the supporting data and the probabilities until the conclusion is valid.” And there’s even a caveat to that, because that pattern-seeking aspect, as well as the ego in believing we’re right, can cause us to ignore all of the evidence that fails to support this, which is confirmation bias, a huge favorite of those that embrace psychic powers and alternative medicine. Doubt is a key ingredient, especially self-doubt, the recognition of how many ways we can be wrong.

And there’s another aspect that doesn’t receive as much attention yet has held true throughout the majority of advancements, and that is, if we need to propose significant complication or properties that we have not actually measured or observed, it’s probably wrong – at the very least, such complications should be well-supported by evidence and data. Oh, the stars can predict what our personalities will be? How do they possibly do that? What physical law applies here, and in what manner? The number 666 is connected to satan? In what way, and why? Isn’t this a distinct giveaway that satan should have abandoned by now? Even small corporations will ditch their negative branding…

This is why critical thinking is such an important aspect of education, too often neglected in favor of simply teaching facts. The idea isn’t to memorize what we’ve found before, but to know how to find (and test, and confirm or deny) new results, new ideas, new discoveries. The scientific method, overall, places a lot of emphasis on this, though it’s not as widely adopted as it should be, but it should never be up to just college graduates; we should all be practicing this as much as possible. Remember that, at one time, there were no colleges, no such thing as ‘higher education.’ It took specific people to encourage and emphasize this departure from our internal biases to even promote the idea of an advanced and comprehensive education.

Those are the people that we need to recognize and emulate – or more specifically (since hero worship is missing the point,) the mindset and habits that they possessed. As interesting as it might be to believe in ghosts, as satisfying as it might be to think that shaking water gives it special properties, as self-affirming as it is to hear that Capricorns are supposed to like music and we really do, 99.999% of the time, the actual advancements and improvements in our culture and health and technology came from relying on more than feelings and gut instincts and “there’s something more mystical at work here, I know it.” We can’t look at any point in human history and say something like, “Boy, it’s a good thing we could tell how honest someone was by how far apart their eyes were – that certainly saved a lot of lives!” Let’s give credit and respect where it’s actually due, and not fall prey to the simpler human traits.

Errors of omission

Recently, I came across a link to an article on Aeon, which may be titled either “Incredible testimonies” or “The short, dramatic history of alien abductions in the US,” depending on whether you go with the title in the opening graphic or in the meta tag for the page that shows in the browser tab. Written by Greg Eghigian, a professor of history and bioethics at Pennsylvania State University, and edited by Sam Haselby, it seemed (based on the second title, anyway) to be right up my alley, as I’ve had a sideline interest in UFOs/UAPs for some time now, though definitely from a skeptical standpoint, and am quite familiar with much of the field. So I dug into the article with interest.

The first thing that I’ll say is, there was a marked difference between what I was familiar with and what the article divulged; often, this will be a good thing, because it means that I’m learning something that I never knew before, or a different perspective, or something along those lines. Not so in this case, however, because the article had a distinct bias and, much worse, managed to avoid or gloss over some really crucial details – ones that, had the author done any decent research whatsoever, he must have been acutely aware of. As always, I will encourage you to read the article in its entirety, both to be familiar with what I’m going to review here and to avoid any accusations that I’m quote-mining or taking things out of context, though I will be directly quoting several sections below.

The article opens with:

In 1992, Sheila (a pseudonym) sought the help of a prominent psychiatrist. Since the death of her mother in 1984, she had regularly found herself angry, sad and irritable. She was also experiencing terrifying nightmares: she would be unable to move, her body felt like it was vibrating, and she had dreams that someone or something was controlling her body. In one dream in particular, Sheila’s house filled with a high-pitched noise and flashing lights. Then, she saw several short, thin-limbed beings covered in silver walking down the hallway toward her bedroom.

Now, anyone even passingly familiar with abduction stories, possession stories, or psychology will immediately see the distinctive traits of sleep paralysis, which took quite some time to be identified but is now well understood to be a crossover state between differing stages of sleep that some people are quite prone to. It is, in essence, a mere nightmare, but distinct in the feelings of paralysis or restraint while under the threat of someone or something in the room, while the sufferer is usually convinced that it is not a dream (because of this crossover trait). It’s quite well documented (and feel free to do your own search) – just never mentioned at all in the article.

Instead, we find that ‘Sheila’ eventually contacts John Mack, a “Pulitzer Prize-winning psychiatrist and professor at Harvard Medical School.” This little offhand comment about the Pulitzer was a small red flag, one I was willing to let slide initially even though the name was quite familiar, since John Mack is notorious in skeptical circles, as well as professional ones. A Pulitzer is, naturally, a journalistic prize and has nothing to do with psychiatry or medicine or anything related to his academic standing; it was received for Mack’s biography of T.E. Lawrence and so has no bearing whatsoever here.

Onward:

Mack used hypnotic regression – a technique designed to recover lost memories – to help Sheila find out more about her past. The method seemed to work, and it confirmed what had been suspected: she was having alien encounters.

I considered this foreshadowing as the author built the case, and I expected the article to delve a little deeper into this technique, because it has quite a history, but instead it quickly turned elsewhere. It relates how stories of UFO/UAP encounters surged in the eighties and nineties, and touches on how popular the topic was in books, TV shows and even movies – though never acknowledging which might have caused which. But then:

Why did this extraordinary phenomenon that challenges commonsense certainties about the real world suddenly disappear from the list of popular concerns? The answer lies in who ultimately got to decide what was and what wasn’t true about alien abduction, and how they managed to not so much solve its riddle as reconcile themselves with the phenomenon.

That’s some seriously slanted prose there, for anyone who actually knows the subject. We’re going to come back around to this, but right now, we continue right where we left off above:

Debate over the authenticity of paranormal phenomena is hardly new. Historically, authorities of various kinds have been called upon to decide on episodes and cases. In much of 16th- and 17th-century Europe and the New World, for instance, the Inquisition often determined whether the sickness or death of livestock or a person had a supernatural cause, and whether someone accused was in fact a witch or not. In the 18th century, the Habsburg empress Maria Theresa turned to physicians to evaluate if reports of vampires in the empire had natural explanations. In 1784, France’s King Louis XVI appointed two expert commissions that included astronomers, chemists and doctors to conduct experiments to establish if the phenomenon of mesmerism was due to a mysterious, invisible fluid or simply the product of the fevered imaginations of the easily influenced. And in 19th- and early 20th-century Britain and the US, a mix of researchers with backgrounds in psychology, philosophy, physics, philology, anthropology and stage magic investigated some of the age’s most prominent occult claims: mediumship, apparitions, haunted houses, clairvoyance, telepathy.

Okay, good. We’re starting to establish the idea that self-proclaimed experts in things like witchcraft and ghosts might not only have no real evidence to back themselves up, they might also have a vested interest in promoting the ideas in the first place, while scientific investigations are a much less biased and more evidence-based method to evaluate claims. Let’s see how this progresses:

In all these instances, figures in positions of authority either moved to or were drawn into establishing some consensus truth about supernatural claims. Often, in the Western world at least, these authority figures came from the Church, the state or academia. In some cases, such as vampirism and mesmerism, officials recruited outside specialists to look into matters; in other cases, such as ghosts, researchers took it upon themselves to weigh in. As such, what defined ‘expertise’ in the extramundane and uncanny was not always obvious, opening up a veritable grey market for self-proclaimed specialists.

That’s a remarkably vague paragraph that can be taken any way you like, though from experience, the inclusion of the word ‘truth’ isn’t a good sign. Serious researchers don’t look for “truth” or bother with such an ill-defined and emotional concept; they seek the weight of the evidence, and probability. I considered that this paragraph might still be setting up why the scientific method was useful, but the wording wasn’t leading in that direction.

Then the article turns towards another application of the recovery of ‘repressed memories’ through hypnotherapy, though in this case Mack was not involved:

Beginning in 1983, law enforcement and parents accused supervisors and teachers at the McMartin Preschool in California of sexually abusing children in their care. In interviews with social workers and police, witnesses reported the abuse was organised as part of violent satanic rituals. Over the next decade, reports of so-called satanic ritual abuse emerged across the US as well as Canada, the UK, Australia, the Netherlands and Germany. In the McMartin and several other cases, some of the accused were criminally charged and put on trial. By the mid-1990s, however, courts threw out the charges in some of the most high-profile cases.

Okay, good, we’re building the story here, if perhaps just a little weakly. There was a huge surge, though notably only among a handful of mental-health professionals, in the idea of recovering repressed memories, and this led to some high-profile court cases as well as a shitload of media attention.

Within a few years of the first allegations, journalists and social scientists began publishing critical assessments of the evidence, questioning the reliability of child witness testimony. They also drew critical attention to officials’ use of suggestive and aggressive interview techniques that steered answers and encouraged embellishments. Noting how both evangelicals and the tabloid press highlighted the roles of satanism and cult-like rituals in the cases, critics portrayed the wave of accusations as a modern-day ‘witch hunt’. By the mid-1990s, a consensus formed that the whole affair had been the product of a baseless moral panic that had exploited the vulnerabilities of children and parents.

Again, a little weak on the failures of the techniques, as well as completely skipping over some of the backlash. This is also the second time that “consensus” has been used, implying that the matter was decided more by voting than by examining the weight of the evidence. And before this paragraph is complete, we find this sentence appended:

It is also true, however, that subsequent research about the prevalence of child sexual abuse has raised questions about whether this conclusion is too facile.

And that, right there, shows that the author either hasn’t understood the subject matter at all, or chooses to interpret it in his own manner.

Let me be specific: the sudden crash of repressed memory techniques came from several distinctive studies and trials which showed that hypnosis makes the subject far too susceptible to leading questions from the therapist, capable of completely creating a narrative in the patient’s mind that had never before existed. Several of the leading memory recovery specialists, John Mack among them, were known to be able to find ‘evidence’ of alien abductions or child abuse or repressed memories of traumatic events because they led their patients down a primrose path towards them in the first place, and their track records of ‘successfully uncovering’ episodes of alien abduction or child abuse were incredibly high, far higher than anyone would reasonably expect – and still without corroborating evidence despite these elevated numbers. In short, they were producing classic examples of “gaslighting” before the term was even widely used. Many other mental health professionals were more than suspicious about the entire idea of repressed memories, because they dealt constantly with people who were traumatized by things they wanted to forget but couldn’t – there was no evidence, save for the results from this handful of ‘specialists,’ that any such thing as memory repression could even exist. Add to this the well-known fact that children are notoriously difficult to question about events in any circumstances, because at that age there is too little distinction between fact and fantasy, as well as the desire/compulsion to give adults the ‘right’ answer. And it can be as simple as the difference between the question, “Who was there?” and, “Was there a man there?”

So while we should not, in any manner whatsoever, dismiss accounts of child abuse regardless, this does not in any way exonerate a methodology that has been found corrupt and egregiously misleading, if not producing completely false results. It’s a damn stupid thing to suggest, and bears notes of an underlying desperation to believe despite evidence to the contrary.

Further:

As a problem in social knowledge, the satanic ritual abuse episode posed some of the same challenges in social epistemology arising from reports of alien abduction at the time. Both raised real intellectual and ethical questions about the proper ways to acquire, evaluate and present the testimony of witnesses who may be apprehensive and vulnerable.

Well, no. The questions raised were how to obtain accurate information without introducing any bias whatsoever, and this is where the crossover between psychiatry/psychology and the ‘hard sciences’ such as physics and biology started taking more of a lead, since the hard sciences had methods in place to try to eliminate incorrect results or assumptions, while psychiatry and psychology did not rely on these very much – see the history of Freud’s research and how long it took to realize most of it was utter bullshit. Too many of the evaluations of mental health and mind-based maladies were based on pronouncements by the professionals without any decent methods of demonstrating accuracy, because the fields had never relied on replication, falsification, or eliminating alternate causes.

The heart of the matter is how can we believe the seemingly incredible? In the case of satanic ritual abuse claims, this was ultimately settled – at least to the satisfaction of most observers – by the courts. Criminal justice assumed the role of the appropriate social epistemologist.

Wrong on both counts. Ritual satanism arose in folkloric beliefs, fueled by the rise of televangelists and their ludicrous, wolf-at-the-door postulations, but never had any convincing evidence behind it in the slightest – it was the kind of thing that ‘everyone knew’ was happening but that somehow no one had ever directly witnessed, and it was effectively quashed by the lack of direct evidence, requested from law enforcement departments nationwide, and an FBI investigation that directly concluded that there was no evidence of any form of satanic rituals nor organized black masses.

[Aside because I’m sure someone will start squawking: Yes, there are/were two churches of satanism active in the US – both of them more tongue-in-cheek than having anything to do with belief in satan as an entity or even a distinct concept, and neither ever practiced any form of black masses or sacrifices in the slightest. Neither is even remotely related to any of the claims made.]

The article begins to wander significantly at this point, and suffers from a far-too-common trait of online articles: being wordy and meandering back and forth without getting to the point. I’m fine with building a case, even building suspense in the reader, but the author does not appear to actually know what he’s building to, and constructs his edifice only to slap it aside with a comment or three in a later paragraph.

Caught up in what the literary critic Frederick Crews dubbed ‘the memory wars’ of the 1990s, alien abduction found a place alongside satanic ritual abuse, recovered memories and multiple personalities as something deemed scientifically spurious. Witnesses were not suspected of lying. Rather, the recollections of abductees, it was argued, were false memories encouraged by abduction consultants through leading questions in order to imaginatively relive ‘experiences’. As such, the experiences of abductees could be seen as embellishments after the fact, with vulnerable individuals filling gaps in their memories with details lifted from popular media and abduction advisers.

Sounds good, but it falls short of the more pertinent details, especially in its wording. A couple of high-profile court cases slammed the hell out of the very concept of recovered memories by demonstrating that they were completely unreliable, nothing even remotely approaching scientific evidence, which made the ‘expert testimony’ of the practitioners utterly worthless.

A key moment came in June 1994, when Harvard Medical School formed a committee to investigate Mack’s work with abductees. In its final report issued around a year later, the committee fell short of accusing Mack of misconduct, and he retained his status as ‘a member in good standing’ in the faculty. It did, however, criticise him for several shortcomings in his methods, the most serious being his neglecting to distinguish between abductees he was treating as research subjects and those who were his patients.

The wording of this dodges the bulk of the drama regarding John Mack, and feel free to look this up on your own, because his methodology is widely considered flawed and the entire concept of repressed memories has been almost entirely expunged from the mental health fields, with only minor exceptions. While Harvard (his employer) didn’t come down too hard on the concept, the same cannot be said for the greater scientific community, and for someone who felt obligated to mention Mack’s Pulitzer, somehow the author missed Mack’s notoriety in practicing and promoting something now almost entirely discounted as a viable concept.

Now watch this:

As had been the case with satanic ritual abuse, the backlash from behavioural scientists and clinicians had a palpable impact on public opinion. This was also evident at the box office, as filmmakers cooled to the idea of adapting abductee stories for the big screen. The conclusion, then, would seem to be that researchers and practising clinicians stepped in to debunk the phenomenon and succeeded in undermining its credibility.

But, in fact, most behavioural scientists and treatment specialists who took positions on the matter did not categorically repudiate alien abduction. Instead, they tended to see it in clinical terms, as a phenomenon evolving out of therapeutic-like settings and encounters, where the process was not about reconstructing an accurate picture of one’s past but rather about developing personally believable and productive stories about that past. Even the Harvard committee investigating Mack made it clear that members were not in the business of deciding or assuming whether alien abductions were taking place or not.

This is only a variation of the hoary old dodge so cherished by UFOlogists, Bigfoot-chasers, and the religious: “You can’t prove this doesn’t exist!” And with that, science gets thrown back out the window in favor of supposed logical challenges, even while the article was paying a little lip-service to how badly these topics had fared when examined empirically. But science, and even those who just understand what logic actually is, doesn’t bother with trying to prove a negative, which is impossible; the goal is to establish positive evidence. And when the only positive evidence was obtained through a corrupt and discounted method, well, you have nothing now, don’t you?

And so we come to our concluding paragraph:

In a paradoxical way, alien abduction was afforded a certain measure of legitimacy since it avoided legal authority and fell to the psychologists. The experience of abductees was real in that it was real enough to the person who believed it. So the phenomenon was effectively relegated to the status of a devoutly held belief, not unlike a spiritual conviction or idea. Viewed as a deeply felt personal belief, many people saw no problem in at least respecting reports of alien abduction as yet another perspective on reality. In this way, the alien abduction phenomenon was made relatively harmless. Now, at a time when talk of unidentified anomalous phenomena and retrievals of crashed spaceships and ‘non-human biologics’ has made its way into the world of congressional hearings, it remains to be seen whether alien abduction will stay in its place.

Wow, deft little rescue of a concept from the dustbin called ‘Irrationality,’ wasn’t it? Except, not really. We now have a variation of, “Well, what does it hurt what someone believes?”, another argument that skeptics get to hear too often. And seriously, what harm is there in letting someone have their cherished little beliefs, if it’s that important to them?

Which says an awful lot in itself, because why would someone feel compelled to maintain a cherished little belief when it’s patently false? This implies that emotional comfort is more important than reality, which is not a road that you probably want to continue down, especially when it comes to some (a lot of) specific beliefs.

But let’s go back to ‘Sheila,’ the alien abductee recounted in the very beginning of the article:

Moreover, she discovered that she had been having visitations in her home since before the age of six, and that both Sheila’s sister and daughter had also been having strange encounters. It all left her feeling violated, terrified that she was unable to protect her family, and overcome with dread that ‘they’ would return.

Well, we’re not talking about cherished beliefs now, are we? And if this was indeed sleep paralysis, then ‘Sheila’ was subjected to an elaborate campaign to extend her fears, in both breadth and time, far beyond anything remotely necessary, when she could have been diagnosed with a simple disorder that would have alleviated the bulk of her anxiety rather than increasing it; sleep paralysis was a known condition at the time, and something that a professor of psychiatry should certainly have been aware of. While we’ll never know for sure at this point, we’re faced with the possibility that she was misled by someone pursuing their pet project.

And this is not an isolated occurrence. The article mentions the claims of satanic rituals and abuse at the McMartin Preschool, which was one of those major cases that I mentioned. The amount of suffering and anxiety that this produced, in everyone involved, was completely unnecessary and provoked by a psychiatric technique that had never been established as viable, because why bother with that? At least, that was the attitude at the time – it’s been changing since then. And we can’t ignore Gary Ramona’s case, with lives ruined by reliance solely on another professional’s word, with literally no evidence outside of a corrupt belief.

A small aside here: questioning of witnesses and victims should be done only by people trained to do so, because it is a specific skill that requires avoiding bias and leading questions, interviewing multiple witnesses separately and before they have any chance to compare their experiences or be influenced by others, and maintaining a complete neutrality in the results. Far too many police departments don’t have any such staff or don’t bother with them (because obtaining a conviction is far more important than determining the ‘truth’); psychiatrists and hypnotherapists do not receive any such training for these purposes. One of John Mack’s case histories involved interviewing numerous schoolchildren all at once regarding the UFO encounter that they claimed to have had, virtually guaranteeing that most of the kids would be influenced by what they heard their classmates saying. Any opportunity to find discrepancies in the accounts, which would cast doubt on the shared experience, was thrown out the window by performing this incredibly inept move.

The article mentions the case of Betty and Barney Hill, easily the best-known of alien abduction accounts. But again, a bit of research would have revealed quite a bit to examine. Not only did the details of the encounter change with virtually every new hypnosis session, there was little agreement between Betty’s and Barney’s accounts until long after they’d had the chance to discuss it at length with each other. We’re led to believe that repeated sessions eventually homed in on the ‘true’ account, as long as we ignore that hypnotic regression is no longer considered viable, that there is no point at which we could confidently pronounce that we now, finally, have the correct version, and that there is no way to corroborate any version whatsoever. That’s a lot of baggage. Then as we go deeper into the fine details of the case, we find that Betty Hill was clearly enthusiastic about UFO reports before the encounter, and that she dwelt on them constantly afterwards and maintained pages of notes on her dreams. We also find that her first hypnotherapist considered that she was only recounting another dream (it’s amazing how often that little fact gets left out of the numerous accounts of this case). In her later years, while she was the darling of UFO conventions, Betty Hill continued to relate how often she saw and had contact with aliens, to the point where even the die-hard UFO enthusiasts started to become embarrassed by her, since she now appeared to be more than a little delusional.

Which brings us back around to the attitude at the end of this article, the concept that alien abduction stories can be ‘legitimate’ even if they aren’t true, almost directly likening it to religion (a comparison I’ve maintained myself, though not in any complimentary manner). If we can’t actually establish in any manner that alien abductions have occurred, or that aliens actually exist, then we’re just condoning delusion, is that correct? We’re not talking fantasy, because by definition, fantasy is understood to be strictly imaginary – and those who believe that some celebrity really does love them back have obvious issues, and at times dangerous ones. Shouldn’t we, at the very least, establish that such indulgence in unsubstantiated ideas has some benefit before we rashly pronounce them ‘okay’? Especially when the belief that aliens can abduct and perform medical procedures on anyone, without detection or means of prevention, remains a significant fear within our culture, almost entirely based on ‘true encounters’ such as this?

And that’s one of the worst factors about this article. Psychiatrists exist to help people – that’s the specific goal of the field. If we only want indulgence, liquor stores and drug dealers and psychic readers abound.

Physicists meticulously examine their experiments to ensure that the results they achieve are indeed from the cause that they propose, attempting to rule out as many alternate explanations as possible. Biologists narrow down all of the factors that they can think of to determine that their test subject responded specifically to the conditions introduced. Medical research relies on double-blind clinical testing, control groups, and careful examination of case studies before offering even tentative conclusions. But somehow, psychiatrists can introduce an entirely new concept of ‘repressed memories’ without ever once checking to see if they found something factual or corroborated?

It is perhaps unfortunate that Betty Hill may have been encouraged to believe in and build on something fanciful, for the rest of her life, rather than recognizing that it was nothing more than a detailed dream. It’s potentially tragic that the pseudonymous ‘Sheila’ may not have received the diagnosis that would have helped her far more effectively, and saved her years of anxiety. We cannot be sure of either of these (and many similar cases), though the evidence weighs far more in these directions than in the ones actually taken. It’s disturbing that ‘repressed memories’ still resides in the public consciousness as a distinct idea, without anything of merit behind it. But it’s inexcusably irresponsible that this idea made it all the way into the courts as a form of evidence without any checks or balances whatsoever, causing unimaginable chaos in the lives of everyone involved. Correcting such egregious errors can take a long time once they’ve been established, and we’ve been lucky that the efforts to correct this were as effective as they have been; we can’t say the same for Andrew Wakefield’s selfish and intentionally fraudulent efforts to discount the efficacy of vaccines. So seeing anyone attempting to whitewash the whole concept and find some manner to still support it is reprehensible.

* * *

Late in the writing of this post, I realized that the message within the original article seemed to vacillate more than a little, and then remembered that there was an editor credited too, which is not standard procedure. It occurs to me that it’s possible the article was altered in editing to change the slant or message, to make it more appealing to whatever audience was deemed the target – this happens fairly often. And if this is the case, does the blame for this whitewashing lie with the author, or the editor? Or still with both? Does it matter either way? The article remains ridiculously misleading and less than accurate.

* * * *

I had a disturbing number of tabs open during the writing of this, some of them getting linked into the text, but others deserve their own examination to better understand the issues at hand, such as:

Dr Elizabeth Loftus, whose name came up repeatedly while searching on repressed memory, since she was integral to the False Memory Syndrome Foundation study that played a large part in revealing the flaws in the concept. Dr Loftus has also produced countless works and papers regarding suggestibility and the malleable nature of memory.

‘What Psychologists Better Know About Recovered Memories: Research, Lawsuits, and the Pivotal Experiment’, another article on the topic.

‘Repressed Memory’, an article in Harvard magazine suggesting that the concept appears to be a recent cultural phenomenon rather than an affliction that should have left its mark throughout historical accounts.

Sleep paralysis. While judging the validity of anyone’s experience from a distance such as mine is irresponsible, it’s far more irresponsible to fail to take into account that some ‘recovered memory’ could be simply a common (and treatable) sleep disorder.

The Skeptoid episode on Betty and Barney Hill. Just a hint of some of the details that never get mentioned in the more credulous accounts of their experience – or, to be more accurate, their claims of their experience, since there’s barely a fragment of supporting evidence that anything actually happened.

‘The Eyes that Spoke’, an article in Skeptical Inquirer indicating that Barney Hill’s description of the aliens was remarkably similar to an Outer Limits episode that aired only two weeks before his hypnotherapy session.

‘A Study of Fantasy Proneness in the Thirteen Cases of Alleged Encounters in John Mack’s Abduction’, another article in Skeptical Inquirer evaluating the ‘abductees’ in Mack’s own book for how many fantasy-prone traits they displayed – you’ll be surprised to find the numbers are quite high.

‘Abductology Implodes’ [pdf file], an article by Robert Sheaffer for, again, Skeptical Inquirer on the abysmal presentation of three alien abduction specialists, John Mack among them, for an Abduction Study Conference at MIT in 1992. It also provides an account of the “gullibility and intellectual dishonesty” of Budd Hopkins, another of the researchers (and another name well known to those who have interest in the field) from Carol Rainey, his ex-wife and former assistant. While the link to her own article within that paper is dead, it can be found at this link [pdf file] instead – see “The Priests of High Strangeness” on page 11.

And overall, I will always recommend The Demon-Haunted World to any and every reader regardless, but especially those who find topics like this compelling.

But how? Part 31: What game is this?

First off, we’re not really answering any religious questions with this one, but asking a lot of them instead. Second, while I have tried (with varying success) to avoid going on the offensive with posts within this topic, this one is disregarding that restriction entirely, because we’re going hard on the offensive now. I’ve broached aspects of this in one form or another several times before, but now it’s time to get as many of them together into one place. So let’s ask, What kind of game is being played here?

We’ll start with a basic tenet of the abrahamic religions, one that applies only in related ways to a few of the others: the concept of eternal reward or punishment – heaven and hell. On the face of it, these seem to make sense: any individual is rewarded for good behavior and punished for bad, until you ask why such things are useful and implemented, which always comes down to influencing future behavior. Clean your room when asked and you get a new videogame; hit your brother with a pinecone and you lose internet privileges for two days. Whatever. The idea is to promote future behavior that’s beneficial. But let’s face it: if you don’t want something to happen again, boom, kill the child. Done deal. Horrifying, right? Not even a fraction as horrifying as tormenting someone eternally for whatever misdeeds they might have done. What purpose could that possibly serve? And isn’t it enormously sadistic, I mean, really sicko behavior? Further, what use is a perpetually expanding realm of either ultimately happy or ultimately miserable souls? Are they currency of some kind, and for what? Is the supreme being trying to score certain points?

There’s also the bare fact that we only recognize ‘good’ and ‘bad’ living conditions or circumstances in relation to those that aren’t. Is it even possible to be perpetually happy, and if so, how would you know? Is it, like, a constant orgasm? What do you do all day, anyway? Is there any longer such a thing as anticipation or dread? Are there goals? Are there, to be blunt, any of the aspects that we view as ‘life’ in the first place? If you think about it, everything that we experience right now (provided that you’re living while reading this, and not a ghost or something,) reflects on not just the biological needs and desires of a living organism, but a social and finite one at that. We try to get along, we try to perpetuate our genes, we strive to accomplish things and/or be remembered, we even enjoy food that fulfills the evolved desires for proper sustenance. None of those apply to perpetual souls, or indeed have any meaning to such. Even people that have retired from their careers, successful at the primary goal of their survival, end up finding ways to occupy their time – new hobbies, new goals, new challenges, because that’s how our minds work. Does traveling the world or carving more elaborate statues have any meaning in heaven? How about regretting taking the lord’s name in vain, or planting two different crops in the same furrow, while burning in hell?

Oh, the afterlife isn’t actually perpetual, but temporary, a stage before the rebirth cycle, like in hinduism? Sure, whatever; what was it you did in the past life that you now know you shouldn’t do again? You don’t remember? Well, that’s certainly functional. Worse, if you follow the ‘greater/lesser beings’ idea and are reborn a cockroach or something, what, exactly, are your choices for behavior now? “I probably shouldn’t have robbed all those people – I’ll be sure to be a good little cockroach now and – “… um, do what? Plant fucking trees? Maybe avoid eating or infecting human food? Sure, I’ll buy that; show me the reborn cockroaches that refuse to get into the breakfast cereal. Or perhaps the ones that recall just enough about their previous existences to hold still and let the shoe slam them back into the cycle to be reborn a step higher…

Which also leads to the question of what point a rebirth cycle has. I mean, it makes slightly more sense than the idea of perpetual good/bad afterlife – but only slightly. Again, where are we going with this – what’s the goal? Ultimate enlightenment, like in buddhism? Sure, what’s that? Does it mean omniscience? Fantastic – and what do we do with that? Knowledge is great, when it can be applied to improve something in our lives. But just to have it? And imagine trying to have a conversation between two omniscient people…

Underlying all of this is the basic tenet that religion overall is intended to guide us towards good behavior, which is fine and commendable, but ultimately unnecessary – we’re actually quite capable of determining what’s beneficial and detrimental, because it’s not actually hard at all. The biggest stumbling block is that we’re too often conflicted between what’s personally beneficial (or desirable) and what’s socially beneficial, or short-term versus long-term benefits, or the fact that winning some form of competition, real or imagined, usually does not actually equate with benefit in any form. It would be far easier if we couldn’t become this confused, since it’s mostly emotional/glandular, but again, that’s the way we were made, right? Though at least, the acceptance and active practice of religion is so adept at thwarting most of these ills, which is why we never, ever hear that religious people commit crimes, or take advantage of others, or engage in bloodshed, or [absolutely fucking huge list of social ills from a long history of religious persecution and abuse.] This is also why the cultures and countries that are the most religious are also the happiest and the most advanced. I’m sorry – did that sound like sarcasm? I do so try to avoid using that…

At this point, by the way, there’s never any shortage of people who protest that none of these heinous acts were committed by those who were really religious, but the nasty question is, could you tell that before they committed these acts? Because, you know, a hell of a lot of people could have used that guidance before the thefts, abuses, and murders occurred…

I’ve covered the inherent flaws in omniscience and omnipotence before, because they’re mutually exclusive (if you know everything, you already know what you’re about to do and thus have no power to do anything else,) but if we admit that maybe the scriptural chroniclers got that bit wrong and the supreme being isn’t ultimately knowledgeable or powerful, we still come back to the idea that we were created to be exactly this way – including our ability to make mistakes. Now, the idea of any master plan negates our agency entirely and eradicates the very concept of free will, because we’re only players in this plan, automatons. Or, okay, said supreme being is only watching to see what we do, because, why? What’s the point? They could create what they wanted, do anything they wanted, with or without our participation, so…? What could possibly motivate a being – a perpetual being, mind you – to accomplish anything? Can they be bored? Can they gain any kind of fulfillment when it’s virtually guaranteed that they’ll succeed in everything because they can make it so instantaneously? Not to mention that there’s evidence in nearly every form of scripture that said being is capricious and capable of changing their mind, but also (much more alarmingly,) often quite emotional and petulant when its creation performs exactly as it was created to! What kind of a mental case would I be if I made a toaster that could also blow the roof off, by design, and then got mad when it happens? I mean, we know why we have emotions, and still don’t have very good control of them, but why would an infinite being have or need such a thing?

Not to mention that, while this supreme being loves us, it sure has a wicked history of being quite vicious about it. Pardon me for referring once again to the abrahamic scriptures, since I’m far more familiar with those, but we have lots of accounts of god playing obvious favorites when it comes to conflicts and wars, including stopping the sun in the sky (and not the planet from turning,) to provide enough daylight hours for the chosen portions of its creation to slaughter the unchosen bits. Lovingly, of course. Or we have the expulsion from the garden of eden, because this being planted a tree right there and said, “Don’t eat the fruit,” (all-knowing, of course, that it was going to happen anyway,) and then in retribution, made its creation susceptible to sinning. Like they weren’t susceptible to it beforehand when they were tempted by the fruit? And what was the tree, we ask? The tree of knowledge, often given as the knowledge of good and evil. Seriously, what’s the scattered and nonsensical message here?

Notice, too, that all of the animals (in this case meaning non-human) were expelled too, and then learned to prey on one another, because, um, they were complicit in this act? Because god doesn’t love them and so they might as well suffer the consequences too? Because god just likes burdening mankind with guilt? This plays out again in the noachian flood, when the vast majority of the world population (human and non) gets slaughtered too, save for a breeding set, because I guess the act of creating the entire universe made god too tired to do something millions of times easier and target simply the sinners.

We’ll broaden our scope now, and point out that not one of the creation stories, from any of the hundreds of different religions that have peppered the Earth, manages to fit in even slightly with all of the evidence that we have (intermeshing and corroborating as well,) of how the sun and planets started, of how life evolved, how old things are, and so on and so on. Now, the trait of studying cause-and-effect, of figuring out just about every mystery that comes in front of our eyes, is deeply ingrained – and has proven to be enormously useful as well, responsible for every last advancement we’ve ever made as a species. But, this fails when it comes to understanding our origins? It’s, as countless religious pundits have maintained over the centuries, all misleading, “testing our faith,” as it is so often put? First off, why? Seems like a hell of a lot of trouble to go through for a simple test, not to mention that the supreme being already knows what’s in our heads, not to mention that it already knows how it will all play out (oh, wait, we have to ignore that omniscience angle.) And correct me if I’m wrong, but doesn’t this make it all a huge lie? I thought that was one of those bad things, but I suppose only for us to do, and not the perfect being. So what else is it lying about? I mean, we read or hear about scripture through the same eyes and ears that tell us about fossils and geological deposition and atomic forces, so where does that leave us?

The especially amusing bit about all of this is, religion is repeatedly claimed to provide “all of the answers,” and I cannot count the times that I’ve heard that science can tell us how, but religion tells us why. Feel free to pose any of the above questions to any religious authority that you like and see what kind of answers you actually receive. I’ve been on enough forums to see what they consist of, which is always a dodge along the lines of, “we’re not meant to know,” making the claim of providing answers rather hypocritical of course. I’ve also been on enough forums to hear the countless stories of people who, in their youth, posed such questions honestly and earnestly to their religious instructors, only to be chastised or punished for even asking (which naturally started that distrust and resentment rolling.) Even the explanation of what jesus’ sacrifice (is it a sacrifice if it was all planned, and he went to heaven afterward?) was supposed to provide for mankind somehow isn’t agreed upon by the devout; I’ve heard at least seven different variations personally, none of which made any sense.

Now, I can easily accept that cultures long past were attempting to provide answers, and that’s what most scripture (of any kind) consists of. It is, in fact, the only way that any of this makes sense at all. But to claim that this is how any supernatural being(s) communicated with mankind in order to guide, or inform, or enlighten us? It’s remarkably inept and pathetic, really, hardly the efforts of even average intelligence, much less an enormous intellect, or even one that created the game in the first place.

It’s often maintained that science doesn’t have all of the answers either, but this ignores the bare fact that it never proposed to in the first place, or even proposed anything at all; science is only a methodical process of learning, not any form of pronouncement nor any attempt to guide mankind. That said, using science to understand the origins of this planet and life thereon, as well as the odd and seemingly conflicting emotions that form our motivations, has produced thousands of times more answers than all religious ‘information’ put together, even as our understanding remains incomplete. Yet, this is a misleading comparison, since it’s not an “either/or” situation, a competition between two choices. Even without what science has demonstrated for us, religion does not serve to move us forward or explain our origins or actions or how to behave properly, especially if we take the myriad examples provided within scripture. The moment that anyone feels the motivation for it to make sense and “fit,” it is revealed as sorely lacking in such regards, and the only thing that can be obtained from it is self-indulgence – provided, of course, that one purposefully ignores all of the portions that fail to support such indulgence, or openly contradict it. But answers? Don’t be foolish.

Walkabout recommends: Hogfather

This has actually been recommended before, at about this time of year, and I’m a little remiss in not making this a full post earlier – two weeks ago or more – to give people a chance to get it on their own. You can still get it rushed to you in time for the new year, at least.

The movie in question is Hogfather, based on the novel of the same name by Terry Pratchett, an entry in the Discworld series. Now, this is a tall order in itself, since Pratchett’s writing doesn’t lend itself to easily making the jump over into screenplays, but one can be excused for being more worried that this was a serialization, of sorts, airing as a two-part episode on BBC television. I have to say, for converting a novel into film, this falls only behind Lord of the Rings in visualization, effort, and accuracy, while having a tiny fraction of the budget. Full credit goes to director Vadim Jean, but close on his heels is casting director Emma Style for putting together a fine collection of actors that fulfill their parts wonderfully. Getting Joss Ackland for Mustrum Ridcully (Archchancellor of Unseen University) was excellent, but Michelle Dockery (Downton Abbey) as Susan and Marc Warren as the quintessentially creepy Teatime are near-perfect for their parts. Perhaps the only weakness in the cast is Corporal Nobby Nobbs, because Pratchett’s vague descriptions of him are bound to provide the readers’ own views that are next-to-impossible to fulfill anyway, though Nicolas Tennant nonetheless does an entertaining version.

The reason that I say this is so late is that the Discworld has its own counterparts of our culture, and the Discworld counterpart to our christmas is Hogswatch, presided over by a jolly fat man in a red suit (driving a sleigh pulled by four boars,) the Hogfather. Only, there is a plot afoot to eradicate the Hogfather, which may have dire consequences for all of the Discworld. Now, the Hogfather is an anthropomorphic personification, in essence, imagination made real, just like the Tooth Fairy, the Soul Cake Duck, and Death itself, so one might ask how you could potentially eradicate such a ‘being,’ and Pratchett does a marvelous job of addressing this while opening up much bigger and more philosophical subjects such as the nature of belief. The film progresses through the gradual reveal of the plot machinations, and those with a short attention span might find themselves too confused too often, but patience is a keyword, because it all becomes not just clear in the end, but also an examination of human nature and cultural heritage. Don’t let me give the wrong impression, though, because the story remains paramount and its progression holds our attention while presenting us with the various parallel aspects of the Discworld.

There are two main caveats. The first is the runtime, which clocks in all told at 3 hours and 9 minutes, though it is helpfully broken into two parts and thus may be watched in two sittings as preferred – on DVD or streaming, of course, this may be broken down further. The second caveat is that it was produced in England and, despite this being a fictional universe, the dialect and patois are distinctly British – this is not out of place, given how Ankh-Morpork, the city where most of the story takes place, is remarkably similar to Victorian London, but it does present some challenges from the dialogue at times, perhaps most so from Ridcully (Ackland.)

Michelle Dockery’s portrayal of Susan is exemplary, gradually revealing the special properties that she is disinclined to acknowledge herself, but from her first appearance she displays her critical thinking abilities and no-nonsense approach to matters. Her involvement in the whole affair is perhaps not quite as reluctant as she maintains, and this does eventually bring her into contact with Death – though, not in the manner that you might imagine, and she has her own way of addressing such. Death (voiced wonderfully by Ian Richardson) plays a large and important role in this film, though not very often in fulfillment of its own duties.

The faculty of Unseen University (the Discworld’s premier college of wizardry) also plays a large role, especially when more personifications start to appear, and while Hogfather falls before the other novel in the series that I’ve reviewed, this one served as an integral aspect of the development of the faculty that culminated in that later novel. As I recall, there was more involvement of the wizards in the book than in this film adaptation, which could have been more entertaining in itself, but may also have been sidetracking a little too much, not to mention adding to an already appreciable runtime. ‘Hex’ is nicely depicted though, and there are a few easter eggs here and there for those who are paying attention. It was years before I discovered that Mr Sideney (Nigel Planer) was one of the main actors from The Young Ones. Pratchett himself makes a cameo appearance at the end, as well as being an integral part of the screenplay and production. Meanwhile, listen carefully to Teatime’s holiday wishes right at the end of Part One, and pay attention to the curtains near the very end of the film.

The special effects are not up to par with many other films of the era (this was produced in 2006, after all,) but are not bad in any way, especially for a made-for-TV film; I found the weakest aspect to be the sound effects, yet not in any way distracting. I would have liked a little more variety in the music as well, since the main ‘theme’ of the film repeats quite often, through the DVD menus as well, and it can stay in your head for days. On the other hand, the children that appear are more accurate than nearly every holiday movie out there, and two of the little girls in the department store are adorable, though your grandmother may not agree. In fact, the department store is likely my favorite scene throughout the film (well, scenes, since it is broken up among concurrent plot developments.)

Pratchett’s wry observations of culture and human nature come through from time to time, as evidenced by Death’s manservant Albert reminiscing about his underprivileged childhood, longing for an elaborate rocking horse in a store window:

Albert: Yes, I would have killed for that horse. But you know what? I still hung up my stocking on Hogswatch Eve. And you know why? ‘Cause I… had… hope. Yep. And the next morning, our dad had put in my stocking a little wooden horse that he carved his very own self.

Death: AH, AND THAT WAS WORTH MORE THAN ALL THE EXPENSIVE TOY HORSES IN THE WORLD.

Albert: No, ’cause you’re a selfish little bugger when you’re only seven. It’s only grownups that think like that.

Yet the real strength of Hogfather, and the reason why I’m glad this one of Pratchett’s novels was chosen for this treatment (a few others came along later, probably based on the response to this,) is that underneath it all, it examines how humans take our world around us and turn it into something else. I’ve posted about this before, but we have the tendency to almost dismiss what is in favor of what seems better to us, to the point that the facts of the matter can often be considered rude or ‘unfeeling.’ There are so many aspects of how we live in a fantasy and assiduously avoid reality that it’s almost disturbing, and while the story doesn’t decry this per se, it nonetheless hints at how often it occurs, and that it’s a facet of human nature. For better or for worse? Well, that depends on the ultimate effect, doesn’t it?

Hogfather has been a holiday staple in our household for years now, far better than much of the schmaltz that many people want to consider their holiday tradition, and perhaps you won’t adopt it in the same manner, but it’s at least worth a viewing to see an alternate depiction as well as a suspenseful crime story. And it’s a good introduction to the Discworld series of books, as well.

Chaos, revisited, part two

I said earlier that I had a few new thoughts about chaos theory, and so we delve back into this mess, but at least I’m giving you fair warning.

We refer back to The Forces of Chance article by Brian Klaas, and he uses the example of how the Secretary of War during World War II had vacationed in Kyoto, Japan many years previously, and his fond memories of that visit prompted him to campaign against it as a target for the atomic bombs, which was ultimately successful. But during the second actual bombing run, the aircrew was thwarted by obscuring clouds over the primary target of Kokura, forcing them onto their secondary target of Nagasaki; they nearly abandoned that target due to the same circumstances, but a break in the clouds allowed them to sight accurately and release destruction onto that city.

This illustrates two key points where we could say chaos theory reigned. First, the human element of emotional attachment to Kyoto, which kept it from becoming one of the targets despite its strategic value. And second, the vagaries of weather that blotted out sight of Kokura but also allowed Nagasaki to be targeted. Between them, they dictated the events that led to Japan’s surrender.

Kind of. For instance, targeting Kyoto instead might have made the Japanese surrender before a second bomb was dropped – or it might have tightened their resolve. Playing, “What if?” with events during WWII is a common, but ultimately meaningless, pastime among professional and amateur historians. There are alternative possibilities everywhere that might lead to ‘alternate histories,’ and people are fond of imagining how things might have turned out. In this way, we imagine that events are wildly variable and so deterministic physics is all wet, but it’s not true in any sense that we can actually prove.

To go back to the Secretary of War, it seems likely (in fact, highly probable given all that we know of physics) that his experiences during his trip were what created the emotional resolve to save Kyoto, and this could have happened no other way – at least, not without other input that would have swayed his emotions in a different direction; had he been mugged while there, it could easily have been different. So the past history of one individual can have a stunning effect – but this is true of any circumstances beholden to human vagaries; the attitudes that we all hold are shaped by our past experiences. It is also entirely possible that anyone who knew the Secretary of War fairly well could themselves have predicted what his reaction might be – we do this all the time with people we’re close to. So is this chaos, or simply complication, too many factors to collate and calculate? Is that all that chaos theory highlights?

Not to keep flogging the same points, but we’ll turn back to the weather example again. The problem with the weather is that it takes very little input, very little energy, to affect what direction it might take. And we know this, because all meteorologists provide their predictions with a margin of error; this is not calculating chaos theory into the mix, but the sheer enormity of the data needed to become more accurate, to reduce this margin of error. We often know when the conditions exist for a wildfire, but not where lightning will strike or what idiot will start an improper campfire in the middle of those conditions. The resulting wildfire then introduces more energy into the atmosphere that will behave exactly as physics dictates it would, even when we cannot possibly calculate how much energy is entering and exactly where. Yet when we think about how much this is happening all the time, it becomes rather astounding that the weather reports are anywhere near as accurate as they are. We might expect or hope that they were more accurate, thinking science is failing us if they’re not, but the handle we have on it is actually pretty impressive – and the averaging out of small effects works fairly well. Even when some storm or system starts down a ‘chaotic’ path, the moment we recognize it we’re already refining our predictions.

So there are two main bodies of thought that occurred to me. The first is: is it possible to identify the circumstances that may go ‘non-linear’ (or against expectations)? Are there key areas where a random variable is more likely to push things farther afield than expected? Initially, I was thinking that the lower the energy needed to effect change, the greater the chance of change occurring – not especially deep, I know, but it’s more along the lines of, if the ground temperature rises two degrees above expectations or predictions, is this more likely to create rising air that develops into a storm, or a front that will deflect the storm? In other words, are there key areas that we should be watching more closely? Though I can’t say that this isn’t already being done, or at least attempted. And this isn’t just with the weather but, say, with economics; what stocks are likely to trigger reactions from traders with small variations? For sociology, what observable factors from a culture tend to give indications of impending changes?

Klaas provided the example of the Arab Spring, begun when one merchant reached his breaking point with an oppressive government and immolated himself, touching off a backlash of reaction, frustration, and revolution. Though quite frankly, anyone familiar with the cultures involved likely knew they were a powder keg. And so we reach thought number two, which is: how often are human emotions tied into specific examples of chaos theory? In this case, not so much the revolutionaries themselves, but more the government attitudes that, if nothing has happened so far, people are okay with it, and nothing will continue to happen. This type of thinking is enormous within humans, and responsible for countless issues that arise. It’s harder and more expensive to plan for contingencies, so we’ll ignore the possibilities and hope that it works out – and every moment that it does is reinforcement for this behavior. When the breaking point is reached, it’s not that this wasn’t predicted, it’s that such predictions were ignored or misunderstood.

This has happened countless times in this country with hurricanes. There’s a range of predictions, often depicted as a cone representing the possible paths of the hurricane – less defined the further away in the future it is. Most of the time, the path falls within the cone, though less often in the center of the prediction; there is also a range of predictions of wind strength as well, with largely the same results. Yet countless people assume that the ‘worst case’ scenario isn’t going to come to pass, or they’re close enough to the edge of the cone that they won’t feel the effects too strongly, or even that it’s too much of a hassle to vacate the area – in essence, they’re betting against the predictions. And for every hurricane, we hear about the people who lost their bet, many of whom complain that they “didn’t expect this.” While we may feel inclined to blame this on imprecise predictions and the failure of physical science to pin things down precisely, pointing to it as a manifestation of chaos theory, it’s more a manifestation of relying on ‘averages’ and past experience, especially those who heard of the ‘worst case’ scenarios that didn’t come to pass and consider this a failed prediction, attempting to produce their own pattern of expectations.

So once again, the human element is a major factor – and many of those in authority recognize this and become even more adamant that evacuations are mandatory. They know that, “Better safe than sorry,” will too often take a backseat to, “That was a waste of time.”

This is not to say that every place where we might point to a manifestation of chaos theory is due to human foibles and behavior, but we should be paying attention to those areas where it does, and realize that this is more a sociological problem than a mathematical one. And again, even if we have utter faith in the value and strength of chaos theory, it doesn’t permit us to improve things – not until it can predict results more accurately than the meteorologists (or economists, or sociologists, et cetera.)

So where does this leave us? Before this article, I had attempted to understand chaos theory with James Gleick’s book, and found it not just uninformative, but confusing and willfully misrepresenting science and determinism by a wide margin. Klaas’ article was significantly better at presenting the theory as instances where we cannot predict results, but this hardly moves us forward in any way, does it? And in fact, I think it may actually do a disservice in giving a specific and official-sounding name to circumstances where, let’s face it, the primary reason that we cannot predict results is that the data are too voluminous. How, exactly, is that a ‘theory?’ Does it predict where and when the ‘law of averages’ will fail to average out? Does it allow us to know how to control this, or anything, really? To all appearances, when we have any circumstances where we might say, “Crap, that didn’t go as expected,” mathematicians may nod knowingly at each other and say, “Chaos theory.” But what’s the value in that, especially when they cannot let us know ahead of time?

Sorry, I’m just going to stick to, “Too much data needed for accuracy,” and not bother with other names for it.

Chaos, revisited, part one

I almost felt obligated to do this follow-up, considering how badly I lambasted the book that purported to explain the concept, because it seems that it was, at least to a degree, more the author’s dismal attempt to explain it than the concept itself.

We’re talking about chaos theory, and it does not bode well that I sought out multiple sources to try and get a grip on it and failed; however, this recent article did more for my understanding than all of those sources combined, and did so in a way that it fit well into the other disciplines of science. That article is The Forces of Chance by Brian Klaas who is, of all things, an associate professor in global politics at University College London – not (by vocation) a mathematician, though how far removed this is I cannot say. But all credit to him for pinning the idea down so well, with perhaps some caveats in there, which we’ll get to.

Basic definition: chaos theory describes how large systems may depart from expected results in a non-linear manner. Well, that’s about worthless without exposition, but to add the necessary element, in some cases very small variations can lead to quite broad effects, and we’ll use the old standby of the weather to help explain it. Weather is hard to predict, even though we understand the mechanics of it just fine: air warms and expands, picks up moisture that can make clouds, and so on. But predicting it remains fraught with wide error margins, because a little extra warmth here, a bit of smoke there, and other such factors, can cause a storm to become raging or peter out entirely.

A quick note: physics is deterministic, meaning that if we know the energy that goes in, we know the effect that comes out. Full stop. The only place where this falls apart is on the subatomic level, and countless experiments at this level show that it rarely ever can rise above it to have the slightest effect at all; there’s more than a suspicion that there’s at least another law of physics governing this subatomic ‘randomness’ to make that deterministic too. What this all means is, given enough information about the conditions of any given system (for instance a cold front,) we can predict what it will do. The key factor in there is given enough information, which in many cases is far more than we have any way of gathering or collating. How much of that smoke from a volcano will reflect sunlight and drop regional temperatures, versus how high is the humidity, where the smoke particles themselves form nuclei for raindrops and get quickly carried back down to the ground?

Now, the ‘law’ of averages (we really should stop using that term,) indicates that small variations tend to balance out: a little bit this way, a little bit in the opposite way, and the effect largely cancels. What chaos theory addresses are the circumstances where that averaging fails, and a small variation leads the physical effects down a different path (this is where non-linear is a bit misleading, because the system remains linear, just departing from the average or even from expectations.) My example is following a complicated set of driving directions, but with right and left inverted in just one step – you might still get fairly close to your destination, or you might go incredibly far out of your way.

Klaas provides another example that highlights a difference in factors, by recounting a personal, emotional bond that affected the choice of targets for the atomic bombs dropped on Japan during World War II – and at the same time, a chance weather event. Both of these – human emotions and weather – are inherently chaotic, defying predictions and expectations. And Klaas largely addresses the failures of the social sciences (politics, economics, sociology, psychology) to effectively predict outcomes, all of which rely on human emotions to a large extent. Does this mean humans are chaotic? Well, certainly that they’re complicated, while not in any way defying the laws of physics or determinism. For instance, I don’t like the color red, and who knows why this might be? It could potentially be due to my astigmatism, and how the lenses in my eyes don’t focus red as well as other colors, but the result is that, in a decision that depends on whether I choose red or blue, blue is going to win most often – even though, on average, humans prefer red over blue. You could only predict this if you knew this trait about me (and now you are so armed.)

This overall dependence on wildly variable human input, however, is why there’s a distinction of ‘social’ sciences versus ‘physical,’ and you can argue – I would, anyway – that the word ‘science’ shouldn’t be applied to the former, because there have never been any results in such fields that tell us we have this down to a science. Here, chaos theory has due application – to a degree, anyway.

Because chaos theory doesn’t determine when a system will depart from expectations or ‘become non-linear,’ nor does it provide a method to prevent this – it’s just something we can point to after it happens, a name we can apply. In scientific terms, a theory is an explanation for the known facts – a strong theory predicts results, given the right factors, and this is what chaos theory does not do. And this is also where the article was ultimately disappointing, because while it showed how and where chaos might erupt, it didn’t present any advancements made since the term was first coined – I was kind of hoping that, given our enormous computing power and decades of observations, someone might be on track to finding the key factors that would help predict when this non-linearity could appear, but so far, we appear to have nothing.

The aforeblasted book by James Gleick was notorious for accusing scientists of not accepting chaos theory, though it never became clear where this was taking place, nor what exactly was supposed to be done about it, and Klaas makes the same error, though to a lesser degree. For instance:

The problem is that social scientists don’t seem to know how to incorporate the nonlinearity of chaos.

I’ll bite: how do you incorporate chaos theory? If it can arise at any time, especially in certain disciplines, what are we supposed to do about it? Simply shrug and say, “I dunno,” and then go play video games? Giving a name to unpredictability isn’t exactly a huge accomplishment – we’d embraced unpredictability before we had language. Ignorance is our default state; our goal is to reduce that as much as possible.

Klaas also targets natural selection (which of course raised my hackles,) but this is more of a straw man argument than anything informative. He shows that genetic variations are largely random, which is perfectly true; the problem is, virtually no one claimed otherwise, and the key part in there is natural selection. Evolution is how the environment favors the variations that best support survival and reproduction, but it has always depended on which variations arise, and many of the weird things we see in species exist because an optimal variation never appeared, so something else that could barely fit the bill was adapted instead. I hate to tell him this, but all of this was known before the term chaos theory was coined (and didn’t leap forward after that, either.)

I still have to give the article credit in that it never attempts to deny or misuse determinism, never implies that our knowledge of physics is somehow flawed, which is certainly the overriding impression that I kept receiving from Gleick’s book. Nor does it attempt to elevate chaos theory into something remarkable and innovative, though Klaas does seem to believe that the social sciences cannot recognize it; I have seen no direct examples of this myself, though I never put stock into economics and poli-sci and don’t know how many people do.

But what I will say is that the article sparked a couple of ideas, and rather than make this post inordinately long, I’ll go into them a little later on. Lucky us, eh?

Like, Wow!, man!

This one popped up several weeks ago, when I was too busy to do it justice, so I set it aside for when I had plenty of time to write it up properly, which appears to be now. There are a decent number of details and thus it will take some explaining, so get comfy as we set off on this journey.

There used to be a massive radio telescope at Ohio State University, dubbed the Big Ear, that monitored a portion of the sky each night for any signals within a fairly broad set of wavelengths – this was largely aimed towards intercepting any potential signals of extraterrestrial life, and served as among the first efforts at SETI (Search for Extra-Terrestrial Intelligence.) And on August 15, 1977, the telescope recorded a remarkably strong signal from a relatively barren patch of sky. This wasn’t discovered for a couple of days since the printouts of the recordings had to be examined personally for any anomalies, but when it was, the section that showed peak reception and signal strength was circled and appended, “Wow!” by Jerry R. Ehman, the astronomer evaluating the data. And it was certainly significant, many times stronger than any signal received before and tracing a bell curve of signal strength that helped pinpoint the location in the sky, as well as helping confirm that it was not a stray, fluke signal.

Some background: Radio telescopes aren’t tubes and lenses like optical telescopes, but antennae of various types; in Big Ear’s case, it was two massive mesh arrays that reflected and bounced the radio waves down to two receivers. The whole affair was fixed in place and could not be aimed, but it had a broad focus range, and the rotation of the Earth served to sweep its observation path across the sky. The signal lasted for 72 seconds, which corresponds closely with how long any given point source would remain within reception range as the planet rotated.
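That 72-second figure checks out with simple geometry: a fixed beam watches the sky drift past at the Earth’s sidereal rotation rate – 360 degrees per sidereal day, scaled by the cosine of the source’s declination. A quick back-of-envelope sketch (the declination handling here is my own simplification, not anything from the Big Ear documentation):

```python
import math

SIDEREAL_DAY = 86164.1  # seconds for one rotation relative to the stars

def transit_width_deg(transit_seconds, declination_deg=0.0):
    """Apparent width of sky (in degrees) that crosses a fixed beam
    during the given transit time."""
    drift_rate = 360.0 / SIDEREAL_DAY  # degrees per second on the celestial equator
    return transit_seconds * drift_rate * math.cos(math.radians(declination_deg))

print(round(transit_width_deg(72), 3))  # about 0.3 degrees at the equator
```

So a 72-second transit implies a beam roughly a third of a degree wide along that axis – narrow enough to pin down a patch of sky, far too broad to single out one source within it.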

Mostly, anyway. The way the recording worked, the telescope would take ten seconds of signals received, spend two seconds averaging out the signal strength from that period, and record that average as a single character on the readout. So technically, there were six readings of ten seconds each, averaged out, allowing for a little slop in the signal strength and the start and end periods; this also prevented recording any modulated or patterned signal with short periods. For instance, had a Morse code style signal been received, all of the dots and dashes within that ten seconds would have been simply averaged out to one character of signal strength, roughly half of the actual signal strength, because all of the pauses between dot and dash would be averaged into the final tally as well. However, the curve traced by the rise and fall of signal strength was in accord with a steady, constant signal, getting ‘louder’ and ‘quieter’ as it passed into and out of the focus of the antenna array.
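That averaging step is easy to demonstrate. Here’s a sketch (the sample values are invented for illustration) of how a keyed, Morse-like signal and a steady carrier come out of a ten-second averaging window:

```python
# Each ten-second window of samples is collapsed to one averaged value,
# mimicking how Big Ear's recorder condensed the incoming signal.
def window_averages(samples, window=10):
    return [sum(samples[i:i + window]) / window
            for i in range(0, len(samples), window)]

# Hypothetical one-second samples: an on/off keyed signal at strength 30,
# transmitting half the time, versus a steady carrier at the same strength.
morse = [30, 30, 0, 30, 0, 0, 30, 30, 0, 0] * 6
steady = [30] * 60

print(window_averages(morse))   # [15.0, 15.0, 15.0, 15.0, 15.0, 15.0]
print(window_averages(steady))  # [30.0, 30.0, 30.0, 30.0, 30.0, 30.0]
```

All modulation inside a window vanishes, and the keyed signal records at half strength – the ‘roughly half’ dilution in a nutshell.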

printout of "Wow!" signal from Ohio State University
Credit: Big Ear Radio Observatory and North American AstroPhysical Observatory (NAAPO).
[There are always illustrations of this in any given article about the signal, showing a string of characters, “6EQUJ5.” The signal strength was recorded as simple digits representing multiples of the standard deviation above the ‘background baseline,’ one through nine, and then letters if the signal required more than that (A for 10, B for 11, and so on.) Reaching “U” meant the signal was 30-31 standard deviations above the background baseline. Examining the other portions of the printout, most readings show from 1 to 3, with two signals reaching 6 and 7, and no other letters anywhere (nothing better than 10 deviations above background.) In short, it was way out of place for typical celestial receptions.]
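For the curious, that character scheme is trivial to reproduce – intensities (in deviations above background, truncated to whole numbers) become digits through nine, then letters from A onward. A sketch, using the mapping as commonly described in accounts of the signal:

```python
def intensity_char(sigma):
    """Encode signal strength (standard deviations above background)
    as Big Ear's single-character printout code."""
    n = int(sigma)          # truncate to a whole number of deviations
    if n < 1:
        return ' '          # at or near the baseline: printed as blank
    if n <= 9:
        return str(n)       # 1 through 9 printed as digits
    return chr(ord('A') + n - 10)  # 10 -> A, 11 -> B, ... 30 -> U

# The famous sequence, with the intensities each character decodes back to:
print(''.join(intensity_char(s) for s in [6, 14, 26, 30, 19, 5]))  # 6EQUJ5
```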

The frequency of the signal was 1420.4556 MHz, which is notably close to the hydrogen line – the frequency at which neutral hydrogen atoms naturally emit radio waves, for instance when energized by an outside source such as a star. The problem was, when the location of the source was plotted, there was nothing within range that should have been able to produce such a signal, and in fact, very little there at all. The signal was also transient: there were two receivers for the telescope, aimed slightly differently, which should have captured the signal about three minutes apart – but only one did (and the telescope was not designed to differentiate which one it was.) So at the very least, the signal stopped before the second receiver aimed at the same section of sky, or started after the first did. Moreover, the same section of sky was monitored on subsequent passes – and later on by much more sensitive telescopes. And in the nearly fifty years since, no comparable signal has been detected, from any location. The Wow! signal stands alone as a peculiar anomaly.

In 2017, a potential explanation was put forth, in the form of two comets that converged on that portion of the sky, their combined emissions boosting the output at the hydrogen frequency. The popular media ran with this, but it didn’t take long to determine that the comets did not pass through that portion of the sky at the time the signal was received, nor was there any known way for them to emit that strong a signal.

The question that’s been asked repeatedly is, might this actually indicate extraterrestrial intelligence, an attempt to communicate? Arguments have been made that the frequency could be indicative of this, since it would demonstrate that the transmitting species was aware of the properties and penetrating value of that frequency. But at the same time, hydrogen is the most abundant element in the universe, and its emissions are fully to be expected to be received quite often – just, not at that strength. It’s also a restricted frequency for civilian use, so unlikely to be a stray signal or reflection from a terrestrial source (though the military is not so restricted.)

Missing are any factors that would be more convincing of an attempt to communicate (or even just an intercepted transmission not intended for us): patterns, variations, or modulations, any repetition, any further examples, and so on. One long beep, one time only, doesn’t exactly say anything, except perhaps that an alien driver stalled too long at a cosmic traffic light.

The question that’s been in my mind for years has been, what are the chances this was a simple hoax? Not on the part of the team at Ohio State University, but perhaps by some knowledgeable students? The antenna array probably wasn’t hard to access, nor would it be difficult to build a lightweight radio to transmit on that frequency, perhaps carried in a light plane or even just floated from a balloon. Since the antenna was huge, it could pick up very faint signals, and the wattage of the transmission would not need to be high at all. The signal only being captured by one of the two receivers bolsters this idea slightly. I could easily see this occurring. Except… if you’re going to go to all that trouble, why pick a frequency so close to hydrogen emissions? Why such a short and simple ‘beep?’ Any elaboration on the idea would probably provoke a better response.

However, a couple of months ago, a paper was released that provided a potential explanation. Examining archive data from the much larger and more sensitive Arecibo radio telescope (which had collapsed a few years earlier,) astronomers found several other signals of the same type, albeit much weaker. These signals were generated simply by clouds of hydrogen excited by external energy sources, and it was hypothesized that the Wow! signal was the same, only produced by a much stronger source exciting a hydrogen cloud to unprecedented levels. The hydrogen cloud has actually been identified, just not the energy source, but given the nature of the signal, the close fit with the frequency, and the presence of many other weaker examples, the evidence is weighing distinctly in favor of this explanation.

Now we get to the critical examination of this all. Without the corroborating detail of the energy source, the explanation isn’t conclusive, though we’d feel more confident if and when the phenomenon is ever seen again in some other circumstances – a one-off instance seems unlikely, at least. Then again, the nature of the phenomenon is that it’s transient and short-lived, and a radio telescope has to be pointed directly at it when it occurs, so we shouldn’t expect to see it often either.

As for it being a deliberate signal from some extra-terrestrial intelligence? Well, again, it’s just a ‘beep’ in an extremely common natural wavelength, and just like the hoax possibility above, we’d expect to see something more elaborate, or repetition or a pattern or anything. Even singular beeps that went up the spectrum through the wavelengths of hydrogen and helium and oxygen or whatever, something that is highly unlikely to progress by any natural means but would be very indicative of intelligence, would be far better (and we’ve speculated ourselves on doing the exact same thing if we decide to reach out, a message that says, “Hey, this is not random nor probable,” without any language at all.)

Occam’s Razor comes into play. The most recently proposed explanation involves the fewest unknowns and presents examples of weaker versions of the same kind of signal. It has the fewest questions that need to be answered (“What was the energy source?”) and does not require anything that we have no evidence of at all, like extra-terrestrial intelligence. It serves to answer the question for now. Maybe it’ll be overturned in the future – but it’s far more likely that further support for it will appear instead. All we can do is watch for further developments, but that’s how science works.
