Put down the Dymo, Avery

Several recent posts and articles have highlighted a problem that I’ve seen far too many times from, quite frankly, people who should probably know better. It’s rampant within philosophy, and unfortunately, there are still too many who think philosophy is something to be revered, so it tends to cross over into other disciplines as well. For lack of a better way of describing it right now (which will be ironic as soon as I actually get around to mentioning what the hell I’m talking about), I’m going to call it the Labeling Problem.

Basic premise: We are a species that likes definite answers. In the face of vague, ephemeral feelings or assumptions about how things work, we immediately want to apply a label to them: “consciousness” and “free will,” “socialism” and “dualism,” “science” and “morality.” This isn’t exactly a bad thing – our language would be even more tortured without easy terms to apply to complicated concepts – but each of those terms above, and many more besides, is so poorly defined that the moment anyone uses it, someone else has an entirely different idea of what is meant. Very frequently, endless discussions take place because no one seems capable of recognizing that they’re not working from the same premise.

The last two examples, “science” and “morality,” are the ones I’m going to highlight here. Long ago I settled on a basic definition of science – “a methodical process of learning” – and I have yet to see where this does not apply. That there is an alternate usage along the lines of “the body of knowledge gained from this process” – making science a thing rather than a function – only demonstrates why labels are difficult sometimes; make up another word, for dog’s sake! But because the definition of science floats around a bit, there are those who feel that science requires bubbling retorts and lab results, electronic machinery and microscopes, and this then allows them to feel that science should not, and cannot, be used in realms such as morality. This curious perspective is reflected in the “is/ought” dilemma.

While there are myriad aspects of this dilemma, the overall idea is that science can tell us what is, the bare facts of anything, but shouldn’t/can’t tell us what actions we ought to take in response. Science can tell us that animals can feel pain, but not whether it’s good or bad to kill them for food. This is true enough, but then again, no other pursuit fares any better, when it comes right down to it – and even demonstrating the failures of them all takes, believe it or not, science. To know whether one pursuit is more functional than another, you need empirical data, a body of information to provide something other than vague guesswork and emotional reactions. One person may not like causing animals pain, while another enjoys hunting, so there needs to be something more than just personal reactions to serve as a guideline.

Philosophy leaps heroically into the fray here, or so most philosophers seem to believe. The ‘ought’ issue can be decided with long debates! Sometimes, perhaps – it’s true that discussion of salient points or varying perspectives can cause people to change their minds, and I’d be in rampant denial if I tried to claim I don’t use this throughout the blog, not least in this very post. Demonstrating that this really is a better method of approaching such subjects, though, still requires an accurate dataset. Advertisers are quite well aware that compelling arguments don’t reach people one-tenth as effectively as pretty faces and appeals to base emotions (I’m all out of luck on that first part, I’m afraid). So, is the philosophical approach effective? Well, those who like philosophy will tell you that it is.

And there we have the first inkling of an underlying issue. How we personally feel about something is paramount in the decisions we make and the pursuits we tackle. We consider morality an important pursuit, but why? Because it’s a part of us as a species, a mental desire to – to do what? What exactly is the goal proposed by these vague feelings within us?

Well, I feel perfectly comfortable saying that there isn’t one, because these feelings are a byproduct of natural selection, an emergent property that simply worked a little better than not having it – there’s no goal involved, any more than water has a goal to run downhill. It simply occurred. Which also ties in with the problem of labeling it effectively. Our desire for “morality” is most likely a desire to maintain a cohesive tribal unit, since as a species we survive better in groups. Morality, after all, revolves around how we deal with others, and whether some action is considered “proper” more by them than by our individual selves. But note that this does not apply to everyone else, only to those with whom we have a certain connection. The dividing line between our ‘tribe’ and outsiders is arbitrary, very often hinging on whether others try to do something bad to us. If our family survives, our genes pass on to offspring, which is the only way natural selection can work – but the survival of the tribe is often tied in with survival of the family, and the ‘tribe’ may end up extending across the continent, depending on who threatens us. It is exceptionally muddy, because it is exceptionally vague.

And from these vague feelings of protection, survival, and cohesiveness, we try to develop a rigorous definition of morality – at least in part because we don’t like vagueness, but want absolutes instead (likely another emergent property). Mind you, it’s science that informs us how these feelings kick in, and explains why we even have them – religion, philosophy, and every other pursuit throughout the history of mankind all attempted explanations, and all got it wrong. And we didn’t find them wrong by debate, assertion, or epiphany, but by comparing the data and performing experiments and tests. We see how altruism has some notable effects in groups of chimpanzees, and what happens when prides of lions cross paths – very often, it’s not a matter of other species not possessing traits that we have, but of possessing them to a different level or effect.

So we come to goals, what we want morality to accomplish, and where we think it’s lacking or ineffective. But, ineffective at what, again? That question, and its answer, depend largely on how we feel about it. The emotional impetus that we define as ‘moral desire’ is what makes us dissatisfied with some state of affairs, and provokes us to improve things. From a rational standpoint, it’s hard to find anything wrong with such desires, so we’re probably safe in indulging them. And we realize that it’s not a rule we’re following, not a definition we’re trying to fit into, but a reaction to something we find unacceptable: crime, poverty, war, class inequities, slavery, abuse, even poor parenting. There’s no way to list them all – we don’t know how to add to the list until we think of a situation and find out how it makes us feel.

Obviously, making a definitive set of rules or guidelines presents difficulties, because not everyone feels the same way. Yet we can always select a rational goal, such as eradicating world hunger, and realize that this will appease the inner turmoil among a large number of people. The emotions are goads towards behavior – not specific behaviors, mind you, and a lot of things may work to answer the internal call. So it’s not a definitive method of being moral that we need, but a way to recognize the desire for this and answer that desire effectively. We can only be driven by a goal if we already find that the goal answers the internal drives.

Let me provide an example. Human overpopulation is already a serious issue in numerous areas of our planet, and promises to be a major issue worldwide in the next century. So, pick any six people that you know, and tell them they cannot have babies, ever, for the good of the planet. See how many of them absolutely lose their shit. But it’s a rational goal, isn’t it? Yet that really doesn’t matter when it’s swimming upstream against the internal drive to reproduce. What might work is to convince them, with lots of evidence and detail, that their child or grandchild will be among those who starve to death, or succumb to pandemics, or otherwise meet an undesirable fate. Or perhaps, that there are offsets that can be performed, actions that can be taken that provide a net positive effect against the negative impact of a child. While doing this, of course, there cannot be the slightest hint that someone else will be free from having to sacrifice their desires, or it becomes a class duel, and victimhood takes hold. Human interactions are complicated…

Here’s what’s funny, as a brief aside. Emotional reactions are often expressed, openly or just internally, as rational decisions – we like to believe that we consider things, rather than follow some automatic response, and this often results in some astounding rationalizations that fall far from actual rationality (just refer to any political discussion for an example). But merely mentioning that reproduction is a base drive of our species can prod someone towards disregarding the emotional reaction and commencing real consideration. Isn’t that great?

This may sound like philosophical debate, and in a way it is; such debates often revolve around finding the particular perspective or emotional appeal that causes someone to change their stance on a topic. Randomly attempting arguments is far less effective than specifically targeting someone’s base desires, however, and often we need to think like the advertiser and find the hot button. The desire to reproduce does not come from philosophy, or religious instruction, or even rational consideration, but is a simple evolved trait, and we wouldn’t know this without having applied the methods of science to the issue.

What this comes down to isn’t the ridiculous question of whether ‘science’ can dictate ‘morality,’ but how we actually determine what is acceptable to us as a species, and how we can channel our evolved traits towards something we collectively approve of. It requires discarding the age-old assumptions, the misleading labels, and taking the time to recognize what’s really at work – and yes, that’s what science can tell us. We end up leaving behind the ‘ought’ concept, because no one can adequately define ‘ought’ beyond what we want; instead, we can seek effective methods of fulfilling desires in ways that do not introduce other conflicts. Perhaps no less complicated than the interminable discussions before, but almost certainly much more usefully aimed.

And the only way we’ll know for sure is to quantify the results ;-)