Science of Good and Evil Page 20
After existentialism I tried utilitarianism, based on Jeremy Bentham’s principle of the “greatest happiness for the greatest number.” Specifically, I found his quantitative utilitarianism attractive because of its scientistic approach in attempting a type of hedonic calculus where one can quantify ethical decisions. By “hedonism” Bentham did not mean a simple pleasure principle where, in modern parlance, “if it feels good, do it.” In fact, Bentham specified “seven circumstances” by which “the value of a pleasure or a pain is considered”:
1. Purity—“The chance it has of not being followed by sensations of the opposite kind.”
2. Intensity—The strength, force, or power of the pleasure.
3. Propinquity—The proximity in time or place of the pleasure.
4. Certainty—The sureness of the pleasure.
5. Fecundity—“The chance it has of being followed by sensations of the same kind.”
6. Extent—“The number of persons to whom it extends; or (in other words) who are affected by it.”
7. Duration—The length of time the pleasure will last.5
As a pedagogical heuristic, I once presented the table in figure 22 to my introductory psychology course to draw students into seeing the problem of assigning actual numbers to these seven values (the boxes were blank) when making a rather simple choice between spending money on a good meal, a good date (with the possibility but not the certainty of sex), or a good book. The values in the boxes are my own (I was single at the time).
According to Bentham, once the figures are assigned, “Sum up all the values of all the pleasures on the one side, and those of all the pains on the other. The balance, if it be on the side of pleasure, will give the good tendency of the act upon the whole, with respect to the interests of that individual person; if on the side of pain, the bad tendency of it upon the whole.”6 In my example the book wins out over the meal or date. Of course, this is just my opinion, the application of the hedonic calculus to one person. To apply the principle to society as a whole, Bentham says, we must:
Take an account of the number of persons whose interests appear to be concerned; and repeat the above process with respect to each. Sum up the numbers expressive of the degrees of good tendency, which the act has, with respect to each individual, in regard to whom the tendency of it is good upon the whole: do this again with respect to each individual, in regard to whom the tendency of it is bad upon the whole. Take the balance; which, if on the side of pleasure, will give the general good tendency of the act, with respect to the total number or community of individuals concerned; if on the side of pain, the general evil tendency, with respect to the same community.7
Figure 22. Jeremy Bentham’s Hedonic Calculus
Setting aside the obvious impossibility of performing this calculation on a daily basis and still managing to leave the house, it is clear that you can cook the numbers to make the sum come out almost any way you like. Doing this on a societal level is simply impossible.
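Bentham’s two-step procedure reduces to straightforward arithmetic, which a few lines of code make plain. This is only an illustration of the summation, not a serious decision tool; every score below is an invented stand-in for the blank boxes of figure 22.

```python
# A minimal sketch of Bentham's hedonic calculus. All scores are
# invented for illustration; Bentham specifies no units.

def act_value(pleasures, pains):
    """Balance of pleasure over pain for one person (Bentham's step one)."""
    return sum(pleasures) - sum(pains)

def community_value(per_person):
    """Sum the individual balances across everyone affected (step two)."""
    return sum(act_value(pl, pa) for pl, pa in per_person)

# Hypothetical scores for the seven circumstances (purity, intensity,
# propinquity, certainty, fecundity, extent, duration) for three people.
community = [
    ([5, 7, 6, 8, 4, 3, 6], [2, 1]),   # person 1: pleasures, pains
    ([3, 4, 5, 6, 2, 3, 4], [5, 4]),   # person 2
    ([6, 6, 7, 7, 5, 4, 5], [1, 2]),   # person 3
]

balance = community_value(community)
print("good tendency" if balance > 0 else "evil tendency", balance)
```

Nothing in the procedure constrains the inputs, which is precisely the cooking-the-numbers problem: change any score and the “general good tendency” changes with it.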
Utilitarianism, particularly in the form of calculating the greatest good for the greatest number as if one were computing an orbital trajectory of a planetary body, is very much grounded in pre-twentieth-century psychological, social, and economic theory that presumed humans (at least Western industrial peoples) to be rational beings who make choice calculations along the lines of a double-entry bookkeeper. (Utilitarians even designated units of pleasure as “hedons” and units of displeasure as “dolors”—in the manner of physicists measuring photons and electrons—and debated among themselves whether we should try to maximize utility or, as satisficing utilitarians held, should only try to produce just enough utility to satisfy everyone minimally.) Moral choices, then, were simply a matter of looking at the bottom line.
Thanks to extensive interdisciplinary research by psychologists, sociologists, and economists over the past several decades, however, we now know that humans are emotional and intuitive decision makers subject to the considerable whims of subjective feelings, social trends, mass movements, and base urges. We are rational at times, but we are also irrational, the latter probably a lot more than we care to consider. As we shall see at the end of this chapter, moral reason must be balanced with moral intuition.
These are just a few of the ethical systems that appealed to me, but there are many others for the student of ethics and morality to sample. For example: consequentialism, as the name implies, holds that the consequences of an action determine whether it is right or wrong. Contractarianism posits that contractual arrangements between moral agents establish what is right and wrong, where violations of agreements are immoral. Deontology claims that one’s duty (deon is Greek for duty) is the criterion by which actions should be judged moral or immoral. Emotivism holds that moral judgments of right or wrong behavior are a function of the positive or negative feelings evoked by the behavior. Ethical egoism (or psychological egoism) states that people behave in their own self-interest, and thus even apparently altruistic behavior is really motivated by selfish ends. Moral isolationism, a form of moral relativism, argues that we ought to be morally concerned only with those in our immediate group, “isolating” those outside our group as not relevant to our moral judgments. Natural law theory states that there is a natural order to the human condition, that the natural order is good, and that the rightness or wrongness of an action should therefore be judged by whether or not it violates the natural order of things. Nihilism denies that there is any truth to be discovered, particularly in the moral realm. Particularity contrasts with universality and impartiality, holding that we owe moral preference to the particular people who are morally relevant to us. Pluralism (an approach very much embraced in this book) holds that there are multiple perspectives to consider in evaluating a moral issue, and that no one ethical theory can explain all moral and immoral behavior. Subjectivism is an extreme form of relativism, holding that moral values are relative to the individual’s subjective state alone and cannot be evaluated in any larger social or cultural context.
Encyclopedias of philosophy and morality abound in an alphabet soup of ethical theories and moral labels, and library shelves are sagging with volumes on ethical theories purporting to present the reader with valid and viable criteria of right and wrong human action. What are we to make of all these theories?
Provisional Ethics
If we are going to try to apply the methods of science to thinking about moral issues and ethical systems, here is the problem as I see it: as soon as one makes a moral decision—an action deemed right or wrong—it implies that there is a standard of right versus wrong that can be applied in other situations, to other people, in other cultures (much as one might apply the laws of planetary geology to planets other than our own). But if that were the case, then why is that same standard not obvious and in effect in all cultures (just as, in the above analogy, geological forces operate in the same manner on all planets)? Instead, observation reveals many such systems, most of which claim to have found the royal road to Truth and all of which differ in degrees significant enough that they cannot be reconciled (as if gravity operated on some planets but not others). If there is no absolute moral standard and instead only relative values, can we realistically speak of right and wrong? An action may be wise or unwise, prudent or imprudent, profitable or unprofitable within a given system. But is that the same as right or wrong?
So both absolutism and relativism run afoul of clear and obvious observations: there is a wide diversity of ethical theories about right and wrong moral behavior; because of this there are disputes about what constitutes right and wrong, both between ethical theories and moral systems and within them; we behave both morally and immorally; humans desire a set of moral guidelines to help us determine right and wrong; and there are moral principles on which most ethical theories and moral systems agree. Any viable ethical theory of morality must account for these observations. Most do not.
In thinking about this problem I asked myself this question: how do we know something is true or right? In science, claims are not true or false, right or wrong in any absolute sense. Instead, we accumulate evidence and assign a probability of truth to a claim. A claim is probably true or probably false, possibly right or possibly wrong. Yet probabilities can be so high or so low that we can act as if they are, in fact, true or false. Stephen Jay Gould put it well: “In science, ‘fact’ can only mean ‘confirmed to such a degree that it would be perverse to withhold provisional assent.’”8 That is, scientific facts are conclusions confirmed to such an extent it would be reasonable to offer our provisional agreement. Heliocentrism—that the earth goes around the sun and not vice versa—is as factual as it gets in science. That evolution happened is not far behind heliocentrism in its factual certainty. Other theories in science, particularly within the social sciences (where the subjects are so much more complex), are far less certain and so we assign them much lower probabilities of certitude. In a fuzzy logic manner, we might say heliocentrism and evolution are .9 on a factual scale, while political, economic, and psychological theories of human social and individual behavior are much lower on the fuzzy scale, perhaps in the range of .2 to .5. Here the certainties are much fuzzier, and so fuzzy logic is critical to our understanding of how the world works, particularly in assigning fuzzy fractions to the degrees of certainty we hold about those claims. Here we find ourselves in a very familiar area of science known as probabilities and statistics. 
In the social sciences, for example, we say that we reject the null hypothesis at the .05 level of confidence (where we are 95 percent certain that the effect we found was not due to chance), or at the .01 level of confidence (where we are 99 percent certain), or even at the .0001 level of confidence (where the odds of the effect being due to chance are only one in ten thousand). The point is this: there is a sliding scale from high certainty to high doubt about the factual validity of a particular claim, which is why science traffics in probabilities and statistics in order to express the confidence or lack of confidence a claim or theory engenders.
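The logic of these confidence levels can be made concrete with a toy calculation (the coin example and its numbers are invented for illustration): suppose a coin lands heads 62 times in 100 flips; the exact binomial tail gives the chance of a result at least that extreme arising by chance alone.

```python
from math import comb

def binomial_tail(n, k):
    """Chance of at least k heads in n fair flips: a one-sided p-value."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# 62 heads in 100 flips: how surprising is that under pure chance?
p = binomial_tail(100, 62)
print(f"p = {p:.4f}")  # comfortably below .05, so we reject chance at that level
```

The p-value is exactly the sliding scale described above: the smaller it is, the more perverse it becomes to attribute the result to luck.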
The same way of thinking has application to morals and ethics. Moral choices in a provisional ethical system might be considered analogous to scientific facts, in being provisionally right or provisionally wrong, provisionally moral or provisionally immoral:
In provisional ethics, moral or immoral means confirmed to such an extent it would be reasonable to offer provisional assent.
Provisional is an appropriate word here, meaning “conditional, pending confirmation or validation.” In provisional ethics it would be reasonable for us to offer our conditional agreement that an action is moral or immoral if the evidence for and the justification of the action is overwhelming. It remains provisional because, as in science, the evidence and justification might change. And, obviously, some moral principles have less evidence and justification for them than others, and therefore they are more provisional and more personal.
Provisional ethics provides a reasonable middle ground between absolute and relative moral systems. Provisional moral principles are applicable for most people in most circumstances most of the time, yet flexible enough to account for the wide diversity of human behavior, culture, and circumstances. What I am getting at is that there are moral principles by which we can construct an ethical theory. These principles are not absolute (no exceptions), nor are they relative (anything goes). They are provisional—true for most people in most circumstances most of the time. And they are objective, in the sense that morality is independent of the individual. Moral sentiments evolved as part of our species; moral principles, therefore, can be seen as transcendent of the individual, making them morally objective. Whenever possible, moral questions should be subjected to scientific and rational scrutiny, much as nature’s questions are subjected to scientific and rational scrutiny. But can morality become a science?
Fuzzy Provisionalism
One of the strongest objections to be made against provisional ethics is that if it is not a form of absolute morality, then it must be a form of relative morality, and thus just another way to intellectualize one’s ego-centered actions. But this is looking at the world through bivalent glasses, committing the either-or fallacy by insisting on the law of the excluded middle.
Here again, fuzzy logic has direct applications to moral thinking. In the discussion of evil, we saw how fuzzy fractions assigned to evil deeds assisted us in assessing the relative merits or demerits of human actions. Fuzzy logic also helps us see our way through a number of moral conundrums. When does life begin? Binary logic insists on a black-and-white Aristotelian A or not-A answer. Most pro-lifers, for example, believe that life begins at conception—before conception not-life, after conception, life. A or not-A. With fuzzy morality we can assign a degree to life—before conception 0, the moment of conception .1, one month after conception .2, and so on until birth, when the fetus becomes a 1.0 life-form. A and not-A. You don’t have to choose between pro-life and pro-choice, themselves bivalent categories still stuck in an Aristotelian world (more on this in the next chapter).
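A fuzzy membership function captures the idea. The linear ramp and breakpoints below are arbitrary illustrations of the A-and-not-A scale, not a claim about the “right” numbers.

```python
def life_degree(weeks):
    """Fuzzy degree of 'life' from conception (week 0) to birth (~week 40).
    A simple linear ramp; the shape is purely illustrative."""
    if weeks < 0:
        return 0.0                              # before conception: not-A
    return min(1.0, 0.1 + 0.9 * weeks / 40)    # conception .1, birth 1.0

for w in (0, 4, 20, 40):
    print(w, round(life_degree(w), 2))
```

Any such function is itself provisional: the point is only that a continuum replaces the binary switch.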
Death may also be assigned in degrees. “If life has a fuzzy boundary, so does death,” fuzzy logician Bart Kosko explains. “The medical definition of death changes a little each year. More information, more precision, more fuzz.” But isn’t someone either dead or alive? A or not-A? No. “Fuzzy logic may help us in our fight against death. If you can kill a brain a cell at a time, you can bring it back to life a cell at a time just as you can fix a smashed car a part at a time.”9 A and not-A. Birth is fuzzy and provisional, and so is death. So is murder. The law is already fuzzy in this regard: there are first-degree murder, second-degree murder, justifiable homicide, self-defense homicide, genocide, infanticide, suicide, crimes of passion, crimes against humanity. A and not-A. Complexities and subtleties abound. Nuances rule. Our legal systems have adjusted to this reality; so, too, must our ethical systems. Fuzzy birth. Fuzzy death. Fuzzy murder. Fuzzy ethics.
Moral Intuition and the Captain Kirk Principle
Long before he penned the book that justified laissez-faire capitalism, Adam Smith became the first moral psychologist when he observed: “Nature, when she formed man for society, endowed him with an original desire to please, and an original aversion to offend his brethren. She taught him to feel pleasure in their favorable, and pain in their unfavorable regard.” Yet, by the time he published The Wealth of Nations in 1776, Smith realized that human motives are not so pure: “It is not from the benevolence of the butcher, the brewer or the baker that we expect our dinner, but from their regard to their own interest. We address ourselves not to their humanity, but to their self-love, and never talk to them of our necessities, but of their advantage.”10
Is our regard for others or for ourselves? Are we empathetic or egoistic? We are both. But how to strike a healthy balance between serving self and serving others is not nearly as rationally calculable as we once thought. Intuition plays a major role in human decision making—including and especially moral decision making—and new research is revealing both the powers and the perils of intuition. Consider the following scenario: imagine yourself a contestant on the classic television game show Let’s Make a Deal. You must choose one of three doors. Behind one of the doors is a brand-new automobile. Behind the other two doors are goats. You choose door number one. Host Monty Hall, who knows what is behind all three doors, shows you what’s behind door number two, a goat, then inquires: would you like to keep the door you chose or switch? It’s fifty-fifty, so it doesn’t matter, right? Most people think so. But their intuitive feeling about this problem is wrong. Here’s why: you had a one-in-three chance to start, but now that Monty has shown you one of the losing doors, you have a two-thirds chance of winning by switching. Think of it this way: there are three possible arrangements behind the three doors: (1) good bad bad; (2) bad good bad; (3) bad bad good. In arrangement one you lose by switching, but in arrangements two and three you win by switching. Here is another way to reason around our intuition: suppose there are ten doors; you choose door number one and Monty shows you doors number two through nine, all goats. Now would you switch? Of course you would, because your chances of winning increase from one in ten to nine in ten. This is a counterintuitive problem that drives people batty, including mathematicians and even statisticians.11
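A quick simulation bears out the two-thirds figure. This is a sketch of the standard three-door game only; the trial count is arbitrary.

```python
import random

def monty_trial(switch):
    """One round of the three-door game; returns True if the contestant wins.
    Monty always opens a goat door the contestant did not pick, so a
    switcher wins exactly when the first pick was wrong."""
    car = random.randrange(3)
    pick = random.randrange(3)
    return pick != car if switch else pick == car

trials = 100_000
for strategy, switch in (("stay", False), ("switch", True)):
    wins = sum(monty_trial(switch) for _ in range(trials))
    print(strategy, round(wins / trials, 3))  # stay ~ 1/3, switch ~ 2/3
```

The same shortcut covers the ten-door variant: when Monty opens every losing door but one, switching wins whenever the initial one-in-ten pick was wrong, that is, nine times in ten.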
Intuition is tricky. Gamblers’ intuitions, for example, are notoriously flawed (to the profitable delight of casino operators). You are playing the roulette wheel and hit five reds in a row. Should you stay with red because you are on a “hot streak,” or should you switch because black is “due”? It doesn’t matter, because the roulette wheel has no memory, but try telling that to the happy gambler whose pile of chips grows before his eyes. So-called hot streaks in sports are equally misleading. Intuitively, don’t we just know that when the Los Angeles Lakers’ Kobe Bryant is hot he can’t miss? It certainly seems like it, particularly the night he broke the record for the most three-point baskets in a single game, but the findings of a fascinating 1985 study of “hot hands” in basketball by Thomas Gilovich, Robert Vallone, and Amos Tversky—who analyzed every basket shot by the Philadelphia 76ers for an entire season—do not bear out this conclusion. They discovered that the probability of a player hitting a second shot did not increase following an initial successful basket beyond what one would expect from chance and the player’s average shooting percentage. What they found is so counterintuitive that it is jarring to the sensibilities: the number of streaks, or successful baskets in sequence, did not exceed the predictions of a statistical coin-flip model. That is, if you conduct a coin-flipping experiment and record heads or tails, you will encounter streaks. On average and in the long run, you will flip five heads in a row once in every thirty-two sequences of five tosses. Players may feel “hot” when they have games that fall into the high range of chance expectations, but science shows that this intuition is an illusion.12
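The one-in-thirty-two benchmark is easy to verify by simulation. This is a sketch of the coin-flip null model only, not of the 1985 basketball analysis itself; the trial count is arbitrary.

```python
import random

def all_heads(n=5):
    """One sequence of n fair tosses; True if every toss comes up heads."""
    return all(random.random() < 0.5 for _ in range(n))

sequences = 200_000
streaks = sum(all_heads() for _ in range(sequences))
print(f"five heads in a row in {streaks / sequences:.4f} of sequences")  # ~ 1/32 = .0313
```

Run enough sequences and the streaks arrive right on schedule, which is the whole point: streakiness is what chance looks like.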