Morality

This is the final post of my essay on moral calculus.

We now incorporate conventional moral language into our discussion. To begin, I posit the following definition:

Definition 5.1 Moral Problem

A moral problem is a question of the form “How close to the optimal outcome will I be if I perform action x?”

Ethical problems such as the well-known trolley problem (http://en.wikipedia.org/wiki/Trolley_problem) are all ultimately moral problems according to Definition 5.1, in that they can be reduced to a question about which action should be performed. For instance, in the original trolley problem, five people are tied to a railway line with a train bearing down on them. It is in your power to divert the train onto a side track, thereby preventing the deaths of the five people; however, one person is tied to that side track and will be killed if the train is diverted. Unsurprisingly, most test subjects elect to divert the train. If a test subject wishes to know whether diverting the train is the “right” action to take, they are essentially asking how close to the optimal outcome they will be if they perform that action.

In both my definition of moral problems and in my use of the word “right” in the above paragraph, I have implicitly assumed that some metric exists against which different outcomes can be measured. It is with this thought in mind that I make the following definition:

Definition 5.2 Moral Theory

A moral theory is an algorithm that ranks actions according to a chosen metric.

There are many possible metrics, and therefore many moral theories, to choose from. We have, thus far, introduced no information which could tell us which theory is “better” than another: such a value judgment would, after all, require the application of its own metric, and we have not developed any such metric. We must therefore accept, at least for the present discussion, that different moral theories may rank the same actions differently.
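For readers who prefer to see Definition 5.2 stated operationally, a moral theory can be pictured as nothing more than a sorting procedure. The sketch below is purely illustrative and not part of the essay's formal development; the names rank_actions and metric are my own, and the metric stands in for whatever quantity (desire intensity, happiness, and so on) the chosen theory uses.

```python
from typing import Callable, List

Action = str

def rank_actions(actions: List[Action],
                 metric: Callable[[Action], float]) -> List[Action]:
    """A moral theory in the sense of Definition 5.2: rank the given
    actions according to a chosen metric, best first.  Different metrics
    yield different moral theories, and nothing in the definition itself
    tells us which metric to prefer."""
    return sorted(actions, key=metric, reverse=True)
```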

Once we have decided on a moral theory, though, how does it provide us with a solution to a given moral problem? Consider the following definition.

Definition 5.3 Solution to a Moral Problem

We are given:

  1. A moral theory T,
  2. A moral problem of the form “How close to the optimal outcome will I be if I perform action x?” (Definition 5.1), and
  3. A set of relevant actions.

To find a solution, apply the following steps:

  1. Use moral theory T to rank the relevant actions.
  2. Normalize the rankings, i.e., rescale them linearly so that the lowest ranking becomes 0 and the highest becomes 1 (a minimal sketch of this rescaling follows the definition).
  3. The solution to the moral problem is the normalized ranking of action x. If the normalized ranking is 0, then performing action x puts us as far as possible from the optimal outcome. If the normalized ranking is 1, then performing action x yields the optimal outcome. A normalized ranking between 0 and 1 indicates how close to the optimal outcome action x will take us.
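Step 2 of the definition leaves the normalization method unstated. The reading consistent with step 3 (the worst action maps to 0, the best to 1) is a min-max rescaling, sketched below. This is my own gloss on the step, assuming the rankings are plain numbers and are not all equal.

```python
from typing import Dict

def normalize(rankings: Dict[str, float]) -> Dict[str, float]:
    """Min-max rescaling: the lowest ranking becomes 0, the highest 1.
    Assumes at least two distinct ranking values."""
    lo, hi = min(rankings.values()), max(rankings.values())
    return {action: (r - lo) / (hi - lo) for action, r in rankings.items()}
```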

To demonstrate this definition, we choose as our moral theory the algorithm of Solution 5. This algorithm fulfills our definition of a moral theory because it ranks actions according to some metric (desire intensity in this case). The steps in Definition 5.3 should be familiar to the reader, since we have already followed them in Section 3. The rankings obtained by Solution 5 were as follows:

Action      A     B     C     D   Ranking   Normalized Ranking
a           0     4    37   -13        28                 0.86
b         -49     0     0     0       -49                    0
c          49    -4   -37   -13        -5                 0.49
d           0     4    37     0        41                    1
e         -49     4    37    13         5                 0.60

Table 15. Value-intensity table with rankings and normalized rankings.
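As a check on Table 15, the short sketch below recomputes the rankings (the row sums of the value intensities) and the normalized rankings via the min-max rescaling described in Definition 5.3. The variable names are mine; the numbers are copied directly from the table.

```python
# Value intensities from Table 15: each action (a-e) against the four
# columns A-D.
intensities = {
    "a": [0, 4, 37, -13],
    "b": [-49, 0, 0, 0],
    "c": [49, -4, -37, -13],
    "d": [0, 4, 37, 0],
    "e": [-49, 4, 37, 13],
}

# Step 1: rank each action by its total value intensity (Solution 5's metric).
rankings = {action: sum(values) for action, values in intensities.items()}

# Step 2: normalize so the lowest-ranked action maps to 0 and the highest to 1.
lo, hi = min(rankings.values()), max(rankings.values())
normalized = {a: (r - lo) / (hi - lo) for a, r in rankings.items()}

# Step 3: the solution to "How close to the optimal outcome will I be if I
# perform action x?" is simply normalized[x].
for action in sorted(intensities):
    print(action, rankings[action], round(normalized[action], 2))
# Prints: a 28 0.86 / b -49 0.0 / c -5 0.49 / d 41 1.0 / e 5 0.6
```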

Thus, if our moral problem is “How close to the optimal outcome will I be if I perform action d?”, the answer is “the optimal outcome is achieved”, since action d has the top ranking. If, on the other hand, our moral problem is “How close to the optimal outcome will I be if I perform action b?”, the answer is “the least optimal outcome is achieved”, since action b has the lowest ranking.

I repeat that since many moral theories exist, more than one solution is likely for a given moral problem. As yet, we have no way of determining which of these solutions is “better” than another.

Finally, I note that one of the reasons I chose Solution 5 to tackle the example problem above is that this solution is specifically built to rank actions, not desires. How, then, would we carry out the steps of Definition 5.3 using, say, Solution 1, which ranks desires instead? We could take two approaches here. The first would be to recast our definitions of moral problems and moral theories in terms of desires. The second would be to leave these definitions untouched but modify Solution 1 (or any other desire-based system) to deal with actions rather than desires. I will not attempt either approach in this essay, but I note that problems are likely to arise in both, because there is no guarantee of a simple one-to-one relationship between desires and actions. If actions are the mechanism behind desire fulfillment, then the more complete treatment of the problem is one that considers actions explicitly, rather than eliminating them with the sort of generalization described in Chapter 4. This was my motivation for using an action-based definition of moral problems.

5.1 The is-ought problem

The above discussion implies that once a particular moral theory is adopted, there is a certain action that will be regarded as optimal according to that theory. Put differently, we could say that the optimal action is the one that ought to be performed under the assumptions of the chosen moral theory. However, there is a lot of philosophical baggage associated with the word “ought”, as David Hume famously noted in A Treatise of Human Nature:

In every system of morality, which I have hitherto met with, I have always remark’d, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surpriz’d to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, ’tis necessary that it shou’d be observ’d and explain’d; and at the same time that a reason should be given; for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it.

In other words, while it is possible to state what sort of behaviors are needed in order to fulfill a certain goal, there is nothing compelling us to choose that particular goal over any other. In the language of the preceding discussion, it is possible to state what action should be performed according to a certain moral theory, but there is nothing compelling us to choose that particular theory over any other.

The is-ought problem has caused much consternation among philosophers, but I think its importance is grossly exaggerated. To explain why, let me consider science as an analogue to morality. With science, it is also possible to state what sort of behaviors are needed to fulfill a certain goal, and it is equally impossible to determine what that goal should be. We certainly observe that the goal of science is to determine facts about the natural world (a goal achieved by the application of the scientific method). But why should science have this particular goal? Why should science not set as its goal the determination of myths about the natural world or, for that matter, the propagation of deliberate lies about the natural world? The answer is that scientists could indeed set themselves these goals, if they so wished. In fact, many individuals do set themselves these goals; they just don’t call themselves scientists.

In other words, there is nothing forcing us to seek facts about the natural world. It just happens that there are many people who have a natural desire to do just that. And they happen to call that endeavor “science”.

Morality is analogous. There is nothing forcing us to make others happy or to reduce suffering, or to campaign for human rights. It just happens that there are many people who have a natural desire to do these things. They also happen to call their endeavors “moral”.

As far as I can tell, then, the “is-ought” problem is only considered a problem at all because it is so easy to lose sight of the fact that the word “ought”, like the word “should”, only makes sense when preceded by a conditional statement. For instance, it is perfectly acceptable to say that, “if you wish to lose weight, then you ought to diet”, or, “if you want to save that child’s life, then you should jump into the water and prevent her from drowning”, or, “if you wish to live peacefully, then you ought to treat others with respect”. However, it is incoherent to say, without condition, that “you ought to diet” or that “you should jump into the water and prevent that child from drowning” or that “you ought to treat others with respect”.

All we can say of morality, then, is that if we wish to make people happy, or reduce suffering, then we ought to adopt certain behaviors. Similarly, we can say of science that if we wish to discover facts about the natural world, then we ought to think scientifically. Returning to the language used in preceding sections, we can say that if we wish to adopt a particular moral theory, then we ought to perform the action it ranks most highly (that is what adopting a moral theory means).

I suspect many people are uncomfortable with the voluntary nature of the condition accompanying the moral “ought”, because they are bothered by the specter of mass immorality born of widespread apathy: if we all decide that we have no desire to make one another happy or to reduce suffering, then surely society as we know it would collapse? Yes, it probably would. However, such a collapse has not occurred yet, and it does not look about to happen any time soon. For that, we can thank our evolutionary history, which has implanted in us the desire to behave morally. Indeed, by the very definition of evolution by natural selection, these instincts are required for the survival of a social species such as our own. In other words, we are only here because we have evolved a natural inclination to behave morally.

The unpleasant ramifications of behaving immorally provide us with further impetus to behave morally: they are themselves conditions that justify the use of “ought” and “should”. The last of the three examples provided above, namely “if you wish to live peacefully, then you ought to treat others with respect”, demonstrates this point: our desire to live peacefully provides us with a reason to adopt certain moral behaviors – it justifies the “ought”.

However, it must be emphasized that if someone does not wish to live peacefully, then he will have fewer reasons to behave morally, just as someone who does not wish to seek facts about the natural world will have fewer reasons to follow the scientific method.  The only difference between these two scenarios is that we do not generally frown upon people (such as entertainers, perhaps) who have no interest in seeking facts about the natural world, but we do frown upon those who have no interest in behaving morally. This only serves to demonstrate how strong our innate desire for morality is: it is so strong that it results in an almost universal consensus that moral behavior is important. There is no similarly wide consensus that the search for facts about the natural world is important.

5.2 Is there anything to recommend one moral theory over another?

Despite the care I took in the preceding discussion to avoid comparing one moral theory to another, I think some real judgments of this kind can be made. However, if we are to compare moral systems, we must be guided by the evidence, not by personal intuition or emotion.

One of the qualities of the moral systems discussed above is that they are objective. The principal requirement for an objective system is that its inputs must be objective. Objective inputs are those whose (numerical) values would be identical no matter who was determining them. Importantly, this means that objective inputs cannot be influenced by personal preferences, but must be based on evidence.

Do our moral systems fulfill the requirements for objectivity? When I first introduced brains, I defined relevant brains in the following way: “Relevant brains are those containing at least one of the relevant desires.” Once I established relevance in this way, I drew no further distinctions among the brains aside from the observed desire intensities or emotional values contained therein. In particular, I did not lend special weight to the brain belonging to the person posing a moral problem, nor to brains of a certain age, gender, or race. I omitted such special treatment not because it was convenient to do so, but because I did not consider there to be sufficient evidence to warrant it. Now, it is quite possible that we could argue, for instance, that younger brains have a longer life ahead of them than older brains, lending them more weight in moral computations. Or we could argue about the variation in pain threshold from one person to another, or the variation in intelligence. My assumption, made implicitly throughout, is that if such factors produced variations in the nature and intensity of desires from brain to brain, those variations would be captured by the measurements of desire intensity used in the chosen moral calculus. In summary, then, I have avoided imposing any personal values on the brains involved in our moral computations. This satisfies part of the requirement for objectivity.

The second component of objectivity is that the moral system should produce the same result no matter who applies it. I have achieved this goal by basing moral systems on observables. Any operator with accurate equipment will measure, in any given brain, the same desires and desire intensities as any other operator. In short, desire intensities in brains are not up for debate: they are facts that can be objectively measured. And any system that takes objectively measured quantities as its sole inputs must produce the same solution no matter who the operator is.

Is an objective moral system preferable to a subjective one? A subjective moral system is one whose output depends, to some degree, on the personal preferences of the operator. For instance, a moral system that requires the operator to prescribe behaviors based on her “gut feelings” is subjective, because each operator is likely to have somewhat different gut feelings. There is a fundamental problem with such systems, namely how to decide who should be the operator. There are two options here: either everyone operates the system independently, or a single individual is chosen to operate the system. If everyone operates the system independently, a cacophony of competing prescriptions will arise, and no decisions will be possible. If a single individual is to operate the system, some metric must be devised to choose the operator. But what metric should be used? How does one decide who would be the best operator of a given subjective moral system? Such a question can only be answered by using moral judgments that lie outside the system itself, implying that some other moral system must already be in place.

It seems, therefore, that objective moral systems are preferable in that they do not require an external moral system in order to select an operator, and they produce the same prescriptions even if operated by everyone independently.

However, I must emphasize that the debate about selecting an operator for the moral system is distinct from the debate about choosing which moral system to adopt. This latter debate must ultimately be decided by personal choice no matter how objective or subjective the various options are. This is because we have no meta-moral system to guide us in our decision about which moral system to adopt. As discussed in the previous section, we can only use the word “ought” if we already have a qualifying condition in hand. Thus, only if we wish to improve the happiness of society, for example, ought we to select a moral system that furthers that aim. If, on the other hand, we wish to improve financial wealth, we should adopt a moral system focused on that goal.

5.3 Identifying our Moral Theories

In this section, I briefly compare the moral calculi of Chapters 1, 2, and 3 with moral theories posited by other philosophers. My first observation is that all the calculi are consequentialist, rather than deontological: they make prescriptions based on outcomes rather than on duty or motive.

Solution 1 of Chapter 1 is a form of Alonzo Fyfe’s desire utilitarianism. It is commonly regarded as a form of rule utilitarianism because it bases its decisions on relationships (or rules) between desires, without regard to how many brains contain those desires.

One of the problems I have with Fyfe’s description of desire utilitarianism is his use of the word “stronger” when he speaks about a desire fulfilling the “most and stronger” of other desires. It is my understanding that Fyfe’s “strong” is equivalent to my “intensity”. When we considered the intensity of desires in our analysis, we discovered that it was not possible to do so without taking into account which brains those desires were found in, since intensity may vary from brain to brain. However, desire utilitarianism, as proposed by Fyfe, does not take the distribution of desires among brains into account. I therefore cannot see how the strength of desires can be included in his moral calculus.

The happiness-based system derived in Chapter 2 is akin to act utilitarianism, which holds that the best action is the one that produces the greatest amount of happiness for the greatest number of individuals. In our calculus, the number of brains in which happiness increased was not independently optimized. Instead, the total increase in happiness across the population of brains was optimized, an approach that may be skewed in favor of the handful of brains in a population for which certain acts produce very great changes in happiness.

Finally, I am not currently aware of a moral theory that compares well with the hybrid system developed in Chapter 3.

