Calculus

Setting up the Problem

How can our goal of pursuing happiness be achieved? The most obvious solution is to maximize the measure of happiness we adopted in previous parts of this book, namely the action utility. There are at least two methods available to us:

Method 1: Devise a calculus that selects a preferred action based on some maximization of action utility.

Method 2: Employ a heuristic that is known to reliably maximize action utility.

Method 2 (unlike Method 1) is not strictly consequentialist. It discriminates between actions not on the basis of their predicted outcomes, but according to a set of predetermined selection rules (although these rules may themselves be based on observations of past outcomes). Method 2 is therefore guaranteed to fall short of maximizing action utility in some situations. However, the information needed to apply Method 1 may not always be readily available, or it may be too complex to assess within the given time constraints, and under these conditions Method 2 must be used. I will not explore Method 2 in further detail here, though, because it is a derivative of the more fundamental approach of Method 1.

To develop Method 1 further, we recall that action utility has a numerical value specific to a given action and a given person. If we are to compare actions, we need a way of combining each action’s set of person-specific action utilities.

To see how we might do this, consider the simple example shown in Figure 3.

Let us suppose that two people, Adam and Bella, are trying to decide who will get to eat a delicious ice cream (it’s the last one in the store, so they can’t both have one). The only two available actions are Adam eating the ice cream and Bella eating the ice cream (for simplicity, we’ll ignore the possibility of sharing).

The top panel in Figure 3 shows the CIs we expect to see if Adam eats the ice cream. The ice cream will make Adam happier for, say, twenty minutes, while the lack of an ice cream will make Bella less happy for, say, ten minutes. Conversely, if Bella eats the ice cream (bottom panel), she will be happier for only ten minutes, while Adam will be less happy for twenty. (In the real world, only very crude estimates of expected CIs will generally be possible, but I use a quantitatively precise approach here for completeness.)


Figure 3. The CIs of two hungry would-be ice cream eaters, Adam and Bella, under two actions: Adam eats the ice cream (top) and Bella eats the ice cream (bottom).

Comparing the top and bottom panels of Figure 3, we see that Adam’s CI is affected by the ice cream problem for twenty minutes, after which his CI is expected to be the same regardless of whether he eats the ice cream or not. Bella’s CI, on the other hand, is only affected for ten minutes (perhaps she is a quicker eater than Adam and, if she doesn’t eat the ice cream, perhaps she gets over her disappointment sooner). When we calculate action utilities, then, we must integrate over the first twenty minutes for Adam, and the first ten minutes for Bella (these are the relevant time periods as defined in the previous chapter).

So, let’s do the calculations. If Adam eats the ice cream then

Adam’s action utility is 3 times 20 minutes = 60

Bella’s action utility is 1 times 10 minutes = 10

If Bella eats the ice cream then

Adam’s action utility is 1 times 20 minutes = 20

Bella’s action utility is 3 times 10 minutes = 30

(As a scientist by training, I cannot move on from these calculations without first saying a quick word about units. I have not said anything thus far about what units CI has; I have simply suggested that it falls on a scale from -10 to +10. Whatever these units may be – let’s call them “happions” for the sake of argument – the unit of action utility will be happion-minutes in the above calculation, reflecting the fact that action utility is the area under the CI vs. time curve.)

Having computed these numbers, there are several ways we could compare them. For instance, we could simply add them up. If Adam eats the ice cream, the total action utility for Adam and Bella combined is 60 + 10 = 70. If Bella eats the ice cream, the combined action utility is 20 + 30 = 50. So, a suitable conclusion might be that Adam should eat the ice cream, because that action produces the greater total action utility.
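For readers who like to see the bookkeeping spelled out, here is a minimal Python sketch of these calculations (the data layout and function name are my own; the CI values and durations are the illustrative ones from Figure 3):

    # Illustrative CI values (in "happions") and durations (in minutes) from Figure 3.
    # Each action maps each person to a (CI, duration) pair.
    actions = {
        "Adam eats":  {"Adam": (3, 20), "Bella": (1, 10)},
        "Bella eats": {"Adam": (1, 20), "Bella": (3, 10)},
    }

    def action_utility(ci, minutes):
        # Area under a flat CI-vs-time curve: CI multiplied by the relevant time period.
        return ci * minutes

    for action, people in actions.items():
        utilities = {person: action_utility(ci, mins) for person, (ci, mins) in people.items()}
        print(action, utilities, "total =", sum(utilities.values()))

    # Prints:
    # Adam eats {'Adam': 60, 'Bella': 10} total = 70
    # Bella eats {'Adam': 20, 'Bella': 30} total = 50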

There is one serious problem with this approach, though. Because its metric is total action utility, or total amount of happiness over time, it takes no account of how happiness is distributed among the population. It could therefore favor actions that produce extreme happiness for a few people while making everyone else less happy. However, since no person has an inherently greater claim to happiness than any other, a system that is free to favor actions with lopsided distributions of happiness cannot be justified. It is also unlikely to be adopted by any society.

Perhaps we can borrow from Hippocrates, who said “Whenever a doctor cannot do good, he must be kept from doing harm”. At the very least, then, we could strive to raise the minimum happiness of the population, so that we do not fall into the trap of continually sacrificing the happiness of one person in favor of another. We could therefore select the action with the greatest minimum action utility, an approach I shall call the greatest minimum method. Our task would basically be to select the least bad option. This is akin to John Rawls’s difference principle (see his book A Theory of Justice), which states that an unequal distribution of goods (such as liberty, opportunity, income, wealth, etc.) is permissible only as long as it benefits the worst-off members of society.

There is a third option that lies between the above two. Recall that in the chapter on Utility, we suggested that suffering is generally a more extreme state than happiness, warranting larger negative CI values for states of suffering and smaller positive CI values for states of happiness. If this nonlinear approach to CI were incorporated into the first of the above approaches, i.e. choosing the action with the greatest total action utility, then actions involving extreme suffering would be less likely to be chosen. In particular, it would be harder to find a situation in which the suffering of one person was balanced out by the happiness of everyone else, because that suffering would tip the scales heavily toward negative action utilities, while everyone else’s happiness would tip them only weakly toward positive ones.
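To make this concrete, here is one purely illustrative way such a nonlinear assignment might behave. The particular mapping below (positive values halved, negative values kept at full size) is my own invention for demonstration purposes; the argument above requires only that suffering weigh more heavily than happiness of comparable intensity.

    def nonlinear_ci(raw_value):
        # One illustrative nonlinear assignment: states of suffering keep their full
        # negative values, while states of happiness are given smaller positive values
        # than a symmetric scale would assign. The factor of 0.5 is arbitrary.
        return raw_value if raw_value < 0 else 0.5 * raw_value

    # One person suffering badly versus three people made mildly happier:
    raw_values = [-8, 3, 3, 3]
    print(sum(raw_values))                           # +1 on the symmetric scale: the suffering appears balanced out
    print(sum(nonlinear_ci(v) for v in raw_values))  # -3.5 on the nonlinear scale: the suffering dominates

On the symmetric scale the single sufferer is outweighed by the three mildly happier people; on the nonlinear scale the same situation comes out clearly negative.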


Distribution of Goods

Before we see how this new approach affects Adam and Bella, I would like to provide additional support for it by considering the distribution of goods problem in more detail. The distribution of goods constitutes a special type of moral problem in which a very large number of similar actions lie along a continuum. The continuum appears because any small change in the distribution of the good (such as government funding for health) causes a small change in the distribution of happiness among the relevant people, and therefore constitutes a unique action. Objections have been raised to the application of Rawls’s difference principle to health care, because it advocates that the very sickest be taken care of first. This requires a large amount of funding to be dedicated to a relatively small number of patients, leaving few resources for the much greater number of people suffering less serious ailments.

A possible solution to this problem can be found in the area of taxation. Taxation is generally set as a fraction of the earner’s income, meaning that, in absolute terms, richer people pay more while poorer people pay less. Some tax systems are, however, progressive, meaning that the rich pay a somewhat higher proportion of their income than the poor.

In the realm of health spending, this would mean that, at the most basic level, spending on individuals would be in proportion to the severity of their illnesses. In absolute terms, sicker people would receive more spending than healthier people. The system could also be “progressive” in the sense that healthy people could receive a little less than their share of funding in order to help out those who are most ill. The main point, though, is that no one would ever receive exactly zero funding, no matter how mild her ailment.
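As a rough sketch of what such a "taxation-style" allocation might look like, the snippet below divides a fixed budget in proportion to a set of severity scores, with an optional exponent supplying the "progressive" tilt toward the sickest. The scores, the exponent, and the function itself are invented purely for illustration; they do not describe how any real agency allocates funds.

    def allocate(budget, severities, progressivity=1.0):
        # Split a fixed budget in proportion to severity ** progressivity.
        # progressivity = 1 gives strictly proportional shares; values above 1 tilt
        # the allocation further toward the most severe cases. Anyone with a
        # non-zero severity always receives a non-zero share.
        weights = [s ** progressivity for s in severities]
        total_weight = sum(weights)
        return [budget * w / total_weight for w in weights]

    severities = [1, 2, 5, 10]              # invented severity scores
    print(allocate(100, severities))        # proportional shares: roughly 5.6, 11.1, 27.8, 55.6
    print(allocate(100, severities, 1.5))   # "progressive" shares: a larger fraction goes to the sickest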

Organizations such as the United Kingdom’s NICE (National Institute for Health and Clinical Excellence) are already in the habit of quantifying the degree of suffering or disability associated with various medical conditions for the purpose of allocating funds to drug therapies. The basic data that would determine the details of a taxation model of resource distribution are therefore already in place.

Much as I would like to apply such an approach to my moral system, it is generally not possible, because the available actions in most moral problems do not form a near continuum as they do in the distribution of health care. It is therefore difficult (if not impossible) to pick an action that produces a distribution of utility inversely proportional to existing levels of happiness. Instead, our only option is to raise the minimum happiness of the relevant population as much as possible.

But it turns out that even in the continuum model, selecting the action with the greatest minimum action utility produces a better outcome than might be expected, provided that sufficient resources are available. In fact, if enough resources are available, the minimum action utility approach consistently produces equal action utilities across the population, even if the distribution of resources is not equal.

This can be demonstrated by a simple example in which a fixed quantity of some resource must be distributed among five people. Let us assume that the amount of this resource in a person’s possession is the sole determinant of her CI. Let us further suppose that zero units of the resource produce an action utility of -10, while 100 units of the resource produce an action utility of +10, with action utility varying linearly in between (so that 50 units, for example, correspond to an action utility of 0).

In our example, we will consider what happens if there is one “special” person in the group of five. We will give this special person some fraction of 100 resource units that have just become available, and we’ll divide the remainder among the four “normal” people. Thus, if the special person gets 40 units, the remaining 60 units will be split four ways among the normal people, so that each normal person gets 15 units. This will be considered a single action. Another action might see 41 resource units being given to the special person, while yet another action might see 42 units being given to the special person. And so there is a series – a near continuum – of actions, each one differing slightly from its neighbor.

In the simplest case, which serves as a baseline for the examples to follow, all five people start off with the same number of resource units (50 units each, say), and therefore the same action utility (a value of 0). This is akin to all people enjoying the same level of health prior to the distribution of funding for health care. Perhaps unsurprisingly, the greatest minimum action utility will be obtained by distributing the 100 new resource units evenly among the five people.

This is demonstrated in Figure 4, which shows the greatest minimum action utility for the continuum of actions described above. Each point along the continuum is represented by the number of resource units allocated to the special person, and is shown along the horizontal axis. The vertical axis shows the minimum action utility. The graph looks like a mountain peak, with the peak occurring at an allocation of 20 resource units to the special person, implying 20 resource units to each normal person, too.


Figure 4. One “special” person in a group of five is assigned a variable quantity of some resource (horizontal axis). The remainder is split equally among the group members, and the minimum action utility determined (vertical axis). In this case, all group members start with the same quantity of the resource.

By seeking the action with the greatest minimum action utility (i.e., by seeking the mountain peak in Figure 4), we have arrived at the action that also produces a fair distribution of the resource. If we had allocated more than 20 resource units to the special person, the normal people would not have been as happy, and the minimum action utility would have dropped (this is represented by points along the right-hand slope of the mountain peak in Figure 4). If, on the other hand, we had allocated fewer than 20 resource units to the special person, she would have been less happy, and the minimum action utility would again have dropped (this is represented by points along the left-hand slope of the mountain peak).
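The curve in Figure 4 is straightforward to reproduce. The sketch below assumes the linear mapping described above (0 resource units corresponding to an action utility of -10, and 100 units to +10), sweeps the special person’s allocation of the new units from 0 to 100, and locates the peak of the resulting minimum-action-utility curve; the function names and layout are my own.

    def utility(units):
        # Linear mapping assumed in the text: 0 units -> -10, 100 units -> +10.
        return units / 100 * 20 - 10

    def minimum_utility(special_share, special_start=50, normal_start=50,
                        new_units=100, normals=4):
        # Give special_share of the new units to the special person, split the rest
        # equally among the normal people, and return the lowest resulting action utility.
        special = utility(special_start + special_share)
        normal = utility(normal_start + (new_units - special_share) / normals)
        return min(special, normal)

    # Sweep the continuum of actions for the baseline case of Figure 4
    # (everyone starts with 50 units) and find the peak of the curve.
    curve = [(share, minimum_utility(share)) for share in range(0, 101)]
    best_share, best_minimum = max(curve, key=lambda point: point[1])
    print(best_share, best_minimum)   # 20 units to the special person, minimum utility +4.0

Changing special_start in the same sketch reproduces the curves for the starting conditions considered next.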

Next, let us consider a state in which the special person starts with a lower action utility than everyone else. Say, for example, she starts out with only 25 resource units, and therefore an action utility of -5. The normal people start out with 50 resource units as before. This situation is analogous to starting out with one small but very sick group of people in a population. As can be seen in Figure 5, the greatest minimum action utility is achieved by giving the special person 40 of the 100 new resource units and dividing the remaining 60 units evenly among the normal people. This results in everyone achieving an action utility of +3.


Figure 5. The example from the previous figure, except that the special person begins with an action utility lower than those of the other group members.

As a final example, consider what would happen if the special person started from a happier position than everyone else (representative of, for example, a small wealthy class in a taxation problem). Let us give the special person 75 resource units to begin with, equivalent to an action utility of +5. In this case (Figure 6), it turns out that the greatest minimum action utility is achieved by splitting the 100 new resource units among the normal people who, as before, began with 50 units each. (The special person receives no additional resources.) Everyone therefore ends up with an action utility of +5. (If we had started the special person off with more resource units than 75, it would not have been possible to bring all the normal people up to the same level as the special person. More than 100 new resource units would have been required.)


Figure 6. The example from Figure 4, except that the special person begins with an action utility higher than those of the other group members.

The lesson to draw from this exercise is that an emphasis on alleviating the worst suffering in a population can actually lead to the most equitable distribution of happiness. The only caveat is that enough resources must be available. If resources were scarce, the greatest minimum approach would distribute them all to the worst off, and most people would receive nothing. As discussed above, a tax-like system would allow everyone to receive resources proportional to their relative needs, regardless of the total quantity available.

It should be noted that in all three of the above continuum-action examples, the total amount of happiness (the total action utility) is the same for every possible action (i.e., for every possible allocation of resource units to the special person). This is because the same number of resource units is allocated to the population in each case, and because the assumed relationship between resource units and action utility is linear. The total action utility is therefore not sensitive to how the resource is divided among individuals in the population. Using total action utility as our metric, then, would actually prevent us from choosing a preferred action in this problem.

Back to That Ice Cream: Suffering vs. Happiness

Let us put the idea of a continuum of actions aside for now, and return to the example of Adam, Bella and the ice cream. We previously considered this problem in terms of total action utility. Let us see what would happen if we chose the action with the greatest minimum action utility instead. The minimum action utility that occurs if Adam eats the ice cream is 10, while the minimum action utility that occurs if Bella eats the ice cream is 20. Under this algorithm, then, we would prefer Bella to eat the ice cream.

In choosing the greatest minimum approach we have essentially given the eradication of suffering priority over the acquisition of happiness. To see this, let us temporarily redefine the CI scale. Instead of -10 corresponding to great unhappiness and +10 to great happiness, let us suppose that -10 corresponds to the least suffering while +10 corresponds to the greatest suffering. This would be a suffering index (SI), not a contentment index. Importantly, a CI value of -10 would be equivalent to an SI value of 10, and vice versa: The SI scale is the CI scale in reverse. To get from a CI value to an SI value, then, we simply change the sign of the CI value.

Using this mapping, we can translate Adam and Bella’s CI-based action utilities in the above example to SI-based action utilities:

If Adam eats the ice cream then

Adam’s action utility is -3 times 20 minutes = -60

Bella’s action utility is -1 times 10 minutes = -10

If Bella eats the ice cream then

Adam’s action utility is -1 times 20 minutes = -20

Bella’s action utility is -3 times 10 minutes = -30

To pick the action that involves the least suffering, we need to avoid actions with high SI-based action utilities. We must therefore prevent Adam from eating the ice cream, since that action produces the highest SI-based action utility of the four, namely Bella’s -10. Instead, Bella should eat the ice cream, because both SI-based action utilities for that action are lower (-20 and -30). This gives us the same outcome as choosing the greatest minimum action utility on the original CI-based scale. To summarize, selecting the action associated with the least suffering is equivalent to selecting the action associated with the highest minimum amount of happiness.
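Stated slightly more formally: because each person’s SI-based action utility is simply the negative of her CI-based action utility, the largest SI-based utility produced by an action equals minus the smallest CI-based utility produced by that action. So, for any set of candidate actions,

    max over people of SI  =  −(min over people of CI)   for any given action,

    and the action with the smallest (max SI) is therefore the action with the largest (min CI),

which is just the equivalence noted above.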

What if multiple actions share the same greatest minimum action utility? In this case, we can resort to the other available comparative measure, namely the total action utility. We therefore have a two-tiered ranking system: Rank first by minimum action utility, and then by total action utility.

Summary of the Calculus

Here is a summary of the calculus we have formulated thus far:

  1. For the given moral problem, determine the relevant actions, people, and time intervals.
  2. Make the best possible prediction of the action utility for each combination of person and action.
  3. Rank each action according to its minimum action utility, from highest to lowest.
  4. If multiple actions share the greatest minimum action utility, rank these actions according to their total action utilities, from highest to lowest.
  5. The preferred action is the one that ranks highest.
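To make steps 3 to 5 concrete, here is a minimal Python sketch of the ranking (the data structure and function name are my own, and step 2 – predicting the action utilities – is assumed to have been carried out already):

    def preferred_action(action_utilities):
        # action_utilities maps each candidate action to a dictionary of predicted
        # per-person action utilities (i.e., steps 1 and 2 are assumed already done).
        # Steps 3-5: rank by minimum action utility, breaking ties by total.
        def rank_key(action):
            utilities = list(action_utilities[action].values())
            return (min(utilities), sum(utilities))
        return max(action_utilities, key=rank_key)

    # The ice cream example from earlier in this chapter:
    ice_cream = {
        "Adam eats":  {"Adam": 60, "Bella": 10},
        "Bella eats": {"Adam": 20, "Bella": 30},
    }
    print(preferred_action(ice_cream))   # Bella eats

The tuple returned by rank_key implements the two-tiered ranking directly: the minimum action utilities are compared first, and the totals are consulted only when those minimums are equal.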

I must emphasize that, at several points in the preceding development, there was more than one reasonable way forward. There is no guaranteed “right” answer at every step. If morality really is a tool for satisfying a particular human (or even animal) need, we can try to devise the best tool possible, but there is no guarantee that a single optimal configuration exists, or that we can find it. In the same way, we might agree that there is no single ideal form of a hammer, wrench, or screwdriver. The form of these tools depends on the goal sought, and there is no guarantee that a single perfect form exists for any given goal.

In the next chapter, I will discuss the second important component of consensual utilitarianism: Consent.

Return to the previous chapter.

Return to the table of contents.
