This is the fourth of five posts making up my essay on moral calculus.
In this post, I discuss situations in which the systems from the previous chapters reveal certain “blind spots”. I use that term with some trepidation, because I do not wish to make value judgments about what a particular system can and cannot do, since we have not established a suitable metric against which such judgments can be made. However, there are two particular areas of discussion that can be suitably divorced from such judgments. The first concerns situations in which the ranking algorithms of our various problem solutions produce ties. The second concerns implicit assumptions made in the development of these systems that were not mentioned previously.
Let us begin with the desire-based system of Chapter 1, in particular the solution we offered to Problem 1. This solution is probably more likely to return equal rankings – and therefore stalemates – than any of the other solution algorithms considered. This is because it is based purely on simple binary relationships (fulfill or thwart) with no continuous scales such as intensity or emotional value. This can be demonstrated by a very simple situation in which there are only two relevant desires (A and B), and in which desire A fulfills desire B and desire B fulfills desire A. Solution 1 asks us to rank each desire by summing its values relative to the remaining desires. Since each desire has only one value, and that value is 1, the ranking of each desire is also 1, and the desires thus rank equally. Stalemate is also reached if, instead of the desires being mutually fulfilling, they are mutually thwarting. In this case, the value of each desire with respect to the other desire is -1, which is also the ranking of both desires. Although it is less likely, it is still possible for similar stalemates to arise for larger sets of desires. As long as every desire in the set fulfills and thwarts the same number of desires, each will acquire the same ranking.
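The stalemate just described can be made concrete with a small sketch. The dict-of-dicts representation here is my own choice for illustration, not something defined in the earlier chapters; it simply records, for each desire, a value of +1 for every desire it fulfills and -1 for every desire it thwarts, and ranks each desire by the sum of those values, as Solution 1 prescribes.

```python
def rank_desires(relations):
    """Rank each desire by summing its values relative to the
    remaining desires (the Solution 1 procedure)."""
    return {desire: sum(values.values())
            for desire, values in relations.items()}

# Two mutually fulfilling desires: each has a single value of +1,
# so both rank 1 -- a stalemate.
mutual_fulfilment = {"A": {"B": 1}, "B": {"A": 1}}
print(rank_desires(mutual_fulfilment))  # {'A': 1, 'B': 1}

# Two mutually thwarting desires stalemate in the same way at -1.
mutual_thwarting = {"A": {"B": -1}, "B": {"A": -1}}
print(rank_desires(mutual_thwarting))  # {'A': -1, 'B': -1}
```

The same function also exhibits the larger-set stalemates mentioned above: any set in which every desire fulfills and thwarts the same number of desires produces identical row sums.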
Stalemates become much less likely with subsequent solutions offered in the first three sections of this essay. For instance, with Solution 2 we see that even if there exist only two mutually fulfilling (or thwarting) desires, no stalemate would result unless the intensities of the two desires were exactly equal. Even the smallest difference in intensity would result in unique, albeit extremely close, rankings. (We note that in such a case, the normalized rankings would still span the range from 0 to 1, but the “raw” rankings would be two very similar numbers.)
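A sketch of this tie-breaking behavior follows. The particular weighting used here – multiplying each relation value by the intensity of the desire it concerns – is a hypothetical stand-in for the Solution 2 formula, chosen only to show how an arbitrarily small intensity difference yields unique raw rankings while the normalized rankings still span 0 to 1.

```python
def rank_with_intensity(relations, intensity):
    """Return (raw, normalized) rankings, weighting each relation
    value by the intensity of the affected desire (illustrative
    weighting, not necessarily the exact Solution 2 formula)."""
    raw = {desire: sum(value * intensity[other]
                       for other, value in values.items())
           for desire, values in relations.items()}
    lo, hi = min(raw.values()), max(raw.values())
    span = hi - lo
    # With distinct raw rankings, normalization stretches them to [0, 1].
    normalized = {desire: (r - lo) / span if span else 0.5
                  for desire, r in raw.items()}
    return raw, normalized

relations = {"A": {"B": 1}, "B": {"A": 1}}
intensity = {"A": 0.70, "B": 0.71}  # the smallest difference breaks the tie
raw, normalized = rank_with_intensity(relations, intensity)
print(raw)         # two very similar, but unique, raw rankings
print(normalized)  # {'A': 1.0, 'B': 0.0} -- spanning the full range
```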
A similar situation applies to Solution 3. Two actions will rank equally only if they produce exactly the same change in emotional value. The same line of reasoning applies to Solutions 4 and 5.
4.2 Hidden Assumptions
The first implicit assumption of potential importance is the one made in Chapter 1, and first identified in Chapter 3: that the claim “desire A fulfills (or thwarts) desire B” implies that any action that fulfills desire A must also fulfill (or thwart) desire B. This assumption is a generalization and was eliminated by our hybridization of the desire-based and happiness-based systems.
At this point, we might ask the question: how do we know that desire A fulfills desire B? A possible answer is that we have observed that any action that fulfills desire A also fulfills desire B. Thus, to have any confidence in our statement that desire A fulfills desire B, we must be aware of at least one action that fulfills both of these desires, and we must be confident that there are no actions that fulfill desire A without also fulfilling desire B. It seems unlikely that such confidence can ever be extremely high. If two desires are truly distinct, it is likely that there exists some action that fulfills only one of these desires.
Of course, as noted in Chapter 2, it is likely that we will only ever have some finite set of relevant actions to choose from. If this is the case, then a statement like “desire A fulfills desire B” is only true if every relevant action that fulfills desire A also fulfills desire B. Deciding the truth of our statement thus requires us to work through all the available actions. Consequently, there is little merit in ignoring actions altogether, as we did in Chapter 1.
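The check described above can be sketched directly. The representation of an action as the set of desires it fulfills is hypothetical, but the test itself follows the reasoning of the last two paragraphs: over a finite set of relevant actions, “desire A fulfills desire B” holds only if at least one action fulfills A, and every action that fulfills A also fulfills B.

```python
def fulfills(desire_a, desire_b, actions):
    """Decide 'desire_a fulfills desire_b' over a finite set of
    relevant actions, each represented as the set of desires it
    fulfills. Requires at least one action fulfilling desire_a."""
    fulfilling_a = [acts for acts in actions if desire_a in acts]
    return bool(fulfilling_a) and all(desire_b in acts
                                      for acts in fulfilling_a)

actions = [{"A", "B"}, {"A", "B", "C"}, {"C"}]
print(fulfills("A", "B", actions))  # True: every action fulfilling A fulfills B
print(fulfills("A", "C", actions))  # False: one action fulfills A but not C
```

Note that the verdict is always relative to the available actions: enlarging the set with a single action that fulfills A alone would flip the first result to False, which is exactly why such statements can never be held with very high confidence.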
Another implicit assumption was made when developing systems corresponding to Solutions 1, 2, and 5. It has to do with the location of the relevant brains. These systems used Definition 1.4 to determine which brains were relevant: “Relevant brains are those containing at least one of the relevant desires.” According to this definition, relevant brains could include one situated in the United States and one in Australia: location is irrelevant. I make no judgment about this assumption; I simply make note of it here because it might be of interest when considering moral systems below.
The happiness-based systems in Solutions 3 and 4 offer a somewhat different prescription for relevant brains (Definition 2.3): “Given a set of relevant actions, relevant brains are those whose emotional value can be modified by at least one of the actions.” In our discussion of Solutions 3 and 4 we noted that actions modify emotional values by being detected by the senses. This observation places a constraint, albeit a fairly loose one, on the location of the relevant brains. Specifically, a given brain will not be relevant to a set of actions if its location makes it unable to detect any of those actions. However, an action may influence a brain indirectly even if the action is not observed. For instance, a president, in the privacy of her office, may sign a bill into law unobserved by all but a few advisors, yet this bill may have a very real influence on the happiness of millions of citizens. Although we used the language of sense detection in Chapter 3, the wording of Definition 2.3 makes no explicit reference to how actions modify the emotional value of brains, so indirect modifications of the type just described are implicitly included.