Update 3: Relevant actions

In this update to my essay on morality, I would like to relax one of the restrictions I set in place when first formulating my moral system.

When previously considering relevance, I began with actions, saying the following:

In most problems, we are presented with a relatively small set of available actions to choose from, so this is where we begin our analysis of relevance.

By “problems” here, I was referring to my definition of a moral problem:

A moral problem is a question of the following kind: “Which action should I perform in order to best fulfill a particular moral goal?”

This is a very generally stated problem, and does not suggest any obvious restriction on the actions being considered. Although some problems permit only a few possible actions, others require us to choose from a much larger set, or to actively seek novel actions that might serve us better than the available set.

Take, for instance, the rather interesting moral problem raised on the Wikipedia entry for “justice”, in the context of utilitarian punitive practices:

So, the reason for punishment is the maximization of welfare, and punishment should be of whomever, and of whatever form and severity, are needed to meet that goal. Worryingly, this may sometimes justify punishing the innocent, or inflicting disproportionately severe punishments, when that will have the best consequences overall (perhaps executing a few suspected shoplifters live on television would be an effective deterrent to shoplifting, for instance).

Contentment utilitarianism supports the idea of punitive measures as a means of maintaining the proper functioning of morality in society. Thus, if we faced a scenario in which only two actions were allowed, executing shoplifters on live television or taking no punitive action against shoplifters, the contentment utilitarian calculus might end up preferring the former.

Such an odd, intuitively immoral conclusion is reached only because such an oddly restrictive set of actions is on offer. In reality, the problem of shoplifting can be tackled with a much wider range of possible actions. Perhaps stores could post large pictures of past shoplifters on their doors, thereby shaming those shoplifters into reforming their behavior. Or, more conventionally, shoplifters could simply be fined.

If these actions were added to the original two, then contentment utilitarianism would immediately alight on them as preferred options, since they cause less suffering to the shoplifters while still deterring future crime.

The point I am making here is that we can easily shortchange ourselves by unnecessarily restricting our available options in a particular moral problem. And, since my original restriction on the available relevant actions was a practical rather than logical necessity, it can be lifted without sacrificing reason.

However, there is one difficulty that must be addressed before we can go on. As we can see from the above definition of moral problems, a crucial defining component of any moral problem is the set of actions available. Consider the following example:

Which of the following two actions should I perform to best fulfill the goal of contentment utilitarianism (i.e., minimizing LODs): steal food from a store, or buy the food?

The specificity of this problem lies in the actions listed: this is a moral problem about shoplifting. If we were to generalize the available actions arbitrarily, including unrelated options such as “go for a walk”, then the moral problem would become hopelessly general. Therefore, to be more general about actions without becoming more general about our moral problem, we need to restructure our definition of the latter.

If we look more closely at what happens in a typical moral decision-making situation, we see that an individual (or sometimes a group) has a particular desire, and wishes to know whether it would be “right” or “wrong” to take an action that would satisfy that desire. In this framework, the shoplifting moral problem could be restated as follows:

I have a desire for food, and I could reduce the intensity of this desire by stealing food from the store, or by buying it. Which of these two actions is preferred by contentment utilitarianism?

When phrased this way, the problem can be opened up to a much wider range of actions without diluting its central concern. A complete generalization would look something like this:

I have a desire for food. Which of all the possible actions that could affect the intensity of this desire would be preferred by contentment utilitarianism?

This problem is still about the desire for food, but it considers absolutely any action that might increase or decrease the intensity of this desire, not just a small predetermined set of actions. Of course, as we noted previously, it might actually be preferable (morally) to perform no action at all.

I therefore redefine “moral problem” as follows:

Moral problem

A moral problem is a question of the following kind: “Of all the actions that would change the intensity of a particular desire, which would best fulfill a particular moral goal? Would this preferred action fulfill the moral goal more completely than taking no action at all?”

It is important to note that the action that solves the moral problem may in fact increase the intensity of the initial desire (for instance, the action of not stealing food may cause the potential thief to become more hungry). It must be remembered that solving moral problems is not intended to satisfy one particular person’s desires over another’s: it treats all relevant people equally.
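To make the shape of this redefinition a little more concrete, here is a minimal sketch in Python. It is only an illustration of the structure of the definition: the MoralProblem container, the goal_fulfillment scoring function, and the NO_ACTION placeholder are names I am inventing here for exposition, not part of the moral system itself.

```python
from dataclasses import dataclass
from typing import Callable, List

NO_ACTION = "take no action"  # always available as a baseline

@dataclass
class MoralProblem:
    """A moral problem in the redefined sense: an initial desire plus a moral goal."""
    initial_desire: str                       # e.g. "a desire for food"
    candidate_actions: List[str]              # all actions that would change the desire's intensity
    goal_fulfillment: Callable[[str], float]  # hypothetical: how well an action fulfills the moral goal

def solve(problem: MoralProblem) -> str:
    """Answer both questions in the definition: which action is preferred,
    and does it fulfill the moral goal more completely than taking no action at all?"""
    if not problem.candidate_actions:
        return NO_ACTION
    preferred = max(problem.candidate_actions, key=problem.goal_fulfillment)
    if problem.goal_fulfillment(preferred) > problem.goal_fulfillment(NO_ACTION):
        return preferred
    return NO_ACTION
```

In the shoplifting example, candidate_actions would contain “steal food from the store”, “buy the food”, and every other action that would change the intensity of the desire for food.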

Given this new starting point, the choice of relevant desires, actions, and people must be reconsidered. Because a given moral problem is associated with a particular desire (which we will call the initial desire), we can use this desire to choose a set of relevant actions as follows:

Relevant actions

The set of relevant actions X contains all actions capable of changing the intensity of the initial desire. The set also contains the option of taking no action at all.

From this point on, relevance can be defined as before: relevant people are those affected by any action in X, and relevant desires are those whose intensity would be affected by any action in X (this definition necessarily includes the initial desire, but does not give it any special significance over the other desires). We must also retain the earlier modification to X: any action that goes against the will of a relevant person is removed from X.
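The same illustrative style can summarize this construction. As before, changes_intensity, people_affected_by, and consents_to are hypothetical placeholders standing in for judgments the moral reasoner has to supply; the sketch only shows how the sets fit together.

```python
NO_ACTION = "take no action"

def relevant_sets(initial_desire, all_actions, all_people, all_desires,
                  changes_intensity, people_affected_by, consents_to):
    """Build the relevant actions X, the relevant people, and the relevant desires
    for a moral problem whose initial desire is given."""
    # X: every action capable of changing the intensity of the initial desire,
    # plus the option of taking no action at all.
    X = [a for a in all_actions if changes_intensity(a, initial_desire)] + [NO_ACTION]

    # Relevant people: anyone affected by any action in X.
    people = {p for p in all_people if any(p in people_affected_by(a) for a in X)}

    # Relevant desires: any desire whose intensity would be affected by some action in X.
    # This necessarily includes the initial desire, with no special status attached to it.
    desires = {d for d in all_desires if any(changes_intensity(a, d) for a in X)}

    # Retained modification: remove from X any action that goes
    # against the will of a relevant person.
    X = [a for a in X if all(consents_to(p, a) for p in people)]

    return X, people, desires
```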
