Quick guide to consensual utilitarianism

I make occasional modifications to my moral theory, consensual utilitarianism, and this makes it a little difficult to see the overall structure of the moral system as it currently stands. This page therefore serves as a quick guide to consensual utilitarianism with all modifications included.

Explanations are intentionally kept to a minimum here, so please refer to the original essay and its updates for more information.

Groundwork

My moral system is based on desires. It seeks to maximize the fulfillment of desires in as many people as possible, and is therefore a utilitarian theory. Its currency is not happiness, but contentment, which I define as a function of desire.

A desire implies that the desire-holder wishes to be in some other state in which that desire is fulfilled. In other words, the presence of a desire leaves the desire-holder in a less preferred state. Consensual utilitarianism offers a mechanism for deciding which action, if any, will satisfy a given desire in such a way that makes all relevant people as content as possible.

Given that the presence of a desire is not a preferred state, it can more strictly be associated with discontentment rather than contentment, hence the following definition:

Level of discontentment (LOD)

A person’s level of discontentment is defined as the sum of intensities of all the desires she currently holds.
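
Expressed as a minimal sketch in Python (the Desire and Person records are illustrative assumptions of mine, not part of the theory itself), the definition is just a sum:

    from dataclasses import dataclass, field

    @dataclass
    class Desire:
        intensity: float  # how strongly the desire is currently felt

    @dataclass
    class Person:
        desires: list = field(default_factory=list)

    def level_of_discontentment(person):
        # LOD: the sum of intensities of all desires currently held.
        return sum(d.intensity for d in person.desires)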

An action may influence the intensity of desires in a person, and will do so over a finite period of time. I therefore define the time-dependent utility of an action as follows:

Action utility

Consider an action that produces a change I in the intensity of a particular desire, and maintains this change for a time t. The utility of the action with respect to the desire is the product of I and t.
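
In the same sketch, this is a single product; a negative intensity change (the desire is weakened or extinguished) gives a negative utility:

    def action_utility(intensity_change, duration):
        # Utility of an action with respect to one desire: the change I
        # in the desire's intensity times the time t the change lasts.
        return intensity_change * duration

    action_utility(-2.0, 5.0)  # -10.0: a desire weakened by 2 for 5 time units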

I now define a moral problem, in the context of consensual utilitarianism, as follows:

Moral problem

Given a set X of possible actions, and the alternative of taking no action at all, select the option which is likely to produce the largest decrease in the average LOD of all people relevant to the problem (relevance is defined below).

This formulation says nothing about why a particular set of actions should be relevant, and it therefore includes a wide range of problem types. For instance, if the moral problem can be expressed as “I want to do w, but should I actually do it?”, then the two relevant actions are w and not-w. On the other hand, if the problem is of the sort “I want to achieve y, but I’m not sure how to go about it in an ethical manner”, then the relevant actions might be all of those that would achieve y and, as always, the option of not taking any action at all. Yet another common problem type is “Of the actions available to me, which should I choose?”. In this case, the relevant actions are simply those that are practically possible.

Given the initial set of actions, then, the next step is to identify the people relevant to the moral problem posed:

Relevant people

A relevant person is one whose LOD would be changed by at least one action in X. This may include people who have yet to be conceived at the time the moral decision is made.
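
As a sketch, with a hypothetical change_in_lod helper that returns the change an action would make to a person's LOD (and a population assumed to include anyone who would be conceived under some action):

    def relevant_people(population, actions, change_in_lod):
        # A person is relevant if at least one action in X would change
        # her LOD. change_in_lod(person, action) is an assumed helper.
        return [p for p in population
                if any(change_in_lod(p, a) != 0 for a in actions)]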

At this point, I note that it is unfair to expect an action to be included in X if it would go against the will of any relevant person, barring the situation in which that action is performed with the purpose of maintaining the moral system itself (here, I am thinking of punishment for crimes, etc.). Such actions must therefore be removed from X. See more here.

A person is most likely to object to an action if it will increase her discontentment. However, it is also possible that a person would object to an action even if it decreased her discontentment, and I see no good reason to disallow this class of objection. I therefore make no attempt to quantify, and thus predict, objections to actions, but leave these decisions to the individuals involved.
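
This consent requirement can be sketched as a filter on X, assuming hypothetical helpers objects_to (each individual's own verdict, not a prediction) and maintains_moral_system:

    def consented_actions(actions, relevant_people, objects_to,
                          maintains_moral_system):
        # Drop any action that some relevant person objects to, unless
        # the action serves to maintain the moral system itself
        # (e.g. punishment for crimes).
        return [a for a in actions
                if maintains_moral_system(a)
                or not any(objects_to(p, a) for p in relevant_people)]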

Relevant desires

A relevant desire is any desire whose intensity can be changed by at least one of the actions in X. This includes desires that only come into existence when one of the actions in X is performed.

Relevant time period

The earliest moment at which any of the actions in X starts to affect any relevant desire is denoted T1. Similarly, the latest moment at which any of the actions in X still affects any relevant desire is denoted T2. The relevant time period starts at T1 and ends at T2.
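
In the sketch, T1 and T2 are simply a minimum and a maximum over the spans during which actions affect relevant desires:

    def relevant_time_period(effect_intervals):
        # effect_intervals: (start, end) pairs, one per span over which
        # some action in X affects some relevant desire.
        t1 = min(start for start, end in effect_intervals)
        t2 = max(end for start, end in effect_intervals)
        return t1, t2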

Calculus

Consensual utilitarianism determines which action in X is preferred by performing the following calculus:

  1. For a given person and action, sum the action utilities for all the desires associated with that person and action, provided the person exists in the outcome of that action. This yields a total action utility for each person-action pair.
  2. For a given action, add the total action utilities of all existing people, and divide by the number of existing people. This yields a per capita utility for the relevant, extant population.
  3. The preferred action is that which has the smallest (or most negative) per capita utility.
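
Putting the sketch together (the data layout, in which each action maps every person who would exist under it to the (intensity change, duration) pairs for the relevant desires, is my assumption):

    def per_capita_utility(effects_by_person):
        # Steps 1 and 2: total action utility per existing person,
        # averaged over the relevant, extant population.
        totals = [sum(action_utility(i, t) for i, t in effects)
                  for effects in effects_by_person.values()]
        return sum(totals) / len(totals)

    def preferred_action(actions):
        # Step 3: pick the smallest (most negative) per capita utility,
        # i.e. the largest drop in average discontentment.
        return min(actions, key=lambda a: per_capita_utility(actions[a]))

    # For example, two options, each affecting the same two people:
    options = {
        "help":       {"alice": [(-3.0, 4.0)], "bob": [(1.0, 1.0)]},
        "do nothing": {"alice": [(0.0, 0.0)],  "bob": [(0.0, 0.0)]},
    }
    preferred_action(options)  # "help": per capita utility -5.5 vs 0.0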

Examples

I will not reproduce here all of the applications of consensual utilitarianism that I have covered in this essay series. Instead, I refer the reader to the Examples page, and to further examples in the various updates to the original moral system, which can be reached from the table of contents.
