In laying out the groundwork for this book, I made an argument for the importance of consent in moral decision making. A moral system that forces its dictates on people is doomed. This is especially true of any system that holds happiness as its principal value. People are not likely to be happy if forced to act against their will. Instead, the best tool for promoting a moral system is persuasion. And persuasion works best if it uses sound rational arguments. Indeed, the aim of this book is to provide a sound rational argument for a moral system.
Unfortunately, consent cannot be universally respected. I have already suggested that for any moral system to succeed, it must have some sort of reinforcement mechanism (a criminal justice system, for instance) that prevents people from taking advantage of others. This may involve forcing people to act against their will by, say, putting them in prison. Another issue with consent concerns the person on whose behalf it is given or withheld. I discuss both of these complications in greater detail later, but for now, here is a preliminary formal definition of consent:
Consent (Preliminary Definition)
If a person is fully informed of the consequences of an action and agrees, without being coerced, to the execution of this action, then she has given her consent to it.
In the simple example moral problems provided thus far, it has been assumed that all relevant beings have consented to all relevant actions. It has also been assumed that all relevant beings were fully informed of the available actions. The assumption of full access to information is the same requirement we made when defining relevant people. The arguments made then apply here, too.
However, it is not possible to attribute the mental states of being informed, or of giving consent, to any being who, at the time the moral decision is made, does not exist or otherwise lacks the cognitive ability to experience these states. And yet all relevant beings do, by definition, have a stake in the outcome of the moral problem. We therefore have to make some sort of assumption about their likely state of consent – if they had been able to form one – at the time the moral problem was being decided. Since happiness is the goal of our moral system, it makes sense to assume that consent would only be granted to whichever action produced the greatest action utility for the being in question. In other words, we assume that the being would look after its own best interests if it were capable of doing so.
Under the above assumption, and considering the rarity with which people consent to actions that work against their own interests, most moral problems will involve the withholding of consent by at least one relevant being. This is not a particularly surprising outcome. Morality would not be considered particularly important if people’s interests never conflicted and everyone lived in uninterrupted peace and harmony.
We now provide a formal description of the assumed state of consent for cases in which it cannot be obtained in the usual way:
Assumed Consent
If a relevant being does not exist at the time a moral problem is decided, or if the cognitive state of the relevant being prevents her from formulating or expressing her state of consent, it is assumed that this being consents only to that action which produces the highest action utility for that being.
I henceforth use the term “consent-incapable” to describe beings that fall under the assumed consent definition, and “consent-capable” for those capable of granting or withholding consent. In the chapter on relevance, we described a moral problem involving a missile launch that would cause suffering to people centuries from now. These people are consent-incapable because they do not exist at the time the moral decision is made.
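Assumed consent amounts to a simple maximization: a consent-incapable being is taken to consent only to the action (or actions) yielding its highest action utility. The following Python sketch illustrates this for the missile example; the action names and utility numbers are hypothetical, chosen purely for illustration.

```python
# Hypothetical action utilities for one consent-incapable relevant being
# (a person who will exist centuries after the decision is made).
action_utility = {
    "launch_missile": -50,  # the launch causes this future person suffering
    "do_nothing": 0,
}

# Under assumed consent, the being is taken to consent only to the
# action(s) that produce its highest action utility.
best = max(action_utility.values())
assumed_consent = {a for a, u in action_utility.items() if u == best}

print(assumed_consent)  # {'do_nothing'}
```

If several actions tie for the highest utility, the being is assumed to consent to all of them, which is why the sketch collects a set rather than a single action.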
Consent can be abused in certain ways. A balanced view of consent is based on the interests of the consenter alone. If multiple people were to base their consent on the interests of only one particular individual, then the interests of that individual would be over-represented. There is no a priori justification for such an imbalance. And the only way to ensure that the interests of every individual are given equal weight is to stipulate that consent be based on self-interest alone. Each person looks out for him- or herself. Importantly, the specification of assumed consent for consent-incapable beings ensures that these beings’ interests are always represented.
This approach may smack of selfishness, but it actually leaves plenty of room for compassion. To see this, we should consider what self-interest means in the context of a happiness-based moral worldview. To defend your interests is to avoid any action that would decrease your happiness. This general definition is not limited to cases in which a person stands to lose personal property, social standing, or any other good he deems of value. It also includes threats to the well-being of people in whom he is emotionally invested. When a man defends the interests of his brother, he is also defending his own interests, because harm to his brother will cause (emotional) harm to himself.
Indeed, a suitable definition of friendship or love might have at its core the contingency of one person’s emotional state on another’s. To love someone is to suffer when they suffer and to be glad when they are glad. We cannot say we love someone whose emotional ups and downs have no impact on our own. In a sense, love is the extension of the self to include the other. Our self-interest grows to include that of another person.
We also need to be careful about a second, more pernicious, imbalance of consent. This arises when people are motivated by a desire to do harm. If one person wishes to harm another, he may not accept any action that would protect the other person. Not only would this result in an imbalance of consent (the interests of each person would not be equally represented), but it would directly contradict the stated purpose of the moral calculus, which is to promote happiness and reduce suffering.
In light of this discussion, we consider consent only insofar as it involves the self-interests of the consenter, with self-interest defined in the extended sense described above. Put differently, if a relevant being withholds consent from an action that poses no threat to her own happiness, her lack of consent should not be considered binding. Indeed, even if a person nobly withholds her consent because she fears the action will threaten the interests of another person, we can reject this lack of consent, knowing that the other person’s interests are already defended by his or her own consent.
Likewise, if a person withholds consent because she actually wishes harm to come to another person by doing so, her withholding of consent is overridden by the consent of the person who stands to be harmed, since only this latter consent is made on the basis of self-interest.
We can now incorporate the requirement of self-interest into our formal definition of consent:
Consent (Modified Definition)
If a person is fully informed of the consequences of an action and agrees, without being coerced, to the execution of this action, then she has given her consent to it. Such consent is considered binding only if it is motivated by that person’s self-interest.
Now that we have a full definition of consent, we can consider the simplest state of consent in a moral problem, namely one in which all relevant beings are capable of giving consent and all do, in fact, consent to at least one available action. To address this situation, we proceed directly with a first rule of consent:
First Rule of Consent
When at least one action exists which all relevant people would, if fully informed, consent to, the relevant people should select whichever of these actions they prefer. If they seek guidance regarding the moral fitness of these actions, the consensual utilitarian calculus can provide a ranking.
Under this rule, it is quite possible that all relevant people will agree to an action that makes one person happier and everyone else more miserable. This situation might arise in, say, a family that wishes to send a daughter to college. To save money for her tuition, the parents might agree to decrease their own happiness by working longer hours. Indeed, any example of voluntary sacrifice for the benefit of another person fits this general scenario, provided all relevant people, including the person benefiting from the sacrifice, consent.
If consent-incapable relevant beings exist, then the first rule is not likely to apply because, by the preceding definition of assumed consent, consent-incapable beings will be assumed to object to certain actions. We can express this observation as a formal caveat to the first rule:
Caveat to the First Rule of Consent
The first rule of consent does not apply if:
1. There exists at least one consent-incapable relevant being and
2. At least one of these beings objects (under the requirement of assumed consent) to at least one of the actions agreed upon by the consent-capable relevant beings.
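Taken together, the first rule and its caveat describe a concrete check: intersect the consent sets of the consent-capable people, then verify that no consent-incapable being, under assumed consent, objects to any agreed-upon action. The Python sketch below is one possible encoding; the data structures and the college-tuition values are my own assumptions, not from the text.

```python
def first_rule_applies(capable_consent, incapable_assumed_consent, actions):
    """Return (applies, agreed_actions) under the first rule and its caveat.

    capable_consent: one set of consented-to actions per consent-capable being.
    incapable_assumed_consent: one set per consent-incapable being, derived
    from the assumed-consent definition (highest action utility only).
    """
    # Actions that every consent-capable relevant person agrees to.
    agreed = set(actions)
    for consents in capable_consent:
        agreed &= consents
    if not agreed:
        return False, agreed  # no universally agreed-upon action exists
    # Caveat: the rule fails if any consent-incapable being objects
    # (under assumed consent) to at least one agreed-upon action.
    for assumed in incapable_assumed_consent:
        if agreed - assumed:
            return False, agreed
    return True, agreed

# College-tuition example: both parents and the daughter consent to the
# parents working longer hours; there are no consent-incapable beings.
applies, agreed = first_rule_applies(
    capable_consent=[{"work_longer"}, {"work_longer"},
                     {"work_longer", "status_quo"}],
    incapable_assumed_consent=[],
    actions=["work_longer", "status_quo"],
)
print(applies, agreed)  # True {'work_longer'}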
What if all beings are consent-capable (the caveat to the first rule does not apply) but there are no actions which they all consent to? If we truly respect the idea that all people have the same authority to make decisions about their happiness, we have no basis upon which to force any of them to act a certain way. The only exception occurs when people agree beforehand to follow the recommendations of the moral system, even if they occasionally find these recommendations objectionable. In the absence of such an agreement, though, we cannot force people to act in a particular way. We do not have the authority to do so.
But there are two things we can do. First, we can search for actions that have not yet been considered. Usually the available actions under consideration concern specific desires held by the relevant people. If it is possible for these desires to be fulfilled by performing some action that was not in the original set, then such an action should be sought. What this essentially amounts to is an attempt to reformulate the moral problem in such a way that the first rule of consent would apply.
If no additional action can be found, there may be another way forward. This is an appeal to persuasion. If we make the best case we possibly can for the recommended action, then one (or more) of the relevant people might change his or her mind, and grant consent to that action. As has already been mentioned, one of the purposes of constructing a well-formulated moral system is, in fact, to provide a persuasive argument in favor of a particular action.
The search for additional actions and the use of persuasive argument can be encapsulated in a second rule of consent.
Second Rule of Consent
If there is no action which all consent-capable and consent-incapable relevant beings would, if fully informed, consent to, the following steps should be taken:
1. An additional action should be sought which, when included in the moral calculus, will win the consent of all relevant beings.
2. If no additional action of the above kind can be found, a persuasive case should be made to all consent-capable beings for whichever action in the current set is preferred by the moral calculus, in the hope that any relevant person who has withheld his consent will change his mind. Relevant people may not, however, be coerced.
The firm stand against coercion in the second rule follows naturally from the prohibition of coercion in our definition of consent. If the purpose of making a case for the preferred action is to persuade people to give their consent to it then, by the definition of consent, coercion cannot be used.
Step 2 in the second rule only makes sense when applied to consent-capable beings. It is not possible to discuss moral issues with cows or people in comas. The implication is that only consent-capable beings are able to change their minds and withdraw objections.
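The second rule can thus be read as a two-step procedure: search for a new, universally acceptable action; failing that, argue (without coercion) for the action preferred by the calculus. The sketch below is my own encoding under assumed helper functions – `all_consent`, `find_new_action`, and `utility` are hypothetical stand-ins, not part of the text.

```python
def apply_second_rule(actions, all_consent, find_new_action, utility):
    """Sketch of the second rule of consent.

    all_consent(a): True if every relevant being consents to action a.
    find_new_action(): a new candidate action, or None when none remain.
    utility(a): the action's ranking under the consensual utilitarian calculus.
    """
    # Step 1: seek an additional action that wins everyone's consent.
    candidate = find_new_action()
    while candidate is not None:
        if all_consent(candidate):
            return ("act", candidate)
        actions = actions + [candidate]
        candidate = find_new_action()
    # Step 2: no such action exists. Make a persuasive case for the
    # action the calculus prefers -- persuasion only, never coercion.
    preferred = max(actions, key=utility)
    return ("persuade", preferred)

# Kidney example: no new action (e.g. another willing donor) can be
# found, so we fall back to persuading the reluctant donor.
outcome = apply_second_rule(
    actions=["transplant", "do_nothing"],
    all_consent=lambda a: False,
    find_new_action=lambda: None,
    utility=lambda a: {"transplant": 5, "do_nothing": -5}[a],
)
print(outcome)  # ('persuade', 'transplant')
```

Note that the "persuade" outcome is a recommendation to argue a case, not a license to act: if persuasion fails, no one may be forced.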
As an example of the second rule applied, consider the hypothetical situation in which a person desperately needs a kidney transplant, and only one viable donor exists. The potential donor, however, refuses to undergo the necessary surgery. The two available actions are to do the surgery or do nothing. The person in need of a kidney does not consent to the idea of doing nothing, while the potential donor does not consent to giving up his kidney. There is thus no remaining action upon which both relevant people agree.
The second rule of consent suggests that we find a new action to which both parties can agree. Such actions may, in the current example, include the search for a willing kidney donor. Or, if the original potential donor is of an advanced age, perhaps the sick person can agree to go on dialysis until such time as the donor passes away and his kidney becomes available.
In the absence of such options, the second rule of consent requires that we try to persuade the donor to give up his kidney. This action, if performed voluntarily, would result in the best outcome according to the consensual utilitarian calculus. We cannot, however, coerce the donor, and if he refuses to give up his kidney, that is just the way it has to be.
The next issue we need to address is the circumstance in which all relevant people consent to an action but, at some point after that action is performed, one or more people regret their decision and withdraw their consent. What, for instance, should we do if the potential organ donor in the above example agrees to donate his kidney but, when he arrives at the hospital to be admitted for surgery, has a change of heart (excuse the pun) and decides to reverse his decision?
The first thing to note about this situation is that it arises after the initial moral problem was solved. It might therefore make sense to treat it as a new problem with its own set of available actions. Hence the third rule of consent:
Third Rule of Consent
If a person withdraws the consent she gave to the recommendation of a previous moral problem, a new moral problem is created and must be solved like any other.
It may happen that someone withdraws her consent to the preferred action in a previous moral problem, but has no practical recourse in reversing that action. For instance, if a woman decides to have an abortion and then, some time after the procedure, regrets her decision, there are no available actions that can reverse it. In this case, there is no second moral problem, because a moral problem must have at least two actions to discriminate among.
As already noted, the concept of criminal justice is a necessary one in any society. No moral system can succeed if its recommendations are routinely ignored with impunity. Let us suppose, then, that in order for a moral system to succeed, it must put forth a set of laws which, if broken, subject the lawbreaker to some sort of punishment or rehabilitation, provided this serves some useful purpose such as deterrence or a reduction in recidivism (it is not likely that other motives such as revenge or retaliation will be preferred actions under consensual utilitarianism). I have already argued that in order for a moral system to be adopted, consent must be given by the adoptive population. Yet if consent must be given to the moral system itself, then consent must also be given to the system of laws and punishments that promote its functioning. There is no a priori reason why one system should require consent while the other should not.
Built into the agreement to abide by the laws of the moral system is the acknowledgement that people might later object to being punished if caught breaking the law. Thus, if someone breaks the law and decides he does not want to be punished, even though he earlier agreed to abide by the laws of the moral system, he has no proper grounds for objection.
This leads to an exception to the second rule of consent:
Exception to the Second Rule of Consent
A person may be forced to act against her will if (and only if):
1. The action being forced is a recommendation of the reinforcement (i.e., criminal justice) system and
2. The person previously agreed to honor the recommendations of the reinforcement system.
How far should the law reach? As indicated by the first rule of consent and its caveat, an action that is agreed upon by all relevant beings is morally permissible provided it serves the best interests of any consent-incapable relevant beings. The law, then, should not be in the business of regulating such actions. What consenting people agree to do among themselves is their business.
The law should therefore focus on situations in which objections are raised either directly or by the assumed consent rule. Furthermore, because laws are generally fixed, they should apply only to situations in which a particular action is almost always associated with a low minimum action utility relative to the available alternatives. Murder is an obvious example. Consensual utilitarianism will essentially never recommend that a person be murdered, since this leads to lower action utilities than if the person were not murdered. It therefore makes sense for there to be a law prohibiting murder.
More generally, if there is no consensus on which moral system should be adopted, the law should, at the very least, protect people’s freedom to live according to whichever moral code they prefer. Even this requirement, though, imposes its own prescriptions on behavior. If people are to exercise their freedom, they must be protected from harm, meaning that prohibitions on acts like murder still make sense even in the absence of an underlying moral code.
Another point to note about the law is that it requires the use of a sliding scale. How consistent should the negative effects of an action be to warrant a law against it? I doubt that there is a firm numerical answer that can be derived from first principles, and I will not make a case for any particular approach here. I will simply note that, given the argument for consent made in this chapter, people should have recourse to object to any law their elected lawmakers deem necessary.
Until now, I have considered the consequences of actions under the hypothetical condition that relevant beings are fully informed, but I have not been clear about whether relevant beings should be fully informed. In our moral decision making, do we have a duty to disclose our decision process to all those who may be affected by it, or is it permissible to leave people “in the dark”?
The second rule of consent requires a case for the preferred action to be made to the relevant beings. This exercise cannot be carried out if information is withheld. If a being is relevant, then her interests are, by definition, contingent on the action chosen, and she deserves to be kept informed of the decision-making process. I should emphasize that even if people were not kept informed, the consensual utilitarian calculus would still produce the same results, since it requires the hypothetical assumption of full disclosure. However, execution of the second rule of consent would not be possible.
There is one exception to this argument. If a relevant being is not acting in a self-interested manner, as required by our modified definition of consent, then his consent is not considered binding. He is therefore in the same category as a person who raised no objections in the first place. And such beings do not need to be included in the execution of the second rule. The entire purpose of the second rule is to persuade relevant beings to withdraw their objections.
The only relevant beings who should be fully informed of the decision-making process are therefore consent-capable people who participate in the discussion triggered by the second rule of consent.
Importantly, the assumed objection of a consent-incapable being can trigger the use of the second rule even if that being is, by necessity, excluded from the discussion demanded by that rule.
This concludes our discussion of consent. In the next chapter, I consider some general implications for consensual utilitarianism.