My choice of moral system has some important implications, which I will discuss in no particular order here before diving into examples of how the system might be applied.
A Life Not Worth Living
When I formulated the consensual utilitarian calculus, I argued that death corresponds to an action utility of zero. This means that suffering, which by definition has a negative action utility, can under certain circumstances be worse than death. Recall that a comparison of action utilities requires a relevant time period to be established. If we are to compare death to life, then the relevant time period is the entire lifetime of the person in question (or at least her remaining lifetime). And if the person’s suffering is so persistent and so extreme that, over the course of her lifetime, it outweighs her happiness, then, and only then, is her life worse than death. By “outweighs”, I simply mean that the action utility established over her lifetime has a negative value: her life, taken on balance, is one of net suffering.
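The lifetime comparison described above can be sketched numerically. The following snippet is a toy illustration only: the period utilities are invented values, and the helper `lifetime_action_utility` is a hypothetical name, not part of any formal statement of the calculus.

```python
# Toy sketch of the lifetime comparison. The numbers are invented:
# positive values stand for periods of happiness, negative values
# for periods of suffering.

def lifetime_action_utility(period_utilities):
    """Net action utility over the relevant time period (here, a lifetime)."""
    return sum(period_utilities)

DEATH_UTILITY = 0  # death corresponds to an action utility of zero

# A life of net happiness: occasional suffering, but happiness dominates.
life_a = [5, -1, 4, -2, 6]
# A life whose persistent suffering outweighs its happiness.
life_b = [1, -6, 2, -7, -3]

for label, life in [("A", life_a), ("B", life_b)]:
    net = lifetime_action_utility(life)
    verdict = "worth living" if net > DEATH_UTILITY else "worse than death"
    print(f"Life {label}: net utility {net} -> {verdict}")
```

The point of the sketch is simply that the comparison with death is a comparison against zero: only when the lifetime sum goes negative does the calculus judge the life worse than death.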
Consensual utilitarianism therefore openly embraces the idea that some lives are simply not worth living. There is no magical quality to life that makes it worth holding on to no matter how burdened with sorrow and suffering it may become. This means that pity and compassion are the only proper responses to tragedies like suicide and euthanasia.
Status Quo Bias
Nick Bostrom, a philosophy professor at Oxford, makes a compelling argument that ethical decisions are often clouded by status quo bias which, as the name implies, involves a preference for the current state of affairs over any new action. Such bias is reflected in the old idiom “better the devil you know than the devil you don’t”.
Consensual utilitarianism deliberately avoids status quo bias. It does this in two ways. First, it ignores the levels of happiness that exist before the moral decision is made. Instead, it considers only the anticipated consequences of each action. Second, if one of the available actions is the preservation of the status quo, this action is put on the same footing as every other available action.
The Replaceability Argument
Peter Singer, in Practical Ethics, argues that hedonist utilitarians are obliged to take the view that beings are replaceable under certain circumstances. Since hedonist utilitarians use pleasure or happiness as their moral metric, one happy being is morally equivalent to another, and beings can therefore be freely replaced as long as the total amount of happiness remains the same.
For example, there would be nothing morally wrong with cutting short (in a humane fashion) the pleasant life of a young sheep, provided another sheep with a similarly happy disposition could be produced to take its place. Such an argument might come in handy for those supporting the breeding of animals for meat.
Although this idea doesn’t seem too terrible when applied to farm animals, it becomes rather grotesque when extrapolated to humans. Should we feel free to kill people as long as we know there will be others to take their place?
Consensual utilitarianism, being a hedonist moral system, appears at first glance to be subject to the replaceability argument. However, the argument fails to account for other characteristics of living beings. If happiness is our moral metric, then other aspects of personality (such as emotions, beliefs, and cognitive skills) are not directly relevant to morality, but they may be indirectly relevant, because the happiness of one person is often strongly influenced by the personality of another. In the calculus of consensual utilitarianism, then, utility depends on the relevant beings’ personalities as well as on their levels of happiness.
To see how this might play out, consider two people, Adam and Steve, who have been happily married for ten years. Imagine winding the clock back fifteen years and interfering with the lives of these two people in such a way that they fail to meet. Instead, Adam ends up marrying a different person: John. As it turns out, John experiences roughly the same level of happiness during his ten years of marriage with Adam as Steve did during his ten years of marriage with Adam.
According to the replaceability argument, the two situations in this example appear to be morally equivalent. We have simply replaced Steve with an equally happy John. However, the argument does not take into account the happiness of Adam. What if it turns out that Adam is happier when married to Steve than when married to John? Taking Adam’s happiness into account introduces a moral asymmetry to the problem, and we cannot simply replace Steve with John.
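The asymmetry can be made concrete with invented numbers. In this toy sketch, Steve and John are equally happy by construction, yet the two scenarios come apart once Adam’s happiness is counted; every figure below is hypothetical.

```python
# Hypothetical happiness totals over ten years of marriage.
# The numbers are illustrative only.
steve_happiness = 70   # Steve's happiness when married to Adam
john_happiness = 70    # John's happiness when married to Adam (equal, by construction)

adam_with_steve = 80   # Adam happens to be happier with Steve...
adam_with_john = 60    # ...than with John

# Total happiness in each scenario, counting everyone involved.
scenario_steve = steve_happiness + adam_with_steve
scenario_john = john_happiness + adam_with_john

# Although Steve and John are equally happy, the totals differ once
# Adam's happiness is included, so the scenarios are not morally
# equivalent and Steve is not freely replaceable by John.
print(scenario_steve, scenario_john)
```

Nothing hinges on the particular numbers; the asymmetry appears whenever the surviving party's happiness differs between the two pairings.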
There is another reason for objecting to the replaceability argument. You might notice that the above example was constructed in such a way that Adam did not actually have to experience the death of his first husband Steve, recover from that loss, and begin a new relationship with John. In the real world, however, it is not generally possible to kill people without such collateral suffering taking place. And as soon as this suffering is taken into account, we see that it is absurd to argue that there are no moral consequences to killing a person and replacing him with someone who is equally happy. The death of the first person will cause considerable suffering among his friends and family. The mere act of replacing someone, then, has unavoidable moral implications. This is, of course, true even if the original person does not die but his social network is damaged in some other way.
There is one rather unrealistic exception to this rule. Imagine some isolated part of the world that contains only one person. This person never comes into contact with other people. She has no family, and no relationships. Now imagine that when this person is only 15 years old, she accidentally slips off the edge of a steep cliff, falls to the bottom and dies instantly. At that moment, a clone of this person, also 15 years old, appears some distance away in the forest, and proceeds to live the same isolated life as the original person until, some months later, she too falls to her death and is replaced.
Is there anything morally objectionable in this scenario? Is it tragic that a person should die so young, and that this event should be repeated every few months? I would argue not. The person herself never suffers (she dies instantly), and she is never aware of her impending doom. Furthermore, no one else suffers as a result of her death, since she has no loved ones who will experience grief at losing her. In this rather bizarre case, then, the replaceability argument holds true. The death of the young girl is of no moral consequence as long as she is always replaced.
Of course, if we had reason to believe that the person in this example would become much happier later in life if she were able to avoid falling off the cliff, then the moral calculation would change, since one option (the person continuing to live) would involve greater happiness than the alternative (the person dying and being replaced by a person of similar happiness).
Total View vs. Prior Existence
Consensual utilitarianism defines a relevant being as any present or future person whose happiness is contingent on the choice of action in the given moral problem. This definition falls under what Peter Singer calls the “total view”, which says that the total amount of happiness is what counts, regardless of whose happiness it may be. The alternative to the total view is the prior existence view, which gives moral consideration only to beings that exist at the time the moral decision is made.
I object to the prior existence view because it introduces what appears to be an arbitrarily restrictive condition. If an action is going to have implications for people’s happiness, why should it matter if these implications are felt tomorrow rather than three hundred years from now? What makes people living today more worthy of moral consideration than those living centuries from now? I see no justification for such special treatment.
Because of its bias toward the present, the prior existence view can appear to be rather selfish. For instance, it would declare that there is little point in taking action against global warming because most people who are currently alive will probably not bear the brunt of its effects.
The total view is not without its quirks. For instance, it holds that a world with many happy beings is better than a world with few happy beings, implying, at least on the face of it, that we should all have as many children as possible. This is not exactly what consensual utilitarianism advocates. There is a limit to the number of happy children the earth can sustain. Not only are resources for those children limited, but the existence of more children places a burden on those who must share these resources or work to replenish them. Consensual utilitarianism would require all of these things to be taken into consideration, including the happiness of the parents. And, if these factors combined favorably, having a child would indeed be the recommendation of consensual utilitarianism. Of course, the rules of consent would not allow people to be coerced into having children, so the final decision would always be left to them.
Artificial Happiness
Hedonistic moral systems are open to criticism regarding artificially created happiness. What I have in mind here is the (unlikely) scenario in which either a drug or some other technology is developed that allows us to experience a permanent state of artificially generated bliss, even if this means separating us from contact with the outside world. Would consensual utilitarianism recommend such a state, given that happiness is its sole aim? The simple answer to this question is “yes”.
This does not, however, mean that we should follow such a recommendation. I have made an argument that the desire for happiness and the aversion to suffering are the most fundamental needs of human beings (and other animals). The purpose of consensual utilitarianism is to take care of these needs. This does not, however, mean that the desire for happiness and the aversion to suffering are the only needs people have. People also have strong desires for social interactions, for exploration and discovery, and a host of other things. Consensual utilitarianism does not consider these needs directly; it only considers how they might affect a person’s happiness. If a person wishes to sacrifice permanent, drug-induced happiness for a more variable, often lower level of happiness in order to fulfill other desires, she should by all means do so. Consensual utilitarianism in no way prohibits this.
Indeed, recall that the first rule of consent deems morally permissible any action to which all relevant parties consent. Thus, if a person wishes to subject herself to considerable pain in exchange for the experience of, say, climbing Mt. Everest, consensual utilitarianism would not discourage her from doing so. It would, in fact, be completely silent on the matter, provided no one else raises a binding objection to her decision.
The above argument applies also to the more common issue of debauchery. If it were the case that a deep, sustained happiness could be achieved by constant drinking, consumption of narcotics, indiscriminate and unsafe sexual practices, and other examples of debauched behavior, and that such a lifestyle did not harm others, it would follow that consensual utilitarianism would recommend such an approach. If, however, a person had desires in addition to the desire for happiness, he would have to weigh the recommendation of consensual utilitarianism against these desires.
There are three additional, and more direct, arguments against debauchery. First, it is far from clear that a debauched lifestyle can be sustained without putting someone else’s happiness (or life) in danger. For instance, drunk driving and spousal abuse are just two of the outcomes strongly associated with heavy drinking. Second, debauchery is unlikely to afford deeper, more sustained happiness than a more moderate lifestyle (or even an ascetic one). Heavy drinking, drug abuse, and unsafe sex are all associated with serious, often terminal, health problems.
Even in the unrealistic case of an artificially constructed state of perpetual bliss which, for the sake of argument, has no negative side effects, the maximum possible level of happiness that can be achieved in this way may, perhaps, turn out to be less than that which arises from interactions with the real world and the people in it. Such questions can only be properly resolved once such technology is developed, if indeed it ever will be.
The second argument also turns on long-term happiness. The consensual utilitarian calculus is strongly dependent on the duration of an action’s influence. An action that causes happiness for a longer time is preferable to an action that causes a transient moment of happiness. And it seems to me that a debauched lifestyle is doomed to failure in the long term.
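The role of duration can be shown with a toy calculation. Here an action’s contribution is modeled, purely for illustration, as intensity multiplied by duration; both the model and the figures are assumptions of mine, not part of the formal calculus.

```python
# Illustrative duration weighting: an action's utility is taken here,
# for the sake of the sketch, as intensity times the number of years
# over which it operates. All figures are invented.

def action_utility(intensity, duration_years):
    return intensity * duration_years

# An intense but fleeting pleasure versus a modest but sustained one.
transient_binge = action_utility(intensity=9, duration_years=0.01)
moderate_life = action_utility(intensity=3, duration_years=30)

print(transient_binge, moderate_life)
```

Under any weighting of this general shape, the sustained source of happiness dominates the transient one, which is the intuition behind preferring the long view.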
In terms of philosophical traditions, consensual utilitarianism is therefore more closely aligned with the Epicureans than with the Cyrenaics. The latter school promoted the pursuit of whatever pleasures were immediately available, while the former was concerned with the long term.
Third, it might turn out to be the case that we are constrained by our neurobiology to achieve long term happiness only if such happiness is regularly interspersed with short periods of suffering. In other words, experiencing the contrast between emotional extremes may be required to fully appreciate happiness.
Religion and Happiness
There is some evidence, easily obtained through a quick internet search, that religious people are somewhat happier than non-religious people in some settings. The correlation is not straightforward, since some studies indicate that very religious people are about as happy as non-religious people, with moderately religious people being the least happy (see, for example, the 2011 study by Erich Gundlach and Matthias Opfinger published by the German Institute of Global and Area Studies).
But let us assume for now that the basic idea holds – that people are happier if they are religious. Would it not be true, then, that consensual utilitarianism would recommend religious belief? Once again, the short answer is “yes”.
But there is a catch. It is one thing to recommend a belief system to people, but it is another for them to actually adopt it. People are not going to become Christians (to pick one popular religious tradition) simply because studies indicate that Christianity makes people happy. Being a Christian requires sincere belief in certain claims, such as the divinity of Jesus Christ. We cannot make ourselves believe in such things because of their promised benefits alone. Instead, we only believe in such things if they appear to be true: if there is compelling evidence in their favor.
All consensual utilitarianism can do, then, is encourage people to maintain their religious beliefs if they already have them (assuming these beliefs do, in fact, make them happy, and do not harm others).
There is another complication. One of the main reasons religious people are happier than their non-religious counterparts is their access to institutions that provide social structure and support (see, for instance, the 2012 article in Scientific American by Sandra Upson). If this is the case, then the recommendation of consensual utilitarianism would be for people to seek social structure and support, be it religiously themed or not. If the religious doctrines themselves do not influence happiness, they become morally superfluous.
There may, however, be an argument in favor of the psychological comfort that certain religious beliefs offer. The promise of heaven, for instance, may assuage the fear of death. Does consensual utilitarianism recommend that people hold such beliefs? There are two problems with answering this question in the affirmative. The first problem is the one already mentioned. It is not possible to adopt a belief simply because of its purported benefits. The second problem is that many religious beliefs, because of their lack of scientific evidence, may well be false.
So why not hold false beliefs if they make us happy? We can approach an answer to this question by realizing that no one can actually hold a belief if they know it to be false. This is a logical impossibility. If I believe X, then I take X to be true. It does not make sense to say I believe X while simultaneously regarding X to be false. The question should therefore be modified: Should we deceive people, for the sake of their happiness, into holding false beliefs? For example, should we raise our children as Christians even if we, ourselves, are atheists?
I have argued in this book that an action should only proceed if it has the consent of all relevant people. And, importantly, I have argued that the consideration of consent must assume that all relevant people have full knowledge of the actions being considered. To ask if it is morally permissible to teach people false beliefs, then, we need to ask whether these people, if they knew what we were doing, would consent. I think it is reasonable to conclude that few people would approve of being deceived into believing falsehoods, even if it was for their own benefit. Consensual utilitarianism would therefore not recommend that such action be taken.
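The consent test described above can be summarized in a small sketch. The function `permissible_by_consent` and the example data are hypothetical names and values of my own; the point is only that a single informed refusal blocks the action.

```python
# Toy encoding of the first rule of consent: an action is permissible
# only if every relevant party, assumed to have full knowledge of the
# action, consents. Names and data are invented for illustration.

def permissible_by_consent(informed_consents):
    """informed_consents: mapping person -> bool, assuming full knowledge."""
    return all(informed_consents.values())

# Few people, knowing they were being deceived, would consent to it:
deceptive_teaching = {"child (fully informed)": False, "parent": True}
print(permissible_by_consent(deceptive_teaching))
```

Because the check requires unanimity among fully informed parties, deceiving someone into a false belief fails the test as soon as that person, informed of the deception, would withhold consent.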
This does not mean that people should be discouraged from communicating their religious beliefs to others. If a parent sincerely believes that, for example, heaven awaits those who believe, she should not be discouraged from passing that teaching on to her children. The same goes for any belief that does not cause harm.
The discussion of deceptive teachings leads naturally to the topic of lying. Is it ever permissible to lie? As you will have gathered, consensual utilitarianism does not take an absolutist approach to morality. It is not a deontological system based on rigid rules of behavior (interestingly, Sam Harris, a well known atheist author, takes such a hard-line stance against lying, as expressed in his book by that name). I therefore have to conclude that there may be situations in which lying is acceptable. I consider one of these in detail in the next chapter (see the discussion on the Jewish sympathizer during the Second World War).
To determine whether a lie is permissible under a particular set of circumstances, one simply has to follow the consensual utilitarian calculus. The more interesting question, though, is whether consensual utilitarianism has any general claim to make about the virtues of lying, or lack thereof. Since happiness and suffering are of fundamental importance under consensual utilitarianism, we can rephrase the question as follows: Does lying usually cause suffering or happiness? I think it is reasonably clear that lying is not a happiness-producing activity. If lying has any value, it must be in preventing suffering.
There is, however, a serious problem with lying not directly related to its influence on happiness and suffering. The proper functioning of any moral system relies on the truth being known. No reliable moral guidance can be given when the facts are unclear or deliberately falsified. It therefore seems a fundamental requirement for the proper functioning of morality, and indeed of society in general, that lying be avoided as far as possible.
In the next chapter, I will consider a series of examples that demonstrate further the workings of consensual utilitarianism.