An episode of BBC Radio 4’s “The Moral Maze” entitled “Moral Philosophy for the Internet” looks at the question of whether incendiary material on social networks ought to be censored and, if so, by whom.
A guest on the program suggested that we are too preoccupied with the online format of today’s offensive material (for example, recruitment videos for the Islamic State). We don’t like the idea of censoring books, so why should we consider censoring online publications? Another guest, however, suggested we censor not only original online content, but also material that originated in print and is now available online. In particular, she mentioned passages of the Bible that condemn homosexuality and speak permissively of slavery.
Consider the principle of free speech, which holds that everyone is permitted to express his or her opinions and beliefs, no matter how offensive they may be to us, unless they explicitly incite harm to others. That exception has real teeth: a young woman was recently convicted of involuntary manslaughter for encouraging her boyfriend, via text message, to go through with his planned suicide.
It seems reasonable to want Google, Facebook, Twitter, and other online entities to properly censor the content they host. Currently, many of these companies do so by waiting until users flag the content as incendiary – the public is essentially recruited as a censorship panel. Although this is undoubtedly the easiest and cheapest approach, it is obviously not ideal. How long, exactly, is Facebook comfortable displaying a post before it’s flagged?
Text messaging, by contrast, is all but impossible to censor in advance – it would be like trying to censor a telephone conversation in real time. Instead, as the suicide case shows, our only recourse is to prosecute crimes after they have occurred.
The panel on the Moral Maze also questioned the actual harm done by internet postings. Do ISIS recruitment videos really convince people to become terrorists, or do they simply push over the edge those who are already becoming radicalized?
And what value is there in leaving this kind of material uncensored? If removing it prevents even a single death, or some lesser act of violence, isn’t that worth it? What do we lose by censoring it? Surely this material is not elevating our society and its ideals in any way.
A common objection here is that we risk picking an unscrupulous or biased censor. What if the head of censorship at Facebook is an evangelical Christian who chooses to suppress anything even vaguely related to Islam? This slippery-slope fear drives a lot of people to advocate for no censorship at all.
I think, as a modern, sophisticated society, we have to tackle such problems intelligently. We have to strike a balance. As long as the public is kept informed about how censorship is done, and is given some power to prevent the system from being abused, we should be able to reach a happy medium.