It’s clear that beliefs can be wrong about the way the world is, but can they also be wrong in a moral sense? Lewis Ross looks at the moral status of belief.

A Morality of Belief?

The bread-and-butter of ethics is working out what sort of actions are morally right and wrong. Is it wrong to spend thousands of pounds on a postgraduate course when so many are suffering from malnutrition? Is it permissible to cause one person to die in order to save five others? Is it ever morally permissible to disobey the law? Actions are familiar objects of moral evaluation.

But what about beliefs? Can a belief be morally wrong? The answer is not at all obvious. There is some temptation to suppose that we overreach by suggesting that a mere thought can be morally blameworthy. But, in other moods, we are inclined to accept that what goes on in our minds is open to criticism. There is, surely, something unsavoury about wishing ill to those around you or feeling gleeful at the misfortune of your friends. What’s more, we can readily imagine certain views – e.g. Holocaust denial, or racist and misogynistic perspectives – that seem abhorrent. Perhaps beliefs can be morally bad (or even good) after all? This question has been scrutinised in recent literature, so let’s dig a little deeper.

While it is uncertain whether beliefs can be morally wrong, they can certainly be wrong from what philosophers call the “epistemic” perspective. We criticise people for what they believe all the time. But when we ask, “What makes a belief good or bad?”, it is tempting to suppose that beliefs go awry simply for truth-related reasons. For example, a standard way to criticise a belief is to suggest that it is irrational – say, because it is obviously false, or because it is contradicted by the evidence. Holocaust denial is a bad opinion both because it is false and because there’s lots of evidence demonstrating its falsity. Do we really need to go further and suppose that, in addition to being irrational, beliefs can be morally wrong too?

Two Cases

Some philosophers have argued “yes”: beliefs can be morally wrong independently of whether they are supported by good evidence. I’ll discuss two cases that some have taken to support this conclusion. One stems from the unsavoury nature of certain types of demographic profiling. Here’s a slightly amusing example taken from the experience of a friend of mine:

Drinker. You are a Scottish person visiting Houston, Texas. As you shop in the supermarket, picking out two bottles of beer, you chat to your partner on the phone. Hanging up the call, you catch the eye of another customer who asks: “That’s a Scottish accent, right?” You tell him he is correct. He nods, turning away. A minute later, the man reappears, this time looking more serious. “You know”, he intones confidentially, “all you have to do is walk through the door – AA saved my life.”

This is a case of profiling – where someone draws an inference about you (that you have a drinking problem) based on membership of a particular demographic (Scottish persons). Suppose, for the sake of argument, that it really was the case that male Scots like myself were more likely than not to have a drinking problem. Even if that were true, there seems to be something amiss in forming such a belief. Some have thought that cases like these – including much more problematic examples involving overt racism and misogyny – are cases where we morally wrong the person we are profiling.

Another argument is that we have a sort of moral duty not to believe ill of our friends or family. Suppose that someone is telling you a story that paints your close friend in a bad light. The evidence doesn’t look great, but it isn’t conclusive. Should you believe badly of your friend? Some have thought not – and not because of the lack of evidence, but simply because the ethical requirements of being a good friend demand that you give them the benefit of the doubt.

At this point, you might think: “Well, what’s wrong here is acting on the basis of such beliefs. If the person with the apparently unsavoury belief keeps their mouth shut, then there would be no harm done, regardless of what was going on in their head.” This isn’t an entirely satisfying response. One important concern is that it doesn’t explain what is wrong with private profiling. Suppose someone flicks through a magazine and forms nasty beliefs about people photographed within based on their gender or skin-colour. There seems to be something rather objectionable about this, even if they somehow manage to scrupulously ensure that it doesn’t influence their behaviour.

Despite the apparent force of these cases, accepting that beliefs themselves can be morally good or bad isn’t an entirely happy conclusion either. For one thing, morality is typically thought to provide us with powerful – perhaps even non-negotiable – reasons to do certain things. But the fact that some thought is morally beneficial doesn’t seem, at first blush anyway, like the sort of thing that could compel us to believe it if it happened to be contradicted by the evidence. And secondly, taking beliefs to be morally good or bad leaves us susceptible to conflicts between different ways we can assess beliefs. Take the example of believing ill of your friend. If the belief is based on good evidence then you might think it is epistemically correct to believe accordingly, praiseworthy even. But if the conjecture about the moral evaluability of belief holds true, then there’s a sense in which it is bad to believe ill of your friend – criticisable, even. So, what should we say about someone who has such a belief? Should they be praised or criticised? Answering this question becomes very difficult indeed if beliefs are objects of moral evaluation, because there’s no obvious way of balancing the value of intellectual and moral considerations. Perhaps they do well from one perspective but badly from another? But then we are left saying that there is no ideal state to be in – we are doomed to be criticised whenever we encounter cases where we have good evidence for a “morally wrong” belief.

A Solution: The Neutral Option

I think there is a way out of this impasse. Our intellectual options are not restricted to simply believing something or disbelieving it. Another option is to remain neutral. We often talk about this neutral option as “suspending judgement”. We suspend judgement about many things, often as we wait for more evidence to come in. For example, you might suspend judgement about whether your serious-minded new colleague has a sense of humour – you’ll have to wait until the Christmas party to find out. An interesting feature of suspended judgement is that it can be justified – perhaps even required – by factors that extend beyond the evidence you have at a given time. While we usually think that a belief is rational or not simply depending on whether it is supported by the evidence you have when you form it, suspending judgement can be justified by pragmatic factors: say, the fact that you’re going to get better evidence shortly, or because you are cognitively impaired (perhaps you’ve been visiting Houston and have had a few beers).

Another especially relevant fact about suspended judgement is that it seems to be naturally justified by moral considerations. Take a simple example: near the end of a trial, a jury member leaning heavily towards “guilty” should suspend judgement until they’ve heard the final day of evidence, because making their mind up at that point would be unfair to the accused. The reason that remaining neutral in morally risky situations is an attractive response is that suspending judgement doesn’t involve believing against the evidence – it doesn’t involve believing at all. Therefore, crucially, we don’t need to say that people who suspend judgement in response to the cases discussed earlier must either be epistemically blameworthy or morally blameworthy. By refraining from forming a belief, they avoid breaking any epistemic rules and they meet any moral obligations they might have regarding the management of their beliefs.

I don’t think there’s anything unusual in supposing that the failure to suspend judgement can be something we can be morally criticised for, even while saying that beliefs themselves are not objects of moral evaluation. After all, the fact that we are open to moral criticism for creating or maintaining something doesn’t mean that the thing itself is a subject of moral criticism. A simple example makes this clear (with overdue apologies to my old officemate Dr Mattia Gallotti). Take the example of leaving a horrendous mess in your shared office. What’s inconsiderate is creating and preserving a mess. When we call the mess morally bad, we don’t really mean that the mess itself is morally evaluable. Really, what is criticisable is the choice to make the mess in the first place and to leave it there. The same goes with our cognitive lives. Sometimes we form beliefs when we ought to have remained neutral – perhaps because we have an obligation to have faith in our friends or to not expose individuals to the risks inherent in profiling them. The moral problem is allowing that to happen and not going back to clean up by suspending judgement when appropriate.

This is all perfectly compatible with the traditional philosophical idea that the “aim of belief” is truth or knowledge. Just as the aim of a landmine might be to explode when stepped upon (a good landmine is one that fulfils this function), the aim of a belief is to accurately reflect the world (a good belief is one that fulfils this function). But this doesn’t mean that we shouldn’t morally evaluate those who put landmines in the wrong places or those who form beliefs when it would have been more appropriate to reserve judgement.

In sum, I think we should agree that many instances of profiling and (perhaps) believing ill of your nearest and dearest involve morally dubious behaviour. But we needn’t say that beliefs themselves can be morally good or bad. Indeed, I think that this would be a sort of category error. What is morally dubious is failing to remain neutral in certain morally sensitive situations until you have decisive evidence either way.

By Lewis Ross


Dr Lewis Ross is Fellow of Philosophy and Public Policy at LSE. He researches the relationship between epistemology (the study of what we can know or rationally believe) and law, politics and ethics.


Further reading