With fake news and disinformation seemingly thriving during the COVID-19 pandemic, J. McKenzie Alexander looks at the epistemology and psychology of fringe beliefs.

It goes without saying that the coronavirus pandemic has brought about huge changes in contemporary life. Many international flights are grounded, many people have been kept inside their homes in lockdown, and economies across the world are virtually frozen. Enormous uncertainty exists regarding what form the “new normal” will take when this is all over. And, if all of that were not bad enough, arsonists throughout Europe have set about torching 5G towers because they believe the towers are connected, somehow, to the spread of coronavirus.1 According to the Financial Times, as of 16 April, at least 60 towers had been burnt down in the UK, where the movement started, with more across Europe. Given that these beliefs are not based on evidence, and lead to actions detrimental to society, believing that 5G towers are implicated in the spread of coronavirus seems like some seriously weird sh*t.

Now, believing weird sh*t (to introduce a term of art) is by no means new. People have believed weird sh*t ever since humans evolved the ability to form beliefs. But one thing which cries out for explanation is why, in a time when scientific knowledge is growing faster than ever, so many people believe so much weird sh*t. For example: that airplane contrails are actually toxic chemicals released into the sky as part of a governmental effort, that Hillary Clinton was involved in a paedophile ring based in the Comet Ping Pong pizzeria, that Barack Obama was not born in the United States, and – somewhat reassuringly down-to-earth in contrast – that anthropogenic climate change has been vastly overstated. What’s going on? One important aspect, I suggest, is that in contemporary society, for some people, what matters most about beliefs is not their truth-value, but rather their role in signalling group membership and in alleviating unmet psychological needs.

So how should we think of beliefs? Does it matter if they are not true? The question is tricky because some beliefs – like those underlying the theory of Newtonian mechanics – are, strictly speaking, false. But Newtonian mechanics, although false, is approximately true and well-founded on evidence. We can use Newtonian mechanics to accurately describe the world and to make powerful interventions in it. Although both are literally false, there is a world of difference between Newtonian mechanics and the weird sh*t some people believe.

Two models of belief formation

When philosophers think about beliefs – cognitive attitudes towards propositions that have a truth-value – they often worry about the following questions: (i) what counts as appropriate evidence for a belief? or (ii) what is the appropriate degree of belief, given the evidence? Both questions are motivated by the desire to avoid false beliefs. We cannot entirely avoid error, but we can try to minimise the likelihood of error. And minimising the likelihood of error means that an epistemically careful, rational agent will try to ensure that they base their beliefs on appropriate evidence. And this is where things become interesting, because people often form beliefs by relying on the wider community in which they are embedded.

Let’s consider an issue which affects us all: anthropogenic climate change. The belief that it exists has a truth-value independent of whether we actually know it. One idealised model of how rational actors form beliefs is that they use a process, like Bayesian updating, to adjust their credences based on the available evidence.  However, in the real world, most of us face the problem that we cannot engage with the relevant evidence directly because we have neither the time nor the expertise required. In cases like this, we often rely on another person’s expert opinion to form a belief by proxy, using their judgement to fix our belief. Or, since most of us don’t have a single scientific expert on whom we rely, what we do is rely on the considered opinion of a reliable epistemic community. For example, we rely on epidemiologists to tell us about the spread of SARS-CoV-2, and what we should do to avoid developing COVID-19. Traditionally, in the case of beliefs about the nature and causal structure of the world, we would look to the epistemic community of scientists: their expert knowledge and track record give them the credibility to act as arbiters on such matters. In this idealised model, beliefs aim at the truth.
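
To make the idealised model concrete, here is a minimal sketch of a Bayesian update in Python. The hypothesis, the likelihoods and the stream of evidence are all invented for illustration; nothing here models real climate data.

```python
# A minimal sketch of Bayesian updating (illustrative numbers only).
# An agent revises her credence in a hypothesis H after each piece of
# evidence E, using Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E).

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H|E) from the prior P(H) and the likelihoods."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

credence = 0.5           # initially undecided about H
for _ in range(10):      # ten independent pieces of confirming evidence,
    credence = bayes_update(credence, 0.8, 0.3)  # each likelier if H is true

print(round(credence, 4))  # credence climbs towards 1 (here, 0.9999)
```

Run the same loop with disconfirming evidence – likelihoods favouring not-H – and the rule drives the credence towards 0 instead. On the idealised model, that responsiveness to evidence is the whole point.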

What the model of idealised rational belief formation neglects is the fact that, for some, perhaps many, people, what matters most about a belief is the relationship between the person and a group associated with that particular belief. Individual belief often serves as a signifier of group membership, and endorsing a particular belief often becomes an informal requirement of group membership. The idealised model assumes the following process of belief formation: beliefs should be based on evidence and, therefore, when I cannot fix a belief with sufficient reason to be confident that it is true (or likely to be true), I should rely on a reliable epistemic community whose considered judgement is unlikely to lead me astray. An alternative sociological model conceives of belief formation as follows: my beliefs are often associated with a particular group, and that group contributes (sometimes significantly) to my social identity. My social identity, as a source of esteem, friendship, and camaraderie, constructs meaning in my life. Preserving my social identity is a matter of great importance to me. Certain matters are earmarked as constitutive of membership in that group and, hence, beliefs concerning those matters are determined not by evidence but by my group identity. And this can happen even when, objectively, none of the group members are qualified to advise on those beliefs. Why would any agent behave in such a way? Because doing so is the best way to ensure continued group membership.2

I have described these two models of belief formation as though they were mutually exclusive, but we must keep in mind that both can operate within individuals to varying degrees at different times. The two models should be thought of as ideal types, useful to distinguish for the purpose of understanding. It is not uncommon for both processes to operate at the same time, and for the individual to feel torn between accepting a belief constitutive of group membership and rejecting that belief based on evidence to the contrary. Some religious beliefs fall into this category, particularly ones concerning metaphysics. And we should not overlook that the category “believer in science” can itself be understood as a social identity, so there is interplay between the two models as well.

The decoupling of belief and truth

The inversion of grounds for belief on the sociological model might strike some as odd. To begin, it would seem to decouple beliefs from reality in ways that the “reality-based community”3 would legitimately see as harmful. If there is no mental state a person can adopt which, for example, would make bleach safe to consume,4 why wouldn’t blind deference to group-determined beliefs – especially that kind of weird sh*t – eventually be eliminated by natural selection? How can obviously false, potentially harmful and socially damaging beliefs persist and spread?

Here we must recognise that, in the developed world, we have engineered society in such a way that we rarely need to rely on the accuracy of the vast majority of our beliefs in order to navigate the world safely. If you have what David Graeber calls a “bullshit job”, you don’t need many accurate beliefs to do your job and get paid. And if your most deeply held beliefs go against what you are required to do in your job (because they are racist, sexist, Islamophobic, not politically correct, anti-religious, etc.), you can go through the motions, bracketing what you really think, and tell yourself “this is what I need to do in order to get paid”. A person doesn’t really need to have accurate scientific beliefs, coherent political beliefs, or reasonable economic or social beliefs to go to the supermarket and buy food. A person doesn’t have to believe in evolutionary theory to go to the doctor, be prescribed antibiotics, take them and get better. A person doesn’t have to believe in general relativity to use their phone’s GPS to find their way, even though the technology would malfunction if its designers didn’t take the gravitational time dilation implied by general relativity into account. People who believe weird sh*t can free ride on those in society who strive for truth, because technology and social institutions provide a pretty big buffer between any single person’s beliefs and the stark fist of reality.
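
To put a rough number on the GPS example, here is a back-of-envelope calculation – a sketch using approximate textbook values for the Earth and the GPS constellation, not the engineering specifications actually used in the system.

```python
# Back-of-envelope check of the GPS claim (approximate textbook values).
# Weaker gravity in orbit makes a satellite clock run fast relative to
# a ground clock; the satellite's orbital speed makes it run slow.
# GPS must correct the net drift or positional fixes degrade rapidly.

G = 6.674e-11   # gravitational constant (m^3 kg^-1 s^-2)
M = 5.972e24    # mass of the Earth (kg)
c = 2.998e8     # speed of light (m/s)
R = 6.371e6     # radius of the Earth (m)
r = 2.656e7     # GPS orbital radius, about 20,200 km altitude (m)

grav = (G * M / c**2) * (1/R - 1/r)  # fractional gain from weaker gravity
vel = -(G * M / r) / (2 * c**2)      # fractional loss from orbital speed
net = grav + vel                     # net fractional clock drift

drift = net * 86400                  # seconds gained per day
print(f"clock drift: {drift * 1e6:.1f} microseconds per day")  # ~38.5
print(f"ranging error: {drift * c / 1000:.1f} km per day")     # ~11.5
```

Thirty-eight microseconds a day sounds negligible, but light travels roughly eleven kilometres in that time, so an uncorrected receiver would drift off the map within a day.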

Of course, to someone fundamentally committed to truth it appears deeply hypocritical to behave in this way but, from the weird sh*t believer’s point of view, so what? The person who thinks like that will still feed themselves, be cured of their infection, and be able to get to where they want to go. The only beliefs a person really needs to survive in a modern society are elementary ones like how to cross the street without being run over, how to drive a car (not an issue if you can take public transportation), how to pay your bills on time, how to cook over a gas stove without blowing the house up (not an issue if you have an electric hob), how to use your smartphone, and so on. These beliefs are instances of highly specific pragmatic local knowledge, and are compatible with a wide variety of nonstandard theoretical beliefs about the world. Members of the Flat Earth Society can negotiate daily life just fine. Society’s collective technological prowess has, for better or for worse, radically decoupled people’s ability to survive from the theoretical coherence, truthfulness and accuracy of their beliefs. Furthermore, certain false beliefs can even be fitness-enhancing. If a person erroneously believes that crime is on the increase and, as a result, they insist on staying home in the evenings rather than going out in public or driving places, this change in behaviour reduces their exposure to car accidents and being mugged or otherwise assaulted.
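
The crime example can be made vivid with a toy expected-harm calculation. All of the probabilities below are invented purely for illustration; the point is only that behaviour driven by a false belief can still lower a person’s exposure to harm.

```python
# Toy model: a false belief ("crime is rising") that nonetheless pays.
# The per-evening risk figures are invented for illustration.

P_HARM_OUT = 0.0010    # chance of an accident or assault on an evening out
P_HARM_HOME = 0.0001   # chance of comparable harm on an evening at home

def expected_harms(evenings_out, total_evenings=365):
    """Expected number of harmful events over a year."""
    evenings_home = total_evenings - evenings_out
    return evenings_out * P_HARM_OUT + evenings_home * P_HARM_HOME

# The belief "crime is rising" is false, but it changes behaviour:
print(expected_harms(evenings_out=20))   # believer stays home: ~0.05 per year
print(expected_harms(evenings_out=200))  # sceptic goes out:    ~0.22 per year
```

The sketch deliberately ignores the benefits of going out; it shows only that, measured by exposure to this kind of harm alone, the false belief need not be selected against.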

So let’s recap: this decoupling of a person’s ability to survive from the truthfulness and accuracy of their beliefs frees up a person’s system of beliefs to assume a different functional role. What may matter most about a belief is not its particular content or actual truth-value, but what that belief signals about a person and a group with which they identify. The denial of anthropogenic climate change provides a nice illustration of this phenomenon in action. Climate change requires a coordinated, global response in order to be combated effectively. The actions of any single person are entirely irrelevant to the global outcome. Furthermore, due to the long time-delay in the climate’s response to environmental legislation, it is essentially impossible for any person to trace a material connection between what they do and any improvement to the environment over the short to medium run. Given this, beliefs about whether anthropogenic climate change exists can be co-opted to serve a signalling function about which group a person belongs to, because over the short run they are decoupled from noticeable material consequences.

Looking back over recent history, this seems to be what has happened with the Republican party in the US on several issues. I find it remarkable that, under Nixon, the Republican party passed a number of pro-environment pieces of legislation and founded the EPA. Yet today, the majority of Republican supporters are sceptical about climate change. Why? The importance of their social identity as Republicans leads them to defer to a core belief of their group. The real question is why Republicans have arrived at a consensus denying climate change when the evidence points to the contrary. The answer, I believe, is this: because the majority of economic and business interests represented among top Republican donors benefit from continuing with business as usual, rather than making the effective changes required to combat climate change. To a great extent, the Republican party line has been ideologically captured and subordinated to these economic and business interests. Furthermore, the denial of climate change can be spun, in the public context, as having two consequences. First, a rejection of “liberal science” with its purported political bias. (The comedian Stephen Colbert once said that “reality has a well-known liberal bias”.) Second, an attempt to bring back traditional extractive industries (such as coal mining) or to promote new ones (such as fracking), which appeal to communities that have historically voted conservative. Either way, climate change denial primarily functions to reaffirm the group identity of those who adopt the belief and to advance their local self-interest. Concerns about truth and evidence take a back seat to these other functions.

The social and psychological functions of systems of beliefs

This last example illustrates how the various social functions played by beliefs can subordinate the truth-directed role of beliefs. Once we recognise that there are alternative functional roles played by beliefs that can trump an interest in the truth, we see that the way to confront beliefs in weird sh*t involves engaging with the underlying functions that are served by that system of beliefs, rather than engaging with their theoretical content or evidential basis. And this shift towards the functionality of systems of beliefs means we have to acknowledge that, sometimes, the real social function served by a system of beliefs is not necessarily known by many – perhaps any – of the people who hold those beliefs.

What we are talking about, indirectly, is the well-known distinction between manifest and latent functions, introduced by the sociologist Robert K. Merton and featured in classic work in anthropology. A manifest function of a social practice is one which that practice has been deliberately designed to have. For example: randomised police patrols keep criminals from being able to predict a safe time to commit burglaries. A latent function of a social practice is one which that practice has, but for which it was not deliberately designed. The classic anthropological example is how the practice of extended lactation in hunter-gatherer tribes (i.e., breastfeeding infants for longer than twelve months) has the latent function of controlling the population of the tribe, since breastfeeding reduces fertility.

When it comes to the social functions of systems of belief, the distinction between manifest and latent functions helps us understand better why people believe weird sh*t. Beliefs which seem irrational when we treat them as evidence-based vehicles of truth can easily make sense from another point of view. When we appreciate that systems of beliefs can have, as their latent function, the satisfaction of other needs, the fact that those beliefs are unresponsive to contrary evidence no longer seems unusual. If a system of beliefs gives me a way to understand my place in society, an explanation of why I am unhappy (or happy), unsuccessful (or successful), and a justification for feeling the way I feel, there is little incentive to revise those beliefs in the face of contrary evidence, because doing so would leave me with unaddressed psychological and sociological needs.

This brings us back to the phenomenon with which we started: the torching of 5G masts. Providing evidence that 5G masts are not harmful will not stop the arsonists because, I suspect, this is not what is driving their actions. I think they are driven by a combination of several factors. To begin, there is widespread fear of SARS-CoV-2 and a belief – in many cases justified – that governments have fallen short in their response, failing, in whole or in part, in their duty to keep the public safe. Given this, people want there to be an easy solution, a “magic bullet” which will make everything better. In addition, there is a long-standing feeling held by many that decisions which negatively impact them are imposed from the outside: a new road that increases pollution, an increased number of flights into a local airport that raises noise levels, or the installation of an unsightly and unwanted 5G mast in the community. And we know that some industries, like the tobacco and fossil fuel industries, really have deliberately worked in the past to suppress information about the harmful effects of their products. This volatile cocktail of unaddressed desires, feelings, and beliefs creates the space in which beliefs in weird sh*t can multiply and flourish. A virus of the mind, so to speak. The metaphor is apt because, just like a virus, you cannot reason with it: eradication requires identifying, and removing, the background conditions that allowed it to take hold and spread in the first place.

By J. McKenzie Alexander


J. McKenzie Alexander is Professor of Philosophy and head of LSE’s Department of Philosophy, Logic and Scientific Method. His primary field of research concerns evolutionary game theory as applied to the evolution of morality and social norms, but more recently he has worked on the foundations of decision theory. He also has broad interests in the philosophy of science and social science.


Notes

1 – https://www.ft.com/content/1eeedb71-d9dc-4b13-9b45-fcb7898ae9e1 

2 – In speaking of a single group, I do not mean to ignore or downplay the importance of intersectionality. In some cases, of the many groups to which we belong simultaneously, there is one which is particularly salient. In those cases, the salient group can drive the process of belief formation. However, in other cases, no one group is salient and different groups can pull in different directions. How a person negotiates this is a fascinating question, but one which falls outside the scope of the present discussion.

3 – In a 2004 article in the New York Times, Ron Suskind quoted an anonymous White House aide who said people “in what we call the reality-based community, believe that solutions emerge from your judicious study of discernible reality.” The aide then said, “That’s not the way the world really works anymore.”

4 – https://www.theguardian.com/us-news/2019/apr/19/church-group-to-hold-washington-event-despite-fda-warnings-against-miracle-cure


Featured image: Public Domain