In Good Faith: How belief in the unknown guides moral action
Article 1 of 3 - How belief influences moral decision-making
In business, there are two critical inputs into any decision: the desired outcome and a belief about how each potential action will play out. In fact, this is true for any decision we face - from the small details we navigate each day, to the way we approach our interaction with the world over the course of a lifetime. When it comes to moral decisions, this way of thinking is most clearly laid out in consequentialist philosophies: where the correct judgment between right and wrong depends on an accurate forecast of impact. (For the philosophy buffs, I will tackle deontology later).
The role of morality in defining the ‘outcome’ part of this equation is obvious - in fact there are multiple philosophical approaches that tackle how we might measure it: whether it be maximising pleasure, reducing pain, satisfying individual preferences or the “greater good”. But the morality of our beliefs about how the world works, and therefore how our actions will translate into outcome, has been given much less focus.
Since the Enlightenment’s recognition that our actions play out in objective reality according to natural laws, science has rightly claimed ownership over that domain of knowledge. And it’s true that any good decision-maker will work to understand the evidence for and against their beliefs and weigh them accordingly, improving the likelihood that their actions will lead to their desired outcomes. But Western thinking has gone too far in reducing the function of belief to pure observation and reason.
In ‘The Ethics of Belief’, W.K. Clifford argues that it is “wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence”.1 The claim is that the only moral beliefs are ones backed by some standard of proof. This concept has been a cornerstone of Western rationalism since the essay’s publication in 1877, and I suspect is an overreaction to the historical dominance of religious thought up to that point.
Under this approach, the question becomes: how far should we go in setting an evidentiary standard for acceptable belief? Commonly understood standards include the 95% confidence level used in scientific testing, the “beyond reasonable doubt” of criminal prosecutions, and the “balance of probabilities” (more likely than not) used in civil litigation. Let’s proceed on the basis that the lowest of these - a 50% likelihood - is good enough to be considered a moral belief.2
Even with that low bar, Clifford’s position would still leave most of the decisions we face in life as unguided by moral reasoning. This is because functional and social survival, let alone progression in pursuit of our personal values, requires our projection into the future of a world that is complex beyond our individual and sometimes combined comprehension. Even in a strictly deterministic world, the amount of information required to accurately predict any outcomes that actually matter will always be beyond our grasp. We may have a pretty good handle on nuclear fusion, fluid dynamics and gene expression, but when it comes to biology, psychology and sociology, our predictive abilities become much more suspect. By excluding from moral judgment the predictions we must make about how our actions will play out in a complex social world, we are left with limited guidance about how to live a moral life.
And our most fundamental beliefs - the ones closest to our core assumptions about the nature of reality - are both the ones whose accuracy we can be least sure of and the ones with the most influence on our moral reasoning. These ‘ontological beliefs’ include the nature of physical reality, consciousness, free will, the existence of God or a higher power, and our interpretations of other people’s state of mind. The truths of these matters are largely unknown, and by some accounts unknowable, yet they are fundamental components of moral decision-making.
Take for example nihilism: nihilists reject the assumption that there is value to the human experience. On that basis we cannot build any moral judgments at all. All decent moral claims rest on a belief that others have personal experience similar to our own, and that those experiences matter. But some would argue that these claims cannot be proven, because they are not verifiable beyond our individual perspective.
Even after granting those assumptions, for morality to get off the ground, we must also believe that it’s possible to make a positive impact on the world, and that others won’t tear down the progress we make. Otherwise there would be no reason for us to make a personal contribution in the first place.
Furthermore, we must believe that collaboration with others can be ‘positive-sum’ - that it can generate incremental value compared with acting alone - else we will run into significant moral difficulties in situations where resources are fixed. Now that there are no geographical frontiers left to exploit, we are faced with a choice: either work together to grow the size of the pie with the resources we have, or face the realities of a zero-sum world - where the only way to increase our personal share is to take another’s piece.3 Unless we want to gamble that the entire world will somehow accept that they’ve had their fill, we best believe in the possibility of fruitful collaborative effort.
As such, having faith in ourselves, our spouses, and our neighbours is a necessary condition for moral progress. And this is the case even if that faith is built on shaky ground.
I agree with Clifford that, in the realm of the knowable, we should only believe what is evident; of course we should seek out and follow the evidence where we are able to. But given the complexity of the social world and our limited cognitive capacities, a large degree of doubt will necessarily underpin our actions. It’s in the unknown where we must fill in the gaps of reason with moral beliefs.
To that end, I propose that there are beliefs we are morally obliged to hold, regardless of the level of confidence we might have that they’re objectively true. These beliefs are centered around the expectations we have for our own contribution to the world and how others will engage with our efforts. Put simply: to act morally, we must believe in the possibility of a better future.
The influence of belief in the unknown can be clearly observed in intimate relationships, which by-and-large would not survive if subject to constant objective scrutiny about the likelihood of success. These relationships require the development of a shared vision, one that appeals to the dreams and desires of our partner. This vision is a careful balance between hopeful fantasy and brute fact, with far from certain outcomes.
And while a cold-hearted rationalist might claim that their faith in their spouse is merely an expectation based on past experience, when they hope for a better future in the face of current strife, they reveal the moral projections we place on top of objective fact.4
This doesn’t just apply to intimate relationships; it extends to all social interactions. We must allow our potential collaborators the benefit of the doubt if we want to build anything of value.
To illustrate the role that belief plays in relationship dynamics, we can look to Game Theory for some useful insights. In “The Emergence of Norms”, Edna Ullmann-Margalit describes a situation involving two soldiers, previously unknown to each other, who are charged with defending a mountain pass (analogous to the classic Prisoners’ Dilemma). From the Stanford Encyclopedia of Philosophy:5
“If both flee, the enemy will not only be able to take the mountain pass, it will also be able to overtake and capture the fleeing soldiers. And if just one of them stays while the other flees, the brave soldier will die trying to hold off the enemy, while the other will have just enough time to escape to safety…
In this situation each soldier has a reason to flee. This is because fleeing provides the only chance of escaping unharmed, which is the outcome each most prefers. However, the situation is a dilemma, because if each soldier pursues this course of action, then they will both end up worse off than if neither had…
Ullmann-Margalit pointed out that if the soldiers understood the nature of their predicament they might want to do something to prevent themselves from both fleeing and subsequently being captured… Ullmann-Margalit’s suggestion was that it is moral norms that do this work.”
In this view, the function of morality is to prevent failures of rationality. Acting independently, the best option for yourself is to flee and sacrifice your comrade. But if we add in a reasonable expectation that each soldier will live up to their duty - a social norm - and the shame that would come if they don’t, the personal incentive now lines up with the action that will yield the optimal outcome.
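This dynamic can be sketched in a few lines of code. The payoff values below are hypothetical, chosen only to reproduce the structure of the dilemma described above: escaping unharmed is each soldier’s most preferred outcome, dying at the post is the worst, and mutual flight is worse than mutual defence. The ‘shame’ penalty stands in for the social norm:

```python
# Illustrative payoff matrix for the two-soldier dilemma (hypothetical values).
# Higher is better. Each entry: (soldier A's payoff, soldier B's payoff).
payoffs = {
    ("stay", "stay"): (3, 3),   # the pass is held and both survive
    ("stay", "flee"): (0, 4),   # A dies holding the pass; B escapes unharmed
    ("flee", "stay"): (4, 0),   # A escapes unharmed; B dies holding the pass
    ("flee", "flee"): (1, 1),   # both are overtaken and captured
}

def best_response(matrix, other_action):
    """Soldier A's best action, given what the other soldier does."""
    return max(["stay", "flee"], key=lambda a: matrix[(a, other_action)][0])

# Without a norm, fleeing is the dominant strategy whatever the other does...
assert best_response(payoffs, "stay") == "flee"
assert best_response(payoffs, "flee") == "flee"
# ...which lands both soldiers on the (flee, flee) outcome, worth only 1 each.

# Now add a 'shame' cost for abandoning one's post - the social norm at work.
SHAME = 2
with_norm = {
    (a, b): (pa - SHAME * (a == "flee"), pb - SHAME * (b == "flee"))
    for (a, b), (pa, pb) in payoffs.items()
}

# With the norm, staying is the best response either way, so personal
# incentive now lines up with the mutually optimal (stay, stay) outcome.
assert best_response(with_norm, "stay") == "stay"
assert best_response(with_norm, "flee") == "stay"
```

The exact numbers don’t matter; what matters is the ordering of outcomes. Any shame penalty large enough to make abandoning one’s post worse than holding it flips the equilibrium from mutual flight to mutual defence.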
But when we go so far as to equate social norms with morality itself - that is, if we consider it morally acceptable to follow a norm “only when expectations are satisfied for a sufficiently large number of people”6 - we reduce morality to just another artifact to be judged against an evidentiary standard.
The problem with this view is that it excludes morality from the very situation it’s needed most: when social norms are in doubt. Under Ullmann-Margalit’s definition of ‘morality-as-social-norms’, once the mob turns and expectations are low, it’s no longer a moral obligation to resist the urge to join in. If there’s anything we should learn from 20th century conflicts, it’s how easily the suffering of others can be legitimised when moral duties are informed only by what we expect of those around us, and the horrors that can follow from that line of thinking.
This view also clearly conflicts with our intuitive sense of morality. It’s obvious that a moral actor would stand up for principle even when (especially when!) others do not. It’s something we teach our children - “and if your friend jumped off a bridge, would you too?” The lesson is that, sometimes, the right thing to do is to stand apart from the crowd. Outsourcing our judgment about right and wrong is no moral position at all.
Instead, a moral person acts in a way that would create the best outcome, assuming others will take the same approach when faced with the equivalent situation, regardless of how likely it is that will actually happen or the personal injury that might come if it doesn’t. Adherence to this strategy is necessary for morality because if we don’t accept the burden of this responsibility on ourselves, then how can we expect it from anyone else? If no one takes on the burden, the game of life will surely collapse into sub-optimal outcomes. To be moral, we must put ourselves forward as the prime mover towards a positive-sum future.
It’s worth noting that this approach isn’t synonymous with willful naiveté - there is still room to reinterpret the game and the payoffs where information is known, and to build an evidentiary base of knowledge where it’s lacking.
Going back to the soldiers on the mountain pass, my proposed moral standard would conclude that the right approach is to stay and defend the post, under the assumption the currently unknown comrade will make the same assessment. But if we were to learn that the other soldier was a double agent, or had a history of abandoning their post, then we would be wise to reassess the situation and factor that into our decision.
What I’m arguing is that it’s in the treatment of the unknown that the morality of belief comes into play. It’s how we act in the face of doubt that determines the righteousness of our decision. All else being equal, we should give the benefit of the doubt to our fellow citizens. This approach rests on a faith in others that is not grounded in sufficient evidence, but is required if we are to have any hope of defending decent society.
The legal principle of ‘good faith’ captures this approach well.7 To act in good faith means to act with “an honest belief” that the collaborative endeavour will work out, in the expectation that the other party will do the same. We ought to apply this principle across everything we do, because only then can we claim to be living up to our side of the deal that underpins the generation of value. We must act with faith in ourselves and faith in others, even if no one else does, or we destroy the possibility of moral progress from the outset.
Acting in this way demonstrates an understanding of what we might consider ‘metaphorical truths’; beliefs that, while not objectively true, are true in how they manifest in the world.8 The truth is that only through our own personal effort and faithful engagement in collaboration can we expect to achieve a better future. As Jordan Peterson would say, “we all must strike a bargain with existence”; I’d just add “and we should fulfill that deal in good faith”.
In the next article I will put forward a case for defining articles of secular faith, based on the critical role belief plays in motivating the human psyche, and the potential that a shared set of beliefs has in uniting people with differing values.
Following that I will suggest some specific beliefs that I consider necessary conditions for morality, as a starting point for discussion.
A note on deontology
It’s worth noting the similarities between my position and rational deontology. Deontology - or “duty ethics” - claims that right and wrong can only be judged by intent, and that this intent must be pure. For the religious, this means being pure to divine commands. For the secular, it means being pure in consistency of will, acting only in ways we would be willing to apply as a universal law. Compare this with my proposal that we should act in a way that would yield the optimal outcome assuming that others would do the same in that exact situation, and you can see the similarity to Kant’s requirement for ‘consistency of will’ regardless of personal circumstance.
The major difference I’d note is that I consider it perfectly morally valid to think like a consequentialist when the impacts of your actions can be fairly well known. If you are confident that being blunt about your child’s musical talent will dampen their enthusiasm for their new passion, then it’s okay to say their recital was a joy to hear even if it wasn’t (unlike what you’d do if you were acting like Kant, whose Categorical Imperative holds that lying is impermissible in any context).
The reason for the difference in treatment between the known and the unknown is that, in the known situation, the implications of your decision are bounded by the specific case. Put another way: if the situation were different, you would act differently, so it is unnecessary to distill from it a universal maxim. But where the implications are unforeseeable, we must acknowledge our limited predictive capacities and defer to an approach based on a principle that is consistent across multiple contexts.
The danger with consequentialism is applying it in situations where our confidence should be recognised as well below any reasonable standard of proof, such as in complex systems that are far beyond our predictive capabilities, particularly social ones, which are a mess of competing psychological factors. It’s in these situations I’d suggest switching to a deontological framework, which shows due deference to the many unknowns we are faced with, including whether or not others will live up to our own moral standard.
“The Ethics of Belief” - W.K. Clifford (1877). https://users.drew.edu/jlenz/clifford-ethics-of-belief.pdf
By ‘belief’ here and throughout the article, I refer only to beliefs that inform action. I consider beliefs that don’t influence decisions as irrelevant to morality. For example, the belief in a deity is morally arbitrary, unless it justifies the treatment of others for good or ill
I must credit Bret Weinstein and Heather Heying’s ‘DarkHorse’ podcast for this idea - Spotify.com
Stanford Encyclopedia of Philosophy. First published Mon Sep 27, 2021 - plato.stanford.edu
Stanford Encyclopedia of Philosophy. First published Mon Sep 27, 2021 - plato.stanford.edu
Legal Information Institute’s Legal Encyclopedia (taken November 2023) - law.cornell.edu
I must credit Chris Williamson’s podcast ‘Modern Wisdom’ for this useful concept - Spotify.com