Ethics and Morality

Forming Beliefs That Suit Our Moral Inclinations

Sometimes we think it's best for morality to override accuracy.

Key points

  • New research shows that we don't always endorse objective reasoning.
  • When evidence is risky to believe, we prefer to avoid believing it.
  • It's time to rethink the common assumption that people reliably strive to form impartial beliefs.

A basic premise of critical thinking is that our beliefs should be formed impartially, based on raw facts. Accordingly, we have a fundamental expectation that biologists in laboratories, journalists reporting news, and justices on the Supreme Court should bracket their moral convictions when assessing evidence. Yet, new research indicates that we do not always strive to form the most objective conclusions possible.

Psychologists have long known that people often form beliefs in biased ways. If you support the death penalty, for example, you will probably be more susceptible to accepting unsubstantiated claims that capital punishment benefits society by deterring future criminal acts. Our tendency to evaluate facts in motivated ways pervasively skews our reasoning, leading to polarization and other unsavory outcomes.

Source: Courtesy of Allysa Adams

This plague on proper reasoning has propelled the search for antidotes that help people become aware of their biases. The presumption is that, once people realize their thinking is infused with irrational and motivated tendencies, they will reliably correct for them. However, it now seems that pointing out cognitive biases may not always be an effective strategy.

New Research

In an article published this week in Cognition, Corey Cusimano and Tania Lombrozo show that people do not always aim to think in undistorted ways. Instead, even after becoming aware of moral biases in their reasoning, people sometimes endorse these illogical tendencies. This is especially true when evidence points in morally repugnant directions.

Consider the proposition that Black people are worse tippers than White people. This claim harbors odious implications that could inflame prejudice, and the motives of people who hold this belief might be suspect. At the same time, it is an empirical claim—meaning that it could be confirmed or disconfirmed by collecting data about the average tipping propensities of different racial groups. Thus, when people are placed in a situation in which they can choose whether to form a belief about the existence of racial disparities in tipping tendencies, they can either take the stance of an impartial observer and rely purely on scientific evidence, or they can instead lean upon their moral commitments and refuse to believe problematic ideas.

One paper published in the 1990s found evidence for the claim that Black people tip less than White people, and another paper from the same decade found evidence against it. Cusimano and Lombrozo showed a summary of each paper to different groups of participants. They found that participants who read about the first paper's findings were more likely than those who read about the second to doubt the quality of the study and less likely to accept its conclusions. Moreover, these participants tended to accurately report an awareness that their assessment of the research was biased, and they tended to condone that bias. As ideas become riskier, people become less disposed to form beliefs based on uncertain, defeasible evidence, and they tend to think that is how it should be.

In addition to showing that people knowingly and proudly discount confirmatory evidence if they consider a belief to be perilous, the new paper also found that people sometimes feel justified in holding certain beliefs despite lacking evidence to support them. For instance, people often realize that they have limited evidence for some of their beliefs, such as the existence of heaven or the economic benefits of immigration. When these beliefs are thought to be morally desirable, people don't mind that they lack solid empirical justification. This dovetails with other recent research indicating that people endorse varying criteria for justifying their beliefs.

In their prior work, Cusimano and Lombrozo have found that people endorse motivated reasoning not only in themselves but also in their evaluations of others. For example, people think that a 19-year-old newlywed should believe that their marriage is unlikely to end in divorce, even if most similar marriages do not last. Rather than prescribing the formation of beliefs according to base rates or other forms of evidence, people think beliefs should be heavily biased toward whatever is most beneficial.

If you’re like many of the participants in these studies, perhaps this seems like a noble aspect of human nature that we should continue to cultivate. Indeed, evidence is often messy and uncertain, so relying on objective reasoning can sometimes lead us down misguided paths that cause great harm. Examples aren’t hard to find, and we are still suffering from the repercussions of what were previously considered to be scientific “facts” about eugenics, repressed memories, and the safety of lead paint.

Questioning Our Reasoning Tendencies

Nevertheless, these findings should make us question our reasoning tendencies, particularly when moral values are contested. In politically polarized debates and other contexts where moral views conflict, people on different sides of an issue may reach divergent conclusions in part because each group demands more stringent evidence before accepting the opposing position. Our comfort with motivated reasoning may also lead us to ignore or discount evidence bearing on important social and economic issues (the costs and benefits of corporate taxation, drug regulations, police funding, and the like), complacently assuming that it would be too risky to entertain the possibility that we are wrong.

Sometimes, divergence in our assessments of the hard evidence could be reduced if everybody acknowledged the potential fallibility not only of their factual understanding but also of their moral convictions. By taking a more scientific stance and looking at the evidence as dispassionately as we can, we might be better able to agree about what to think in morally charged situations.
