
Who Is Rational—and How Irrational Are We?

Research shows that most people follow rules of thumb half of the time or more.

Key points

  • We often make decisions following simple rules of thumb even when we could do better.
  • For example, we repeat what worked in the past—or simply make the same choice again, even if it didn't work.
  • Some people are more prone to these behaviors than others, but almost everybody falls for them sometimes.
  • Research shows that, on average, we might be using rules of thumb more than half of the time.

Making good decisions is not easy, and making the best possible (“optimal”) decisions is even harder. Instead, humans often use shortcuts and rules of thumb.

From firm managers and policymakers to physicians and judges, people often go with what feels right instead of trying to work out the best that can be done. This is especially the case for complex, important decisions where new information needs to be taken into account.

That people are not always rational is well understood among psychologists and other scientists. The late Daniel Kahneman, the famous psychologist and winner of the 2002 Nobel Prize in Economics, called the mental shortcuts people use, and the systematic errors they produce, heuristics and biases.

But, surely, some people are more rational than others. Or, at least, some people are rational sometimes. Certainly, nobody is seriously saying that we all go through life acting on gut feelings alone, even if it sometimes looks like that when watching the news. The question is, how often are we rational, and what are the common traps into which we fall?

Rules of Thumb We All Use

In an article published in the journal Management Science last year, my coauthor Michele Garagnani and I asked precisely that question. We ran a large experiment with problems chosen to be representative of real-world decisions where new information matters.

Roughly speaking, you have some idea about whether an action might be good or bad (a business plan, a medical treatment, an economic policy), then you get some information on how it is going so far (first-quarter profits, results of a medical test, early economic indicators), and then you have to decide whether to continue or do something else.

You might think that this is easy. If it is going well (high profits, good test results, lower inflation), go with it. If not, do something else. Right?

Wrong. That is a rule of thumb: “win-stay, lose-shift,” or simple reinforcement. The rule is often correct, but not always. Sometimes, the best thing to do after positive feedback is to change.

For instance, a physician might notice that you are reacting well to mild medication and conclude that a more aggressive treatment might get even better results. Or unexpectedly high profits might signal that the market is ripe for introducing other, new products. In decisions where new information is involved, rational choices require updating beliefs correctly (technically, using a result called Bayes’ rule), and people usually can work it out if they think about it.
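
To make this concrete, here is a minimal sketch in Python. The numbers and the “mild vs. aggressive treatment” framing are hypothetical, not the task used in the paper; the point is only that Bayes’ rule can recommend switching after a win, exactly where win-stay, lose-shift says to stay.

```python
# Minimal sketch with hypothetical numbers: after a "win" with the safe
# option, Bayes' rule can favor switching, while win-stay/lose-shift stays.

def bayes_posterior(prior, p_signal_if_true, p_signal_if_false):
    """P(hypothesis | signal) via Bayes' rule."""
    joint_true = prior * p_signal_if_true
    joint_false = (1 - prior) * p_signal_if_false
    return joint_true / (joint_true + joint_false)

# Prior belief that the patient is "responsive" is 0.4; a good reaction to
# mild treatment is more likely if they are (0.9) than if they are not (0.3).
posterior = bayes_posterior(0.4, 0.9, 0.3)
print(f"P(responsive | good reaction) = {posterior:.2f}")   # ~0.67

ev_mild = 5.0                                   # safe payoff either way
ev_aggressive = posterior * 10 + (1 - posterior) * 0
best = "aggressive" if ev_aggressive > ev_mild else "mild"
print(f"Bayesian choice: {best}")               # aggressive (EV ~6.7 > 5.0)
print("win-stay/lose-shift choice: mild")       # it just worked, so stay
```

In this illustrative setup, the good reaction raises the probability that the patient is responsive enough to make the bolder option worthwhile, so “stay” is not automatically the rational answer.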

People also often use other rules of thumb. An important one is decision inertia, which happens when people ignore new information and just stubbornly repeat the previous decision—for example, always buying the same brand of cereal, or the same brand of car, even when they are not happy with it (although sometimes there might be reasons to be loyal to a brand).

Most People Follow Rules of Thumb Most of the Time

In our experiment, 268 people made (many) decisions involving new information, in two groups with higher or lower pay per successful outcome. We used problems where reinforcement and decision inertia were sometimes right and sometimes wrong. Thus, we could see whether people actually made rational decisions or just followed rules of thumb that sometimes happened to be right.

We then used a type of statistical model called a finite mixture model to estimate which rule each person used most of the time. Almost half of the people (48 percent) were “mostly rational,” meaning that they mostly did the right thing, even when some rules gave the wrong answer. A bit more than a quarter (28 percent) were mostly reinforcers, chasing wins and losses (hint: if you are like that, don't buy stocks).

Around 15 percent mostly followed decision inertia, blindly repeating their first choice a second time no matter what. And around 8 percent followed other rules which also ignored belief updating.
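
For intuition about how this kind of classification works, here is a toy Python sketch. Everything in it is hypothetical (two made-up rules, simulated participants, arbitrary noise levels), and the paper's actual estimation is more involved; but the core idea is the same: each candidate rule predicts a choice on every trial, and an EM algorithm estimates the population share of each "type" and which type fits each participant best.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_trials = 200, 50

# Two hypothetical rules, each predicting a binary choice on every trial
rule_preds = rng.integers(0, 2, size=(2, n_trials))

# Simulate participants: 60% follow rule 0, 40% follow rule 1,
# each executing their rule with 10% noise
true_type = (rng.random(n_people) < 0.4).astype(int)
noise = rng.random((n_people, n_trials)) < 0.1
choices = np.where(noise, 1 - rule_preds[true_type], rule_preds[true_type])

# How often each person's choices match each rule's predictions
matches = (choices[:, None, :] == rule_preds[None, :, :]).sum(axis=2)

# EM: estimate the population shares pi and a common error rate eps
pi, eps = np.array([0.5, 0.5]), 0.2
for _ in range(100):
    # E-step: likelihood of each person's choices under each rule
    loglik = matches * np.log(1 - eps) + (n_trials - matches) * np.log(eps)
    log_post = np.log(pi) + loglik
    log_post -= log_post.max(axis=1, keepdims=True)
    resp = np.exp(log_post)
    resp /= resp.sum(axis=1, keepdims=True)      # responsibilities
    # M-step: update shares and error rate
    pi = resp.mean(axis=0)
    eps = (resp * (n_trials - matches)).sum() / (n_people * n_trials)

print("estimated shares:", pi.round(2))          # ~ [0.6, 0.4]
print("estimated error rate:", round(eps, 2))    # ~ 0.1
print("people assigned to each type:", np.bincount(resp.argmax(axis=1)))
```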

This could be seen as a half-full, half-empty glass: half of the people mostly follow rules of thumb (at least in our experiment), but half of the people are mostly rational even in relatively hard decisions.

Reality is more complicated. We also used a more sophisticated statistical method, called hidden Markov models, to estimate whether people were changing rules across decisions and how often they were using each rule. A way of thinking about this is that half of the people might be “mostly rational,” but we do not yet know whether “mostly” means 51 percent or 99 percent of the time.

The answer, at least on average and for our experiment, is 70 percent. In other words, people classified as “mostly rational” also used rules of thumb, around 30 percent of the time, on average. So even if half of our participants were rational most of the time, a good part of their decisions weren’t.
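
As a rough illustration of what the hidden Markov approach buys, the toy sketch below (all probabilities hypothetical) tracks, trial by trial, how likely it is that a person is currently using the rational rule rather than reinforcement, based on whether each choice matched the rational prediction.

```python
import numpy as np

# States: 0 = "rational", 1 = "reinforcement". Observations: did the choice
# match the rational prediction (1) or not (0)? All numbers are hypothetical.
trans = np.array([[0.9, 0.1],     # row = current state, column = next state
                  [0.2, 0.8]])
emit = np.array([[0.1, 0.9],      # rational state: P(obs=0), P(obs=1)
                 [0.6, 0.4]])     # reinforcement state: matches less often
start = np.array([0.5, 0.5])

obs = [1, 1, 0, 0, 0, 1, 1, 1]    # one person's match/mismatch sequence

# Forward filter: P(current state | observations so far)
alpha = start
for t, o in enumerate(obs):
    pred = alpha if t == 0 else alpha @ trans   # predict the next state
    alpha = pred * emit[:, o]                   # weight by the observation
    alpha /= alpha.sum()
    print(f"trial {t + 1}: P(using the rational rule) = {alpha[0]:.2f}")
```

In the actual study, the transition and response probabilities are estimated from the data rather than assumed; the resulting trial-by-trial rule probabilities are the kind of quantity the 70 percent figure averages over.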

Worse, when we paid them more (higher monetary incentives), they used rules of thumb more often, up to 40 percent of the time. This is a case of the reinforcement paradox, where paying more for a win makes people react more strongly to it and rely on reinforcement more often.

It also turns out that almost everybody is rational at least sometimes. Even those classified as mostly following some rule of thumb also made rational, belief-updating choices around 20 percent of the time on average. And they switched across rules, for example, sometimes just following inertia even though they mostly followed reinforcement.

This shows that reducing people to a type of behavior (e.g., “reinforcers”) is always too simplistic. But putting everything together, we estimated that, over all participants and all decisions, the rational way of thinking was used 43 percent of the time (and reinforcement was at 35 percent).

What We Learned About Decision-Making

What does this all mean, beyond this particular experiment?

First, there are indeed "types" of decision-makers among us, at least for the problems we used: one size does not fit all. There are people who fall more often for reinforcement or inertia, and people who most often try to figure things out if given a chance.

Second, and perhaps most importantly, nobody is purely rational, or a pure reinforcer, or acting exclusively out of inertia, even for specific problems. People who mostly follow their gut (reinforcers) also try to figure things out sometimes, and people who mostly think things over also fall for rules of thumb a large part of the time.

It is not only that one size does not fit all, but rather, that, very often, one size does not even fit one.

References

Alós-Ferrer, C., & Garagnani, M. (2023). Part-Time Bayesians: Incentives and Behavioral Heterogeneity in Belief Updating. Management Science, 69(9), 5523–5542.
