
Why Expertise Is Better Than "Lots of Experience"

Critical thinking about practical distinctions between expertise and experience.

Key points

  • Using experience as the sole basis for vital decisions should be avoided; at the least, it should be supplemented by empirical evidence.
  • If we care about the outcome of an important decision, engaging credible expert opinion is the way to go.
  • An expert generates, monitors, and modifies the plan to meet the needs of the situation as they arise.

I’ve been playing chess for around 30 years. I’d say I’ve won about as many games as I’ve lost. I’m not particularly good, but I’m not horrendous, either; I suppose it depends on who I’m playing. I know one or two standard opening moves but, beyond the opportunities those moves afford me, I’m kind of winging it for the remainder of the game. What this little description should tell you is that, though I have quite a bit of experience playing chess, I am by no means an expert. Indeed, I would not bet money on my success in any game.

The difference between experience and expertise is an important distinction, one that could be of great consequence to you in real-world decision-making.

This is chess, not checkers.

Having conducted quite a bit of research in the field of health psychology, I can fairly say that I know a thing or two about health. However, when that "thing or two" isn’t directly related to cognitive functioning, cognitive rehabilitation, experiences of chronic illness, or clinical decision-making, I’m out of my depth. Straying into those areas would mean I’m no longer playing the "same game," and any expertise I have would become irrelevant. When I or a family member is sick, we go to our GP. Sure, I’ve had strep throat at least 20 times in my life (not an exaggeration; it’s a wonder that I still have my tonsils), but just because I have a fair amount of experience with it doesn’t mean I could confidently identify it (or differentiate it from other ailments like bronchitis); I’m not a qualified medical doctor. Despite this, many people fail to recognise such distinctions and, subsequently, make decisions for themselves and their families that are outside their realm of knowledge, let alone expertise.

"It depends on who I’m playing."

When I want to feel good about my chess-playing ability, I play a novice. I win with relative ease, given that I’m assessing strategies and trying to run mental simulations while they’re still getting a grasp of how the pieces move. Like a form of downward social comparison, this can be good for the old confidence. Of course, if I ever want to get better at chess, I’ll need to play others who are at least on my level and work my way up as I improve (like a form of scaffolding within the zone of proximal development; e.g., Wood, Bruner, & Ross, 1976). Beating 100 novices in a row might make me feel like I’m good, but if I sat down with someone better than me (someone who has played the game much more frequently than I have), they would not labour over assessing strategies and attempting mental simulations as I do. They just see the strategies; they know the outcomes of the simulations much more quickly than I would. It’d be a quick match. Of course, I’m cognisant of this. But if all I have around me is novices, it’s easy for me to be fooled into thinking I’m a better chess player than I actually am. Likewise, if you’re in a real-world problem-solving scenario and no one is around to tell you that your strategy is flawed, there’s a good chance you will proceed with the faulty strategy.

"I’ve been playing chess for around 30 years now."

Thirty years? Wow. You’d think I’d be better. The reality is that I just didn’t play enough: maybe 10 games a year before I reached adulthood (around when cognitive development peaks) and, since then, a game here or there once every year or two; certainly not enough to put that development to good use. But I’m a smart guy, and I’ve been playing a long time, so I should pick up wins. And I do; they just come against others who aren’t very good, either.

You’ll often hear people talk about their 30 years of experience doing this, that, and the other. Depending on the context, we might extend to them our faith in their credibility; other times, we may not. An important question to consider: after 30 years of experience, shouldn’t we really be talking about expertise? If I were to put my faith in a person to help me make an important decision, I’d want to hear about expertise, not experience. Of course, some people are modest and humble (and probably won’t use the word "expert," underestimating their own abilities; see Kruger & Dunning, 1999). I respect that. The onus is then on you to decipher whether what they’ve been doing for 30 years is building expertise or "playing chess once or twice a year." In the case of the latter, it may just be that someone has a lot of experience doing something wrong (Kahneman, 2011), and they may very well not see that; instead, they overestimate their ability and knowledge.

"I’m kind of winging it."

In a previous post, I discussed how messy things can get when making decisions based on experience. Using experience as the basis for an important decision should either be avoided or, at the very least, supplemented by some form of empirical evidence. Of course, empirical evidence is not always available. Gut-level feelings and bias, on the other hand, are always available, and they will remind you of a time when you or someone you know encountered a situation like this one and used solution strategy x, y, or z. Our intuitive judgment, fed by emotion and bias, will reach for some heuristic that worked in the past and apply it to the present situation. But that doesn’t mean things will turn out the same, given that no two situations or experiences are identical. There will be differences, and when things don’t go according to plan, you will need to adapt on the fly and "wing it" to an extent (hoping for the best), rather than having planned ahead as diligently as possible.

Success

Outcomes are important. Sure, using your "experience" might often lead you to success. It might also lead you to an outcome that is "good enough" (e.g., consider satisficing; Simon, 1957). How often, though, can people tell the difference between these two outcomes? What if your experience led you to a "good enough" outcome when a higher-level, or even "optimal," outcome was what was actually needed? I’m not saying experts get everything right (no one is perfect), but when they do succeed, when decisions and actions count, the outcome is more often at that higher level. The point is, if you care about the outcome of an important decision, engaging credible expert opinion is the way to go, not whatever your experience biases you toward.

Research on expertise suggests that such outcomes boil down to how individuals organise and approach their thinking (e.g., Chi, Glaser, & Rees, 1982). Moreover, while expertise is, of course, experience-based (how else would it be developed?), that experience is applied according to a repertoire of patterns (Klein, Calderwood, & Clinton-Cirocco, 1986), wherein the expert generates, monitors, and modifies the plan to meet the needs of the situation as they arise (Klein, 1989, 2008). Somewhat like my rudimentary "mental simulation" of the chessboard above, experts simulate the scenario (much more efficiently) with respect to its givens and potential alterations, accounting for environmental cues, anticipated outcomes, and goals (and sub-goals), as well as appropriate responses to each. Of course, such expertise is not domain-general; it is specific to the task or topic at hand, and it takes an expert to readily recognise such patterns.

References

Chi, M. T. H., Glaser, R., & Rees, E. (1982). Expertise in problem solving. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (pp. 7–77). Hillsdale, NJ: Erlbaum.

Kahneman, D. (2011). Thinking, fast and slow. London: Penguin.

Klein, G. A. (1989). Recognition-primed decisions. In W. B. Rouse (Ed.), Advances in man-machine systems research (Vol. 5, pp. 47–92). Greenwich, CT: JAI Press.

Klein, G. (2008). Naturalistic decision making. Human Factors: The Journal of the Human Factors and Ergonomics Society, 50(3), 456–460.

Klein, G. A., Calderwood, R., & Clinton-Cirocco, A. (1986). Rapid decision making on the fire ground. In Proceedings of the Human Factors Society 30th Annual Meeting (Vol. 1, pp. 576–580). Norwood, NJ: Ablex.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134.

Simon, H. A. (1957). Models of man. New York: Wiley.

Wood, D., Bruner, J. S., & Ross, G. (1976). The role of tutoring in problem solving. Journal of Child Psychology and Psychiatry, 17(2), 89–100.
