

Dishonest Research on Honesty

Did a Harvard Business School professor fake data on honesty?

Key points

  • Data detectives identified data manipulations in a spreadsheet by a Harvard Business School professor.
  • These manipulations allowed her to claim stronger evidence in support of a theory she advanced about honesty.
  • As an institution, science is founded on the principle that anyone can critique evidence.
Data on honesty may have been falsified.
Source: A. Danvers

Over a year ago, I wrote about an ironic case of research fraud: Dan Ariely, a leading behavioral economist, published a paper on honesty that was very likely dishonest. Another group of researchers had carefully examined the data underlying this publication and found it was likely faked.

All of the authors on the paper—including Ariely himself—agreed that he provided the data to the research team. He suggested that maybe the business that had agreed to provide him with the data had somehow doctored its own data to support his psychological theory. [Note: in personal communications, Dr. Ariely states that his previous statements were incorrect, and that other researchers could have accessed the data.]

More likely, in my view, was that Ariely, like an entire generation of behavioral scientists working in social psychology and adjacent fields, was trained to treat experimental research more as “a rhetorical flourish” (to quote fallen social psychology luminary Daryl Bem) than as something to do carefully and to report on with full transparency and honesty. In recent months, I’ve moved in the direction of writing about more of psychology’s successes and exciting new developments—particularly in the area of mental health. Then lightning struck twice.

Stats sleuths who write the Data Colada blog have uncovered another case of fraud in the exact same paper (also covered in the Chronicle of Higher Education). This paper reported on three studies on dishonest behavior, and it now appears that two of the studies had data independently faked by two different authors. In this case, the fraud appears to have been committed by Harvard Business School professor Francesca Gino, a collaborator of Ariely’s.

In the blog post, the researchers use a little-known technique for forensically analyzing an Excel file to see what changes have been made. I’ll leave the details to readers who want to visit their original post, but suffice it to say that they have good evidence that Gino switched the numbers on a handful of data points so that the actual result—her intervention to reduce dishonesty didn’t work—was reversed. By selectively doctoring the data, she made the evidence look like it did support her intervention to reduce dishonesty.
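The trick turns on metadata inside the workbook itself: an .xlsx file is just a zip archive, and one of its entries, xl/calcChain.xml, records the order in which formula cells were calculated, so rows moved after the fact can appear out of sequence. As a rough, hypothetical illustration of that general idea (not the sleuths' actual procedure), here is a toy archive built and inspected in a few lines:

```python
import io
import re
import zipfile

# Build a toy .xlsx-style archive in memory. A real workbook's
# xl/calcChain.xml has more attributes and a namespace; this is a
# deliberately simplified stand-in for illustration only.
calc_chain = (
    '<calcChain>'
    '<c r="B2"/><c r="B3"/><c r="B9"/><c r="B4"/>'
    '</calcChain>'
)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("xl/calcChain.xml", calc_chain)

# Reopen the archive and flag formula cells that appear after a cell
# from a later row -- one possible sign that rows were rearranged
# after the formulas were first entered.
with zipfile.ZipFile(buf) as z:
    xml = z.read("xl/calcChain.xml").decode()

cells = re.findall(r'r="([A-Z]+)(\d+)"', xml)
suspicious = []
prev_row = 0
for col, row in cells:
    row = int(row)
    if row < prev_row:
        suspicious.append(f"{col}{row}")
    prev_row = row

print(suspicious)  # -> ['B4']
```

Out-of-sequence entries like this are a clue, not proof on their own; the Data Colada authors combined this kind of file forensics with other evidence.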

This finding is only one part of a larger planned series of posts by the authors outlining a pattern of research misconduct by Gino. They’ve identified at least four papers where they believe she manipulated or falsified the data. As they report, Gino has been placed on “Administrative Leave” by Harvard, which has completed its own internal investigation. They say that a contact at Harvard has told them the university requested these four papers be retracted from the scientific literature.

I’ve written about problems with psychology (and related research) here for over four years. At times, I’ve gotten the advice—and seen others covering the same issues get the advice—that talking about these failures is anti-science.

In the current social and political landscape, people are increasingly distrustful of institutions. Science is often questioned and dismissed as a flawed source of knowledge. Instead, people often choose to believe ideas that conform to a narrative they already hold, or to assume that what is old or familiar must be right, and that challenges to it are malicious. On this view, science is a monolithic institution, and, at the same time, a sort of mascot for all rational thought.

Some scientists suggest criticizing science too publicly undermines trust.
Source: A. Danvers

What this misses is the obvious truth: Specific scientific findings, and specific scientists, can be flawed sources of knowledge. But to zoom out to the overall category of capital-S Science is to miss the fact that science is mixed.

There is very well-done science, and very poorly done (even fraudulent!) science. There is active debate in science over ideas, and there are big studies whose results can truly change what we understand about the world.

There is the possibility of overturning conventional wisdom with new evidence. There are also written and unwritten rules about what that evidence can look like, however. Truly paradigm-shifting evidence needs to rule out plausible alternatives—including, first and foremost, the plausible alternative that the result came out the way it did due to a quirk of luck. Good scientific training gives a person the skill to sort out what type of evidence is convincing, and what type is itself just choosing to believe the story a researcher wants to believe.

Because the truth is that the dynamics we see playing out in broader society also play out within the arena of science. Powerful, well-connected insiders do get special treatment. (See Simine Vazire’s recent blog post.) People are motivated by the stories they want to hear, which are often stories where “one weird trick” can be used to combat a large, systemic problem. (See the review of “Nudge” on If Books Could Kill, or a previous post I wrote.) And sometimes people make genuine, well-intentioned errors.

What’s supposed to be special about science is that there is the possibility that new evidence really does matter. Old beliefs can be overthrown by people going out and taking careful observations of what’s really going on in the world.

Science doesn’t magically self-correct because the peer review process for vetting articles is perfect, or because research university hiring committees always (or even mostly) get it right when deciding whose career to promote. Science has followed a lot of blind alleys: believing in bloodletting as a medical treatment, believing that burning things released a mysterious substance called phlogiston, believing in phrenology—that the shape of the lumps on your head dictates your character.

Science self-corrects because of its guiding anti-authoritarian principles: No one is above questioning, no result is above scrutiny, and our understanding is always evolving based on new evidence. We can question the validity of the work of a Harvard professor and one of the most famous behavioral economists in the world.

It can feel like we are in dark times for rational discourse. In many arenas—social, political, environmental, business—it can feel like evidence, particularly evidence that contradicts well-established narratives, doesn’t carry the weight it once did. Yet trying to protect institutions like scientific research from attack or negative publicity actively counters the principles that have allowed science to be such a powerful force for progress in human history. So here we are again, reporting more of the same: a Harvard professor made up her data, and staying true to the scientific evidence requires that we point it out.
