
The Psychological Silver Lining of Faceless Bureaucracy

Moral qualms seem to explain a preference for impersonal enforcement.

Key points

  • Many people have strong negative feelings toward impersonal bureaucracy, especially that which doles out punishment.
  • Yet its relative impersonality is part of what society values in the rule of law, research suggests.
  • Study participants hesitated to personally penalize others' apparent misbehavior when they knew reports had a small chance of being in error.
  • The presence of potential errors led them to hand enforcement over to an impersonal algorithm, even though the algorithm was no less likely to err.
Source: Pavel Danilyuk/Pexels

Behavioral economists and others have long studied how institutions of governance arise by conducting laboratory decision-making experiments in which participants select rules and institutions by voting or by individually choosing among the options available to them. Classic settings are social dilemma situations, in which all members of a group or society can be better off if all contribute to some group effort or resist over-exploiting a shared resource, but each member faces an individual incentive to shirk their part.

Many studies suggest that in modern societies, the modal individual prefers to cooperate, provided that others do, and that a substantial fraction of cooperatively disposed individuals are also ready to incur a cost to punish the uncooperative. The presence of peer punishment then stabilizes cooperation—so effectively, in some studies, that some behavioral theorists have suggested that anger at rule-violators is an evolutionarily selected feature of human nature.

One question that naturally arises is why societies establish centralized institutions for enforcing rules, and why one-on-one enforcement by individuals is frowned on as vigilantism in many settings. Of course, the answer depends in part on the scale of the joint action problem: Enforcement is naturally left to the group members themselves in a very small group (say, of roommates or the co-owners of a small store), but assigned to formal institutions in a large-scale setting (enforcing tax collection from millions of households).

Having studied the question in simpler forms, researchers are lately tackling thornier dimensions, such as the impact of unreliable information, which could lead the innocent to be punished and the guilty to go free. Initial studies by experimental economists showed, unsurprisingly, that when reports on cooperation levels are known to be prone to error, one-on-one (peer) punishment functions less effectively.

Somewhat reassuringly, in a follow-up study I conducted with Andreas Nicklisch and Christian Thoeni, we found that many participants were willing to incur at least a small cost to improve the quality of the information reported to them in order to punish more “justly.” Not only did this have a positive effect on cooperation levels and earnings in our study; we also found evidence that would-be punishers incurred more monitoring (i.e., information-improvement) costs than disciplinary efficacy required, evidently reflecting a desire for fairness in punishing as a good in its own right.

In a more recent study soon to be published in the London School of Economics-based journal Economica, Thomas Markussen, Liangjun Wang, and I look at how the knowledge that cooperation levels are observed “noisily” (i.e., with a known probability of error) affects participants’ preferences for assigning punishment responsibility to individual group members. The alternative is having a centralized algorithm, which might be thought of as representing an impersonal state, automatically punish whenever there is a report of non-cooperation.

Crucially, in the version of the experiment in which both receive imperfect information, the state’s (or algorithm’s) information is inaccurate exactly as often as the individual members’ reports are. We also studied treatments in which both get perfect information, and treatments in which only the individuals or only the state get imperfect information. Errors occur with 10 percent probability, and their randomness prevents participants from effectively guessing which reports are erroneous and which are accurate.

We begin by reproducing the earlier result that centralized and peer-based punishment are roughly equally popular when information is perfect, groups are small, and the centralized scheme is available at a modest cost. We then find that when only group members, or only the central punisher, receive imperfect information, participants vote more often to put punishment in the hands of whichever party gets the better information, an unsurprising outcome that, in a sense, simply confirms that participants were paying attention and striving for higher earnings.

We notice, though, that when only one party suffers from imperfect information, the switch away from peer punishing is considerably more pronounced than the switch away from central punishing. Finally, we find that when the information reaching group members and the central punishment device is equally imperfect, considerably more groups choose to assign punishment to the device, or what can be termed an algorithmic punisher: simply a routine in the experimental computer program that deducts earnings from any individual reported to have failed to contribute to the public good, even when that report is in error, as it is 10 percent of the time.
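For readers who want the mechanics spelled out, here is a minimal sketch, in Python, of what such an algorithmic punisher amounts to. The 10 percent error rate comes from the study design; the deduction amount, function names, and data structures are illustrative placeholders, not the actual experimental software.

```python
import random

ERROR_RATE = 0.10   # each report is wrong with 10 percent probability (study design)
DEDUCTION = 3.0     # hypothetical fine; the article does not state the actual amount

def observe(contributed: bool) -> bool:
    """Report a member's action, flipping it with probability ERROR_RATE."""
    return contributed if random.random() > ERROR_RATE else not contributed

def algorithmic_punisher(contributions: dict, earnings: dict) -> dict:
    """Deduct earnings from every member *reported* as a non-contributor,
    even when that report happens to be one of the erroneous ones."""
    punished = dict(earnings)
    for member, contributed in contributions.items():
        if not observe(contributed):
            punished[member] -= DEDUCTION
    return punished

# Example: one member free rides; reports may still misfire on the contributors.
print(algorithmic_punisher({"A": True, "B": True, "C": False},
                           {"A": 20.0, "B": 20.0, "C": 25.0}))
```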

Why do our experiment participants prefer the algorithm over individual punishing when both are beset by false information with equal frequency? Our analysis suggests that the explanation lies in psychological discomfort over engaging in punitive action when aware that the targeted individual might be innocent.

We show, by way of theoretical analysis, that in order to make the possibility of receiving punishment function well as a deterrent to free riding, the amount of punishment given when a group member is reported to be a non-contributor needs to be larger under imperfect than under perfect information. But our participants actually give less, not more, punishment when information is known to suffer occasional errors, suggesting that scruples over punishing the innocent override instrumental concerns for substantial numbers of subjects.
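To see the logic in miniature (this is a back-of-the-envelope illustration, not the formal model in the paper): suppose a member gains g by free riding and is fined P whenever reported as a non-contributor. With error rate e, a free rider is fined with probability 1 − e and a contributor with probability e, so deterrence requires that the expected-fine gap, P × (1 − 2e), be at least g.

```python
def min_deterrent_fine(gain: float, error_rate: float) -> float:
    """Smallest fine P such that the expected-fine gap between free riding
    and contributing, P * ((1 - e) - e), still outweighs the gain g."""
    assert 0 <= error_rate < 0.5
    return gain / (1 - 2 * error_rate)

print(min_deterrent_fine(10, 0.0))   # perfect information: a fine of 10.0 deters
print(min_deterrent_fine(10, 0.1))   # 10 percent error: the fine must rise to about 12.5
```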

We also analyze participants’ votes about whether to permit peer punishing versus installing the central punishment scheme. Participants are first required to engage in some periods of interaction with no punisher, some with peer punishment, and some with the central punishment scheme. They then vote between peer and central punishment at the start of each of three sets of interaction periods. We ask whether the frequency of votes for one approach versus the other is explained well by differences in amounts earned by specific participants under each approach during the early phases.

Our analysis finds that when information is imperfect, there are more votes against the peer punishment approach than can be explained by earnings differences alone. We argue that the explanation lies in a preference to avoid having to “wrestle with scruples” over whether or not to punish.

Our research represents one very specific piece of what will undoubtedly be one of the biggest questions facing society in the next few decades: In which situations do we want to place our trust fully in automatons, and in which do we want to assign decisions to human agents? Although some of the domains in question may invite analysis based entirely on which mode yields fewer mistakes (for example, human-driven versus computer-driven cars), normative psychology will also demand consideration. The impersonality of centralized rule enforcement might, perhaps to Franz Kafka’s surprise, turn out to be one of the virtues for which it is sometimes preferred.

References

Andreas Nicklisch, Louis Putterman, and Christian Thoeni, “Trigger-happy or Precisionist? On Demand for Monitoring in Peer-Based Public Goods Provision,” Journal of Public Economics, 2021.

Thomas Markussen, Louis Putterman, and Liangjun Wang, “Algorithmic Leviathan or individual choice: choosing sanctioning regimes in the face of observational error,” Economica, forthcoming.
