Social Media Algorithms Are Not Undermining Democracy

Algorithms, in fact, steer people away from extremist content.

Key points

  • People consume fringe or extremist content directly, not through algorithms.
  • Algorithms steer people away from extremist content toward mainstream content.
  • Pundits and journalists should talk about internet algorithms responsibly.
Source: Tymon Oziemblewski/Pixabay

In my last post, we talked about the prevalent concerns about online misinformation. These concerns are mostly driven by the third-person effect, which is the belief that most other people are unintelligent and gullible, but “we” are not. In my opinion, the excessive handwringing about alleged harms from misinformation is unwarranted and distracts from the true underlying problems in society.

Part of the concern about misinformation has to do with the supposed role that social media algorithms play in driving people to more ideologically extreme content. The story goes something like this: An otherwise moderate but naïve individual starts looking at posts on a social media app (e.g., YouTube). Then the user is led down a proverbial rabbit hole as the algorithm recommends increasingly fringe and extremist content, filled with debunked conspiracy theories and prejudice, which has the effect of radicalizing them. The initially level-headed user is brainwashed into a potentially violent, fanatical bigot. And why do these social media companies devise such nefarious technologies? To make money, of course. This paranoid fantasy has led many Americans to believe that internet algorithms are undermining democracy. But research on this topic has, in fact, revealed the exact opposite.

Research on algorithmic processes

Most social media algorithms actually recommend more mainstream and moderate content rather than extremist content. Far from “radicalizing” people, social media apps function as a way for people to vocalize and share their pre-existing viewpoints and identities.

We know this based on work by scientists who have systematically studied these algorithmic processes. For example, Mark Ledwich and Anna Zaitsev analyzed over 800 political channels on YouTube, sorting them into categories of extremist content (e.g., conspiracy theories, racism, trolling) and mainstream content (e.g., political news, social commentary). They then scraped data on what the algorithm recommends after a user views videos from each category.

According to their analysis, after viewing a mainstream channel, the algorithm recommends more of the same mainstream content. But after viewing extremist content, the likelihood of the algorithm recommending additional extremist content is much smaller. YouTube’s recommendations actively steer people away from viewing more extremist content. The authors conclude: “Our study thus suggests that YouTube's recommendation algorithm fails to promote inflammatory or radicalized content.”
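To make the method concrete, here is a minimal sketch in Python of how one could tally recommendation flows between content categories from scraped data. The category labels and the handful of example pairs are hypothetical placeholders, not the authors' actual dataset or pipeline.

```python
from collections import Counter, defaultdict

# Hypothetical scraped data: (source_category, recommended_category) pairs.
# In a real crawl these would come from recording which videos the platform
# recommends after viewing videos from pre-labeled channel categories.
recommendation_pairs = [
    ("mainstream_news", "mainstream_news"),
    ("mainstream_news", "social_commentary"),
    ("conspiracy", "mainstream_news"),
    ("conspiracy", "conspiracy"),
    ("conspiracy", "social_commentary"),
    # ...thousands more pairs in an actual dataset
]

def recommendation_rates(pairs):
    """Estimate P(recommended category | source category) from scraped pairs."""
    counts = defaultdict(Counter)
    for source, recommended in pairs:
        counts[source][recommended] += 1
    return {
        source: {cat: n / sum(recs.values()) for cat, n in recs.items()}
        for source, recs in counts.items()
    }

for source, dist in recommendation_rates(recommendation_pairs).items():
    print(source, dist)
```

With a full crawl, a table built this way shows, for each source category, how often the recommendations point toward extremist channels versus mainstream ones.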

The myth about extremist content

But why would algorithms redirect people away from extremist content? Isn’t extremist content more profitable? This is a myth. Social media algorithms function on a system of monetized popularity: posts that receive a lot of attention from other users are more likely to be recommended. This makes it extremely difficult for fringe content to be recommended by an algorithm, because it has to compete with mainstream media companies like Fox News or CNN, which already have the largest audiences and the most widely viewed content. Algorithms tend to boost posts from organizations that have a lot of money to spend on advertising and promotion, and whose content tends to be more mainstream. This is why you’ve seen a ton of advertisements for Geico and Allstate, but you’ve likely never heard of Lockhart's Insurance Services.
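As a rough illustration of popularity-weighted ranking, here is a toy sketch of why content from large mainstream outlets tends to dominate recommendation slots. The scoring rule, channel names, and numbers are my own assumptions, not any platform’s actual algorithm.

```python
# Toy popularity-weighted ranking. The scoring rule (views plus weighted likes
# and shares) and the example numbers are illustrative assumptions, not any
# platform's real formula.
posts = [
    {"channel": "MajorNetworkNews", "views": 2_500_000, "likes": 40_000, "shares": 12_000},
    {"channel": "LocalCommentary",  "views": 80_000,    "likes": 3_000,  "shares": 500},
    {"channel": "FringeChannel",    "views": 4_000,     "likes": 900,    "shares": 300},
]

def engagement_score(post):
    return post["views"] + 5 * post["likes"] + 10 * post["shares"]

# Rank by engagement: outlets with huge existing audiences dominate the top
# slots, so fringe content rarely wins a recommendation on popularity alone.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["channel"], engagement_score(post))
```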

Those viewing extremist content are already radical

Other studies have found similar results. In one analysis of a nationally representative participant panel, a research team led by Annie Chen tracked YouTube users’ activity as they watched videos (with their consent). First, the researchers found that extremist content is viewed by a very small, highly concentrated group of “super consumers”: fewer than 2 percent of participants accounted for 80 percent of the extremist content viewed. The vast majority of users would rarely, if ever, come across extremist content simply by following algorithm recommendations.
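To see how lopsided that kind of distribution is, here is a small sketch that computes the share of extremist watch time attributable to the most active 2 percent of users. The watch-time numbers are simulated for illustration only; they are not data from the study.

```python
import random

random.seed(0)  # reproducible simulated data

# Simulated minutes of extremist content watched per user: most users watch
# none, a few watch a little, and a tiny group watches a great deal.
watch_minutes = (
    [0] * 9_500
    + [random.randint(1, 60) for _ in range(400)]
    + [random.randint(300, 900) for _ in range(100)]
)

watch_minutes.sort(reverse=True)
top_users = watch_minutes[: len(watch_minutes) // 50]  # most active 2% of users

share = sum(top_users) / sum(watch_minutes)
print(f"Top 2% of users account for {share:.0%} of extremist watch time")
```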

The tiny minority of “super consumers” tended to express quite high levels of racist or sexist sentiment to begin with. This means that their views were not radicalized by their social media habits; they were already radical. Most importantly, the extremist content they watched was the result of their deliberate searching habits, not the algorithm’s recommendations. Viewers mostly came to extremist videos directly from external links on other websites, or from subscriptions to channels that feature extremist videos.

As with the other studies, the research team found “little evidence for the typical ‘rabbit hole’ story that the recommendation algorithm frequently leads people to extreme content.” To summarize, people consume extremist content when they already hold views consistent with such content and actively seek it out.

Studies like these effectively challenge the notion that social media companies are devising algorithms to undermine democracy. And yet, this idea persists in the popular imagination. It’s a strange irony to come to terms with: the folks who display excessive concern about radicalizing misinformation on the internet are paradoxically spreading false ideas about how social media algorithms function. I hope that journalists, pundits, and social commentators will start behaving more responsibly in light of the scientific research on this topic.

References

Chen, A. Y., Nyhan, B., Reifler, J., Robertson, R. E., & Wilson, C. (2022). Subscriptions and external links help drive resentful users to alternative and extremist YouTube videos. arXiv preprint arXiv:2204.10921.

Ledwich, M., & Zaitsev, A. (2019). Algorithmic extremism: Examining YouTube's rabbit hole of radicalization. arXiv preprint arXiv:1912.11211.
