

Critically Thinking About Identifying AI Creations

Considering how we approach and evaluate AI-generated media.

Key points

  • With AI-generated media, it’s about the decision-making process for accepting what to believe.
  • Since the turn of the millennium, we have seen an exponential increase in the amount of new information.
  • The world is forever evolving, so we must adapt rather than rest solely on decline biases.

Someone asked me the other day how they can best go about identifying AI-generated media. "I don’t know" was my honest answer. To be completely truthful, I’m not even sure I can. That said, I am confident in my ability to distinguish essays written by students from those written by AI (and have done so in the past). I can also pretty easily spot dodgy-looking fingers and feet in many AI-generated images. But, of course, such tells are not present in all AI-generated media. Moreover, what about when AI advances to the point where it can overcome these little "nuances"? What happens when it’s capable of "fooling us" a majority of the time? Maybe I’ve already been duped and I don’t even know it.

Perhaps a better way of looking at this question of identification is through a pre-emptive lens. That is, let’s assume that we will be duped on a regular basis (if that’s not already the case). I don’t think being able to "spot a fake" is necessarily the right approach. Consistent with a couple of pieces I wrote a while back for this blog on spotting fake news and why we fall for it, it’s more about the decision-making process of deciding what to believe, and the critical thinking necessary for that decision-making, than it is about identification per se. Indeed, AI has been a topic of discussion within critical thinking research recently (e.g., Dumitru & Halpern, 2023; Eigenauer, 2024; Saiz & Rivas, 2023), for example, with respect to its potential effects on human attention and decision-making, along with its current fallibility, as addressed above.

Asking Ourselves Questions

If you care about the information you’re reading or the video/image you’re seeing (the type of care that makes critical thinking necessary), you need to ask yourself some questions. For example, is "political candidate x," as featured in some hypothetical video, actually doing or saying what I’m seeing/hearing? Is it consistent with their past behaviour and/or attitudes or is it out of character for them? If the latter, is this a glimpse of who they really are or am I being duped? Are there other sources suggesting the same behaviour? Are there other videos of said individual available for comparison? Simply, we need to avoid jumping to a conclusion about what we have seen or read and engage the reflective judgment component of critical thinking.

Sounds like a lot of work. Well, no one ever said critical thinking was easy; unfortunately, that’s what it takes now if you truly care about the topic you’re thinking about. If you’re a critical thinker, arguably, you’re already somewhat prepared for these advances in AI. We’ve dealt with misinformation for hundreds of years. More recently, we’ve had "deep fakes" (aided through AI). We’ve had news sources compete against each other for viewership and readership by sensationalising their news and providing their own unique slants on things. We needed critical thinking for those and we need critical thinking for this.

The New Knowledge Economy

Take, for example, the "new knowledge economy," a concept discussed often on this blog and, truth be told, anything but "new." Simply put, since the turn of the millennium, we have seen an exponential increase in the amount of new information being created. From 1999 to 2002, the amount of new information created was said to equal the amount of information previously produced throughout the history of the world, with further estimates that new information creation would double every two years (Jukes & McCain, 2002; Varian & Lyman, 2003). And that was back in the early noughties. Now, in 2024, we can barely "guesstimate" how many zettabytes of data are created each day. The point is, even 20 years ago, simply acquiring information was insufficient. We were forced to adapt our thinking to account for multiple sources, multiple slants, multiple biases, and multiple "truths" in the data we received on any given topic. The mechanics behind how we adapt to this have not changed. We needed critical thinking back then and we still need it now, arguably more so, particularly in light of world events occurring since the dawn of this "new knowledge economy," including growing political, economic, social, and health-related concerns (e.g., "fake news," widening gaps between political views in the general population, an economic crash, various social movements, and the COVID-19 pandemic).

Sure, there are many implications of AI’s introduction to the world. They are both spectacular and worrying, and though adaptation might be a difficult task (Saiz & Rivas, 2023), the mechanics behind such adaptation have not changed much from what was required 20 years ago: we need to be able to think critically about the information we engage with if we truly want to draw reasonable conclusions, solve problems, and make decisions regarding the topics we care about. Just as Socrates feared the written word more than 2,000 years ago, we now fear AI as the new frontier of technological advancement. The world is forever evolving, and so we must adapt, ensuring not to rest solely on decline biases (e.g., fearing what is new and different) and, instead, being proactive in the ways we adapt. We need critical thinking, alongside added efforts to develop and enhance it. To conclude, I’m not saying that AI is all good, and I’m not saying it’s all bad; but it is developing and advancing whether we like it or not, and the most proactive thing we can do in advance is to prepare ourselves by developing our critical thinking.

References

Dumitru, D., & Halpern, D. F. (2023). Critical Thinking: Creating Job-Proof Skills for the Future of Work. Journal of Intelligence, 11(10), 194.

Eigenauer, J. (2024). Mindware: Critical Thinking in Everyday Life. Journal of Intelligence, 12(2), 17.

Jukes, I., & McCain, T. (2002). Minds in Play: Computer Game Design as a Context of Children’s Learning. Erlbaum.

Saiz, C., & Rivas, S. F. (2023). Critical Thinking, Formation, and Change. Journal of Intelligence, 11(12), 219.

Varian, H., & Lyman, P. (2003). How Much Information? School of Information Management & Systems, UC Berkeley.
