

Is AI Paving the Way for Smarter Thinking?

It's essential to embrace it as a learning medium.

Key points

  • AI can be used as a force for good in education.
  • Misinformation and disinformation are the biggest global threat.
  • Critical thinking skills will become ever more important in the era of superintelligence.

In the 14 months since the launch of ChatGPT in November 2022, generative AI has been a hotly discussed topic, with many critics cautioning that AI has the potential to destroy critical thinking, education, and potentially even society as we know it. And it does. But it also doesn’t. Ultimately, it is rare for the worst-case doomsday scenario to come to pass; typically, the consequences land somewhere between the two extreme views.

At the World Economic Forum meeting in Davos, Switzerland, this month, a panel of experts and educational leaders took the alternative stance: that the advent of AI will enhance rather than destroy education, as long as we prepare ourselves to embrace the changes needed. Their point is that AI cannot do many of the things that humans do, such as be originally creative, and that it can actually be employed to challenge thinking and make humans even better at naturally human skills. Indeed, as the panel highlighted, AI is going to become ever more pervasive; it is essential to embrace it as a learning medium to ensure continued educational relevance and capability.

How do we embrace education and critical thinking in the age of superintelligence?

Humans are fallible. We have several psychological predispositions that make it easier for us to survive, but inherently harder for us to think critically. AI has no such qualms; given its algorithmic, process-based programming, its fallibility lies not in logical processing, but in how it is employed. Employed in the right way, it has the potential to counterbalance and combat human fallibility, while humans can counterbalance AI’s fallibility in turn.

A report on AI by the Center for Countering Digital Hate found that it can be pushed to produce misinformation in 78 percent of inquiries if asked to do so. The key point is that it was asked to do so. Generative AI relies on data to produce a narrative, and it produces that narrative based on how it is queried. Employed positively, AI can be a useful tool; but in the hands of threat actors, there are, currently at least, ways to circumvent its safeguards and use it for nefarious ends. There is much to be done to add guardrails, but in the medium term, if we can secure AI to utilise legitimate, verified data sources, with proper oversight and safeguards to protect users, then there is every likelihood of AI becoming a robust, beneficial resource.

AI shortcomings aside, the forum at Davos discussed a matured and safeguarded model of AI and looked at positive use cases, for example in tackling climate change or in enhancing education. The conclusion drawn by the education panel is that the future of teaching should focus on old-fashioned skills: differentiating between true and false; having meaningful conversations; and thinking critically. It is here, they argued, that AI also has the opportunity to enhance human education, by being used to challenge your thinking, and therefore your creativity.

Jeffrey Tarr, CEO of Skillsoft, commented, “How we teach is also changing at a dramatic pace.” Meanwhile, Hadi Partovi, CEO of Code.org, commented, “When it comes to bias, there are many biases, but these are things that we can help teach people to recognise. Even without AI, the internet has bias. When you Google, you are not always getting unbiased results. You are getting many different results and it is up to you to figure out which ones to trust. Teaching students how to distinguish fact from misinformation, how to use critical thinking, and how to question their sources, are part of becoming digitally fluent, and these are skills that are relevant pre-AI, but even more relevant in a world where we rely more and more on technology for education.”

Artificial intelligence systems are not inherently good or bad, and when employed proactively they can be an innovative and interesting resource. The argument should perhaps focus not on AI specifically, but on how we need to adapt education to suit the modern era. AI is an inevitable part of that, so educators must focus on teaching the right skills and enhancing human capability, to harness the advantages and avoid the pitfalls of emerging and maturing technology. As Michael Spence, president and provost of University College London, highlights, “There’s things the machines can’t do that we need to be really good at teaching our students.”

Considering the reality of generative AI in education, alongside other AI applications, it is interesting to ponder the potential for improving education. Critical thinking improves an individual’s ability to differentiate between fact and fiction, true and false, information and mis- or disinformation. It is an essential life skill, and an imperative one for inoculating your brain against nefarious threat actors. Indeed, the World Economic Forum Global Risks Report 2024 highlights misinformation and disinformation as the biggest global risk for the next two years, falling to fifth for the decade, behind extreme weather and climate change events. The report states: “perceptions of reality are likely to also become more polarized, infiltrating the public discourse on issues ranging from public health to social justice. However, as truth is undermined, the risk of domestic propaganda and censorship will also rise in turn.” Continued development of critical thinking skills is an essential means to withstand these challenges and will gain ever-increasing relevance in the modern world.

Returning to AI, it is interesting to consider how the technology can be employed to help safeguard societal norms and protect against global risks. Aside from the practical application of AI in identifying and removing propaganda, for example, AI can potentially be employed to educate people and test their skills. Humans are at risk from confirmation bias, truth bias, groupthink, and the illusory truth effect, any or all of which make us susceptible to believing what we read and hear, and to aligning our views with those of the people around us. AI can, in theory, be more impartial, so there is the potential to employ it as an educator that counterbalances our psychological predispositions: encouraging debate, challenging thinking, supporting lateral reading, and even playing devil’s advocate. It will be interesting to see the gains that can be made.

References

World Economic Forum Global Risks Report 2024. See the PDF: The_Global_Risks_Report_2024

World Economic Forum AGM at Davos 2024: discussions on Artificial Intelligence. Also see: World-economic-forum-annual-meeting-2024

Duckett, S. and Warren, G., 2023. Review of Foolproof: Why We Fall for Misinformation and How to Build Immunity, by Sander van der Linden.
