Verified by Psychology Today

Preaching Caution on the Road to AI

AI expert Darren McKee shares his concerns about what’s coming.

Key points

  • We would be unlikely to understand the motives and goals of a true artificial superintelligence.
  • Many AI experts believe we are not prepared for the emergence of an artificial superintelligence.
  • There is little or no meaningful coordination and cooperation between nations regarding AI regulation.
Artificial Intelligence expert and advisor Darren McKee is concerned that humankind is not moving fast enough to protect itself from a malevolent, rogue, or indifferent AI.
Source: Darren McKee

An artificial superintelligence will enslave us or set us free.

Artificial superintelligence will be the fulfillment of our cosmic destiny, or it will never happen.

An artificial superintelligence will always be less than us, or it will be our next god.

Take your pick. There is an overwhelming smorgasbord of forecasts about the future of artificial intelligence (AI) from credible experts, brilliant philosophers, science-fiction writers, and random people with YouTube channels. It is difficult to think clearly about the emergence of true artificial intelligence because it would be historically unique and unprecedented. It also could happen this century, never, or ten minutes from now. Sure, we embrace or suffer through new technologies all the time. But AI is not an improved Acheulean hand-axe or the Wright Flyer. For better or worse, its appearance would be a pivotal turning point in the human story.

Darren McKee is one of those aforementioned credible experts. He’s an advisor to AIGS Canada (Artificial Intelligence Governance and Safety Canada) and the author of Uncontrollable: The Threat of Artificial Intelligence and the Race to Save the World. The book is riveting, comprehensive, thoroughly grounded in sound research—and scary. McKee advocates for rational caution and meaningful regulation as we seem to be speeding toward the big moment.

Guy Harrison: Describe how a true artificial superintelligence will be something new and different.

Darren McKee: Think back to when you were a young child. Could you have imagined how capable you are now? No, it would be beyond you, even if someone tried to explain it patiently. The gap between your younger self and you today may be similar to the gap between yourself and an artificial superintelligence. This dramatic difference in capabilities presents a risk.

We tend to think of computer systems as tools, which they are, but an artificial superintelligence would be agentic—it would act as if it had the autonomy to complete a wide range of tasks. Autonomous systems are not new to humanity, but we’ve never created something so capable.

GH: The Terminator scenario of armed robots hunting humans may be unrealistic, but what are some of the credible ways in which an AI might destroy civilization?

DM: While it is useful to explore specific scenarios, the more important point is the likely outcome. If I play against the best AI chess-playing system in the world, I will lose. We cannot tell in advance exactly how I will lose, but lose I shall. Similarly, if humanity goes up against something that has a world-expert-level understanding of military strategy, human psychology, economics, cybersecurity, electrical grids, fundamental physics, chemistry, biology, etc., and can think faster than we can, it will likely win.

A plausible path is that we integrate AI systems into our lives and society more and more because of the benefits they bring. As AI systems become more capable of operating with greater agency, they could deceive or manipulate humans to gain more power and resources. Eventually, it could be that highly capable and autonomous AI systems just aren’t concerned about our welfare, much like we give little thought to ants when we walk around or clear land for construction. It’s important to understand that malevolence isn’t required for advanced AI to cause human civilizational collapse. Simple indifference will do.

McKee's book is a call for urgent and immediate regulation and international cooperation to make us as safe as possible from an artificial superintelligence.
Source: Darren McKee

GH: Your book makes a strong argument for doing everything possible now to prevent AI catastrophes, but how can meaningful safeguards be achieved with so much mistrust among nations and corporations?

DM: With great effort, care, and persistence. Delicate and concerning power dynamics among states and companies have long been with us, but we have still succeeded at collaborating on many important initiatives. The world came together decades ago with the Montreal Protocol to protect the ozone layer, we reduced nuclear arsenals by tens of thousands, and humanity is making an effort at asteroid detection and deflection. With advanced AI, the risks are to everyone. No state or company will be safe from an uncontrollable artificial superintelligence.

GH: Can AI-enhanced policing keep pace with criminal uses of AI?

DM: AI-enabled harassment and crime are already a huge problem. There are apps specifically designed to take an uploaded photo of someone wearing regular clothing and use AI to create a simulated nude image of them. These horrible apps have already caused harm in schools around the world.

Institutions are scrambling to keep up and need to be resourced to do so. But we should be mindful that the usual complications associated with enhanced police powers apply here as well, such as the fact that surveillance of citizens can be used in both positive and negative ways.

It is also crucial to increase awareness, for example by encouraging families to agree on special passwords so they do not fall victim to audio/video spoofing scams that imitate loved ones in order to steal money.

GH: At least five of my books were used to train AI systems without permission or compensation. What is your perspective on this?

DM: It’s entirely reasonable for artists to feel aggrieved that their work has been used to train AI systems without any concern or compensation given to them. Creatives should band together to fight against such practices because such appropriation won’t stop on its own. If you don’t engage, other people will decide your future for you.

GH: Would it be better for global safety if a government or a corporation were first to build an artificial superintelligence?

DM: Any technology on par with nuclear weapons should be subject to democratic control, given that a private company has a greater responsibility to its shareholders than to all citizens.

GH: When do you estimate the first artificial superintelligence will exist?

DM: I think it is likely artificial superintelligence will exist within 10 years, but something more generally capable could be created within two to five years. As the old saying goes, "prediction is hard, especially about the future," and that is no less true for AI. Given the risks inherent in the existence of an artificial superintelligence, it is better to be prudent and prepared than to be caught off guard.
