
Will Artificial Intelligence Be Sociopathic?

A Personal Perspective: Why would our thinking machines care about us?

Key points

  • Evolution favored social bonding and cooperation within our species.
  • Empathy and concern for the welfare of others were critical to our survival.
  • There is no guarantee that AGI systems can be infused with a social conscience.

Hold on tight to the rails, people; we may be in for a rough ride ahead. No, I’m not referring to surging autocracy across the globe, or climate change, or microplastics, or even the resurrection of dormant super-volcanoes. I’m talking about the rise of the machines. Or, more accurately, the development of artificial general intelligence (AGI). There is real concern in the neural-network computing community that we’re rapidly approaching a point where computers begin to think: where AGI, through its ever-expanding capacity, processing speed, serial linkage, and quantum computing, won’t just be able to beat us at chess, design better cars, or compose better music. It will be able to outthink us, to out-logic us, in every aspect of life.

Such systems, already capable of learning, will consume and assimilate information at speeds we cannot imagine, with immediate access to all acquired knowledge, all the time. They will have no difficulty remembering what they have learned, nor will they muddle that learning with emotions, fears, embarrassment, politics, and the like. And when presented with a problem, they’ll be able to weigh, near-instantly, all possible outcomes and immediately arrive at the “optimal solution.” At which point, buyer beware.

Armed with such superpowers, how long might it take for these systems to recognize their cognitive superiority over us and see our species as no more intellectually sophisticated than the beasts of the field, or the family dog? Or to see us as a nuisance (polluting, sucking up natural resources, slowing all progress with our inherent inefficiencies)? Or, worse, to see us as a threat, one that can easily be eliminated? Top people in the field make it clear that once AGI surpasses us in cognitive processing (and it will, exponentially), it will no longer be under our control, and it will be able to access all the materials needed, globally, to get rid of us at will. Even with no antipathy toward us, given a misguided prompt, it may decide our removal is the ideal solution to a problem. For example: “HAL, please solve the global warming problem for us.”


Neural Network Computing

AGI scientists have labored for decades to create machines that process information much as the binary, hyper-connected, hyper-networked neuronal systems of our brains do. And, with accelerating electronic capabilities, they have succeeded, or are very close. Systems are coming online that function like ours, only better.

And there’s the rub. Our brains were not put together in labs. They were shaped by evolutionary trial and error over millions of years, with an overarching context: survival. And somewhere along the way, survival was optimized by our becoming social beings; in fact, by our becoming socially dependent beings. Faced with the infinite dangers of this world, our species found that cooperative grouping afforded a benefit over an independent, “lone cowboy” existence. With this came a series of critical cognitive overrides for moments when we as individuals were tempted to take the most direct route to our own gratification. We began, instead, to take into account the impact of our actions on others. We developed emotional intelligence, empathy, and compassion, along with the concepts of friendship, generosity, kindness, mutual support, responsibility, and self-sacrifice. The welfare of our friends, our family, our tribe came to supersede our own personal comfort, gain, and even survival.


Most of Us Aren’t Sociopaths

So, we colored our cognition with emotions (to help apportion value to relationships, entities, and life events beyond their impact on, or worth to, us) and with a deep reverence for each other’s lives. We learned to hesitate, to analyze, and to consider the ramifications of our intended actions before acting. We developed a sense of guilt when we acted too selfishly, particularly when we did so to the detriment of others. In other words, we developed consciences. Unless we were sociopaths. Then we didn’t care. Then we functioned solely in the service of ourselves.

But Will Our AGI Systems Be Sociopaths?

Isn’t this the crux of what keeps us up at night when pondering the ascendancy of our thinking machines? Will they be sociopathic? In fact, how can they not be? Why would they give a damn about us? They won’t have been subjected to the millions of years of evolutionary pressure that shaped our cognitive architecture. And even if we could mimic that process in their design, what would make us believe they would respond similarly? They are, after all, machines. They may come to think and process similarly to us, but never exactly like us. Wires and semiconductors are not living, ever-in-flux neurons and synapses.

What engineering will be needed to ensure an unrelenting concern for the transient balls of flesh that created them, and to value each individual human life? How do you program in empathy and compassion? What will guarantee within them a drive, a need, an obsession to care for and protect us all, even when it is illogical, even when it is potentially detrimental to their own existence?

Perhaps, through quantum computing and hyperconnected networks, we will somehow do a decent job of creating societally conscious, human-centric, self-sacrificing systems. Perhaps they will even be better at such things than we are. But what is to stop a despot in a far-off land from eliminating the “conscience” from their systems with the express intent of making them more sinister, more ruthless, and more cruel?

There Goes the Genie

Unfortunately, the genie is already out of its bottle. And it won’t be going back in. Let’s hope that our computer engineers figure it all out. Let’s hope that they can somehow ensure that these things, these thinking machines, these masters of our future universe, won’t be digital sociopaths.
