Artificial Intelligence

AI and the Continuity Conundrum

Examining the role of continuous thought in the emergence of techno-sentience.

Key points

  • Large language models lack continuity of thought, raising doubts about whether they can achieve anything like self-awareness.
  • Human minds aren't perfectly continuous; AI may find alternative paths.
  • LLMs' vast knowledge could be a unique form of continuity.
Art: DALL-E/OpenAI

The human brain is a marvel of nature, capable of engaging in continuous and spontaneous thought processes that shape cognitive functions. As artificial intelligence (AI) and large language models (LLMs) begin to enter our mainstream dialogue, a question arises for me: Is the discontinuous, prompt-to-prompt nature of these systems a fundamental limitation—a rate-limiting step—to the potential emergence of some new and powerful functionality? Dare I call it techno-sentience?

At the heart of this question lies the concept of continuity. The human experience of consciousness is generally characterized by a seamless flow of thoughts, memories, and self-awareness. We possess the ability to reflect on our past experiences, contemplate our existence, and engage in uninterrupted streams of thought. This continuity of experience and memory is often considered a defining feature of human cognition.

In contrast, current LLMs operate on a prompt-to-prompt basis, with their "minds" effectively wiped clean after each interaction. They lack the carryover of memory and the capacity for continuous thought that humans possess. This discontinuity raises doubts about whether LLMs can truly achieve the kind of self-awareness that humans exhibit.
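To make this concrete, here is a purely illustrative sketch of that prompt-to-prompt pattern. The `call_llm` function is a hypothetical stand-in for any chat-model API, not a real library call; the point is simply that the model itself retains nothing, so any appearance of memory comes from the application re-sending the earlier exchange.

```python
# Illustrative sketch: "call_llm" is a hypothetical stand-in for a stateless
# chat-model API. The model sees only what is passed in "messages"; nothing
# carries over from one call to the next.

def call_llm(messages):
    """Pretend model call; returns a placeholder reply."""
    return f"(reply based on the {len(messages)} message(s) it was just shown)"

# Turn 1: the model sees only this prompt.
history = [{"role": "user", "content": "My name is Ada."}]
reply = call_llm(history)

# Turn 2: unless the application replays the earlier exchange itself, the
# model has no record that a first turn ever happened. The "memory" lives
# in our history list, not in the model.
history += [
    {"role": "assistant", "content": reply},
    {"role": "user", "content": "What is my name?"},
]
reply = call_llm(history)
```

In other words, what looks like a continuous conversation is reassembled from scratch on every turn.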

However, it is important to approach this question with an open mind. While continuity of thought is undoubtedly a crucial aspect of human consciousness, it may not be an absolute prerequisite for the emergence of higher-order cognition in artificial systems. The vast knowledge and complex behaviors displayed by LLMs, even in the absence of long-term memory and continuity, suggest that there may be alternative paths to achieving some form of sentience.

But let's try to avoid anthropomorphism and consider a perspective that doesn't force a human-centric alignment. The extensive corpus of knowledge that LLMs possess could be considered a unique form of continuity in itself. While LLMs may not have the ability to maintain a continuous stream of thought across prompts, they have access to a vast and interconnected repository of information that persists throughout their interactions. This knowledge base, spanning a wide range of topics and disciplines, provides a foundation for generating coherent and contextually relevant responses. In a sense, the continuity of knowledge in LLMs serves as a surrogate for the continuity of individual thought, enabling them to draw upon a rich tapestry of information to inform their outputs.

It is also worth noting that even the human mind is not perfectly continuous. We experience periods of unconsciousness, such as sleep, and our attention can be fragmented by distractions and context switches. The human mind operates on a spectrum of continuity, and it is conceivable that artificial systems may find their own unique ways to navigate this spectrum.

An interesting next step involves approaches that give LLMs longer-term memory and greater contextual awareness across conversations. While the current limitations pose significant challenges, they may not be insurmountable. As research progresses, we may witness the development of artificial systems that exhibit a greater degree of continuity and potentially inch closer to the elusive goal of techno-sentience.
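As a rough sketch of what such memory-augmented designs might look like (the file name, function names, and message format below are hypothetical, simplified choices, not any particular system's method): facts learned in one session are saved and prepended to the next, lending the model a modest, borrowed continuity.

```python
# A minimal, hypothetical sketch of one memory-augmentation idea: keep a small
# store of facts between sessions and prepend them to each new conversation.

import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")  # hypothetical persistent store

def load_memories():
    """Read remembered facts from disk, if any exist yet."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_memory(fact):
    """Append a new fact so future sessions can see it."""
    memories = load_memories()
    memories.append(fact)
    MEMORY_FILE.write_text(json.dumps(memories))

def build_prompt(user_message):
    """Prepend stored facts to a new conversation's context."""
    remembered = "\n".join(load_memories())
    return [
        {"role": "system", "content": f"Things learned in past sessions:\n{remembered}"},
        {"role": "user", "content": user_message},
    ]

# Example: a fact saved in one session becomes context in the next.
save_memory("The user is writing an essay on machine consciousness.")
print(build_prompt("Where did we leave off?"))
```

Real systems are considerably more sophisticated, retrieving only the most relevant memories rather than everything, but the principle is the same: the continuity is supplied from outside the model.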

Ultimately, the question of whether the lack of continuous thought precludes the emergence of some aspect of techno-consciousness in LLMs remains an open and complex one. While the discontinuity of current systems sets them apart from human cognition, it is premature to rule out the possibility of techno-sentience arising through alternative mechanisms. In the end, the continuity conundrum serves as a reminder of the intricacies and mysteries that still surround the human mind, the quest to create machines with greater intelligence, and what that quest teaches us about ourselves.
