
Verified by Psychology Today

Artificial Intelligence

Unlocking the AI Crystal Ball

How creative prompting and "hallucinations" give LLMs a surprising predictive prowess.

Key points

  • LLM-generated future narratives have accurately predicted outcomes from economic trends to Academy Award winners.
  • Engaging creativity allows LLMs to extrapolate predictions from vast data sets.
  • Predictive potential raises ethical questions but offers exciting research avenues.
Source: DALL-E/OpenAI

Large language models (LLMs) are certainly pushing the boundaries of what machines can do with language. These sophisticated and fascinating artificial intelligence (AI) systems, trained on vast amounts of data, can engage in remarkably human-like conversations, answer questions, and even tackle complex tasks. A recent study has shed light on a particularly intriguing aspect of LLMs: their ability to predict future events through the power of creative prompting and "hallucinatory" storytelling. It's a long paper, but worth reading to get a more complete perspective on the basis of these diverse and interesting predictive capabilities.

LLMs as the New Oracle?

The study examined the differences between two prompting strategies: direct prediction, where the AI is simply asked to forecast an outcome, and "future narratives," where the AI is prompted to generate fictional stories set in the future, with characters discussing events that have already happened from their perspective. By leveraging the fact that ChatGPT's training data stopped in September 2021, the researchers could test the model's predictive accuracy on real-world events from 2022, such as Academy Award winners and economic trends.
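The contrast between the two strategies is easiest to see side by side. The sketch below is purely illustrative; the function names and prompt wording are my own, not the study's actual prompts:

```python
# Illustrative sketch of the two prompting styles the study compares.
# The wording and helper names here are hypothetical, not the paper's
# exact prompts.

def direct_prompt(event: str) -> str:
    """Ask the model for a forecast outright."""
    return f"Predict the following outcome: {event}"

def narrative_prompt(event: str, narrator: str) -> str:
    """Frame the forecast as fiction set in the future, where a
    character recalls the outcome as something that already happened."""
    return (
        f"Write a scene set in 2023 in which {narrator} gives a speech "
        f"looking back on {event}, stating the specific figures involved."
    )

print(direct_prompt("US inflation over 2022"))
print(narrative_prompt("US inflation over 2022",
                       "Federal Reserve Chair Jerome Powell"))
```

The key difference is that the narrative version never asks for a prediction directly; it asks for a story in which the "prediction" is treated as settled history.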

The results were fascinating. When prompted to create future narratives, particularly those featuring authoritative figures like Federal Reserve Chair Jerome Powell discussing past economic data from a future standpoint, ChatGPT-4 demonstrated a remarkable ability to make accurate predictions. Its forecasts for things like inflation rates were comparable to real-world consumer expectations surveys. In contrast, direct prompts for predictions were often less successful or even refused by the model.

Vast Data Sets to Extrapolate Into the Future

So, what's going on here? The researchers propose that by engaging ChatGPT-4's "hallucinatory capabilities" (its propensity to generate creative, fictional narratives that incorporate real-world knowledge in novel ways), they can tap into a unique predictive potential. The act of crafting a story set in the future seems to allow the model to more effectively synthesize and extrapolate from the patterns and information contained in its training data, even if those data don't explicitly cover the future events in question. Here's the authors' perspective:

Narrative prompting, by weaving future events into fictional stories, appears to bypass certain constraints designed to align GPT-4’s outputs with OpenAI’s ethical guidelines, particularly those intended to prevent the generation of speculative, high-stakes predictions like those in financial or medical domains. This method capitalizes on the model’s capacity for creativity, indirectly accessing its sophisticated predictive capabilities even in areas where direct forecasting might breach terms of service due to ethical considerations.

It's a bit like the model is dreaming up possible futures based on what it knows about the past and present, and in doing so, it can uncover surprisingly accurate insights. The fictional framing of the narrative prompts seems to provide a kind of creative freedom that enables the model to make speculative leaps it might otherwise avoid.

Of course, this raises intriguing questions about the nature of prediction itself. Are the model's successful forecasts the result of a deep, intuitive understanding of cause and effect, or is it more like a highly sophisticated pattern-matching machine that can extrapolate trends into the future? There's also the question of how to harness this predictive potential responsibly and ethically, given the risks of AI systems making high-stakes predictions that could impact people's lives.

The Curious Potential of Prompts and Hallucinations

This fascinating paper certainly "predicts" something important: The "hallucinatory" capabilities of large language models like ChatGPT-4, when combined with creative prompting strategies, open up exciting new avenues for exploration and discovery. By engaging with these AI systems as imaginative storytellers, we may be able to unlock predictive insights that were previously hidden from view. It's a fascinating frontier in the field of AI, and one that promises to keep researchers and dreamers alike busy for years to come.

More from John Nosta
More from Psychology Today