

The Psychological Effects of AI Clones and Deepfakes

Concerns range from misuse to the potential impact on reputation and identity.

Source: Donald Oliver / Unsplash

How would you feel about having an AI clone or a digital doppelgänger of yourself?

The technology for digitally replicating the appearance and behavior of real people, living or dead, is now a reality thanks to advances in artificial intelligence, voice cloning, and interactive deepfakes. There are many exciting possibilities and applications for AI clones created with a person's knowledge and permission, including faster content creation, increased personal productivity, and building a digital legacy. But this technology has also opened a Pandora's box of misuse: misleading and deceptive deepfakes and scams that use AI-generated clones of people without their permission.

A company recently lost $25 million after an employee was tricked by a phishing scam that used a video conference populated with AI-generated deepfakes of the company's chief financial officer and colleagues. Taylor Swift's image was misused to create explicit deepfake images, and her supporters mobilized on social media to have these images removed. In 2022, the FBI announced that deepfakes were being used to apply for remote jobs.

Several terms have been used interchangeably for AI clones: AI replica, agent, digital twin, persona, personality, avatar, or virtual human. AI clones of people who have died have been called thanabots, griefbots, deadbots, deathbots, and ghostbots, but there is so far no uniform term for AI clones of living people. Deepfake refers to AI-altered or AI-generated media that is misused to deceive others or spread disinformation.

Since 2019, I have interviewed hundreds of people about their views on digital twins and AI clones through my performance Elixir: Digital Immortality, based on a fictional tech startup that offers AI clones. The general audience response was one of curiosity, concern, and caution. A recent study similarly highlights three areas of concern about having an AI clone: "doppelgänger-phobia," identity fragmentation, and false living memories.

Potential Harmful Psychological Effects of AI Clones

There are several potential negative psychological effects of AI clones, especially when they are stolen, manipulated, or fraudulently misused.

1. Stress and anxiety around AI clone misuse, including deepfakes

The security and safety of AI clones and their training data are top priorities. Unfortunately for digital creators and public figures, much of the training data needed to clone one's voice or image already exists in podcasts, videos, photos, books, and articles. Research on identity theft and deepfake misuse suggests that having one's AI clone used without consent can lead to anxiety, stress, and feelings of helplessness and violation. The fear of having an AI clone has been called "doppelgänger-phobia."

2. Lack of trust and uncertainty as the line between the real and the imaginary blurs

It is often impossible for people to determine whether a video, image, or voice is real or AI-generated synthetic media without forensic analysis. Research has shown that AI-generated faces can appear more real than actual human faces. The public will have to rely on external vetting to judge the authenticity of media, and that reliance will likely be shaped by known biases: negative information in the media is more likely to be believed than positive information. Some public figures have used this climate of uncertainty to their advantage, casting doubt on the authenticity of media in a phenomenon called "the liar's dividend."

3. Concerns about identity fragmentation and the authenticity of AI clones

Digital identities contribute to identity fragmentation but could also give people an opportunity to express more fluid identities. The impact of having an AI clone on one's identity is unknown and will likely depend on the application, the context, and the degree of control people have over their clones. Ideally, people should have direct ways to shape the behavior and personality of their AI clones. For example, setting the "temperature" lets users adjust the level of randomness in an AI model's output.
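To make the temperature control concrete, here is a minimal, self-contained sketch of temperature-scaled sampling, the mechanism behind that knob in most language models. The function name and toy numbers are illustrative assumptions, not any particular platform's API:

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample one token index from model logits, scaled by temperature.

    Lower temperatures sharpen the distribution (more predictable,
    "in character" output); higher temperatures flatten it (more
    surprising, more random output).
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    scaled -= scaled.max()          # stability shift before exponentiating
    probs = np.exp(scaled)
    probs /= probs.sum()            # softmax: normalize to probabilities
    return rng.choice(len(probs), p=probs)

toy_logits = [2.0, 1.0, 0.1]        # hypothetical scores for three tokens
conservative = sample_with_temperature(toy_logits, temperature=0.2)
playful = sample_with_temperature(toy_logits, temperature=1.5)
```

A clone tuned for predictable, on-brand replies would sit at a low temperature; a more freewheeling persona would allow a higher one.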

4. Creation of false memories

Research on deepfakes shows that people's opinions can be swayed by interactions with a digital replica, even when they know it is not the real person. This can create "false" memories of someone. Negative false memories could harm the reputation of the portrayed person, and positive false memories can have complicated and unexpected interpersonal effects as well. Interacting with one's own AI clone could also produce false memories.

5. Potential interference with grief

The psychological impact of griefbots or thanabots (AI clones portraying people who have died) on loved ones is unknown. Some forms of grief therapy involve having an "imaginal conversation" with the person who has passed away, but this usually comes as the last stage after several sessions guided by a professional therapist. Professional guidance from trained therapists should be accessible and integrated into platforms offering griefbots or thanabots.

6. Increased stress around authentication

The rapid rise of deepfakes for fraud and for bypassing biometric authentication increases the pressure to put new methods of authentication in place. Some experts have predicted that 30 percent of companies will lose confidence in current face biometric authentication solutions by 2026. This will likely mean that users must spend more effort to authenticate themselves.

Potential Solutions in Education, Deepfake Detection, and Innovative Authentication

Potential solutions to combat the misuse of AI clones and deepfakes include public education, AI disclosure requirements, deepfake detection, and advanced authentication. Education about deepfakes and AI clones can raise awareness and empower the public to flag questionable, high-stakes virtual interactions.

Platforms should also require users to disclose when AI-generated or AI-manipulated media is uploaded, though enforcing this requirement may prove difficult. YouTube recently began requiring users to disclose AI-altered content at upload. Integrating deepfake detection into video conferencing and communications platforms, so that synthetic media is flagged automatically, would also help protect users. Finally, developing new authentication technologies beyond face or voice biometrics will become increasingly necessary to protect consumers and prevent fraud.
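As a thought experiment, automatic flagging inside a call might look something like the sketch below. Every name here is hypothetical, and the detector is a stub standing in for a trained synthetic-media classifier; this is not any vendor's real API:

```python
from dataclasses import dataclass

@dataclass
class FrameVerdict:
    timestamp: float        # seconds into the call
    synthetic_score: float  # 0.0 = likely authentic, 1.0 = likely synthetic

def score_frame(frame_bytes: bytes) -> float:
    # Stub standing in for a trained deepfake-detection model.
    # Returning 0.0 keeps this sketch runnable without a real model.
    return 0.0

def flag_stream(frames, threshold=0.8):
    """Score each video frame and warn when a frame exceeds the threshold.

    A production system would aggregate scores over a time window before
    alerting, since single-frame detections are noisy.
    """
    for timestamp, frame_bytes in frames:
        verdict = FrameVerdict(timestamp, score_frame(frame_bytes))
        if verdict.synthetic_score >= threshold:
            # e.g., surface an in-call warning banner to participants
            print(f"[{timestamp:.1f}s] possible synthetic media "
                  f"(score={verdict.synthetic_score:.2f})")
        yield verdict
```

The design point is less the classifier than the plumbing: detection has to run during the call and surface its verdicts to participants in real time to be useful.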

Marlynn Wei, M.D., J.D. © Copyright 2024. All Rights Reserved.

References

Patrick Yung Kang Lee, Ning F. Ma, Ig-Jae Kim, and Dongwook Yoon. 2023. Speculating on Risks of AI Clones to Selfhood and Relationships: Doppelganger-phobia, Identity Fragmentation, and Living Memories. Proc. ACM Hum.-Comput. Interact. 7, CSCW1, Article 91 (April 2023), 28 pages. https://doi.org/10.1145/3579524
