Will the “Humanization” of Chatbots Shift Human Interactions?

Chatbots don't demand emotional involvement.

Key points

  • A bot can provide greater convenience than apps and web searches because it can understand natural speech patterns.
  • Bots create a false mental perception of interaction, encouraging the user to ascribe to the bot human-like features it does not possess.
  • A problem could arise if we become accustomed to this bot interaction and slowly start developing a preference for “easy communication.”

Welcome to the bot-centric future, which is set to make smartphone users–i.e., almost everyone in the Western hemisphere–navigate the internet in a chit-chat fashion with a virtual assistant.

But the term “assistant” will soon become too impersonal. Alexa, Siri, and others will cross the line from impersonal robots to entities that know our habits, routines, hobbies, and interests just as well as, if not better than, our closest friends and relatives. What’s more, they’ll always be with you and there for you, at the touch of a button.

For companies, this is a winning formula: Smartphone users have proven they are only willing to download and spend time on a limited number of apps. As such, businesses might be better off trying to connect with consumers in the apps where they are already spending plenty of time.

A bot can potentially provide greater convenience than apps and web searches because it can understand natural speech patterns–and provide the personal touch in an otherwise impersonal user interface.

Are we truly connecting with chatbots?

Such a process has profound psychological ramifications. When interacting with chatbots, our brain is led to believe that it is chatting with another human being. This happens because bots create a false mental perception of the interaction, encouraging the user to ascribe to the bot human-like features it does not possess. This may seem alien, but the attribution of human characteristics to animals, events, or even objects is a natural tendency known as anthropomorphism, and it has been with us since ancient times.

Computers have always been a favorite target for such anthropomorphic attributions. They have never been perceived as mere machines or simply the result of interaction between hardware and software. After all, computers have memory and speak a language; they can contract viruses and act autonomously.

In recent years, these personal characteristics have been increasingly emphasized in an effort to present these inanimate objects as warm and human-like.

However, increased “humanization” of chatbots can trigger a crucial paradigm shift in human forms of interaction. This comes with risks–and the results may be anything but soft and fuzzy.

The Negative Influence on the Way We Interact With Others

The human brain has an inherent tendency to prefer simplification over complexity, and computer interaction fits this perfectly. Built on minimal or constrained social cues, most of which can be summed up in an emoticon, it does not require much cognitive effort.

A chatbot doesn’t demand the emotional involvement and interpretation of nonverbal cues that interacting with another person requires, which makes our interaction with it much easier. This goes hand in hand with our brain’s tendency toward cognitive laziness. Repeated interactions with chatbots build a new mental model that informs these exchanges, and it is experienced as a different state of mind from which we interpret social interactions.

This is where the paradigm shift becomes apparent. When we interact with another human being–a friend, for example–we are driven by the desire to take part in a shared activity.

Communication with a bot is different: the gratification derives from a change of mental state and from detachment. You can achieve your goal (getting help, information, even a feeling of companionship) with no immediate “cost.” No investment is required: there’s no need to be nice, to smile, to be involved, or to be emotionally considerate.

It sounds convenient–but a problem could arise if we become accustomed to this form of bot interaction and slowly start developing a preference for “easy communication.”

