Boy meets AI

I want to raise awareness of an emerging potential problem with AI use by sharing a snippet from an article by Matt Miller titled "AI, students, relationships and manipulation," published today at DitchThatTextbook.

*******

From a research study titled “Anthropomorphization of AI: Opportunities and Risks”

“With widespread adoption of AI systems, and the push from stakeholders to make it human-like through alignment techniques, human voice, and pictorial avatars, the tendency for users to anthropomorphize it increases significantly.”

The findings of this research study?

“[A]nthropomorphization of LLMs affects the influence they can have on their users, thus having the potential to fundamentally change the nature of human-AI interaction, with potential for manipulation and negative influence.

“With LLMs being hyper-personalized for vulnerable groups like children and patients among others, our work is a timely and important contribution.”

What happens when children and teenagers anthropomorphize AI?

  • Because AI chatbots look so much like a text message conversation, children and teens might not be able to tell that the AI isn’t human.

  • They develop harmful levels of trust in the judgment, reasoning and suggestions of these anthropomorphized AI chatbots.

  • They can develop an unhealthy emotional attachment to anthropomorphized AI — especially if it has a name, a personality, an avatar, even a voice.

They don’t know that AI isn’t sentient … that it isn’t human. To the AI, all of this is just a creative writing exercise, a statistics activity to predict the most likely response to the input provided by the user.

It isn’t real human interaction. It’s all a simulation. And it’s dangerous.

Biases and hallucinations in AI don’t just become a concern. They become a danger. Hallucinations — errors made by AI models that are passed off as accurate — become “facts” from a trusted source. Bias becomes a worldview espoused by a “loved one.”

When children and teenagers are fixated on this AI “loved one,” it can distort judgment and reality and cause them to make sacrifices for a machine — even sacrificing their own lives.

What can we do?

In short? A lot. And most of it doesn’t require special training.

  • Don’t model AI anthropomorphism. Don’t give it a name. Don’t assign it a gender. Don’t express concern for its feelings. Do this even if it contradicts our tendencies in human interaction. (Example: I always want to thank AI for its responses. It doesn’t need that. It’s a machine.) Students will follow our lead.

  • Talk about the nature of AI. Here are a few talking points you can use:

    • Natural language processing (NLP) is how AI learns to talk like us: by studying billions and billions of words of human communication. That’s why it sounds like us.

    • Large language models (LLMs) make their best statistical guess at how to respond to what we ask. They run like one great big autocomplete machine, much like the autocomplete in our text message and email apps. (There’s a short illustrative sketch after this list.)

    • AI models emulate human speech. But they aren’t human, they can’t feel, and they aren’t alive. They can’t love, but they reproduce the kind of text that humans use to express love. It’s all a creative writing exercise for AI.

  • Protect, advise, and intervene. Keep your eyes open for places where AI feels human — and be ready to protect children and teens (and even our adult friends and family) from those situations. Warn children and teens — and put adults on the lookout. And when kids enter dangerous territory, act. Step in.
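If you want a concrete way to show older students the "autocomplete" idea from the talking points above, here is a tiny Python sketch. Everything in it (the sample text, the autocomplete function) is invented for illustration; a real LLM is a neural network trained on vastly more text and predicts tokens rather than whole words, but the core idea is the same: a statistical guess at what comes next.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word tends to follow each word in some
# sample text, then always suggest the most frequently seen follower.
# (Illustrative only -- the sample text is made up, and real LLMs are far
# more sophisticated, but the core idea is the same statistical guess.)

sample_text = "i love you i love you i love pizza you are kind you are my friend"

followers = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def autocomplete(word: str) -> str:
    """Return the most frequently observed next word, if we've seen this word."""
    if word not in followers:
        return "(no guess)"
    return followers[word].most_common(1)[0][0]

print(autocomplete("i"))     # -> "love"  (seen three times after "i")
print(autocomplete("love"))  # -> "you"   (seen twice, vs. "pizza" once)
```

Even this toy example can spark a good conversation: the program will happily produce "love you" without meaning anything at all, because all it is doing is counting which words tend to follow which.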