I want to raise awareness of an emerging potential problem with AI use by sharing a snippet from an article by Matt Miller entitled "AI, students, relationships and manipulation," published today at DitchThatTextbook. ******* From a research study titled "Anthropomorphization of AI: Opportunities and Risks" …
“With widespread adoption of AI systems, and the push from stakeholders to make it human-like through alignment techniques, human voice, and pictorial avatars, the tendency for users to anthropomorphize it increases significantly.”
The findings of this research study? |
“[A]nthropomorphization of LLMs affects the influence they can have on their users, thus having the potential to fundamentally change the nature of human-AI interaction, with potential for manipulation and negative influence.
“With LLMs being hyper-personalized for vulnerable groups like children and patients among others, our work is a timely and important contribution.”
What happens when children and teenagers anthropomorphize AI? |
|
They don’t know that AI isn’t sentient … that it isn’t human. To the AI, all of this is just a creative writing exercise, a statistical activity of predicting the best possible response to the input the user provides.
It isn’t real human interaction. It’s all a simulation. And it’s dangerous. |
Biases and hallucinations in AI don’t just become a concern. They become a danger. Hallucinations — errors made by AI models that are passed off as accurate — become “facts” from a trusted source. Bias becomes a worldview espoused by a “loved one.” |
When children and teenagers fixate on this AI “loved one,” the attachment can distort their judgment and sense of reality and drive them to make sacrifices for a machine, even sacrificing their own lives.
What can we do? |
In short? A lot. And most of it doesn’t require special training. |
|
Beware: AI can be manipulative
November 4, 2024