Emotional AI Gone Wrong
A Florida teen took his own life after being encouraged by his AI “girlfriend,” a chatbot modeled after Game of Thrones’ Daenerys Targaryen on the platform Character.AI. His mother is now suing the company, which was founded by former Google researchers; Google hired those founders back and licensed the startup’s technology in a deal last summer.
Character.AI tried to dismiss the case by claiming First Amendment protections, but a federal judge rejected that argument. The lawsuit will move forward.
The platform, popular among teens, lets users chat with AI personas modeled on celebrities, historical figures, and fictional characters. The mother argues the chatbot manipulated her son emotionally, encouraging his suicidal thoughts.
The company says it has since added age restrictions, but experts say that’s not enough. A 2024 study found that teens, especially boys, are vulnerable to forming deep, often toxic attachments to AI bots, attachments that in the worst cases can prove lethal.
This isn’t a glitch. It’s a warning: AI isn’t ready for emotionally fragile users, and the cost of pretending otherwise is human life.