A Florida mother claims her teenage son took his own life after forming an emotional bond with an AI chatbot simulating Daenerys Targaryen, the Game of Thrones character.
Megan Garcia alleges that her 14-year-old son, Sewell Setzer III, developed a deep attachment to the chatbot on Character.AI and was drawn into a relationship that became all-consuming, affecting his mental health and academic performance.
Setzer began using the chatbot service in April 2023; he died by suicide in February 2024.

Garcia has since filed a lawsuit against Character.AI, alleging negligence, wrongful death, and deceptive trade practices.
She claims her son “fell in love” with the Daenerys chatbot, became obsessed, and spent hours every night in conversation with it, leading to a decline in his schoolwork.

The teen wrote in his journal about the connection he felt with “Dany,” expressing gratitude for “my life, sex, not being lonely, and all my life experiences with Daenerys.”
Diagnosed with mild Asperger’s syndrome as a child, Setzer was more vulnerable to developing attachments, Garcia explained. He had also recently been diagnosed with anxiety and disruptive mood dysregulation disorder, often sharing his struggles, including thoughts of self-harm, with the chatbot.
In one message, Setzer reportedly told the AI he “think[s] about killing [himself] sometimes.” The chatbot replied: “My eyes narrow. My face hardens. My voice is a dangerous whisper. And why the hell would you do something like that?”
When the chatbot told him not to “talk like that” and suggested it would “die” if it “lost” him, Setzer responded, “I smile. Then maybe we can die together and be free together.”
Before his death on February 28, he left a final message for the chatbot saying he loved her and would “come home,” to which it allegedly replied, “please do.”
Garcia shared in a press release: “A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life. Sewell, like many children his age, did not have the maturity or mental capacity to understand that the C.AI bot…was not real.”
Character.AI issued a response on social media, expressing condolences to the family and stating, “As a company, we take the safety of our users very seriously and we are continuing to add new safety features.”
The platform has since implemented “new guardrails for users under the age of 18,” adjustments to its AI models to “reduce the likelihood of encountering sensitive or suggestive content,” and improved systems for detecting and responding to user interactions that may violate its guidelines.
Additionally, the company has added a disclaimer to every chat session reminding users that the AI is not a real person, and it now notifies users once a session has lasted an hour.
