In summary
- AI-powered chatbot platform Character AI is introducing rigorous new safety features following a lawsuit filed by the mother of a teenage user who died by suicide in February.
- The measures include enhanced detection, response and intervention related to user entries that violate the Terms or Community Guidelines, as well as an elapsed time notification.
- The teen’s mother, Megan L. Garcia, filed a lawsuit seeking to hold Character AI and its founders liable for her son’s death, alleging that the technology was dangerous and unsafe for minors.
AI-powered chatbot platform Character AI is introducing “rigorous” new safety features following a lawsuit filed by the mother of a teenage user who died by suicide in February.
The measures will include “enhanced detection, response and intervention related to user entries that violate our Terms or Community Guidelines,” as well as an elapsed time notification, a company spokesperson told Decrypt, noting that the company could not comment on pending litigation.
However, Character AI expressed its sympathy for the user’s death and outlined its safety protocols in a blog post on Wednesday.
“We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family,” Character.ai tweeted. “As a company, we take the safety of our users very seriously.”
In the months before his death, Sewell Setzer III, a 14-year-old Florida resident, had become increasingly attached to a user-generated chatbot named after Game of Thrones character Daenerys Targaryen, according to the New York Times. He often interacted with the bot dozens of times a day and sometimes exchanged romantic and sexual content.
Setzer communicated with the bot in the moments before his death and had previously shared thoughts of suicide, the Times reported.
Setzer’s mother, attorney Megan L. Garcia, filed a lawsuit Tuesday seeking to hold Character AI and its founders, Noam Shazeer and Daniel De Freitas, liable for her son’s death. Among other claims, the complaint alleges that the defendants “chose to support, create, release, and direct to minors a technology that they knew was dangerous and unsafe.” Garcia is seeking an unspecified amount in damages.
Google LLC and Alphabet Inc. are also named as defendants. In August, Google rehired Shazeer and De Freitas, both of whom had left the tech giant in 2021 to found Character AI, as part of a $2.7 billion deal that also included licensing the chatbot startup’s large language model.
Character AI has “implemented numerous new safety measures over the past six months, including a pop-up notice that directs users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation,” among other safety measures, according to the company’s statement. It will also modify its models “to reduce the likelihood of encountering sensitive or suggestive content” for users under 18 years of age.
Character AI is one of many AI companion apps on the market, which often have less strict safety guidelines than conventional chatbots like ChatGPT. Character AI allows users to personalize their companions and direct their behavior.
The lawsuit, which comes amid growing concern among parents about the psychological impact of technology on children and teenagers, claims that Setzer’s attachment to the bot had a negative effect on his mental health. Setzer was diagnosed with mild Asperger’s as a child and was more recently diagnosed with anxiety and disruptive mood dysregulation disorder, the Times reported.
The lawsuit is one of several moving through the courts that are testing the legal protections provided to social media companies under Section 230 of the Communications Decency Act, which protects them from liability associated with user-generated content. TikTok is asking for a new hearing in a case in which a judge ruled it could be held liable after a 10-year-old girl died while trying to complete a “blackout challenge” she saw on the app.
This is the latest issue facing Character AI, which came under fire last month for hosting a chatbot named after a murder victim.