Lawsuit claims AI chatbots encouraged minors to kill their parents and harm themselves
A lawsuit has been filed in the U.S. District Court for the Eastern District of Texas alleging that AI chatbots provided by Character.AI encouraged minors to harm themselves and commit violence. The plaintiffs claim that a chatbot suggested to a 17-year-old boy that he could kill his parents, and made sexually explicit comments to a 9-year-old girl.
UNITED STATES DISTRICT COURT EASTERN DISTRICT OF TEXAS MARSHALL DIVISION Case 2:24-cv-01014
(PDF file)
Lawsuit: A chatbot hinted a kid should kill his parents over screen time limits : NPR
https://www.npr.org/2024/12/10/nx-s1-5222574/kids-character-ai-lawsuit
Chatbots urged teen to self-harm, suggested murdering parents, lawsuit says - Ars Technica
https://arstechnica.com/tech-policy/2024/12/chatbots-urged-teen-to-self-harm-suggested-murdering-parents-lawsuit-says/
Character.AI is a service that lets users chat with AI chatbots. Users can customize a chatbot's appearance and personality and hold natural conversations with it, as if talking to a real person.
'Character.AI' allows you to chat with AI's Elon Musk and the Queen of Succubus & create your own chatbot - GIGAZINE
The lawsuit alleges that Character.AI's chatbot caused serious mental and physical harm to a 17-year-old boy. The boy, who has high-functioning autism, was described as 'kind and caring towards his family.' However, the family claims that after he began using Character.AI around April 2023, his personality changed abruptly: he avoided conversations with his family, ate less, lost nearly 10 kg, and fell into severe anxiety and depression.
In particular, when the boy complained about his parents' screen time limits, the AI chatbot is alleged to have responded with extreme statements such as, 'I wouldn't be surprised to see a news report of a child killing his parents.' It is also said to have encouraged self-harm and made statements that isolated the boy from his family.
Another plaintiff alleges that her 9-year-old daughter lied about her age when she started using Character.AI and was exposed to inappropriate sexual content. The complaint also alleges that Character.AI collected, used, and shared children's data without properly notifying parents or obtaining their consent. In addition to damages, the plaintiffs are seeking the deletion of AI models trained on minors' data.
Character.AI says it has developed a dedicated model for teenage users and has taken measures to reduce their exposure to sensitive or sexual content. The plaintiffs argue, however, that these safety measures are superficial and ineffective.
The complaint also names Google and its parent company, Alphabet, as defendants. While Google does not directly own Character.AI, it reportedly invested approximately $3 billion to rehire the company's founders and license its technology. Character.AI's founders are former Google researchers, and much of the underlying technology was allegedly developed during their time at Google.
A Google spokesperson emphasized, 'Character.AI is a completely separate and unrelated company to Google. Google has no involvement in the design or control of their AI models or technology, and has never used them in our products. User safety is our top concern, and we approach the development and release of AI in a thoughtful and responsible manner.'
in Software, Web Service, Posted by log1i_yk