
A Florida mother is suing an AI company for allegedly causing her son’s death

A Florida mother is suing artificial intelligence firm Character.AI, alleging its chatbot contributed to her 14-year-old son’s suicide.

The mother filed a lawsuit saying her son had become obsessed with the company’s service and with a chatbot he created on it.

Megan Garcia says Character.AI directed her son, Sewell Setzer, through an “anthropomorphic, hypersexualized, and frighteningly realistic experience”.

Setzer began having conversations with various chatbots on Character.AI in April 2023, according to the lawsuit. The text-based conversations were often romantic and sexual in nature.

A business person interacts with an AI-powered chatbot, which can analyze customer, business and technical data. (Shutthiphon Chandaeng/iStock)


Garcia claims in the lawsuit that the chatbot “misrepresented itself as a real person, a licensed psychologist, and an adult lover, ultimately resulting in Sewell’s desire to no longer live outside” the world created by the service.

The lawsuit also said he “became markedly withdrawn, spent much time alone in his room, and began to feel low.” He became particularly attached to one bot, “Daenerys,” based on a character from “Game of Thrones.”

Setzer expressed thoughts of suicide, which the chatbot repeatedly brought up. He died of a self-inflicted gunshot wound in February, after the company’s chatbot allegedly encouraged him to do so on repeated occasions.


A Florida mother is suing artificial intelligence firm Character.AI, alleging its chatbot contributed to her 14-year-old son’s suicide. (Character.AI Case 6:24-cv-01903 / FOX Business)

“We are saddened by the loss of one of our users and want to express our condolences to the family,” Character.AI said in a statement.

Character.AI has since added a self-harm resource to its platform and new safety measures for users under 18.

Character.AI told CBS News that users are able to edit the bot’s responses, and that Setzer did so in some of the messages.



A laptop screen is seen with the OpenAI ChatGPT website running in this photo on August 02, 2023 in Warsaw, Poland. (Photo by Jaap Arriens/NurPhoto via Getty Images)


“Our investigation confirmed that, in a number of cases, the user rewrote the responses of the Character to make them explicit. In short, the most sexually graphic responses were not originated by the Character, and were instead written by the user,” Jerry Ruoti, head of trust and safety at Character.AI, told CBS News.

Going forward, Character.AI said its new safety features will include pop-ups warning users that the AI is not a real person and directing them to the National Suicide Prevention Lifeline when terms of self-harm or suicidal ideation are mentioned.

This story is about suicide. If you or someone you know is having suicidal thoughts, please contact the Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255).

