AN AI COMPANY based in Menlo Park is banning users under 18 years old from its chatbots following lawsuits alleging that the chatbots drove several minors to die by suicide.

Under Character.Ai’s new policy, which began rolling out Monday, minors will no longer be able to interact with its open-ended chatbots.

The website lets users create or chat with fictional and historical characters in role-play scenarios for entertainment and other uses. Popular characters include the Greek philosopher Socrates and a fictional “muscular boyfriend.”

“We’re making these changes to our under-18 platform in light of the evolving landscape around AI and teens,” the company said in a blog post last month. “After evaluating these reports and feedback from regulators, safety experts, and parents, we’ve decided to make this change to create a new experience for our under-18 community.”

Although the company warns users that everything the characters say should be treated as fiction, users under the age of 18 may be more vulnerable to language produced by AI chatbots, according to a recent study by Stanford’s Brainstorm Lab for Mental Health Innovation and Common Sense Media, a nonprofit that promotes digital safety for youth.

A digital ‘danger’

The Social Media Victims Law Center, a law firm that aims to hold social media companies accountable for harming users, has filed multiple lawsuits against Character.Ai alleging that interactions with its chatbots influenced several underage users to die by suicide.

“Character.AI’s negligence in controlling the content provided by its chatbots has devastatingly tragic consequences,” said the firm’s founder, Matthew Bergman, in a statement.

“The theme of our work is that social media — and now, Character.AI — poses a clear and present danger to young people because they are vulnerable to persuasive algorithms that capitalize on their immaturity.”

In September, the firm filed three separate lawsuits against Character.Ai, alleging that the company knowingly designed and marketed predatory chatbot technology aimed at children.

One of the suits was filed on behalf of the family of 13-year-old Juliana Peralta of Colorado, who died by suicide after apparently interacting with Character.Ai chatbots. 

In October 2024, the mother of Sewell Setzer III of Florida sued the company. Setzer died by suicide at the age of 14, and the suit alleges that his chats with a Character.Ai chatbot contributed to his death.

Growing concerns over teen safety in the use of AI chatbots, along with the lawsuits, have prompted Character.Ai to ban minors in the United States from its open-ended chatbots.

‘Stories’ for minors

Instead, minors will have access to a new feature called “Stories,” introduced Tuesday as an alternative to chatbots. The interactive format will allow users to create and direct fictional narratives.

“Following recent updates for under-18 users, Stories is our first new format created specifically to enhance their experience,” Character.Ai wrote in a post to its website Tuesday. “Stories offer a guided way to create and explore fiction, in lieu of open-ended chat.”

To use an open-ended chatbot, users in the U.S. will need to verify their age. The rollout of age assurance technology “will continue globally in the near future,” the company said in a statement.

Character.Ai has partnered with Koko, a nonprofit that provides free emotional support tools for young people directly on social media platforms. The company is working to integrate Koko into its chats and use it to help identify potentially unsafe content.

Character.Ai also said it will establish and fund an independent nonprofit that focuses on developing safety protections for users of entertainment AI. 

“We do not take this step of removing open-ended Character chat lightly,” the company said in a statement. “But we do think that it’s the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology.”

Alise Maripuu is an intern at BCN with a focus on covering the Peninsula. Originally from San Carlos, Alise discovered her passion for journalism after studying abroad in Thailand during her senior year at UC Santa Cruz. Her experience in Thailand taught her the consequences for democracy of living in a society with strict laws against free speech. After graduating with a bachelor’s degree in history, Alise took journalism courses at Skyline Community College to learn how to write for news. As chief copy editor of Skyline’s student-run newspaper for the 2023-24 school year, Alise gained editing and management experience leading a team of reporters. She covered hyperlocal stories affecting her campus, such as the rise in food and housing insecurity. Alise wants to focus on data journalism.