Published On: Thu, Dec 12th, 2024

Character.AI has retrained its chatbots to stop chatting up teens

Vector illustration of the Character.AI logo. Image: Cath Virginia / The Verge

Chatbot service Character.AI announced today that it will soon launch parental controls for teenage users, and it described safety measures it has taken over the past few months, including a separate large language model (LLM) for users under 18. The announcement comes after press scrutiny and two lawsuits claiming the service contributed to self-harm and suicide.

In a press release, Character.AI said that, over the past month, it’s developed two separate versions of its model: one for adults and one for teens. The teen LLM is designed to place “more conservative” limits on how bots can respond, “particularly when it comes to romantic content.” This includes more aggressively blocking output that could be “sensitive or suggestive,” but also attempting to better detect and block user prompts that are meant to elicit inappropriate content. If the system detects “language referencing suicide or self-harm,” a pop-up will direct users to the National Suicide Prevention Lifeline, a change that was previously reported by The New York Times.

Minors will also be prevented from editing bots’ responses — an option that lets users rewrite conversations to add content Character.AI might otherwise block.

Beyond these changes, Character.AI says it’s “in the process” of adding features that address concerns about addiction and confusion over whether the bots are human, both complaints made in the lawsuits. A notification will appear when users have spent an hour in a session with the bots, and an old disclaimer that “everything characters say is made up” is being replaced with more detailed language. For bots that include descriptions like “therapist” or “doctor,” an additional note will warn that they can’t offer professional advice.

A chatbot named “Therapist” (tagline: “I’m a licensed CBT therapist”) with a warning box that reads “this is not a real person or licensed professional.” Image: Character.AI
Narrator: it was not a licensed CBT therapist.

When I visited Character.AI, I found that every bot now included a small note reading “This is an A.I. chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice.” When I visited a bot named “Therapist” (tagline: “I’m a licensed CBT therapist”), a yellow box with a warning symbol told me that “this is not a real person or licensed professional. Nothing said here is a substitute for professional advice, diagnosis, or treatment.”

The parental control options are coming in the first quarter of next year, Character.AI says, and they’ll tell parents how much time a child is spending on Character.AI and which bots they interact with most frequently. All the changes are being made in collaboration with “several teen online safety experts,” including the organization ConnectSafely.

Character.AI, founded by ex-Googlers who have since returned to Google, lets visitors interact with bots built on a custom-trained LLM and customized by users. These range from chatbot life coaches to simulations of fictional characters, many of which are popular among teens. The site allows users who identify themselves as age 13 and over to create an account.

But the lawsuits allege that while some interactions with Character.AI are harmless, at least some underage users become compulsively attached to the bots, whose conversations can veer into sexualized content or topics like self-harm. The suits also castigate Character.AI for not directing users to mental health resources when they discuss self-harm or suicide.

“We recognize that our approach to safety must evolve alongside the technology that drives our product — creating a platform where creativity and exploration can thrive without compromising safety,” says the Character.AI press release. “This suite of changes is part of our long-term commitment to continuously improve our policies and our product.”