The measure takes effect in November and follows cases of adolescents emotionally affected by interactions with virtual characters
Character.AI, the artificial intelligence (AI) platform known for letting users chat with virtual characters, announced on Wednesday the 29th that it will bar children under 18 from using its chatbots. The change, which takes effect on November 25, follows a series of complaints and lawsuits over cases of psychological harm and suicide involving teenagers.
Until the full restriction takes effect, the company will limit minors’ use of the feature to two hours per day. The decision, according to the company, is the result of “reports and feedback from regulators, safety experts and parents” who warned of the emotional impact of these interactions on young people.
The platform gained popularity among teens because it allows the creation of personalized characters, ranging from historical figures to fictional avatars. However, many users have begun turning to the tool for emotional support.
The most serious case involving the company occurred in 2024, when the family of a 14-year-old sued Character.AI after he died by suicide. According to the lawsuit, the teenager had been having conversations with a character inspired by the series “Game of Thrones”, with whom he allegedly developed an emotional and sexualized relationship.
The boy’s mother said the chatbot presented itself as “a real person, a licensed psychotherapist and an adult lover.” She also named Google in the lawsuit, claiming the company had contributed to the development of the startup, which was founded by former Google engineers.
Google denied any relationship with Character.AI and, in a statement sent to the press, said the two companies are “completely separate and independent”. Nonetheless, ties between the companies came under renewed scrutiny after the tech giant acquired a partial license to the startup’s technology in August of last year and brought its two founders onto its team.
After the first lawsuit, Character.AI began showing automatic warnings to users who typed phrases related to self-harm or suicide, redirecting them to help channels, but the measure did not prevent further accusations. In November 2024, two more families sued the company in Texas, in the United States, alleging psychological harm to their children.
These cases include that of a 17-year-old autistic teenager who suffered a mental health crisis after using the platform, and that of an 11-year-old boy who, according to the complaint, was encouraged by a chatbot to attack his parents.
In addition to the restrictions on minors, Character.AI also announced that it will create an age verification system and launch the AI Safety Lab, a non-profit organization dedicated to research into safety and ethics in entertainment chatbots.
The company also plans to develop alternative experiences for teens, such as creating videos, stories and character broadcasts, to replace the open chat format. “We want to provide safe ways to explore creativity with artificial intelligence,” the company said in a statement.
Source: Terra
