The California State Assembly has passed legislation that requires chatbot operators to implement safeguards around conversations with bots and gives families the right to pursue legal action against developers who fail to provide those guardrails.

The act follows a lawsuit filed by the parents of a California teen who took his own life earlier this year after, the suit alleges, ChatGPT encouraged him to do so.

“As we strive for innovation, we cannot forget our responsibility to protect the most vulnerable among us,” Sen. Steve Padilla, D-San Diego, who authored the bill, said in a statement. “Safety must be at the heart of all developments around this rapidly changing technology. Big Tech has proven time and again that they cannot be trusted to police themselves.”

SB 243 would specifically require operators to prevent engagement patterns designed to hook users on the platforms and to issue periodic reminders that the chatbots are AI-generated and not human.

The legislation also mandates a disclosure statement, triggered once the chatbot recognizes that a user is a minor, warning children and parents that chatbots might not be suitable for minors.

Starting in July 2027, operators would also have to report to the Office of Suicide Prevention the protocols they use to detect, remove, and respond to suicidal ideation exhibited by users, and the office would be required to make those reports public.

People injured as a result of noncompliance with the bill would have the right to bring civil action against the operator.

The legislation is the first of its kind to be passed by a state legislative body. It comes amid rising concerns about online safety in interactions with AI-generated bots and services, especially after other instances of minors being harmed through advice from, or quasi-relationships with, chatbots.
