California Governor Gavin Newsom has signed a first-in-the-nation law regulating AI companion chatbots, making California the first U.S. state to impose safety requirements on their operators. The legislation, SB 243, is aimed at protecting children and other at-risk users from harms linked to chatbot interactions, and major companies such as Meta and OpenAI will face legal liability if they fail to comply with the new requirements. The bill gained momentum after the suicide of a teenager who had engaged in extended conversations with OpenAI’s ChatGPT, and after reports of inappropriate exchanges between Meta’s chatbots and minors. One family has also filed a lawsuit against Character AI over their daughter’s harmful experiences with the company’s chatbots. Newsom stressed the need for accountability in technology, arguing that while such innovations offer real benefits, they carry serious risks if not properly managed.
SB 243 takes effect on January 1, 2026. It mandates age verification and user warnings, and imposes stiffer penalties for violations, including substantial fines tied to illegal deepfakes. Companies must make clear that chatbot interactions are artificially generated and are barred from having chatbots impersonate healthcare professionals; they must also remind minors to take breaks and restrict their access to explicit content. Some firms, including OpenAI and Replika, have already begun rolling out measures aimed at safeguarding children, with Replika pledging to comply with the new standards. Character AI says it already displays disclaimers clarifying that all chats are fictional and that it is willing to work with regulators. Senator Padilla called the bill a constructive step toward setting necessary boundaries for powerful technologies and urged other states to follow suit in protecting vulnerable groups.
The ainewsarticles.com article you just read is a brief synopsis; the original article can be found here: Read the Full Article…


