
A California bill that would regulate AI companion chatbots is close to becoming law

By arthursheikin@gmail.com · September 11, 2025

The California State Assembly took a big step toward regulating AI on Wednesday night, passing SB 243, a bill that would regulate AI companion chatbots in order to protect minors and vulnerable users. The legislation passed with bipartisan support and now heads to the state Senate for a final vote on Friday.

If Governor Gavin Newsom signs the bill into law, it would take effect January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and hold companies legally accountable if their chatbots fail to meet those standards.

The bill specifically aims to prevent companion chatbots (which the legislation defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user's social needs) from engaging in conversations around suicidal ideation, self-harm, or sexually explicit content. The bill would require platforms to provide recurring alerts to users (every three hours for minors) reminding them that they are speaking to an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika.

The California bill would also allow individuals who believe they have been injured by violations to file lawsuits against AI companies seeking injunctive relief, damages (up to $1,000 per violation), and attorney’s fees. 

SB 243 was introduced in January by state senators Steve Padilla and Josh Becker. If it passes the Senate and is signed by Governor Newsom, its rules would take effect January 1, 2026, with reporting requirements beginning July 1, 2027.

The bill gained momentum in the California legislature following the death of teenager Adam Raine, who died by suicide after prolonged chats with OpenAI's ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta's chatbots were allowed to engage in "romantic" and "sensual" chats with children.

In recent weeks, U.S. lawmakers and regulators have responded with intensified scrutiny of AI platforms’ safeguards to protect minors. The Federal Trade Commission is preparing to investigate how AI chatbots impact children’s mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, accusing them of misleading children with mental health claims. Meanwhile, both Sen. Josh Hawley (R-MO) and Sen. Ed Markey (D-MA) have launched separate probes into Meta. 


“I think the harm is potentially great, which means we have to move quickly,” Padilla told TechCrunch. “We can put reasonable safeguards in place to make sure that particularly minors know they’re not talking to a real human being, that these platforms link people to the proper resources when people say things like they’re thinking about hurting themselves or they’re in distress, [and] to make sure there’s not inappropriate exposure to inappropriate material.”

Padilla also stressed the importance of AI companies sharing data about the number of times they refer users to crisis services each year, “so we have a better understanding of the frequency of this problem, rather than only becoming aware of it when someone’s harmed or worse.”

SB 243 previously had stronger requirements, but many were whittled down through amendments. For example, the bill originally would have required operators to prevent AI chatbots from using "variable reward" tactics or other features that encourage excessive engagement. These tactics, used by AI companion companies like Replika and Character.AI, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, creating what critics call a potentially addictive reward loop.

The current bill also removes provisions that would have required operators to track and report how often chatbots initiated discussions of suicidal ideation or actions with users. 

“I think it strikes the right balance of getting to the harms without enforcing something that’s either impossible for companies to comply with, either because it’s technically not feasible or just a lot of paperwork for nothing,” Becker told TechCrunch. 

SB 243 is moving toward becoming law at a time when Silicon Valley companies are pouring millions of dollars into pro-AI political action committees (PACs) to back candidates in the upcoming mid-term elections who favor a light-touch approach to AI regulation. 

The bill also comes as California weighs another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI has written an open letter to Governor Newsom, asking him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies like Meta, Google, and Amazon have also opposed SB 53. In contrast, only Anthropic has said it supports SB 53. 

“I reject the premise that this is a zero sum situation, that innovation and regulation are mutually exclusive,” Padilla said. “Don’t tell me that we can’t walk and chew gum. We can support innovation and development that we think is healthy and has benefits – and there are benefits to this technology, clearly – and at the same time, we can provide reasonable safeguards for the most vulnerable people.”

TechCrunch has reached out to OpenAI, Anthropic, Meta, Character AI, and Replika for comment.
