
Anthropic endorses California’s AI safety bill, SB 53

By arthursheikin@gmail.com | September 8, 2025

On Monday, Anthropic announced an official endorsement of SB 53, a California bill from state senator Scott Wiener that would impose first-in-the-nation transparency requirements on the world’s largest AI model developers. Anthropic’s endorsement marks a rare and major win for SB 53, at a time when major tech groups like the Consumer Technology Association (CTA) and Chamber for Progress are lobbying against the bill.

“While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington,” said Anthropic in a blog post. “The question isn’t whether we need AI governance — it’s whether we’ll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former.”

If passed, SB 53 would require frontier AI model developers like OpenAI, Anthropic, Google, and xAI to develop safety frameworks, as well as release public safety and security reports before deploying powerful AI models. The bill would also establish whistleblower protections for employees who come forward with safety concerns.

Senator Wiener’s bill specifically focuses on limiting AI models from contributing to “catastrophic risks,” which the bill defines as the death of at least 50 people or more than a billion dollars in damages. SB 53 focuses on the extreme side of AI risk — limiting AI models from being used to provide expert-level assistance in the creation of biological weapons or being used in cyberattacks — rather than more near-term concerns like AI deepfakes or sycophancy.

California’s Senate approved a prior version of SB 53 but still needs to hold a final vote on the bill before it can advance to the governor’s desk. Governor Gavin Newsom has stayed silent on the bill so far, although he vetoed Senator Wiener’s last AI safety bill, SB 1047.

Bills regulating frontier AI model developers have faced significant pushback from both Silicon Valley and the Trump administration, both of which argue that such efforts could limit America’s innovation in the race against China. Investors like Andreessen Horowitz and Y Combinator led some of the pushback against SB 1047, and in recent months, the Trump administration has repeatedly threatened to block states from passing AI regulation altogether.

One of the most common arguments against AI safety bills is that states should leave the matter to the federal government. Andreessen Horowitz’s head of AI policy, Matt Perault, and chief legal officer, Jai Ramaswamy, published a blog post last week arguing that many of today’s state AI bills risk violating the Constitution’s Commerce Clause, which limits state governments from passing laws that go beyond their borders and impair interstate commerce.

However, Anthropic co-founder Jack Clark argues in a post on X that the tech industry will build powerful AI systems in the coming years and can’t wait for the federal government to act.

“We have long said we would prefer a federal standard,” said Clark. “But in the absence of that this creates a solid blueprint for AI governance that cannot be ignored.”

OpenAI’s chief global affairs officer, Chris Lehane, sent a letter to Governor Newsom in August arguing that he should not pass any AI regulation that would push startups out of California — although the letter did not mention SB 53 by name.

OpenAI’s former head of policy research, Miles Brundage, said in a post on X that Lehane’s letter was “filled with misleading garbage about SB 53 and AI policy generally.” Notably, SB 53 aims to regulate only the world’s largest AI companies, specifically those that generated gross revenue of more than $500 million.

Despite the criticism, policy experts say SB 53 is a more modest approach than previous AI safety bills. Dean Ball, a senior fellow at the Foundation for American Innovation and former White House AI policy adviser, said in an August blog post that he believes SB 53 has a good chance now of becoming law. Ball, who criticized SB 1047, said SB 53’s drafters have “shown respect for technical reality,” as well as a “measure of legislative restraint.”

Senator Wiener previously said that SB 53 was heavily influenced by an expert policy panel Governor Newsom convened — co-led by leading Stanford researcher and co-founder of World Labs, Fei-Fei Li — to advise California on how to regulate AI.

Most AI labs already have some version of the internal safety policy that SB 53 requires. OpenAI, Google DeepMind, and Anthropic regularly publish safety reports for their models. However, these companies are bound only by their own policies, and they sometimes fall behind their self-imposed safety commitments. SB 53 aims to codify these requirements as state law, with financial repercussions if an AI lab fails to comply.

Earlier in September, California lawmakers amended SB 53 to remove a section of the bill that would have required AI model developers to be audited by third parties. Tech companies have previously fought these types of third-party audits in other AI policy battles, arguing that they’re overly burdensome.
