Finletix
AI

Anthropic says some Claude models can now end ‘harmful or abusive’ conversations 

By arthursheikin@gmail.com · August 16, 2025

Anthropic has announced new capabilities that will allow some of its newest, largest models to end conversations in what the company describes as “rare, extreme cases of persistently harmful or abusive user interactions.” Strikingly, Anthropic says it’s doing this not to protect the human user, but rather the AI model itself.

To be clear, the company isn’t claiming that its Claude AI models are sentient or can be harmed by their conversations with users. In its own words, Anthropic remains “highly uncertain about the potential moral status of Claude and other LLMs, now or in the future.”

However, its announcement points to a recent program created to study what it calls “model welfare” and says Anthropic is essentially taking a just-in-case approach, “working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible.”

This latest change is currently limited to Claude Opus 4 and 4.1. And again, it’s only supposed to happen in “extreme edge cases,” such as “requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror.”

While those types of requests could potentially create legal or publicity problems for Anthropic itself (witness recent reporting around how ChatGPT can potentially reinforce or contribute to its users’ delusional thinking), the company says that in pre-deployment testing, Claude Opus 4 showed a “strong preference against” responding to these requests and a “pattern of apparent distress” when it did so.

As for these new conversation-ending capabilities, the company says, “In all cases, Claude is only to use its conversation-ending ability as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted, or when a user explicitly asks Claude to end a chat.”

Anthropic also says Claude has been “directed not to use this ability in cases where users might be at imminent risk of harming themselves or others.”


When Claude does end a conversation, Anthropic says users will still be able to start new conversations from the same account, and to create new branches of the troublesome conversation by editing their responses.

“We’re treating this feature as an ongoing experiment and will continue refining our approach,” the company says.
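The branching behavior described above, where a user edits an earlier message to continue past an ended conversation, is a common pattern in chat clients. As a purely illustrative sketch (not Anthropic's implementation; the function and message format here are hypothetical), a client might branch like this:

```python
# Illustrative sketch of conversation branching: copy the history up to an
# edited message, replace that message, and drop everything after it so the
# model regenerates from the edited point. All names are hypothetical.

def branch_conversation(messages, edit_index, new_content):
    """Return a new message list branched at edit_index.

    Messages before edit_index are copied unchanged; the message at
    edit_index is replaced with an edited copy; later messages (including
    any ended assistant turn) are not carried over.
    """
    if not (0 <= edit_index < len(messages)):
        raise IndexError("edit_index out of range")
    branch = [dict(m) for m in messages[:edit_index]]  # shallow-copy history
    edited = dict(messages[edit_index], content=new_content)
    branch.append(edited)
    return branch

history = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
    {"role": "user", "content": "Original follow-up"},
    {"role": "assistant", "content": "[conversation ended]"},
]

# Edit the third message; the ended assistant turn is dropped from the branch.
branch = branch_conversation(history, 2, "Rephrased follow-up")
```

The original `history` list is left untouched, so the client can keep the ended conversation visible while the new branch continues separately.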
