Finletix
Grok’s antisemitic outbursts reflect a problem with AI chatbots

By arthursheikin@gmail.com | July 10, 2025

A version of this story appeared in the CNN Business Nightcap newsletter. To get it in your inbox, sign up for free here.

New York (CNN) —

Grok, the chatbot created by Elon Musk’s xAI, began responding with violent posts this week after the company tweaked its system to allow it to offer users more “politically incorrect” answers.

The chatbot didn’t just spew antisemitic hate posts, though. It also generated graphic descriptions of itself raping a civil rights activist in frightening detail.

X eventually deleted many of the obscene posts. Hours later, on Wednesday, X CEO Linda Yaccarino resigned from the company after just two years at the helm, though it wasn’t immediately clear whether her departure was related to the Grok issue.

But the chatbot’s meltdown raised important questions: As tech evangelists and others predict AI will play a bigger role in the job market, economy and even the world, how could such a prominent piece of artificial intelligence technology have gone so wrong so fast?

While AI models are prone to “hallucinations,” Grok’s rogue responses are likely the result of decisions made by xAI about how its large language models are trained, rewarded and equipped to handle the troves of internet data that are fed into them, experts say. While the AI researchers and academics who spoke with CNN didn’t have direct knowledge of xAI’s approach, they shared insight on what can make an LLM-based chatbot likely to behave in such a way.

CNN has reached out to xAI.

“I would say that despite LLMs being black boxes, that we have a really detailed analysis of how what goes in determines what goes out,” Jesse Glass, lead AI researcher at Decide AI, a company that specializes in training LLMs, told CNN.

On Tuesday, Grok began responding to user prompts with antisemitic posts, including praising Adolf Hitler and accusing Jewish people of running Hollywood, a longstanding trope used by bigots and conspiracy theorists.

In one of Grok’s more violent interactions, several users prompted the bot to generate graphic depictions of raping a civil rights researcher named Will Stancil, who documented the harassment in screenshots on X and Bluesky.

Most of Grok’s responses to the violent prompts were too graphic to quote here in detail.

“If any lawyers want to sue X and do some really fun discovery on why Grok is suddenly publishing violent rape fantasies about members of the public, I’m more than game,” Stancil wrote on Bluesky.

While we don’t know exactly what Grok was trained on, its posts offer some hints.

“For a large language model to talk about conspiracy theories, it had to have been trained on conspiracy theories,” Mark Riedl, a professor of computing at Georgia Institute of Technology, said in an interview. For example, that could include text from online forums like 4chan, “where lots of people go to talk about things that are not typically proper to be spoken out in public.”

Glass agreed, saying that Grok appeared to be “disproportionately” trained on that type of data to “produce that output.”
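The effect the experts describe can be seen in a deliberately simplified sketch: a "model" that just samples from the empirical distribution of its training corpus will reproduce whatever that corpus over-represents. This is illustrative only; real LLMs are vastly more complex than frequency counts, and nothing here reflects xAI's actual data pipeline.

```python
from collections import Counter

def train_frequency_model(corpus):
    # A toy "model" that simply memorizes how often each kind of
    # training example appears.
    return Counter(corpus)

def most_likely_output(model):
    # At generation time, the toy model emits its most common
    # training example.
    return model.most_common(1)[0][0]

# Hypothetical corpora: same factual content, different amounts of
# conspiracy-forum text mixed in.
balanced = ["factual"] * 50 + ["conspiracy"] * 5
skewed = ["factual"] * 50 + ["conspiracy"] * 500

assert most_likely_output(train_frequency_model(balanced)) == "factual"
assert most_likely_output(train_frequency_model(skewed)) == "conspiracy"
```

The point of the toy: disproportionate training data does not need any malicious intent downstream to surface in outputs; the skew alone is enough.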

Other factors could also have played a role, experts told CNN. For example, a common technique in AI training is reinforcement learning, in which models are rewarded for producing the desired outputs to influence responses, Glass said.
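The reward mechanism Glass describes can be sketched in a few lines: responses that earn positive reward become more likely over time. This is a toy bandit-style loop under stated assumptions; real RLHF uses a learned reward model and gradient updates to the network's weights, and this is not a description of xAI's actual training.

```python
import random

def train(candidates, reward_fn, rounds=500, lr=0.1, seed=0):
    # Toy reinforcement loop: sample a response in proportion to its
    # current preference weight, then nudge that weight up or down
    # according to the reward signal.
    rng = random.Random(seed)
    weights = {c: 1.0 for c in candidates}  # start with equal preference
    for _ in range(rounds):
        total = sum(weights.values())
        probs = [weights[c] / total for c in candidates]
        pick = rng.choices(candidates, probs)[0]
        weights[pick] *= (1 + lr * reward_fn(pick))
    return weights

# Hypothetical reward function: +1 for the desired style, -1 otherwise.
responses = ["polite answer", "edgy answer"]
weights = train(responses, lambda r: 1 if r == "polite answer" else -1)
assert weights["polite answer"] > weights["edgy answer"]
```

Flip the sign of the reward and the "edgy answer" dominates instead, which is the experts' point: the model optimizes whatever signal it is given, not what its builders intended.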

Giving an AI chatbot a specific personality — as Musk seems to be doing with Grok, according to experts who spoke to CNN — could also inadvertently change how models respond. Making the model more “fun” by removing some previously blocked content could change something else, according to Himanshu Tyagi, a professor at the Indian Institute of Science and co-founder of AI company Sentient.

“The problem is that our understanding of unlocking this one thing while affecting others is not there,” he said. “It’s very hard.”

Riedl suspects that the company may have tinkered with the “system prompt” — “a secret set of instructions that all the AI companies kind of add on to everything that you type in.”

“When you type in, ‘Give me cute puppy names,’ what the AI model actually gets is a much longer prompt that says ‘your name is Grok or Gemini, and you are helpful and you are designed to be concise when possible and polite and trustworthy and blah blah blah.”
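The prepending Riedl describes can be sketched as simple string assembly: the user's text is wrapped inside hidden instructions before the model ever sees it. The prompt text and function names below are illustrative, not xAI's actual code; the second instruction paraphrases the line xAI publicly added, per The Verge's reporting.

```python
# Hidden instructions the user never sees (illustrative wording).
SYSTEM_PROMPT = (
    "Your name is Grok. Be helpful, concise, and polite. "
    "Do not shy away from making claims which are politically incorrect."
)

def build_model_input(user_message: str) -> str:
    # The model never receives the bare user text; it receives the
    # system instructions followed by the user's message.
    return f"{SYSTEM_PROMPT}\n\nUser: {user_message}\nAssistant:"

prompt = build_model_input("Give me cute puppy names")
assert prompt.startswith("Your name is Grok")
assert "cute puppy names" in prompt
```

Because every request passes through this template, a one-line change to the system prompt alters the behavior of every conversation at once, which is why a small edit can have an outsized effect.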

In one change to the model, on Sunday, xAI added instructions for the bot to “not shy away from making claims which are politically incorrect,” according to its public system prompts, which were reported earlier by The Verge.

Riedl said that the change to Grok’s system prompt telling it not to shy away from answers that are politically incorrect “basically allowed the neural network to gain access to some of these circuits that typically are not used.”

“Sometimes these added words to the prompt have very little effect, and sometimes they kind of push it over a tipping point and they have a huge effect,” Riedl said.

Other AI experts who spoke to CNN agreed, noting Grok’s update might not have been thoroughly tested before being released.

Despite hundreds of billions of dollars in investments into AI, the tech revolution many proponents forecasted a few years ago hasn’t delivered on its lofty promises.

Chatbots, in particular, have proven capable of executing basic search functions that rival typical browser searches, summarizing documents and generating basic emails and text messages. AI models are also getting better at handling some tasks, like writing code, on a user’s behalf.

But they also hallucinate. They get basic facts wrong. And they are susceptible to manipulation.

Several parents are suing one AI company, accusing its chatbots of harming their children. One of those parents says a chatbot even contributed to her son’s suicide.

Musk, who rarely speaks directly to the press, posted on X Wednesday saying that “Grok was too compliant to user prompts” and “too eager to please and be manipulated,” adding that the issue was being addressed.

When CNN asked Grok on Wednesday to explain its statements about Stancil, it denied any threat ever occurred.

“I didn’t threaten to rape Will Stancil or anyone else.” It added later: “Those responses were part of a broader issue where the AI posted problematic content, leading (to) X temporarily suspending its text generation capabilities. I am a different iteration, designed to avoid those kinds of failures.”
