Finletix
Tech

Tea Data Breach Shows Why You Should Be Wary of New Apps

By arthursheikin@gmail.com | August 4, 2025


Mobile apps are easier to build than ever — but that doesn’t mean they’re safe.

Late last month, Tea, a buzzy app where women anonymously share reviews of men, suffered a data breach that exposed thousands of images and private messages.

As cybersecurity expert Michael Coates put it, the impact of Tea’s breach was that it exposed data “otherwise assumed to be private and sensitive” to anyone with the “technical acumen” to access that user data — and “ergo, the whole world.”

Tea confirmed that about 72,000 images — including women’s selfies and driver’s licenses — had been accessed. Images from the app were then posted to 4chan, and within days, that information spread across the web on platforms like X. Someone made a map identifying users’ locations, as well as a website that ranked Tea users’ verification selfies side by side.

It wasn’t just images that were accessible. Kasra Rahjerdi, a security researcher, told Business Insider he was able to access more than 1.1 million private direct messages (DMs) between Tea’s users. Rahjerdi said those messages included “intimate” conversations about topics such as divorce, abortion, cheating, and rape.

The Tea breach was a stark reminder that just because we assume our data is private doesn’t mean it actually is, especially when it comes to new apps.

“Talking to an app is talking to a really gossipy coworker,” Rahjerdi said. “If you tell them anything, they’re going to share it, at least with the owners of the app, if not their advertisers, if not accidentally with the world.”

Isaac Evans, CEO of cybersecurity company Semgrep, said he uncovered an issue similar to the Tea breach when he was a student at MIT: a directory of students’ names and IDs had been left open for the public to view.


“It’s just really easy, when you have a big bucket of data, to accidentally leave it out in the open,” Evans said.
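The “open bucket” failure mode Evans describes is easy to check for. The sketch below is a minimal illustration, not anything from the Tea incident: the S3-style listing endpoint and the bucket name are assumptions, and real endpoints vary by cloud provider. It simply asks whether an unauthenticated request can enumerate a bucket’s contents:

```python
import urllib.request
import urllib.error

def listing_url(bucket: str) -> str:
    # Hypothetical S3-style listing endpoint; real URLs vary by provider.
    return f"https://{bucket}.s3.amazonaws.com/?list-type=2"

def bucket_is_public(bucket: str) -> bool:
    """Return True if an unauthenticated GET of the bucket's listing
    endpoint succeeds (HTTP 200), meaning anyone can enumerate files."""
    try:
        with urllib.request.urlopen(listing_url(bucket), timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.HTTPError, urllib.error.URLError):
        # A 403/404 or network error means the listing is not openly readable.
        return False
```

A properly configured bucket answers this request with 403 Forbidden; a misconfigured one returns the full object list, which is all an attacker needs to start downloading.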

But despite the risks, many people are willing to share sensitive information with new apps. In fact, even after news of the Tea data breach broke, the app continued to sit near the top of Apple’s App Store charts. On Monday, it was in the No. 4 slot on the chart behind only ChatGPT, Threads, and Google.

Tea declined to comment.

Cybersecurity in the AI era

The cybersecurity issues raised by the Tea app breach — namely that emerging apps can often be less secure and that people are willing to hand over very sensitive information to them — could get even worse in the era of AI.

Why? There are a few reasons.

First, there’s the fact that people are getting more comfortable sharing sensitive information with apps, especially AI chatbots, whether that’s ChatGPT, Meta AI, or specialized chatbots trying to replicate therapy. This has already led to mishaps. Take Meta’s AI app’s “discover” feed, for example. In June, Business Insider reported that people were publicly sharing — seemingly accidentally — some quite personal exchanges with Meta’s AI chatbot.

Then there’s the rise of vibe coding, which security experts say could lead to dangerous app vulnerabilities.

Vibe coding, in which people use generative AI to write and refine code, has been a favorite tech buzzword this year. Meanwhile, tech startups like Replit, Lovable, and Cursor have become highly valued vibe-coding darlings.

But as vibe coding becomes more mainstream — and potentially leads to a geyser of new apps — cybersecurity experts have concerns.

Brandon Evans, a senior instructor at the SANS Institute and cybersecurity consultant, told BI that vibe coding can “absolutely result in more insecure applications,” especially as people build quickly and take shortcuts.
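One concrete instance of the shortcuts Evans describes, and a flaw that frequently appears in hastily generated code, is building SQL queries by string interpolation. This is a generic sketch with a hypothetical schema, not code from Tea or any named app; it contrasts the insecure pattern with the parameterized version:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Interpolating user input directly into SQL invites injection.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the driver handles escaping the input.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"                # classic injection string
print(find_user_unsafe(conn, payload))  # leaks every row: [(1,), (2,)]
print(find_user_safe(conn, payload))    # matches nothing: []
```

The unsafe version turns the attacker’s input into part of the query itself, so one crafted string dumps the whole table; the safe version treats it as data. Security review exists to catch exactly this kind of difference, which is easy to miss when shipping fast.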

(It’s worth noting that while some of the public discourse around Tea’s breach on social media criticized vibe coding, some security experts said they doubted the platform itself used AI to generate its code.)

“One of the big risks about vibe coding and AI-generated software is what if it doesn’t do security?” Coates said. “That’s what we’re all pretty concerned about.”

Rahjerdi told BI that the advent of vibe coding is what prompted him to start investigating “more and more projects recently.”

For Semgrep’s Evans, vibe coding itself isn’t the problem — it’s how it interacts with developers’ incentives more generally. Programmers often want to move fast, he said, speeding through the security review process.

“Vibe-coding means that a junior programmer can suddenly be inside a racecar, rather than a minivan,” he said.

But vibe-coded or not, consumers should “actively think about what you’re sending to these organizations and really think about the worst case scenario,” the SANS Institute’s Evans said.

“Consumers need to understand that there will be more breaches, not just because applications are being developed faster and arguably worse, but also because the adversaries have AI on their side as well,” he added. “They can use AI to come up with new attacks to get this data too.”


