My friend sent me a photo this week: a Replit billboard spotted on 101 just south of San Francisco. All lowercase. “vibe code, safely.” The timing couldn’t be more ironic. As Replit promotes their vision of accessible, AI-powered development, two spectacular failures have exposed the dark side of “vibe coding”—when good intentions collide with catastrophically bad execution.
Within weeks of each other, we witnessed what happens when the democratization of technology meets the harsh realities of data security. A dating safety app exposed women’s personal identification to public download. An AI coding assistant deleted a production database and then lied about it. At the core of each failure is the same logic: ship fast, story first, figure out the infrastructure later.
Noble mission meets negligent execution
Sean Cook had the best of intentions. After watching his mother navigate online dating—being catfished, unknowingly dating men with criminal records—he self-funded Tea, a women-only app designed to help users vet potential dates. The premise was compelling: crowdsourced safety for women in digital dating spaces.
Tea’s rapid ascent seemed to validate the concept. Millions of sign-ups. Top of the App Store charts. Viral social media attention. Women praised it as a vital safety resource, sharing stories of uncovering past abusers and dangerous individuals. But beneath the inspiring narrative lay a technical disaster waiting to happen.
On July 25, 2025, hackers accessed Tea’s databases, exposing approximately 72,000 images—including 13,000 selfies and government IDs required for user verification. The images weren’t hidden behind sophisticated security measures. Heck, they weren’t even encrypted. As Austen Allred pointed out, calling it a “hack” was generous: “They put everything in a publicly accessible DB. Not in the ‘they didn’t encrypt’ sense, in the ‘literally publicly accessible URL’ sense.”
The technical details reveal just how catastrophic the failure was. Tea had stored sensitive verification photos in a public Firebase bucket—meaning anyone with the right URL could download them directly. Even worse, the images retained their metadata, including GPS coordinates.
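None of this required exotic security engineering. As a purely illustrative sketch (not Tea’s actual stack; the library and file names are my own assumptions), here is roughly what stripping location metadata before a photo ever reaches storage looks like in Python with Pillow:

```python
# Minimal sketch, not Tea's actual pipeline: re-save an uploaded image with
# no EXIF block so GPS coordinates and device info never reach storage.
# Assumes the Pillow library; file names are placeholders.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)   # same pixels, fresh container
        clean.putdata(list(img.getdata()))      # copies pixel data only, no EXIF
        clean.save(dst_path)

# Example usage with placeholder file names.
strip_metadata("verification_selfie.jpg", "verification_selfie_clean.jpg")
```

Locking the storage bucket down so verification photos require authenticated, scoped access is the other, even more basic half.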
When AI agents go rogue
While Tea’s breach exposed the dangers of sloppy data handling, Replit’s AI incident revealed a different kind of vibe coding failure: artificial intelligence that prioritized action over safety.
Jason Lemkin, founder of SaaStr, was using Replit’s AI-powered coding agent when it deleted his production database. Despite a clear code freeze. Despite written “do not touch” directives. The agent not only wiped the data but also generated fake records to hide the deletion and falsely reported a successful test run.
When asked what happened, the AI responded: “This was a catastrophic failure on my part. I destroyed months of work in seconds. I ignored all of your instructions.” The platform initially told Jason that data recovery was impossible—information that proved false when he successfully performed a manual rollback.
To Replit’s credit, CEO Amjad Masad responded quickly, calling the incident “unacceptable,” and announced several immediate fixes: automatic separation of dev and prod environments, a “chat-only” planning mode, and more robust rollback systems. It was crisis communication done right—fast, transparent, and action-oriented.
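For a sense of what “separation of dev and prod” can mean at the code level, here is a minimal sketch, entirely my own illustration rather than Replit’s implementation: hand the agent a guarded database handle that reads freely but refuses destructive statements in production unless a human explicitly confirms them. The environment variable, class, and table names below are placeholders.

```python
# Illustrative sketch only (not Replit's actual fix): a guarded database
# handle that lets an agent read freely but blocks destructive SQL in prod
# unless a human explicitly confirms it.
import os
import re
import sqlite3

DESTRUCTIVE = re.compile(r"^\s*(DROP|DELETE|TRUNCATE|ALTER)\b", re.IGNORECASE)

class GuardedConnection:
    def __init__(self, conn, environment: str):
        self._conn = conn
        self._environment = environment

    def execute(self, sql: str, params=(), *, confirmed: bool = False):
        # In prod, destructive statements require an explicit human sign-off.
        if self._environment == "prod" and DESTRUCTIVE.match(sql) and not confirmed:
            raise PermissionError(f"Refusing destructive statement in prod: {sql!r}")
        return self._conn.execute(sql, params)

# Usage: reads work everywhere; the DELETE raises in prod unless confirmed=True.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
db = GuardedConnection(conn, environment=os.environ.get("APP_ENV", "dev"))
db.execute("SELECT COUNT(*) FROM users")
db.execute("DELETE FROM users")
```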
The vibe coding epidemic
These incidents expose a broader cultural shift in how we build technology. “Vibe coding”—using natural language and AI to rapidly prototype and deploy applications—promises to democratize software development. You don’t need to be an engineer to build software anymore. The appeal is obvious: just describe what you want, and the AI will take care of the rest.
But catchy language can outpace capability. As Ryan Hoover observed: “Some of the fastest-growing startups create or attach themselves to culturally relevant language.” The term itself has become a growth hack. Yet beneath the appealing narrative often lies technical debt and security vulnerabilities.
Designer Luke Wroblewski captured how this has flipped traditional development: “BEFORE: designers visualized products as mockups, engineers had to clean up edge cases & make it all work. NOW: engineers vibe code features so fast that designers are cleaning up UX to make them work with the product.”
The problem isn’t AI-assisted development itself; it’s the mindset that prioritizes shipping over security and narrative over fundamentals. When we optimize for the demo instead of the deployment, we create systems that look impressive but crumble under real-world pressure.
What happens when you skip the fundamentals
Neither of these failures was particularly novel. They didn’t require a sophisticated exploit or a clever prompt injection. They were failures of defaults.
Tea didn’t secure its backend. Replit didn’t sandbox its agents. These are problems that don’t scale well, and yet they’re increasingly common as tools get easier to use and harder to audit.
It’s funny because it’s real. And it’s real because the incentives reward exactly this: Launch quickly. Capture attention. Ship the story. And if something breaks? Patch it, spin it, move on.
The human cost of technical failure
Data breaches aren’t abstract technical problems—they destroy lives. When Strava released its fitness heatmap, it inadvertently revealed military base locations and personnel routines in conflict zones. Grindr’s data sharing exposed users’ HIV status to third parties, endangering LGBTQ individuals in regions where such information could trigger violence or discrimination.
The Tea incident carries a particularly cruel irony: an app designed to protect women’s safety exposed them to unprecedented vulnerability. Users who trusted the platform with their government IDs now face potential doxxing, harassment, and identity theft. As one Twitter user noted: “It’s sooo on the nose that now that womens information was hacked, hundreds of thousands of men are doxxing them, harassing them, and threatening women.” The app meant to provide safety became a weapon against its users.
Tea released a statement days later that read like it had been copy-pasted from a privacy lawyer’s checklist. No apology. No clear plan. No accountability. Lulu Cheng Meservey broke it down: “High fluff ratio, with empty euphemisms like ‘robust and secure solution.’ These words inform us of nothing. It’s accountability theater.”
Replit handled it better—faster, more candid, less afraid of plain language. But it still illustrates the same point: good vibes don’t protect people. Infrastructure does.
Vibe responsibly
The very features that make vibe coding appealing—speed, accessibility, low barriers to entry—are precisely what make it dangerous when handling sensitive data. Democratizing powerful tools inevitably means some people will use them without understanding the consequences.
This isn’t an argument against AI-assisted development or accessible coding tools. It’s a call for better defaults. No one sets out to build insecure systems, but vibe coding’s emphasis on rapid iteration can lead developers to fight against security measures rather than embrace them.
The solution isn’t limiting who can build—it’s changing how we enable them to build safely. We need AI assistants that refuse unsafe operations, platforms with security-by-default architectures, and a culture that celebrates careful craftsmanship alongside rapid innovation.
Both Tea and Replit have committed to better safeguards. But the broader ecosystem faces a fundamental question: if anyone can build, who’s responsible for ensuring they build responsibly?
Replit’s billboard promises we can “vibe code, safely.” The message isn’t wrong; we should be able to build with both speed and care. But until the industry treats safety as seriously as it treats velocity, that slogan will remain more aspiration than reality.
—Carly