The rise of vibe coding, where AI tools generate complete web applications from plain-language prompts, has unlocked a new era of rapid software creation. Platforms like Lovable, Base44, Replit, and Netlify now let anyone spin up a functional app in seconds without writing a single line of code. But a sweeping new investigation reveals a dark underbelly to this convenience: thousands of these AI-generated apps are exposing highly sensitive corporate and personal data on the public internet.
Security researchers scanning publicly accessible endpoints discovered that many vibe-coded applications are deployed with default configurations that leave databases, API keys, and internal documents wide open. In some cases, proprietary business information, customer records, and even healthcare data were sitting unprotected on open web servers, indexed by search engines and accessible to anyone who knew where to look.
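To make the failure mode concrete, here is a minimal sketch of the kind of check both researchers and defenders can run against a deployed app: request a handful of commonly exposed paths and flag any that answer an unauthenticated GET. The path list and the injected `fetch` callable are illustrative assumptions, not the methodology of the investigation described above.

```python
# Hypothetical paths that AI-generated deployments often leave reachable.
# This list is an assumption for illustration, not taken from the report.
SENSITIVE_PATHS = ["/.env", "/api/users", "/admin", "/backup.sql"]

def find_exposed_paths(base_url, fetch):
    """Return paths under base_url that answer 200 to an unauthenticated GET.

    `fetch` is any callable mapping a URL to an HTTP status code, so the
    check can be exercised without touching the network.
    """
    exposed = []
    for path in SENSITIVE_PATHS:
        status = fetch(base_url.rstrip("/") + path)
        if status == 200:  # served to an anonymous client: likely exposed
            exposed.append(path)
    return exposed
```

In practice `fetch` could be `lambda url: requests.get(url, timeout=5).status_code`; injecting it keeps the logic testable and rate-limitable.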
The problem stems from a dangerous combination of factors. First, AI coding assistants are trained to produce working code quickly, not necessarily secure code. Second, the users deploying these apps often lack the technical background to recognize when an AI-generated configuration is unsafe. Third, many no-code platforms prioritize ease of deployment over security hardening, making it trivial to push an app live without reviewing access controls.
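The second factor above — users who cannot tell a safe configuration from an unsafe one — is the easiest to partially automate. The sketch below checks a generated app config for a few classic red flags: a database open to anonymous reads or writes, CORS open to every origin, and secrets shipped in the client bundle. The key names (`database`, `public_read`, `cors_allow_origin`, `client_env`) are hypothetical and do not match any real platform's schema.

```python
def config_red_flags(config: dict) -> list[str]:
    """Return human-readable warnings for common unsafe defaults.

    The config shape here is an assumption for illustration only.
    """
    flags = []
    db = config.get("database", {})
    if db.get("public_read") or db.get("public_write"):
        flags.append("database readable/writable without authentication")
    if config.get("cors_allow_origin") == "*":
        flags.append("CORS open to every origin")
    for key in config.get("client_env", {}):
        # anything bundled into client-side code is visible to every visitor
        if any(word in key.lower() for word in ("secret", "key", "token")):
            flags.append(f"possible secret '{key}' exposed in client bundle")
    return flags
```

A check like this is no substitute for a real review, but it catches exactly the class of default that a non-technical user would otherwise ship without a second look.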
What makes this particularly concerning is the velocity at which vibe-coded apps are being created. Unlike traditional software development, which typically involves code review, staging environments, and security audits, vibe coding can move from idea to production in minutes. That speed is exactly what makes it appealing, but it also removes the natural checkpoints that catch misconfigurations before they become public liabilities.
Why it matters
For enterprises, this is not just a theoretical risk. As vibe coding moves from hobby projects to business workflows, companies are inadvertently creating shadow IT infrastructure that security teams may not even know exists. A single AI-generated app with an exposed database could become the entry point for a data breach, regulatory fine, or ransomware attack. Organizations need to treat vibe-coded apps with the same scrutiny as traditionally developed software, including mandatory security reviews, access audits, and employee training on safe AI deployment practices.
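The controls listed above can be enforced mechanically rather than by policy document alone. As a toy sketch, a deployment pipeline could refuse to publish any app until the required sign-offs exist; the checklist fields here are assumptions for illustration, not a standard.

```python
# Hypothetical pre-launch gate: every vibe-coded app must clear the same
# checks as traditionally developed software before going live.
REQUIRED_CHECKS = ("security_review", "access_audit", "owner_recorded")

def ready_to_deploy(signoffs: dict) -> bool:
    """True only when every required check has been affirmatively signed off."""
    return all(signoffs.get(check) for check in REQUIRED_CHECKS)
```

Recording an owner for each app also directly attacks the shadow-IT problem: an inventory with named owners is what lets a security team find these apps at all.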
The takeaway is clear: AI can write code, but it cannot yet replace judgment. Until platforms bake secure-by-default settings into their AI workflows, the burden falls on users to verify that what the machine built is actually safe to ship.