I Audited 50 Vibe-Coded Apps. Here's What I Found.
After reviewing 50 apps built with Cursor, Lovable, Bolt, and Replit, the same problems keep showing up. Here are the real numbers — and what every vibe coder should fix before launching.
Over the past few months, we've audited 50 apps built entirely with AI coding tools — Cursor, Lovable, Bolt, Replit, v0, Windsurf, and GPT Engineer. These were real projects from real founders, most of whom were weeks away from launching.
The findings were consistent enough to publish. Not to scare anyone — vibe coding works — but because the same fixable problems show up in nearly every codebase. Here's what the data looks like.
The numbers
Out of 50 apps audited:
- 47 (94%) had at least one critical security vulnerability
- 43 (86%) had API keys or secrets exposed in client-side code
- 41 (82%) had no input validation on user-facing forms
- 38 (76%) had no rate limiting on any endpoint
- 36 (72%) had database tables with no row-level security
- 34 (68%) had no error boundaries or global error handling
- 29 (58%) had no tests of any kind
- 23 (46%) would crash on common edge cases (expired session, empty API response, network timeout)
Not a single app passed all 15 checks in our security checklist. The average app failed 9 out of 15.
The most common vulnerability: exposed secrets
86% of the apps we audited had secrets in client-side code. This isn't a subtle issue — it means anyone who opens browser dev tools can see your API keys, database URLs, or third-party service tokens.
The pattern is always the same. The AI generates a .env file with everything prefixed with NEXT_PUBLIC_ or VITE_, making it available to the browser. Or it puts API keys directly into React components because that's the fastest way to make the feature work.
What to do: Audit every environment variable. Anything that isn't a public-facing key (like a Supabase anon key) must be moved to server-side code — API routes, server actions, or edge functions. And treat any secret that has already shipped to the browser as compromised: rotate it after you move it.
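As a rough sketch of what that looks like in a Next.js App Router project (the route name, field names, and email addresses are illustrative), the secret lives in a server-only route handler and the browser only ever calls the route:

```ts
// app/api/contact/route.ts — the key stays on the server; the browser never sees it
import { NextResponse } from "next/server";

export async function POST(request: Request) {
  const { email, message } = await request.json();

  // Server-only secret: no NEXT_PUBLIC_ prefix, so it is never bundled for the browser
  const apiKey = process.env.RESEND_API_KEY;

  const res = await fetch("https://api.resend.com/emails", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      from: "Contact Form <hello@example.com>",
      to: ["you@example.com"],
      subject: "New contact form message",
      text: `${email}: ${message}`,
    }),
  });

  if (!res.ok) {
    return NextResponse.json({ error: "Failed to send" }, { status: 502 });
  }
  return NextResponse.json({ ok: true });
}
```

The form on the client then POSTs to /api/contact and never touches the key.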
The second most common: no input validation
82% of the apps sent user input straight to the database without any validation. Forms accepted any length, any format, any content. This opens the door to injection attacks, data corruption, and abuse.
AI tools generate forms that look polished but skip the invisible work. There's no length limit, no type checking, no sanitisation. The form looks like it works because it does — until someone submits a 50,000-character string or a script tag.
What to do: Add validation on both client and server. Zod schemas work well for this — define the shape once and validate in both places. Never trust client-side validation alone. For the full approach, see our production checklist.
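A minimal sketch of the shared-schema approach, assuming a contact form with email and message fields (the field names and limits are illustrative):

```ts
import { z } from "zod";

// Define the shape once, reuse it on the client (form) and the server (API route)
export const contactSchema = z.object({
  email: z.string().email().max(254),
  message: z.string().min(1).max(2000),
});

export type ContactInput = z.infer<typeof contactSchema>;

// Server-side gate: reject anything the schema doesn't allow before it touches the database
export function parseContact(body: unknown): ContactInput {
  const result = contactSchema.safeParse(body);
  if (!result.success) {
    throw new Error("Invalid input");
  }
  return result.data;
}
```

Import contactSchema in the form for instant feedback, and call parseContact (or safeParse directly) in the API route so the server enforces the same rules even when the client is bypassed.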
Missing rate limiting is universal
76% of apps had zero rate limiting. Every API endpoint accepted unlimited requests from any source. A single user — or bot — could submit thousands of requests per minute.
Without rate limiting, anyone can spam your contact form until your email sending quota is gone and Resend charges you for the overage. It also makes brute-force attacks on login endpoints trivial.
What to do: Add rate limiting to every public-facing endpoint. Upstash Redis with sliding window limits is the standard approach for serverless apps. We typically set 10 requests per IP per day for form submissions and 5 per hour for authentication attempts.
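Here's a minimal sketch using @upstash/ratelimit with the sliding-window limits above (the prefix and helper name are illustrative, and it assumes Upstash REST credentials are set in the environment):

```ts
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

// Sliding-window limiter: 10 requests per IP per day for form submissions
const formLimiter = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, "1 d"),
  prefix: "ratelimit:contact",
});

export async function checkFormLimit(ip: string) {
  const { success, remaining } = await formLimiter.limit(ip);
  return { allowed: success, remaining };
}
```

Call checkFormLimit with the requester's IP at the top of the route handler and return a 429 when allowed is false; a second limiter with a 5-per-hour window covers authentication attempts.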
Database security is an afterthought
72% of Supabase-backed apps had tables with RLS either disabled or set to USING (true), meaning any authenticated user could read or modify any other user's data. This is especially common in Lovable-built apps.
The root cause is that AI tools optimise for making things work quickly. Disabling RLS is the fastest way to get a query working, so that's what the AI does. It rarely goes back to lock things down.
What to do: Enable RLS on every table. Write policies that check auth.uid() against an owner column. Test by logging in as one user and trying to access another user's data.
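That last test step can be scripted with supabase-js. A minimal sketch, assuming a projects table with a user_id owner column (the table, column, and credentials are illustrative):

```ts
import { createClient } from "@supabase/supabase-js";

// Use the public URL and anon key — the point is to test what a normal user can reach
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

async function testRls(otherUserId: string) {
  // Sign in as user A
  await supabase.auth.signInWithPassword({
    email: "user-a@example.com",
    password: "test-password",
  });

  // Try to read rows owned by user B
  const { data, error } = await supabase
    .from("projects")
    .select("*")
    .eq("user_id", otherUserId);

  // With a correct policy (auth.uid() matching user_id), this should return zero rows
  console.log({ rows: data?.length ?? 0, error });
}
```

If the query comes back with user B's rows, the policy is missing or too permissive.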
Error handling: the invisible gap
68% of apps had no error boundaries, no global error handler, and no fallback UI for failed states. When something goes wrong — and it will — the user sees a blank white screen or a cryptic error message.
AI tools build for the happy path. The feature works when the API returns the right data in the right format at the right time. But production means slow networks, expired tokens, rate-limited third-party APIs, and malformed responses. Without error handling, any of these crashes the app.
What to do: Add a global error boundary in your React app. Handle loading, error, and empty states for every data fetch. Log errors to a service like Sentry so you know when things break.
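A minimal error boundary sketch in React (the component name and fallback are illustrative; in Next.js App Router, an error.tsx file per route segment serves the same purpose):

```tsx
import { Component, type ReactNode } from "react";

type Props = { children: ReactNode; fallback: ReactNode };
type State = { hasError: boolean };

// Catches render errors anywhere below it and shows a fallback instead of a blank screen
export class GlobalErrorBoundary extends Component<Props, State> {
  state: State = { hasError: false };

  static getDerivedStateFromError(): State {
    return { hasError: true };
  }

  componentDidCatch(error: Error) {
    // Report to your error tracker here, e.g. Sentry.captureException(error)
    console.error(error);
  }

  render() {
    return this.state.hasError ? this.props.fallback : this.props.children;
  }
}
```

Wrap the app (or each major route) in this component so a single failed render degrades to a friendly fallback instead of a white screen.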
How results varied by tool
The issues were consistent across all tools, but the distribution varied:
Lovable apps had the highest rate of database security issues (RLS disabled in 90% of apps) but generally had cleaner UI code. The biggest risks are on the backend — see our detailed Lovable security breakdown.
Cursor apps had better code structure overall but more inconsistency — some files would be excellent while others were clearly generated in a different session with different prompts. Type safety was loose in 80% of cases. Here's what a senior dev typically finds in Cursor code.
Bolt apps had the most client-side secrets (92%) because StackBlitz's browser-based environment blurs the line between client and server code. They were also the least likely to have any real backend implementation.
Replit apps had the most deployment concerns — vendor lock-in, hardcoded hosting assumptions, and environment configuration that only works on Replit's platform.
For a full breakdown of how each tool compares, see our tool comparison.
What separates apps that launched from apps that didn't
Of the 50 apps we audited, 31 went on to launch successfully. The founders who launched had one thing in common: they stopped adding features. They fixed security, set up monitoring, and shipped. The ones who didn't launch? They kept prompting. More features, more complexity, no foundation.
Specifically, every app that launched successfully had done three things: fixed exposed secrets and enabled database security, added error monitoring before their first real user, and sought an audit before launching rather than after something broke.
The takeaway
Vibe coding gets you to 80% fast. That last 20% is security, error handling, validation, rate limiting, and deployment configuration. It's not glamorous work, but it's the difference between a demo and a product.
If you've built something and want to know where it stands, use our 15-point production readiness checklist to score your app yourself.
Or request a free audit and we'll send you a 5-point security snapshot within 48 hours — no commitment, no sales pitch. Just an honest assessment of what needs fixing before you launch.