AttributeX AI

Vibe Coded App Audit: 48-Hour Fixed-Price Report

48-hour audit of your AI-built app. $2,500 flat. PDF report with a prioritized fix list, a debrief call, and the fee credited in full toward any rescue engagement.

You shipped something real with Cursor, Lovable, Bolt, or Replit Agent. It works. Users are signing up. But you have no idea what is actually broken under the hood, and every vendor you call wants a two-week discovery engagement before they will tell you.

The Vibe Coded App Audit is a fixed-scope, fixed-price, 48-hour diagnostic. We plug into your repo on Monday morning. You get a 20-page PDF report with a ranked fix list on Wednesday afternoon. Then we jump on a 30-minute debrief call to walk you through it. Flat $2,500. No scope negotiation. No surprise invoices. The fee is credited in full toward any rescue or production engineering engagement you book within 30 days.

This exists because founders kept asking us, "Can you just look at my app and tell me what is broken before I commit to a $25K engagement?" The answer is yes, and this is how.

What Gets Audited

Six areas. Same checklist every time. Same deliverable format every time. Productized on purpose.

1. Architecture

  • Data flow diagram of what your app actually does (often different from what the AI tool thought it was building).
  • Module boundaries and coupling. How much does the auth layer know about billing? How much does billing know about email?
  • Third-party dependency map. Which services does a single user request hit?
  • Framework-level anti-patterns: useEffect data fetching in Next.js App Router, API routes that should be Server Actions, client components that should be server components.

2. Security

  • Secret scan across the repo and build output. Any key prefix: sk_, sbp_, pk_live_, AKIA, ghp_, Bearer tokens in client bundles.
  • Row-level security state on every Postgres/Supabase table.
  • CORS configuration on every API route.
  • Auth flow review: token storage, refresh handling, session rotation, logout invalidation.
  • Input validation on every mutation endpoint. SQL injection, XSS, prototype pollution checks.
  • Dependency audit: npm audit, Snyk scan, Socket.dev unmaintained package flags.
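The key-prefix scan in the first bullet boils down to pattern matching over the repo and build output. A minimal sketch of the idea, using the prefixes listed above (this is an illustration only, not trufflehog's actual ruleset — real scanners also verify candidates against the provider's API):

```typescript
// Minimal sketch of a key-prefix scan. Real tools (trufflehog, etc.)
// additionally verify each candidate against the provider's API;
// this only flags suspicious-looking strings.
const SECRET_PATTERNS: [string, RegExp][] = [
  ["stripe_secret",    /sk_(live|test)_[A-Za-z0-9]{16,}/],
  ["stripe_live_key",  /pk_live_[A-Za-z0-9]{16,}/],
  ["supabase_key",     /sbp_[A-Za-z0-9]{16,}/],
  ["aws_access_key",   /AKIA[A-Z0-9]{16}/],
  ["github_token",     /ghp_[A-Za-z0-9]{36}/],
  ["bearer_token",     /Bearer\s+[A-Za-z0-9\-._~+\/]{20,}/],
];

export function scanForSecrets(text: string): string[] {
  const hits: string[] = [];
  for (const [name, pattern] of SECRET_PATTERNS) {
    if (pattern.test(text)) hits.push(name);
  }
  return hits;
}
```

Running this over a client bundle is often enough to catch the most common leak: a server key that the build inlined into browser-served JavaScript.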

This track maps directly to the eight security vulnerabilities we find in every AI-generated codebase.

3. Performance

  • Top 20 database queries by frequency and latency, profiled with EXPLAIN ANALYZE.
  • N+1 detection across ORM call sites.
  • Index coverage report: missing indexes on foreign keys, filter columns, and sort columns.
  • Bundle size analysis. Client JS over 300KB gzipped gets flagged.
  • Core Web Vitals snapshot via Lighthouse on your three highest-traffic routes.
  • API p50/p95/p99 latency on the five most-called endpoints.
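The N+1 pattern from the second bullet is worth seeing concretely. A simplified sketch, with a stubbed query counter standing in for a real ORM (the function and table names are illustrative):

```typescript
// Simplified N+1 illustration: a stubbed query() stands in for a real
// ORM call. The table and function names are hypothetical.
let queryCount = 0;
function query(_sql: string): void { queryCount++; }

// N+1: one query for the list, then one extra query per row.
function loadOrdersNPlusOne(userIds: number[]): number {
  queryCount = 0;
  query("select * from orders");
  for (const id of userIds) {
    query(`select * from users where id = ${id}`); // fires once per order
  }
  return queryCount; // 1 + N round trips
}

// Fixed: batch the lookups into a single IN query.
function loadOrdersBatched(userIds: number[]): number {
  queryCount = 0;
  query("select * from orders");
  query(`select * from users where id in (${userIds.join(",")})`);
  return queryCount; // always 2 round trips, regardless of N
}
```

The symptom in production is latency that scales linearly with result-set size, which is exactly what the pg_stat_statements profile surfaces.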

4. Database Schema

  • Normalization review. Denormalized columns that should be joins. JSON columns hiding relational structure.
  • Missing foreign key constraints (Lovable frequently generates these as text references).
  • Nullable columns that should be NOT NULL. Columns without defaults that should have them.
  • Migration history review. Destructive migrations without backfill. Breaking changes without versioning.
  • Backup and point-in-time recovery configuration.

5. Auth and Authorization

  • Token lifecycle: issue, refresh, rotate, revoke.
  • Session fixation and session hijacking vectors.
  • RBAC completeness: every privileged route has a server-side role check, not just a frontend guard.
  • Password reset flow review. Email verification review. MFA state.
  • OAuth provider integration edge cases: expired tokens, revoked scopes, provider downtime.
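The RBAC point above — a server-side role check on every privileged route, not just a frontend guard — reduces to a check like this. A minimal sketch; the routes and roles are hypothetical, and the user object is assumed to come from a verified session:

```typescript
// Minimal server-side RBAC sketch. The user must come from a verified
// session (e.g. the result of a server-side auth lookup), never from a
// client-supplied header or cookie value.
type Role = "admin" | "member" | "viewer";
interface SessionUser { id: string; role: Role; }

// Hypothetical route-to-role map for illustration.
const ROUTE_ROLES: Record<string, Role[]> = {
  "/api/admin/users": ["admin"],
  "/api/billing":     ["admin", "member"],
};

export function isAuthorized(user: SessionUser | null, route: string): boolean {
  if (!user) return false;               // no verified session: deny
  const allowed = ROUTE_ROLES[route];
  if (!allowed) return false;            // unlisted route: deny by default
  return allowed.includes(user.role);
}
```

The deny-by-default branch matters: the common AI-generated failure is a frontend guard that hides the button while the API route itself accepts any authenticated request.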

6. Deploy Pipeline

  • Environment separation (dev, staging, production) and env var hygiene.
  • Build reproducibility. Does npm ci on a clean checkout produce the same artifact?
  • Rollback capability. Can you revert to the previous deployment in under 5 minutes?
  • Observability: logs, metrics, traces, alerts. Presence and quality of each.
  • Incident response readiness: runbooks, on-call rotation, status page.

The 48-Hour Timeline

Hour 0 — Kickoff

You sign a mutual NDA, add us as a read-only collaborator on your repo, and share read-only credentials to your Supabase/Neon/Railway dashboard. We kick off a 15-minute scoping call to confirm which endpoints handle the most traffic and where you suspect problems.

Hours 1-16 — Automated scans + manual review

We run the automated tooling stack first:

# Secret scan
trufflehog filesystem ./ --only-verified

# Dependency audit
npm audit --production
npx snyk test

# Database profile (read-only)
psql $DATABASE_URL -c "
  select query, calls, mean_exec_time, max_exec_time
  from pg_stat_statements
  order by mean_exec_time desc
  limit 20;"

# Bundle analysis
ANALYZE=true npm run build

# Lighthouse on top routes
lighthouse https://yourapp.com/ --output=json --quiet

Then a senior engineer manually reads through auth, billing, and any route handling PII. Automated tools catch about 60% of the issues we report; the other 40% require eyes on the code.

Hours 17-32 — Architecture walkthrough and deep dives

We build the data flow diagram, trace a real user request end to end, and dig into anything the automated scans flagged. This is where we find the subtle bugs: the idempotency key that is actually a client-generated UUID (not idempotent across retries), the RBAC check that happens in useEffect (bypassable with a plain curl request), the process.env.STRIPE_SECRET that is read at module load and cached (blocks key rotation).
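The idempotency bug is worth unpacking: a client-generated UUID changes on every retry, so the server never recognizes a duplicate. The fix is to derive the key server-side from the request's own contents. A sketch, with illustrative field names:

```typescript
import { createHash } from "node:crypto";

// Derive an idempotency key from what the request *is*, not from a
// client-generated UUID. Same user, same operation, same payload =>
// same key, so a retried request is recognized as a duplicate.
// Note: JSON.stringify is order-sensitive; a production version should
// canonicalize the payload (sorted keys) first.
export function idempotencyKey(
  userId: string,
  operation: string,
  payload: unknown
): string {
  return createHash("sha256")
    .update(`${userId}:${operation}:${JSON.stringify(payload)}`)
    .digest("hex");
}
```

With a key like this, the handler can do a lookup-before-execute and return the stored result for any retry of the same logical operation.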

Hours 33-44 — Report drafting

Every finding is written up in a consistent format:

  • Severity: Critical, High, Medium, or Low
  • Category: Security, Performance, Database, Auth, Architecture, Deploy
  • Symptom: What the user or operator would observe
  • Root cause: What the code is actually doing wrong
  • Fix: Concrete remediation with code or config snippets
  • Effort: Hours to fix, at our engineering rate

The report ends with a prioritized remediation plan: what to fix this week, what to fix this month, what can wait. Plus a rough engagement estimate if you want us to execute the fixes.

Hours 45-48 — Delivery and debrief

PDF lands in your inbox. We schedule a 30-minute walkthrough call for the same or next day. You leave the call with a ranked fix list and a clear decision: fix it yourself, hire a freelancer, or book a rescue engagement with us.

A Real Finding From a Recent Audit

One Lovable-built SaaS had this middleware:

// src/middleware.ts — as generated by Lovable
import { NextRequest, NextResponse } from 'next/server';

export function middleware(request: NextRequest) {
  const token = request.cookies.get('sb-access-token');
  if (!token) {
    return NextResponse.redirect(new URL('/login', request.url));
  }
  return NextResponse.next();
}

The middleware checked that a cookie named sb-access-token existed. It never verified the token. Any authenticated-looking cookie value let the request through. A user could set document.cookie = 'sb-access-token=anything' in the browser console and access the entire app.

The fix took 15 minutes:

import { NextRequest, NextResponse } from 'next/server';
import { createServerClient } from '@supabase/ssr';

export async function middleware(request: NextRequest) {
  const response = NextResponse.next();
  const supabase = createServerClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
    {
      cookies: {
        getAll: () => request.cookies.getAll(),
        setAll: (cookies) => cookies.forEach(({ name, value, options }) =>
          response.cookies.set(name, value, options)),
      },
    }
  );
  const { data: { user } } = await supabase.auth.getUser();
  if (!user) {
    return NextResponse.redirect(new URL('/login', request.url));
  }
  return response;
}

Finding the bug took 20 minutes of reading auth-adjacent files. Without the audit, it would have shipped to production indefinitely. This is the category of issue the audit exists to catch.

Three More Failure Modes We Catch Constantly

Failure: Stripe webhook signature verification skipped. Symptom: /api/webhooks/stripe accepts any POST body as a valid Stripe event. Any attacker can mark invoices as paid. Fix: verify stripe-signature header with stripe.webhooks.constructEvent before processing.
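The verification that constructEvent performs reduces to an HMAC check over the raw request body. A simplified sketch of the scheme (Stripe signs `${timestamp}.${rawBody}` with your webhook secret; production code should use stripe.webhooks.constructEvent, which also enforces a replay-protection tolerance window on the timestamp):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Simplified sketch of Stripe-style webhook verification. The
// stripe-signature header looks like "t=<timestamp>,v1=<hex sig>",
// and the signature is HMAC-SHA256 of `${t}.${rawBody}` keyed by the
// webhook signing secret. Use stripe.webhooks.constructEvent in
// production; it also checks the timestamp tolerance.
export function verifyWebhook(
  rawBody: string,
  header: string,
  secret: string
): boolean {
  const parts = Object.fromEntries(
    header.split(",").map((kv) => kv.split("=") as [string, string])
  );
  const expected = createHmac("sha256", secret)
    .update(`${parts.t}.${rawBody}`)
    .digest("hex");
  const given = Buffer.from(parts.v1 ?? "", "utf8");
  const want = Buffer.from(expected, "utf8");
  return given.length === want.length && timingSafeEqual(given, want);
}
```

Without this check, anyone who discovers the webhook URL can POST a hand-crafted "invoice.paid" event and the handler will believe it.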

Failure: NEXT_PUBLIC_ prefix on server secrets. Symptom: API keys embedded in the client bundle, visible in browser DevTools. The NEXT_PUBLIC_ prefix ships env vars to the browser. AI tools frequently add this prefix to every env var by default. Fix: rename the variable, rotate the exposed key, and rebuild.

Failure: No database statement timeout. Symptom: one runaway query holds a connection for hours, connection pool exhausts, app goes down. Fix: ALTER DATABASE yourdb SET statement_timeout = '30s'; plus per-query timeouts in the client.

Who Should Order This

  • Founders deciding whether to rebuild or rescue. The audit gives you a ranked fix list. You can then decide if the work is a $10K quick fix or a $50K rebuild. See rebuild vs rescue for the decision framework.
  • Technical founders who want a second opinion before committing to a multi-week engagement with any vendor, including us.
  • Non-technical founders who inherited a vibe coded codebase from a contractor and need to know what they actually bought.
  • Due diligence buyers. Acquirers evaluating a small AI-built SaaS, investors doing technical DD on a seed-stage portfolio company.

If you need the fixes executed, not just identified, skip the audit and book a rescue engagement directly — the full engagement includes audit work up front.

Frequently Asked Questions

What does the $2,500 include?

One read-only audit by a senior engineer, a 20-page PDF report with every finding scored and ranked, and a 30-minute debrief call. Nothing hidden, nothing extra.

Why 48 hours and not a week?

Because the checklist is the same every time. We productized it on purpose. A week-long audit is a week-long consulting engagement with a report at the end — you would be paying for discovery work we have already done across 50+ apps.

Is the $2,500 refundable if you find nothing?

We have never run an audit and found nothing. If we ever do, we refund in full. In practice, the audit reliably surfaces three to five critical issues, because the failure modes in vibe coded apps are consistent.

Do I need to give you write access to my repo?

No. Read-only is enough. We run scans, read code, and write the report. You stay in control of your main branch.

Can I use the report to hire a different vendor?

Yes. The report is yours. No lock-in clause. If you take it to another agency or hire a freelancer to execute the fixes, that is your call. We would rather deliver a useful report than trap you in a relationship.

What if my app is not built with Cursor/Lovable/Bolt?

The audit works on any Next.js, React, Node, or Python web app regardless of how it was written. The failure modes we catch exist in hand-written code too — AI tools just produce them at higher density.

How is this different from running Snyk or Sonar myself?

Automated scanners catch about 60% of what we find. The other 40% are logic bugs, architectural anti-patterns, and missing controls that tools cannot detect. You need a senior engineer reading the code for those. Our report combines both.

Can I expense this as due diligence?

Yes. We invoice as "Technical due diligence audit — AttributeX" and most investors and acquirers accept that on expense reports without further justification.

Get Your Audit Report in 48 Hours

Fixed scope. Fixed price. Fixed timeline. No negotiation, no scope creep, no surprise invoices.

Start by requesting a free production audit; that kicks off the paid 48-hour audit. We respond within a few hours with NDA paperwork, a repo access checklist, and a calendar link to schedule the kickoff call. The credit policy is simple: if you book any rescue or production engineering engagement within 30 days, the $2,500 audit fee is credited in full toward that engagement.

Ready to ship your AI app to production?

We help funded startups turn vibe-coded prototypes into production systems. $10K-$50K engagements. Results in weeks, not months.

Get Your Free Audit