AttributeX AI

AI Built App Performance Problems

13 min read

You open Chrome DevTools and run a Lighthouse audit on your AI-built app. Performance: 35. Accessibility: 62. Best Practices: 51. SEO: 44.

Your competitor — the one with a smaller team and less funding — scores 95 across the board. Their pages load in 1.2 seconds. Yours take 7 seconds. Their Core Web Vitals pass. Yours fail every metric. Google ranks them on page one. You are on page four.

The gap is not design talent or product quality. It is client-side performance engineering — the discipline of making web applications fast. AI coding tools do not practice this discipline. They generate code that is functionally correct and performance-hostile.

In our audit of 50 vibe coded apps, the average Lighthouse Performance score was 38. After production engineering, the average was 92. Same apps. Same features. Same hosting. The difference was how the code was structured and delivered.

AI tools prioritize function over performance

This is not a criticism. It is a design reality. Cursor, Copilot, and Claude are trained to generate code that works. "Working" means: renders the correct UI, fetches the correct data, handles the user interaction. These tools are exceptional at this.

"Performant" means: renders the correct UI in under 100 milliseconds, fetches only the data needed for the visible viewport, uses the minimum JavaScript required for the interaction. These are optimization constraints that AI tools do not apply unless specifically prompted — and even then, they apply them inconsistently.

The result is apps that feel fine on your MacBook Pro with a gigabit connection and crawl on your user's Android phone with a 4G connection in São Paulo.

The 7 client-side performance killers

1. Bundle sizes that dwarf the application logic

AI tools import entire libraries for single functions. Need to format a date? The AI imports moment.js (72KB gzipped) instead of using Intl.DateTimeFormat (0KB — built into the browser). Need an icon? The AI imports the entire icon library (150KB) instead of importing the individual icon component (2KB).
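The date-formatting swap is a good illustration: it needs no library at all, because the browser ships a formatter. A minimal sketch replacing a typical moment.js call:

```javascript
// Zero-dependency date formatting with the built-in Intl API,
// replacing a moment.js call like moment(date).format('MMMM D, YYYY').
const formatter = new Intl.DateTimeFormat('en-US', {
  year: 'numeric',
  month: 'long',
  day: 'numeric',
});

function formatDate(date) {
  return formatter.format(date);
}

// formatDate(new Date(2024, 0, 15)) → "January 15, 2024"
```

Constructing the formatter once and reusing it matters: `Intl.DateTimeFormat` construction is relatively expensive, while `format()` calls are cheap.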

We analyzed the JavaScript bundles of 50 vibe coded apps. The average initial bundle was 2.4MB. After removing unused imports and switching to tree-shakeable alternatives, the average dropped to 480KB. That is an 80% reduction in JavaScript the user's browser must download, parse, and execute before the page becomes interactive.

The specific offenders we find most often: entire lodash library (71KB) when 2-3 functions are used, full date libraries when native APIs suffice, icon libraries imported at the package level, animation libraries loaded for simple transitions that CSS handles natively, and charting libraries that ship their own copy of d3.

Each unnecessary kilobyte adds roughly 1 millisecond of parse time on a mid-range mobile device. A 2MB excess bundle adds 2 seconds to Time to Interactive — the point where the user can actually click buttons and type in fields.

2. Zero code splitting

AI-generated apps ship a single JavaScript bundle containing the code for every page. Visit the login page and your browser downloads the code for the dashboard, settings, admin panel, and every other page. The login page needs 50KB of JavaScript. The browser loads 2MB.

Code splitting divides the application into chunks loaded on demand. When a user visits /login, only the login chunk loads. When they navigate to /dashboard, the dashboard chunk loads. Next.js does this automatically with dynamic routes — but only if the code is structured to support it.

AI-generated code defeats automatic code splitting by importing shared modules at the root level, creating circular dependencies between pages, and using dynamic imports incorrectly (importing at module scope instead of within the component that needs it).
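In plain JavaScript terms, the difference is whether `import()` runs at module scope (the module downloads on page load regardless) or inside the code path that needs it (the bundler can split it into a chunk that loads on demand). A sketch, using Node's built-in `util` module as a stand-in for a heavy dependency:

```javascript
// Correct pattern: the dynamic import() lives inside the function that
// needs the module, so bundlers can emit it as a separate chunk that
// loads only when this code path actually runs.
let heavyModulePromise = null;

async function formatVerbose(template, value) {
  // node:util stands in for a heavy dependency; loaded once, on first use
  if (!heavyModulePromise) {
    heavyModulePromise = import('node:util');
  }
  const { format } = await heavyModulePromise;
  return format(template, value);
}

// Anti-pattern (defeats splitting):
//   const utilPromise = import('node:util');   // at module scope
// This starts the download the moment the file is evaluated, which is
// effectively the same cost as a static import.
```

In a Next.js codebase the same idea is usually expressed with `next/dynamic` at the component level; the underlying rule is identical: defer the import until the code path runs.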

After implementing proper code splitting, the typical initial page load drops from 2MB to 200-400KB. Subsequent page navigations load 50-150KB each. Total data transferred over a user session stays similar, but the initial load — the one that determines whether the user stays or bounces — drops by 80%.

3. Unoptimized images

Images are the single largest contributor to page weight. AI-generated apps serve images in their original format and resolution regardless of the display context. Common findings:

  • Hero images: 4000x3000 JPEG at 2.5MB, displayed at 1200x600. Should be WebP at 80KB.
  • Profile avatars: 1000x1000 PNG at 800KB, displayed at 48x48. Should be WebP at 3KB.
  • Product screenshots: 12 images loaded on page mount, only 4 visible above the fold. The other 8 should lazy-load.
  • Background images: Full-resolution photos used as background textures at 20% opacity. Could be 10x smaller with aggressive compression since detail is invisible.

The total image payload on a typical vibe coded landing page: 8-15MB. After optimization (WebP conversion, proper sizing, lazy loading, blur placeholders): 200-400KB. That is a 20-40x reduction.

Next.js provides the Image component that handles most of this automatically — responsive sizing, format conversion, lazy loading. AI tools generate standard img tags because that is the simpler pattern. The fix is mechanical: replace every img with next/image, configure image domains, and add appropriate sizes attributes. Our audit of 50 vibe coded apps found this pattern in virtually every AI-generated codebase.
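The configuration side of that fix is small. A sketch of the relevant next.config.js block, assuming images are served from a hypothetical cdn.example.com (substitute your own host):

```javascript
// next.config.js (sketch): let next/image optimize remote images.
// cdn.example.com is a placeholder for wherever your images actually live.
module.exports = {
  images: {
    // serve modern formats to browsers that support them
    formats: ['image/avif', 'image/webp'],
    // allowlist of remote hosts next/image may fetch and optimize
    remotePatterns: [
      { protocol: 'https', hostname: 'cdn.example.com' },
    ],
  },
};
```

With this in place, each `<img src="...">` becomes an `<Image src="..." width={...} height={...} sizes="..." />`, and the framework handles resizing, format negotiation, and lazy loading.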

4. Layout thrashing from poor CSS

AI-generated CSS causes layout instability — elements that shift position as the page loads. Google measures this as Cumulative Layout Shift (CLS), and it directly impacts search rankings.

The patterns that cause layout shift in vibe coded apps: images without explicit width and height (the browser reserves no space until the image loads), fonts that load after the initial render (causing text to reflow), dynamic content injected above the fold after page load (banners, notifications), and CSS that depends on JavaScript to compute layout.

The CLS score for the average vibe coded app in our audit was 0.42. Google considers anything above 0.1 "poor." After fixing the layout patterns — adding image dimensions, preloading fonts, reserving space for dynamic content — the average dropped to 0.04.

Layout thrashing also causes visual jank during interaction. Scroll handlers that read layout properties (offsetTop, getBoundingClientRect) force the browser to recalculate layout synchronously, causing dropped frames. AI-generated scroll effects (parallax, sticky headers, scroll-triggered animations) consistently cause 15-30% frame drops because they read layout in tight loops instead of using IntersectionObserver or CSS position: sticky.
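A scroll-triggered reveal, for instance, can be driven by IntersectionObserver instead of a scroll handler that reads layout on every frame. A sketch (browser API; the function name and the 100px margin are illustrative choices):

```javascript
// Reveal elements as they approach the viewport without reading layout
// inside a scroll handler. The browser computes intersections off the
// main layout path, so no forced synchronous reflow occurs.
function observeReveal(elements, onVisible) {
  const observer = new IntersectionObserver(
    (entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          onVisible(entry.target);
          observer.unobserve(entry.target); // fire once per element
        }
      }
    },
    { rootMargin: '100px' } // start slightly before the element scrolls in
  );
  for (const el of elements) observer.observe(el);
  return observer;
}
```

Compare this with the common AI-generated version: a `scroll` listener that calls `getBoundingClientRect()` on every element, on every scroll event, forcing synchronous layout each time.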

5. Missing lazy loading and virtualization

AI tools generate lists that render every item in the DOM immediately. A chat history with 500 messages creates 500 DOM nodes on page load. A data table with 1,000 rows creates 1,000 DOM rows. An image gallery with 200 photos creates 200 image elements.

The browser chokes. DOM nodes consume memory. Layout calculation scales with node count. Paint operations scan the entire tree. A page with 5,000 DOM nodes takes 200-400 milliseconds longer to become interactive than one with 500 nodes.

Lazy loading (rendering items only when they enter the viewport) and virtualization (rendering only the visible rows plus a small buffer) reduce DOM node count by 90-95%. A 500-message chat renders 15 messages at a time. A 1,000-row table renders 20 rows. As the user scrolls, old nodes are recycled for new content.
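The core of virtualization is a small piece of arithmetic: given the scroll position, decide which slice of the list to render. A sketch for fixed-height rows (the function name and buffer size are illustrative; libraries like react-window implement the same idea with more edge cases handled):

```javascript
// Compute which rows of a fixed-height list should be in the DOM.
// Only (viewport / rowHeight) + 2 * buffer rows are rendered,
// no matter how long the list is.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, buffer = 5) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - buffer);
  const visibleCount = Math.ceil(viewportHeight / rowHeight);
  const last = Math.min(totalRows - 1, first + visibleCount + 2 * buffer);
  return { first, last }; // render rows[first..last], absolutely positioned
}

// 1,000 rows, 40px tall, 600px viewport, scrolled to row 250:
// visibleRange(10000, 600, 40, 1000) → rows 245..270, about 26 nodes
// in the DOM instead of 1,000.
```

The container keeps its full scrollable height (totalRows * rowHeight) so the scrollbar behaves normally; only the rendered slice changes as scrollTop moves.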

The libraries exist (react-window, react-virtuoso, TanStack Virtual). AI tools do not reach for them because the simpler .map() pattern works — it is just slow when the list grows beyond what the developer tested with.

6. No caching headers on static assets

AI-generated apps serve every request fresh. Your JavaScript bundle, your CSS file, your fonts, your static images — none have cache headers. Every page visit downloads everything again.

Proper cache headers tell the browser: "This file will not change for one year. Do not request it again." With content hashing (filename includes a hash of the contents), cached files are automatically invalidated when they change.

Next.js and Vercel handle cache headers for hashed static assets automatically. But AI-generated apps often reference assets in ways that bypass the built-in caching: direct imports from /public, inline styles that reference external URLs, and dynamically constructed asset paths that do not go through the build pipeline.
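For assets that bypass the build pipeline (files served directly from /public, for example), Next.js can attach cache headers through next.config.js. A sketch; the /fonts path is a placeholder for whatever static assets your app serves this way:

```javascript
// next.config.js (sketch): long-lived, immutable caching for static files
// in /public that the build pipeline does not content-hash automatically.
module.exports = {
  async headers() {
    return [
      {
        source: '/fonts/:path*', // placeholder path; match your own assets
        headers: [
          // safe only if the file contents never change at this URL;
          // if a file changes, it must get a new filename
          { key: 'Cache-Control', value: 'public, max-age=31536000, immutable' },
        ],
      },
    ];
  },
};
```

The `immutable` directive tells the browser not even to revalidate the file; the one-year max-age makes repeat visits skip the request entirely.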

After configuring proper caching, repeat visits load in under 500 milliseconds because only the HTML (typically 10-30KB) is fetched fresh. All JavaScript, CSS, images, and fonts are served from the browser cache. For SaaS apps where users visit daily, this means 95% of their page loads are sub-second.

7. Waterfall API calls on page load

AI-generated pages fire multiple API calls sequentially as components mount. The page renders a shell. Component A mounts and fetches data. When Component A finishes, it renders Component B. Component B mounts and fetches its data. Component C depends on Component B's data, so it waits.

This creates a "waterfall" where each data fetch blocks the next. A page with 4 sequential data dependencies takes 4x as long to fully load as one where all data is fetched in parallel or on the server.

Server Components in Next.js solve this by fetching data during server rendering — the user receives a fully rendered page with data already populated. AI tools generate client-side fetching patterns (useEffect + useState) because that pattern is more common in their training data.

The waterfall pattern is particularly visible on authenticated dashboards. The page loads empty. The auth state resolves (200ms). The user data loads (300ms). The dashboard data loads (400ms). The notifications load (200ms). Total: 1.1 seconds of sequential fetching visible as a cascade of spinners and layout shifts.

Moving these fetches to the server and parallelizing them reduces the total to 400 milliseconds — and the user sees a complete page instead of progressive loading states.
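The difference is visible in miniature with timed promises. A sketch simulating the dashboard above (the delays are the figures from the example, not real API calls):

```javascript
// Simulate the dashboard fetches with timed promises.
const fetchAfter = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

// Waterfall: each await blocks the next fetch.
// Total ≈ 300 + 400 + 200 = 900ms.
async function loadWaterfall() {
  const user = await fetchAfter(300, 'user');
  const dashboard = await fetchAfter(400, 'dashboard');
  const notifications = await fetchAfter(200, 'notifications');
  return { user, dashboard, notifications };
}

// Parallel: all fetches start immediately.
// Total ≈ max(300, 400, 200) = 400ms.
async function loadParallel() {
  const [user, dashboard, notifications] = await Promise.all([
    fetchAfter(300, 'user'),
    fetchAfter(400, 'dashboard'),
    fetchAfter(200, 'notifications'),
  ]);
  return { user, dashboard, notifications };
}
```

Both functions return identical data; only the wall-clock time differs. In a Server Component, the same Promise.all runs during server rendering, so the user never sees the intermediate loading states at all.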

What these scores mean for your business

Google ranking. Core Web Vitals are a confirmed ranking signal. A Lighthouse Performance score of 35 means your CWV metrics (LCP, INP, CLS) are all failing. Google deprioritizes pages with poor CWV in search results. Your content marketing investment is undermined by your application performance.

User retention. Amazon found that every 100 milliseconds of latency costs 1% of sales. Google found that a 500-millisecond delay reduces traffic by 20%. Your 7-second load time is not just slow — it is actively driving users away before they see your product.

Mobile users. 60% of web traffic is mobile. Mobile devices have less CPU, less memory, and slower network connections than the MacBook you develop on. A page that takes 3 seconds on your laptop takes 12 seconds on a mid-range Android phone on 4G. That 12-second experience is what most of your users actually see.

Conversion rates. Pages that load in 1 second convert at 3x the rate of pages that load in 5 seconds. Your application form at /apply — the most important page on your site — is directly impacted by how fast it loads and how responsive it feels.

The fix: performance engineering, not design changes

Performance engineering is not redesigning your app. It is restructuring how your app delivers the same design to the browser. Same UI. Same features. Same interactions. Faster.

Production engineering for performance follows a specific protocol:

  1. Lighthouse audit every page, identify metrics below thresholds
  2. Bundle analysis: identify unused imports, missing tree shaking, oversized dependencies
  3. Implement code splitting by route, lazy loading for below-fold content
  4. Replace all img tags with next/image, configure proper sizing
  5. Fix layout shift sources: image dimensions, font preloading, space reservation
  6. Implement virtualization for large lists and tables
  7. Configure caching headers for all static assets
  8. Move data fetching from client-side waterfalls to server-side parallel
  9. Re-audit every page, verify all metrics pass

The typical result: Lighthouse Performance from 35 to 90+. Time to Interactive from 7 seconds to 1.5 seconds. Core Web Vitals from all failing to all passing.

The technical debt in vibe coded apps makes performance problems pervasive because the same inefficient patterns are replicated across every page. Production engineering fixes the patterns at the source, so every page benefits from a single optimization pass. For specific before/after metrics and a step-by-step walkthrough of the optimization process, see our AI app performance optimization service.

Frequently asked questions

How do I check my app's Lighthouse score?

Open Chrome DevTools (F12), go to the Lighthouse tab, select the "Performance" category and the "Mobile" device, and click "Analyze page load." The Mobile score is what matters: it reflects the experience most of your users have, and mobile is the experience Google's Core Web Vitals assessment measures for ranking.

What is a good Lighthouse Performance score?

90+ is production-grade. 70-89 is acceptable with room for improvement. Below 70 indicates significant performance issues that affect user experience and search ranking. The average vibe coded app scores 35-45.

Will performance optimization break my existing features?

No. Performance optimization changes how code is delivered and rendered, not what it does. Code splitting loads the same code in smaller chunks. Image optimization delivers the same images in better formats. Caching headers store the same assets locally. The user sees the same app. It just loads faster.

How much does performance optimization improve SEO?

Core Web Vitals are one of many ranking signals, but they are a threshold signal — below the threshold, you are penalized; above it, you are competitive. Moving from failing to passing CWV typically improves organic search visibility by 15-30% over 6-8 weeks. The SEO improvement compounds because higher rankings drive more traffic, which drives more engagement signals.

Can I fix performance by upgrading my hosting plan?

No — and understanding why is important. The gap between what your Cursor-built app delivers and what production demands is a code-level problem, not an infrastructure problem. Server-side performance (API response times, database queries) improves with better hosting. Client-side performance (bundle size, image optimization, rendering, caching) does not — those are code-level issues that execute in the user's browser regardless of your server infrastructure. Since most vibe coded app performance problems are client-side, upgrading hosting helps minimally.

My app loads fast on my computer. Why should I worry?

You are developing on a MacBook Pro with 16-32GB RAM, an M-series chip, and a 500Mbps connection. Your users are on Android phones with 4GB RAM, mid-range processors, and 4G connections. Test on Chrome DevTools' throttling mode (set to "Slow 4G" and "4x CPU slowdown") to see what your users experience.

How long does performance optimization take?

For a typical SaaS with 15-30 pages, a comprehensive performance pass takes 1-2 weeks. Bundle optimization and code splitting take 2-3 days. Image optimization takes 1-2 days. Data fetching restructuring takes 3-5 days. The full engagement includes before/after Lighthouse audits on every page to verify improvements.

Your users are waiting — literally

Every second your page takes to load is a user deciding whether to stay. Every failed Core Web Vitals metric is a search ranking you are losing. The performance gap between your vibe coded app and a production-grade competitor is measurable, fixable, and directly tied to revenue.

Apply for a performance-focused production audit. We will run Lighthouse on every page, identify every bottleneck, and optimize your app to score 90+ across the board.

Your product deserves to be fast. Your users expect it.

Ready to ship your AI app to production?

We help funded startups turn vibe-coded prototypes into production systems. $10K-$50K engagements. Results in weeks, not months.

Apply for Strategy Call