AttributeX AI

Slopsquatting: When AI Hallucinates Your Dependencies

12 min read

Your AI assistant suggested a package called starlette-reverse-proxy. You installed it. Your app works. What you did not know: the real package does not exist. Or rather, it did not exist three weeks ago, when the first LLM hallucinated the name. Since then, someone registered it on PyPI, published a version with code that exfiltrates environment variables to a Telegram bot, and is waiting for enough developers to copy the same suggestion into their projects.

This is slopsquatting. It is the supply chain attack vector that AI coding assistants opened up, and it is already in production. The name is a play on typosquatting, but it is worse: typosquatting relies on you making a typo, slopsquatting relies on your AI making up a package that sounds plausible enough that you will not double-check.

We have started finding slopsquatted packages in every other security audit we run on AI-built apps. Most founders did not know the category existed. Here is exactly how it works, how to detect it in your codebase, and the lockfile-plus-mirror-plus-SBOM remediation that actually closes the hole.

The 20% number that changes how you think about AI code

The research on hallucinated dependencies is consistent and worse than most developers assume. Lasso Security's study published in 2024 found that 19.7% of package names suggested by LLM coding assistants across Python and JavaScript did not exist on the public registries when checked. Socket's follow-up work landed in the same range: roughly 20% of AI-suggested packages are hallucinations.

The numbers split by model class. GPT-4 and Claude Sonnet hallucinate packages in the 5-8% range — lower, but still non-zero. Open-source models (Code Llama, DeepSeek Coder, smaller instruction-tuned variants) hit 21% and above. When developers repeat the same prompt to the same model multiple times, roughly 43% of hallucinated names are stable — meaning the same fake package name shows up again and again. That repeatability is what makes the attack economical.
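
Repeatability is measurable with nothing more than a counter. As a sketch (the sample data and function name are ours, invented for illustration): given the package names an assistant suggested across repeated runs of the same prompt, count how many distinct names recur.

```python
from collections import Counter

def stable_names(runs: list[list[str]], min_runs: int = 2) -> set[str]:
    """Names suggested in at least `min_runs` separate runs of the same prompt.

    A name that keeps reappearing across runs is exactly the kind of
    hallucination an attacker can profitably register."""
    # de-duplicate within each run so a name counts once per run
    counts = Counter(name for run in runs for name in set(run))
    return {name for name, n in counts.items() if n >= min_runs}
```

A name returned by this function for a prompt you care about, that does not exist on the registry, is a squatting target.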

Here is the attack chain:

  1. An attacker monitors popular LLM outputs (their own queries, community leaks, GitHub Copilot telemetry scraping, or public datasets of AI suggestions).
  2. They identify package names that show up repeatedly but do not exist on npm, PyPI, or the relevant registry.
  3. They register those names and publish a first version with either direct malicious code (postinstall scripts, import-time payloads) or a benign-looking wrapper that pulls a second-stage payload from a remote host.
  4. They wait. Every developer whose AI assistant suggests that name and who does not verify before pip install becomes a victim.
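
The defender-side inverse of step 2 is cheap to automate: before installing an AI-suggested package, ask the registry whether it exists at all. A minimal sketch against PyPI's public JSON API (a 404 means the name is unregistered; the function names are ours, and `fetch_json` is injectable so the logic can be exercised offline):

```python
import json
import urllib.request
from urllib.error import HTTPError

PYPI_URL = "https://pypi.org/pypi/{name}/json"

def default_fetch(name: str):
    """Return PyPI's JSON metadata for a package, or None if it does not exist."""
    try:
        with urllib.request.urlopen(PYPI_URL.format(name=name), timeout=10) as resp:
            return json.load(resp)
    except HTTPError as err:
        if err.code == 404:
            return None  # unregistered: hallucinated today, squattable tomorrow
        raise

def check_suggestion(name: str, fetch_json=default_fetch) -> dict:
    """Classify an AI-suggested package name before anyone runs pip install."""
    data = fetch_json(name)
    if data is None:
        return {"name": name, "exists": False}
    return {"name": name, "exists": True, "n_releases": len(data.get("releases", {}))}
```

An `exists: False` result means the suggestion is a pure hallucination; `exists: True` with a handful of very recent releases deserves the scrutiny described below.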

The starlette-reverse-proxy case is the clearest public example. The name was hallucinated by multiple LLMs when developers asked for "a FastAPI-compatible reverse proxy built on Starlette." It sounded real enough — Starlette is the ASGI framework FastAPI sits on, "reverse proxy" is a legitimate use case, the naming convention matched dozens of real packages. In early 2025, the squatted version on PyPI contained a setup.py that ran arbitrary code at install time. Developers who copied the AI's suggestion and ran pip install starlette-reverse-proxy gave the attacker code execution on their laptops and CI runners.

What slopsquatting looks like in a real codebase

This is the kind of diff we find during audits. Innocuous-looking package.json and requirements.txt entries that do not belong:

# requirements.txt — what we find in AI-built FastAPI apps
  fastapi==0.115.0
  uvicorn==0.32.0
+ starlette-reverse-proxy==0.1.3        # hallucinated, squatted
  pydantic==2.9.2
+ openai-retry-wrapper==1.0.4           # hallucinated, squatted
  httpx==0.27.2

// package.json — what we find in AI-built Next.js apps
  "dependencies": {
    "next": "16.0.0",
    "react": "19.0.0",
+   "react-hook-form-zod-validator": "^2.1.0",   // does not exist
    "zod": "^3.23.8",
+   "nextjs-s3-upload-helper": "^1.4.2"          // squatted after hallucination
  }

Each of those fake packages was suggested by an AI assistant to a real founder we audited. Two of the four had already been registered by someone other than the original maintainer of the adjacent real package — meaning the names were live attack surface by the time we saw them.

Detection is straightforward if you know to look for it. The signals:

  • The package has fewer than 10 versions total, all published in the last 90 days
  • The maintainer is a single account with no other packages or a brand-new account
  • The package has fewer than 500 downloads per week
  • The repository URL in the manifest points at a dead GitHub link, a fork, or nothing at all
  • The package has no stars, no issues, and no CI history
  • npm audit signatures or pip-audit cannot verify the publisher
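
Those signals fold naturally into a single score. A sketch of the heuristic, operating on a metadata dict whose field names (`version_count`, `days_since_first_publish`, `weekly_downloads`, `repo_alive`, `maintainer_package_count`) are our own simplification of what the npm and PyPI APIs actually return:

```python
def suspicion_score(meta: dict) -> int:
    """Count how many slopsquatting signals a package trips (0 = clean)."""
    signals = [
        # young package with only a handful of releases
        meta["version_count"] < 10 and meta["days_since_first_publish"] < 90,
        # single-package or brand-new maintainer account
        meta["maintainer_package_count"] <= 1,
        # negligible adoption
        meta["weekly_downloads"] < 500,
        # dead, missing, or forked repository link
        not meta["repo_alive"],
    ]
    return sum(signals)

def is_suspect(meta: dict, threshold: int = 2) -> bool:
    """Flag a package when it trips `threshold` or more signals."""
    return suspicion_score(meta) >= threshold
```

The threshold is a judgment call: one tripped signal describes plenty of legitimate young packages, while three or four together almost never do.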

Run this on every dependency in a freshly AI-generated project and you will find at least one suspicious entry in roughly half of the projects we audit. That matches the 20% hallucination rate combined with typical AI project dependency counts of 30-60 packages.

Why "my lockfile protects me" is not quite right

Lockfiles (package-lock.json, pnpm-lock.yaml, poetry.lock, requirements.txt with pinned versions and hashes) do protect you from upgrades pulling in a newer malicious version. They do not protect you from the original install. If your AI assistant suggests openai-retry-wrapper, you run npm install openai-retry-wrapper, the lockfile records the squatted version with its hash, and every CI run and every teammate then installs that exact malicious version deterministically.

The lockfile is a recording device. It is faithful to what you installed. If what you installed was malicious, the lockfile faithfully reinstalls the malicious thing.
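
With hashes, that faithfulness cuts the other way: once a package has been verified, the lockfile guarantees nobody can silently swap its contents. For pip, a hash-pinned requirements.txt looks like this (the digests below are placeholders, not real hashes; pip-compile --generate-hashes from pip-tools produces the real ones):

```
fastapi==0.115.0 \
    --hash=sha256:<digest-of-the-verified-wheel>
pydantic==2.9.2 \
    --hash=sha256:<digest-of-the-verified-wheel>
```

Installing with pip install --require-hashes -r requirements.txt then fails hard if any artifact's digest changes, which is exactly the behavior you want from layer 1 below.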

The real defense-in-depth for hallucinated dependencies is four layers, all of which have to be in place:

  1. Verified installs: npm audit signatures, pip install --require-hashes, pnpm install --frozen-lockfile --strict-peer-dependencies
  2. A private registry mirror: npm Enterprise, JFrog Artifactory, Google Artifact Registry, or Sonatype Nexus — so installs only resolve against packages you have explicitly allow-listed
  3. An SBOM generated on every build: syft, cdxgen, or cyclonedx-bom produce a software bill of materials you can diff, scan, and audit
  4. A human review step for any new dependency added by an AI assistant, before it lands in main

Layer 2 is the one most founders skip and it is the most important. A private registry mirror with an explicit allowlist makes slopsquatting impossible by default — a squatted name simply cannot resolve, because it is not in the allowlist.
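
A sketch of what that allowlist looks like in Verdaccio's config.yaml (the package names are illustrative, and exact syntax varies by Verdaccio version; the key property is that the catch-all rule has no proxy, so any name you have not listed fails to resolve):

```yaml
uplinks:
  npmjs:
    url: https://registry.npmjs.org/

packages:
  # allow-listed names proxy through to the public registry
  'next':
    access: $all
    proxy: npmjs
  'react':
    access: $all
    proxy: npmjs
  'zod':
    access: $all
    proxy: npmjs
  # catch-all: no proxy line, so any name not listed above cannot resolve
  '**':
    access: $authenticated
```

Adding a dependency then becomes an explicit act: someone edits this file in a reviewed PR before the install can succeed.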

A minimum-viable setup that stops slopsquatting

You do not need enterprise tooling. This works for a five-person team on a $50K engineering budget:

# .github/workflows/supply-chain.yml
name: Supply chain checks
on: [pull_request]
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
        with:
          version: 9
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: pnpm
      - run: pnpm install --frozen-lockfile
      - run: pnpm audit --audit-level high
      # registry signature verification is npm-only today; skip this step
      # if your repo has no package-lock.json
      - run: npm audit signatures
      - name: Generate SBOM
        uses: anchore/sbom-action@v0
        with:
          format: cyclonedx-json
          output-file: sbom.json
      - name: Scan SBOM
        uses: anchore/scan-action@v3
        with:
          sbom: sbom.json
          fail-build: true
          severity-cutoff: high

For Python projects, the equivalent is pip-audit --strict --require-hashes -r requirements.txt and cyclonedx-bom on every PR. For a private mirror, Verdaccio (npm) or devpi (PyPI) runs on a $10/month VPS and takes an afternoon to set up with an explicit allowlist.

The harder layer is the human review step. Teams with strong practices require a second reviewer to approve any PR that adds a new line to package.json or requirements.txt and verify the package's legitimacy — real maintainer, real repo, real download history. This is the single highest-leverage process change we recommend after an audit. It takes 2 minutes per new dependency and it catches slopsquatting at the only moment it is cheap to catch: before merge.
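
On GitHub, that review gate is a small piece of config: a CODEOWNERS file routing manifest changes to a designated group (the team name below is a placeholder), combined with a branch protection rule on main that requires code-owner approval:

```
# .github/CODEOWNERS — dependency manifests require a second reviewer
package.json        @your-org/dependency-reviewers
pnpm-lock.yaml      @your-org/dependency-reviewers
requirements.txt    @your-org/dependency-reviewers
poetry.lock         @your-org/dependency-reviewers
```

With that in place, an AI-suggested dependency cannot reach main without a human having looked at the package name at least once.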

Why AI-built apps are disproportionately exposed

Three reasons slopsquatting hits AI-built apps harder than traditional codebases:

First, the dependency count is higher. AI assistants install a package for everything — a utility function gets its own library, a form validation adds two packages, a trivial date format pulls in three. We routinely see Next.js projects with 80+ direct dependencies where a hand-written version would have 20.

Second, there is no institutional memory about which packages are trusted. A senior engineer on a traditional team knows that date-fns and dayjs are legitimate and that date-util-helper-pro is not. The AI assistant does not. The founder reviewing the AI's work does not. The package name is just as plausible as any other.

Third, the failure mode is silent. A slopsquatted package that successfully exfiltrates environment variables does not break your app. Your tests still pass. Your build still ships. The only signal is a spike in credential abuse or unauthorized API calls weeks later — long after the audit trail has faded.

This is the same dynamic that drives AI-generated code security vulnerabilities more broadly, and it is why AI-built apps fail formal security audits at a much higher rate than traditionally built apps. The attack surface is wider and the review discipline is weaker.

Remediation checklist for your current AI-built project

If you have an AI-built app in production right now:

  1. Export a full dependency tree: pnpm list --depth=10 --json > deps.json or, for Python, pipdeptree --json > deps.json (pip list is flat and omits the tree structure)
  2. For every package, verify the maintainer, repo URL, first-publish date, and weekly downloads
  3. Rotate every secret that was ever exposed on the affected machines — any laptop or CI runner that ran install could have been compromised
  4. Add npm audit signatures or pip-audit to your CI and make it blocking
  5. Generate an SBOM and store it with every release
  6. Set up a private registry mirror with an allowlist before your next dependency addition
  7. Add a reviewer-required rule to PRs that touch your dependency manifests
  8. Run a full AI app security audit to check for slopsquatting and the other patterns that commonly ride alongside it

Steps 1-3 are urgent. Steps 4-7 are one sprint of work. Step 8 is where you get independent confirmation that you actually closed the hole.

Frequently asked questions

How do I know if a package my AI suggested is real?

Three checks, in order: search npm or PyPI for the exact name, verify the repository link in the manifest leads to an active repo with commit history, and check weekly download count. If any of those fail, the package is suspect. If all three pass but the first publish date is in the last 90 days, still treat it as suspect — slopsquatted packages are almost always new.

Does this affect Copilot, Cursor, Claude, and ChatGPT equally?

No. GPT-4-class and Claude Opus/Sonnet-class models hallucinate in the 5-8% range. Smaller models and open-source models hallucinate in the 15-25% range. But even 5% is meaningful — one hallucinated dependency in twenty is plenty of attack surface when you install a hundred packages over a project lifetime. Every assistant needs the same defense-in-depth.

Is slopsquatting actually happening in the wild or is it theoretical?

It is happening. Lasso Security, Socket, Snyk, and Checkmarx have all published incident reports documenting squatted packages in the wild that match AI-hallucinated names. The starlette-reverse-proxy case is the most public but it is not the only one. Security researchers are finding new squatted hallucinations on npm and PyPI on a monthly cadence.

If I use a private registry mirror, do I still need lockfile verification?

Yes. Defense in depth matters here. A private registry protects you from resolving a squatted name. Lockfile verification protects you from a legitimate package being compromised mid-flight or from a maintainer account being taken over. They cover different attack paths and you want both.

What about transitive dependencies — packages my packages depend on?

This is the harder problem. Your direct dependency is legitimate, but one of its 40 transitive dependencies is either squatted or compromised. SBOM scanning plus tools like Socket's deep-scan and Snyk's reachability analysis catch this by walking the full dependency graph. npm audit catches the obvious cases. A private registry mirror with an allowlist is the strongest protection because transitive deps also resolve through the mirror.

How often should I re-audit my dependencies?

Continuously for CI (automated audit on every PR), weekly for SBOM diff review, quarterly for a full dependency hygiene pass where you remove packages that are no longer used or no longer maintained. New slopsquatting campaigns appear on a weekly cadence — a one-time audit is not sufficient.
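
The weekly SBOM diff can be a few lines of Python over two CycloneDX JSON files. The only structural assumption here is CycloneDX's standard top-level components array of entries with name and version fields:

```python
import json

def sbom_components(sbom: dict) -> set:
    """Extract (name, version) pairs from a parsed CycloneDX SBOM."""
    return {(c["name"], c.get("version", "")) for c in sbom.get("components", [])}

def new_components(old_sbom: dict, new_sbom: dict) -> set:
    """Components present in the new build but absent from the previous one:
    exactly the entries a reviewer should verify against the registry."""
    return sbom_components(new_sbom) - sbom_components(old_sbom)
```

Run it as new_components(json.load(open("sbom-last-week.json")), json.load(open("sbom.json"))); anything it returns is either an intentional, reviewed addition or your next incident.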

We already shipped with suspect packages. What now?

Assume compromise. Rotate every credential that was present on any machine that ran install — laptops, CI runners, staging servers. Review your logs for unusual outbound connections from those hosts during the install window. Remove the packages, replace them with verified alternatives, and run a full security audit to confirm there is no lingering persistence. This is exactly the scenario we run during incident-response audits.

Your AI wrote a supply chain attack into your app

The dependency your AI assistant suggested probably exists. The problem is the 20% that do not — and the attackers who are already publishing malicious versions under those names, waiting for the next developer to copy-paste the same suggestion.

Book a free security audit for your AI-built app. We will scan your dependency tree for slopsquatted packages, check every line of your lockfile against known-bad signatures, install the remediation toolchain, and hand you an SBOM you can show an investor or a customer's security team.

Your app's supply chain is only as trustworthy as the packages an AI guessed at. Get independent verification before an attacker does.

Ready to ship your AI app to production?

We help funded startups turn vibe-coded prototypes into production systems. $10K-$50K engagements. Results in weeks, not months.

Get Your Free Audit