
TanStack Supply Chain Attack: The CI Cache Was the Trust Boundary

New findings from the TanStack supply chain attack show how trusted publishing, CI caches, OIDC, and developer tooling became the attack path.


On May 13, I covered the TanStack compromise as one item in a larger week of security chaos. At the time, the headline was already bad enough: 84 malicious versions across 42 @tanstack/* packages, published in roughly six minutes, with valid trusted-publishing provenance.

Two days later, the story is clearer and worse in a more useful way. TanStack became the clearest public demonstration so far that modern supply chain controls can produce a malicious package that looks legitimate if the workflow around them is shaped badly.

The attacker poisoned CI state, waited for a legitimate release workflow, extracted a short-lived OIDC publishing path at runtime, and let the project’s own release machinery put malware into npm. The chain bypassed the older mental model of phished maintainer passwords and stolen long-lived npm tokens.

The release pipeline itself became the path to publication.

What changed since the roundup

The earlier article had the outline. The new findings fill in the machinery and the downstream blast radius.

  • TanStack postmortem (critical): confirms the chain of pull_request_target, shared cache poisoning, and runtime OIDC token extraction from the GitHub Actions runner.
  • Trusted publishing bypass (high): the malicious packages were published through the legitimate trusted-publisher binding, but not through the intended publish step.
  • Mistral advisory (high): Mistral says an affected developer device was involved. Its npm variants appear non-functional, but mistralai==2.4.6 on PyPI ran malicious code on import.
  • OpenAI disclosure (high): two corporate employee devices were impacted, limited credential material was exfiltrated from internal repositories, and OpenAI is rotating app signing certificates.
  • Cemu release pivot (high): Datadog found the same Python payload in Linux assets on the real Cemu GitHub release page, uploaded by a compromised long-term contributor account.

A trusted release pipeline can publish attacker-controlled packages if untrusted code can persist into it.

How the attack path worked

TanStack’s postmortem says that between 19:20 and 19:26 UTC on May 11, 2026, an attacker published 84 malicious versions across 42 @tanstack/* packages. The affected set was router-heavy. TanStack says @tanstack/query*, @tanstack/table*, @tanstack/form*, @tanstack/virtual*, @tanstack/store, and the @tanstack/start meta-package were confirmed clean, while many router, start subpackage, adapter, devtool, and plugin packages were not.

The entry point was a pull request from a throwaway fork. The attacker opened PR #7378 against TanStack/router, titled it like a normal work-in-progress cleanup, and used a branch named fix/history-package. Opening and force-pushing to that PR triggered workflows that used pull_request_target. A merge was not required.

That trigger is dangerous when it checks out and runs code from a fork. It runs in the base repository context, which means the job can interact with resources scoped to the real repository. In TanStack’s case, the workflow tried to split trust boundaries: one job ran benchmark code with read-only permissions, while another handled trusted PR comments. The split made sense on paper.

The missed boundary was the GitHub Actions cache.

The untrusted benchmark job used a shared TanStack setup action that transitively called actions/cache for the pnpm store. TanStack’s postmortem and SafeDep’s cache-poisoning writeup say the malicious code wrote into the pnpm store under a cache key that the real release workflow would later restore. When the PR job ended, the cache action saved the poisoned store in the base repository’s cache scope.
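
The shape is easy to reproduce. A minimal sketch of the risky pattern, with hypothetical file, job, and key names rather than TanStack's actual workflows:

# benchmark.yml (hypothetical): untrusted fork code plus base-scoped cache
name: benchmark
on: pull_request_target            # runs in the BASE repository's context
jobs:
  bench:
    runs-on: ubuntu-latest
    permissions:
      contents: read               # read-only permissions look safe on paper
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}   # fork-controlled code
      - uses: actions/cache@v4     # saves into the base repo's cache scope at job end
        with:
          path: ~/.pnpm-store
          key: pnpm-store-${{ hashFiles('pnpm-lock.yaml') }}   # a key a release job may also restore
      - run: pnpm install && pnpm bench    # attacker code can write into the cached store

Every step here is individually common. The combination is what lets fork code persist state into the base repository.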

The attacker then force-pushed the PR branch back to a clean main commit, closed the PR, and deleted the branch. The visible PR became a no-op. The poisoned cache remained.

Hours later, a legitimate maintainer merged an unrelated PR to main. That triggered the normal release workflow. The release runner restored the poisoned cache, executed attacker-controlled code during the build/test path, and the malware extracted the OIDC publishing path from the runner environment.

TanStack’s key point is precise: no npm token was stolen from a maintainer. The attacker used the release workflow’s own ability to mint a short-lived trusted-publishing credential, then posted directly to registry.npmjs.org. The normal publish step was skipped because tests failed, but the malicious publish had already happened.

The packages could carry legitimate provenance signals while still being malicious because the trust signal described the workflow identity. It did not prove that only the intended publish step had released the artifact.

Why this beats the old mental model

Most npm incident response advice still assumes one of three stories:

  • A maintainer account was compromised.
  • A long-lived npm token leaked.
  • A malicious dependency ran an install script.

Those still happen. The axios compromise in March was a classic-token story. The earlier Mini Shai-Hulud waves used install-time or import-time execution to harvest secrets and self-propagate. But the TanStack incident adds a fourth story:

Untrusted CI state crossed into a trusted release workflow.

The weak point was the chain between untrusted PR execution, cache restore, and trusted publishing:

  1. A pull_request_target workflow checked out fork-controlled code.
  2. That job could write to a cache scoped to the base repo.
  3. A release workflow later restored that cache.
  4. The release workflow had id-token: write because it legitimately needed trusted publishing.
  5. Malware running on the release runner used that authority outside the intended publish step.
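
For intuition on step 5: any process in a job that has id-token: write can mint an OIDC token through the runner's standard request endpoint, not just the intended publish step. A minimal sketch, assuming a shell on the runner (the audience value is illustrative and registry-specific):

# The runner exposes these two env vars to jobs with id-token: write.
# Any code on the box can use them, which is the whole problem.
curl -sS -H "Authorization: Bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
  "$ACTIONS_ID_TOKEN_REQUEST_URL&audience=npm:registry.npmjs.org"

The returned JWT carries the workflow's identity, which is exactly the signal trusted publishing verifies.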

TanStack says this was a known-bad workflow pattern and took responsibility for missing it. Their hardening follow-up is unusually frank: provenance, SLSA, OIDC, and 2FA worked as advertised, but the workflow shape was still wrong.

This distinction matters. If teams read the incident as “trusted publishing is broken”, they may throw away one of the better improvements npm has made. Trusted publishing proves the identity of the workflow. Teams still have to protect the workflow from untrusted state, caches, actions, artifacts, and dependencies.

What the malware did

The TanStack package variant used a subtle trigger. Instead of replacing the normal scripts block, compromised package manifests added an optionalDependencies entry pointing to a GitHub commit:

"@tanstack/setup": "github:tanstack/router#79ac49eedf774dd4b0cfa308722bc463cfe5885c"

That commit hosted a package with a prepare lifecycle script. During install, npm resolved the optional dependency, fetched the Git dependency, and executed the payload path. Because it was optional, failures could be easier to miss.
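
For illustration, the Git-hosted dependency only needs a prepare script to execute at install time, because npm builds Git dependencies by running prepare. A minimal sketch of the shape (the entry-point filename here is illustrative):

{
  "name": "@tanstack/setup",
  "version": "0.0.1",
  "scripts": {
    "prepare": "node router_init.js"
  }
}

Nothing suspicious has to appear in the top-level package's scripts block; the execution comes from the resolved dependency.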

Research from Wiz and SafeDep describes two main payload families in this wave:

  • The TanStack npm path used the Git-based optional dependency plus embedded files such as router_init.js.
  • Other npm packages, including Mistral and UiPath variants, used a preinstall path that ran node setup.mjs, downloaded Bun, and attempted to execute the shared payload.

The payload’s goal was credential theft and propagation. Reported targets included GitHub tokens, npm tokens, GitHub Actions OIDC material, AWS, Azure, GCP, Kubernetes service-account tokens, HashiCorp Vault tokens, SSH keys, and developer files. Several researchers also documented persistence and propagation through developer tooling configuration, including .claude/ and .vscode/ files.

Wiz reported a gh-token-monitor persistence component on developer machines. It checked GitHub tokens periodically and, if token checks started returning authorization errors, attempted to wipe the user’s home directory. This does not mean every affected system was wiped. It does change the remediation order: isolate and inspect the machine before revoking tokens from that same infected environment.

First isolate and inspect. Then remove persistence. Then rotate credentials from a known-clean system.
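
A quick pre-rotation check for the reported persistence component, using the paths from the public indicator lists (the fuller artifact list is in the checklist below):

# Linux: user-level systemd unit reported by Wiz
systemctl --user status gh-token-monitor.service 2>/dev/null
ls -l ~/.config/systemd/user/gh-token-monitor.service 2>/dev/null
# macOS: the corresponding LaunchAgent
ls -l ~/Library/LaunchAgents/com.user.gh-token-monitor.plist 2>/dev/null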

The downstream story got bigger

The first TanStack numbers were already large. Then the campaign spread into more namespaces and ecosystems.

SafeDep’s campaign analysis counted 172 packages and 404 compromised versions across npm and PyPI. The notable set included:

  • @tanstack/* router-related packages
  • @mistralai/* npm packages
  • mistralai==2.4.6 on PyPI
  • guardrails-ai==0.10.1 on PyPI
  • @uipath/* packages
  • @opensearch-project/opensearch
  • @squawk/*
  • @tallyui/*
  • @beproduct/*
  • multiple unscoped packages

Some variants failed. A broken payload is still evidence that an environment pulled a malicious package during the exposure window, and a similar next run may not be broken.

Mistral’s official advisory adds important nuance. Mistral says the npm packages were effectively harmless because setup.mjs referenced a missing file. But the PyPI package mistralai==2.4.6 ran malicious code on import on Linux. It downloaded transformers.pyz to /tmp/transformers.pyz, started it as a background process, and targeted credentials in common locations. Mistral also says its current investigation points to an affected developer device, with no indication that Mistral infrastructure was compromised.

OpenAI then disclosed downstream exposure. According to OpenAI, two corporate employee devices were impacted. OpenAI says it observed activity consistent with the public malware behavior, including credential-focused exfiltration activity in a limited subset of internal repositories available to those employees. OpenAI says it found no evidence that user data, production systems, intellectual property, or shipped software were compromised, but code-signing certificates were present in impacted repositories. As a precaution, OpenAI is rotating signing certificates and asking macOS app users to update by June 12, 2026.

This is the sober real-world datapoint. The supply chain compromise did not need to reach production to create urgent work. A developer machine with access to internal repositories was enough to trigger credential rotation, workflow restrictions, certificate replacement, and customer-facing guidance.

The Cemu pivot matters

The strangest new finding came from Datadog Security Labs. While investigating the mistralai payload, Datadog pivoted on the hash of transformers.pyz and found the same payload inside Linux release assets for Cemu, a Wii U emulator.

This was not npm. It was not PyPI. It was GitHub Releases.

Datadog found that two Linux assets on the real cemu-project/Cemu GitHub release page had been re-uploaded in May 2026 by a user account named MangelSpec, while the original Windows and macOS assets were still the old github-actions[bot] uploads from 2025. The AppImage alone had nearly 20,000 downloads before discovery. Cemu maintainers confirmed the account belonged to a long-term co-author and removed its access pending investigation.

The lesson is not that Cemu is related to TanStack. It is that the same credential-theft campaign can move across distribution channels:

  • npm package publish
  • PyPI package publish
  • GitHub release asset replacement
  • developer IDE and AI-agent configuration

If your incident response only checks package-lock.json, it can miss the second-order problem: which human or automation credentials did the malware steal, and what could those credentials publish or replace next?

Why developers noticed

The Fireship video matters less as a primary source and more as a sign of where the developer conversation has moved. The incident is no longer being understood as another npm token leak. The version that has now reached a broad developer audience is closer to the real issue: a pull request poisoned the CI cache, a later trusted release restored it, and the release workflow produced packages that still looked verified. That is the right lesson to take into client conversations.

What to check now

If you manage developer endpoints, CI runners, or client build environments, treat this as an exposure investigation. Dependency cleanup is only one piece of the response.

1. Search lockfiles and package caches

Look for affected TanStack, Mistral, UiPath, OpenSearch, Guardrails AI, and other listed packages. Use the vendor-maintained lists because the package set changed during the investigation.

grep -R -n -E "@tanstack|@mistralai|@uipath|@opensearch-project|guardrails-ai|mistralai" \
  package-lock.json pnpm-lock.yaml yarn.lock requirements*.txt pyproject.toml uv.lock poetry.lock Pipfile.lock 2>/dev/null

For Mistral’s PyPI advisory, specifically check:

pip show mistralai | grep -i "^version"
grep -R -n -E "mistralai\\b.*2\\.4\\.6|guardrails-ai\\b.*0\\.10\\.1" \
  requirements*.txt pyproject.toml uv.lock poetry.lock Pipfile Pipfile.lock 2>/dev/null

2. Look for payload artifacts

On developer workstations and CI runners, check for:

  • /tmp/transformers.pyz
  • router_init.js
  • router_runtime.js
  • tanstack_runner.js
  • setup.mjs
  • .claude/setup.mjs
  • .claude/router_runtime.js
  • .vscode/setup.mjs
  • .vscode/tasks.json changes you did not author
  • ~/.config/systemd/user/gh-token-monitor.service
  • ~/Library/LaunchAgents/com.user.gh-token-monitor.plist
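
A filesystem sweep for these names, assuming GNU find. setup.mjs and tasks.json are common in legitimate projects, so treat hits as leads to review, not automatic indicators:

find /tmp "$HOME" -maxdepth 8 \
  \( -name transformers.pyz -o -name router_init.js -o -name router_runtime.js \
     -o -name tanstack_runner.js -o -name "*gh-token-monitor*" \) \
  -print 2>/dev/null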

3. Hunt network indicators

Block and investigate traffic to the known infrastructure:

  • git-tanstack.com
  • filev2.getsession.org
  • seed1.getsession.org
  • seed2.getsession.org
  • seed3.getsession.org
  • 83.142.209.194

Mistral also lists api.masscan.cloud in its advisory. Treat these as hunt leads, not a complete set.
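
Where you hunt depends on your logging stack. As one example, assuming proxy or DNS logs are readable under /var/log; adjust paths to your environment:

grep -R -n -E "git-tanstack\.com|getsession\.org|83\.142\.209\.194|masscan\.cloud" \
  /var/log 2>/dev/null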

4. Audit what the host could reach

If a compromised package ran on a developer workstation, CI runner, or build container, ask what secrets were present at runtime:

  • GitHub PATs and OAuth tokens
  • npm publish tokens
  • cloud credentials and instance metadata access
  • Kubernetes service accounts
  • Vault tokens
  • SSH private keys
  • .npmrc
  • gh CLI credentials
  • PyPI tokens and .pypirc
  • code-signing material
  • package registry credentials

Then rotate from a known-clean device. If persistence indicators are present, remove or isolate them before revoking GitHub tokens from the infected host.

5. Check for unauthorized publishing

Do not stop at “did we install TanStack?” Check whether stolen credentials were used after the install:

  • unexpected npm publishes
  • new package versions without matching commits or tags
  • modified GitHub release assets
  • new public repositories with campaign-like descriptions
  • unexpected commits to .claude/, .vscode/, .github/, or release workflow files
  • package registry changes outside normal release windows
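
For npm packages your organization owns, publish timestamps are easy to pull and diff against your release history. A sketch; the package name is a placeholder:

# Lists the publish time of every version; compare against git tags and CI run logs
npm view @your-scope/your-package time --json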

This is the part that matters for MSPs. A client may not care about TanStack by name, but they care if a developer token can publish malware under their organization or replace an artifact customers download.

Longer-term controls

The immediate response is rotation and hunting. The strategic response is reducing the ways untrusted state reaches trusted publishing.

Fix GitHub Actions trust boundaries

Audit every pull_request_target workflow. The dangerous pattern is:

  • pull_request_target
  • checks out fork-controlled code
  • runs that code or installs dependencies from it
  • writes cache, artifacts, build output, or any state later consumed by trusted workflows

If a workflow needs to comment on PRs, label issues, or post benchmark output, split the design. Run untrusted code under pull_request, publish a sanitized artifact, then use workflow_run or a trusted job to post results. Do not let fork code write to base-repo cache.
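
A sketch of that split, with hypothetical workflow and artifact names. The trusted half never checks out or executes anything from the fork; it only consumes a validated artifact:

# bench-pr.yml: untrusted code runs under pull_request with a read-only, fork-scoped token
name: bench-pr
on: pull_request
jobs:
  bench:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pnpm install && pnpm bench > results.txt
      - uses: actions/upload-artifact@v4
        with:
          name: bench-results
          path: results.txt

# bench-comment.yml: a separate trusted workflow posts the sanitized result
name: bench-comment
on:
  workflow_run:
    workflows: ["bench-pr"]
    types: [completed]
jobs:
  comment:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      # download the bench-results artifact, validate it as plain text,
      # then post the comment from this trusted context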

Treat caches as executable trust state

Caches are not performance-only. They can carry executable code into a later workflow.

For release workflows:

  • prefer no cache over shared mutable cache
  • use restore-only cache patterns where possible
  • avoid cache keys shared between PR and release jobs
  • bust caches after suspicious workflow runs
  • do not restore dependency stores produced by untrusted code
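
A restore-only sketch for a release job, using the restore sub-action of actions/cache. The key namespace is illustrative, and it only helps if PR-triggered jobs never write keys in it:

# Release job step: restore without ever saving back
- uses: actions/cache/restore@v4
  with:
    path: ~/.pnpm-store
    key: release-pnpm-${{ hashFiles('pnpm-lock.yaml') }}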

TanStack disabled caches across affected workflows as an immediate mitigation, which fits the risk.

Pin Actions to commit SHAs

Floating action references are another mutable trust path. Pin Actions to full commit SHAs, especially in release and deployment workflows. TanStack moved that direction after the incident.
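
The change is one line per step. The SHA below is illustrative, not a real pin:

# Mutable tag: whoever controls the tag controls what runs on your release runner
- uses: actions/checkout@v4
# Immutable pin: a full commit SHA, with the tag recorded for humans
- uses: actions/checkout@1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b   # v4 (illustrative SHA)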

Tags are convenient. Convenience is not a control.

Restrict install-time execution

Most npm malware still wants lifecycle scripts. The TanStack variant used a more subtle Git dependency prepare path, but the principle holds: install time is code execution time.

Use the strongest package-manager controls your workflow can tolerate:

  • npm ci --ignore-scripts where builds do not require install scripts
  • pnpm 11 minimum release age defaults
  • pnpm blockExoticSubdeps
  • pnpm build-script approval with allowBuilds
  • Renovate or Dependabot cooldowns for fresh releases
  • internal registry quarantine periods for high-risk ecosystems
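
Two of these controls translate directly into config. A sketch; verify the pnpm key name and placement against the docs for your pnpm version:

# .npmrc: skip lifecycle scripts unless a build genuinely needs them
ignore-scripts=true

# pnpm-workspace.yaml: delay installing very fresh releases (value in minutes)
minimumReleaseAge: 1440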

Socket’s pnpm 11 writeup is worth reading here. A 24-hour package cooldown would not stop every supply chain attack, but it directly targets the speed these campaigns rely on.

Review AI-agent and IDE config as executable code

This campaign keeps touching .claude/ and .vscode/. That should change how teams review repository config.

Treat these directories like workflow files:

  • protect them with CODEOWNERS
  • alert on new startup hooks or task files
  • review AI-agent configuration in PRs
  • block unexpected executable files in IDE config directories

The attacker is poisoning the places developers trust to run helper automation.

Bottom line

TanStack did a lot right. It used CI-based publishing, provenance, and an auditable release path. It detected and responded quickly once external researchers raised the alarm.

Those controls still produced malicious packages through the project’s own release identity.

The next supply chain incident may look like a legitimate workflow restoring legitimate cache, minting legitimate credentials, and producing an artifact with legitimate metadata.

For MSPs and internal IT teams, the action item is simple but uncomfortable: CI/CD is a privileged production system, and the state it consumes deserves the same suspicion as the secrets it holds.

I said this was just the beginning. I undersold it.
