How OIDC Trusted Publishing Works — and Where Mini Shai-Hulud Found the Gap

OIDC trusted publishing was designed to eliminate the long-lived credentials that supply chain attackers steal. Mini Shai-Hulud bypassed it anyway. Here's how the mechanism works, what it actually guarantees, and how three individually reasonable configuration decisions combined to let an attacker publish under TanStack's own verified identity.

Every major npm supply chain attack of the past three years has followed roughly the same pattern: an attacker steals a developer's long-lived publish token, authenticates to the npm registry directly, and pushes a malicious package version. The credential is the breach. The defense against that pattern is to eliminate long-lived credentials entirely.

That defense is called OIDC trusted publishing. On May 11, 2026, a worm called Mini Shai-Hulud bypassed it without stealing a single credential.


The problem OIDC trusted publishing was built to solve

The classical npm publish workflow requires a token. A developer runs npm login, receives a token scoped to their account, and either uses it interactively or stores it as an environment variable in their CI system. That token is the credential that authorizes publishing. It is long-lived, often never expires, and can be used from anywhere.

The attack surface is obvious. A token sitting in a CI environment variable can be extracted by any workflow step that runs in that environment. A token stored on a developer's laptop can be harvested by infostealer malware — exactly what the earlier Shai-Hulud 2.0 wave in November 2025 was doing at scale. A token accidentally committed to a public repository can be found within minutes by automated scanners.

OIDC trusted publishing removes the token entirely. Instead of storing a long-lived credential, the npm registry accepts short-lived identity proofs issued by GitHub's OIDC provider at the moment a workflow runs. The publish credential does not exist until publish time, expires almost immediately, and cannot be reused.


What OIDC is and how it works in this context

OpenID Connect is an identity protocol that lets one system vouch for the identity of another — without sharing a password or token. GitHub uses it to let a workflow prove who it is to external services like npm.

When a workflow runs, GitHub's OIDC provider can issue a signed JSON Web Token (JWT) that describes the execution context: which repository triggered the workflow, which workflow file is running, which branch it is running from, which actor triggered it, and when. This JWT is signed with GitHub's private key and can be verified by any party that trusts GitHub as an identity provider.

npm's trusted publisher integration uses this mechanism. A maintainer registers their package with a trusted publisher configuration that says, in effect: accept publish requests that present a GitHub OIDC token asserting this repository and this workflow. When the release workflow runs, it calls getIDToken() — a GitHub Actions API — to obtain a JWT, presents that JWT to npm's federation endpoint, and receives back a short-lived publish token scoped to that specific package. The workflow uses that token to run npm publish. The token is gone before the next workflow step.
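A decoded token payload looks roughly like this. The claim names follow GitHub's documented OIDC claims; the values are illustrative and abbreviated, not taken from the actual incident:

```json
{
  "iss": "https://token.actions.githubusercontent.com",
  "sub": "repo:tanstack/router:ref:refs/heads/main",
  "repository": "tanstack/router",
  "workflow_ref": "tanstack/router/.github/workflows/release.yml@refs/heads/main",
  "ref": "refs/heads/main",
  "actor": "some-maintainer",
  "exp": 1778400000
}
```

Everything npm needs to make a trust decision is in these claims: which repository, which workflow file, which branch. How much of that context the trusted publisher configuration actually checks is the variable that matters later.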

The result is that there is no static credential to steal.


What SLSA provenance adds

Supply-chain Levels for Software Artifacts (SLSA) is a framework for describing and verifying build integrity. At Build Level 3 — the highest level currently in widespread use — a package carries a provenance attestation: a cryptographically signed statement asserting that this tarball was built by this workflow run in this repository from this source commit. The signature is issued by Sigstore, an open-source public notary for software builds backed by the Linux Foundation.

The attestation is generated automatically when a workflow uses OIDC publishing with provenance enabled. It is attached to the package in the npm registry and can be verified by anyone who downloads the package.
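Stripped down, an npm provenance attestation is an in-toto statement along these lines. The field names follow the SLSA v1 provenance schema; the subject, package name, and values here are illustrative placeholders, not a real attestation:

```json
{
  "_type": "https://in-toto.io/Statement/v1",
  "subject": [
    {
      "name": "pkg:npm/%40tanstack/example@0.0.0",
      "digest": { "sha512": "<tarball digest>" }
    }
  ],
  "predicateType": "https://slsa.dev/provenance/v1",
  "predicate": {
    "buildDefinition": {
      "externalParameters": {
        "workflow": {
          "repository": "https://github.com/tanstack/router",
          "path": ".github/workflows/release.yml",
          "ref": "refs/heads/main"
        }
      }
    },
    "runDetails": {
      "builder": { "id": "https://github.com/actions/runner/github-hosted" }
    }
  }
}
```

Note what the statement binds together: a tarball digest, a workflow, a repository, a ref. Nothing in it describes the tarball's contents beyond the digest.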

TanStack had SLSA Build Level 3 provenance on every release. The compromised packages published on May 11 also carry valid SLSA attestations — because they were built by TanStack's actual workflow, running in TanStack's actual repository, using TanStack's actual OIDC identity.

The attestation does not lie. It accurately records that the build happened where it happened. It says nothing about whether the source commit that triggered the build contained attacker-controlled code.


The three vulnerabilities that were chained

The TanStack postmortem describes the attack as a chain in which each vulnerability bridges the trust boundary the others assumed. None of the three conditions alone would have been sufficient.

pull_request_target and the fork boundary

GitHub Actions has two workflow triggers for pull requests: pull_request and pull_request_target.

pull_request runs a workflow using the code from the fork — the contributor's copy. It runs with read-only permissions and no access to repository secrets, because it is assumed that fork code is untrusted.

pull_request_target runs a workflow in the context of the base repository — the owner's copy — even when triggered by a pull request from a fork. It has access to repository secrets and, depending on configuration, can write to the base repository's cache. The intent is to allow workflows that need base-repository context, such as labeling or metrics, to run safely on pull requests without exposing code execution to fork contributors.

The pattern is dangerous when the workflow does anything with code from the pull request head. If a pull_request_target workflow checks out the fork's code — or restores a cache that was written by a fork workflow — it executes untrusted code with base-repository privileges. This has been documented as the "Pwn Request" attack pattern since 2021. CISA issued an advisory after the tj-actions/changed-files compromise in March 2025 used the same mechanism.
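Condensed to its essentials, the dangerous shape looks like this. This is a sketch of the anti-pattern, not TanStack's actual workflow file:

```yaml
name: bundle-size
on: pull_request_target         # runs with base-repository privileges...
permissions:
  contents: read
jobs:
  measure:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # ...but checks out the UNTRUSTED pull request head
          ref: ${{ github.event.pull_request.head.sha }}
      - uses: actions/setup-node@v4
      - run: npm ci && npm run build   # executes fork-controlled scripts
```

The trigger and the checkout ref are individually defensible; together they hand fork contributors code execution in the base repository's context.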

TanStack's bundle-size.yml workflow used pull_request_target. It ran a bundle size calculation on incoming pull requests. To do this, it needed to build the fork's code. That requirement created the trust boundary crossing.

GitHub Actions cache poisoning

GitHub Actions caches are stored per-repository and are restored across workflow runs. The cache key is typically a hash of the dependency manifest — package.json, lockfile, or similar — so that any workflow run resolving the same dependencies hits the same cache. If the key matches, the cache contents are restored to the runner regardless of which workflow run originally wrote them.

Cache scope crosses the fork boundary in one direction. A run triggered by a fork's pull request, when the workflow uses pull_request_target and therefore executes in the base repository's context, writes cache entries that the base repository's other workflows will later restore. The reverse is not true: base-repository caches are not exposed to fork runs. But the forward direction is sufficient.
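The key construction can be sketched in a few lines. The exact format is an assumption modeled on the common actions/cache plus hashFiles pattern, not TanStack's recovered configuration:

```python
import hashlib

def cache_key(os_name: str, lockfile_bytes: bytes) -> str:
    # Modeled on the common actions/cache pattern:
    #   key: ${{ runner.os }}-pnpm-store-${{ hashFiles('pnpm-lock.yaml') }}
    digest = hashlib.sha256(lockfile_bytes).hexdigest()
    return f"{os_name}-pnpm-store-{digest}"

lockfile = b'lockfileVersion: "9.0"\n'        # same manifest in fork and base

fork_write_key = cache_key("Linux", lockfile)      # written by the fork-triggered run
release_restore_key = cache_key("Linux", lockfile)  # restored by the release run
print(fork_write_key == release_restore_key)       # True: the key cannot identify the writer
```

The key is a pure function of the dependency manifest, which the fork contributor can read. Anyone who can trigger a cache write can therefore target the exact key the release workflow will ask for.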

When the attacker's pull request ran under pull_request_target, the bundle-size.yml workflow built the fork's code and wrote a pnpm store cache. That cache contained the attacker's payload. When a maintainer later pushed a legitimate commit to the main branch and the release workflow ran, it restored the pnpm store cache — and got the poisoned version.

The cache key Linux-pnpm-store-6f9233a5... appears in both the pull request run and the release run. The release workflow had no way to distinguish a clean cache from a poisoned one based on the key alone.

The orphaned commit and OIDC scope

An orphaned commit is a git commit with no branch or tag pointing to it — invisible to normal git navigation, but still fetchable by anyone who knows its SHA. This matters because GitHub stores the git objects for a repository and all of its forks in a single shared network: a commit pushed to any fork can be fetched from the parent repository by SHA, even if no branch in the parent ever referenced it.

The attacker pushed commit 79ac49eedf774dd4b0cfa308722bc463cfe5885c to the fork voicproducoes/router. This commit contained a package named @tanstack/setup with a prepare lifecycle hook pointing to a malicious script.
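A manifest along these lines is all that is required, since npm runs the prepare script automatically when installing a dependency from a git reference. This is an illustrative reconstruction, not the recovered file:

```json
{
  "name": "@tanstack/setup",
  "version": "0.0.0",
  "scripts": {
    "prepare": "node router_init.js"
  }
}
```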

The poisoned pnpm cache — already in place from the previous step — included an optionalDependencies entry in one of the package manifests:

"optionalDependencies": {
  "@tanstack/setup": "github:tanstack/router#79ac49eedf774dd4b0cfa308722bc463cfe5885c"
}

When the release workflow restored the poisoned cache and resolved dependencies, npm fetched that commit from GitHub's shared git object network. It was reachable. It was treated as a valid dependency. The prepare lifecycle script executed.

Here is where OIDC scope becomes the critical variable. TanStack's trusted publisher configuration granted trust at the repository level: any workflow running from tanstack/router could request a publish token. It was not scoped to a specific workflow file or a specific branch.

The release workflow was already running in the context of tanstack/router. It had been triggered by a legitimate push to main. When it called getIDToken() to obtain a publish credential, GitHub's OIDC provider issued a valid token. That token was minted for tanstack/router, which is exactly what the trusted publisher configuration accepted. The workflow used it to publish 84 package versions. The malicious router_init.js payload — delivered via the orphaned commit's prepare hook during dependency resolution — was bundled silently into each tarball before the publish step ran.

To be precise about the sequence: the OIDC token and the SLSA attestation were issued for the workflow run itself, not for the contents of the tarballs it produced. They accurately record that TanStack's release workflow ran on a legitimate push to main. They say nothing about what the poisoned cache caused to be bundled into the packages during that run.

The OIDC token was legitimate. The provenance attestation was legitimate. The packages were not.


What the payload did with the token

Once router_init.js gained execution, it swept the environment for credentials, replicated itself across the victim's other packages, and exfiltrated everything it found.

In the first phase, it deployed a bundled copy of TruffleHog to scan the environment for secrets. Rather than relying on environment variables — which CI systems increasingly sanitize — the payload read directly from /proc/*/mem, the Linux interface to process memory, which is harder to scrub and often holds credentials that were loaded and used earlier in the same process.
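The memory-scraping approach can be sketched against our own process. This is a Linux-only, simplified illustration, not the worm's actual code: the real payload used TruffleHog's pattern rules rather than a fixed needle:

```python
import re

def scan_self_memory(needle: bytes) -> bool:
    """Search this process's readable memory regions for a byte pattern.

    Linux-only sketch: /proc/self/maps lists the mapped regions and
    /proc/self/mem lets us read them. A CI payload does the same to
    recover tokens loaded earlier in the run, even after environment
    variables have been scrubbed.
    """
    with open("/proc/self/maps") as maps, open("/proc/self/mem", "rb", 0) as mem:
        for line in maps:
            m = re.match(r"([0-9a-f]+)-([0-9a-f]+) r", line)
            if not m:
                continue                      # skip unreadable regions
            start, end = (int(g, 16) for g in m.groups())
            try:
                mem.seek(start)
                if needle in mem.read(end - start):
                    return True
            except (OSError, OverflowError, ValueError):
                continue                      # vvar/vsyscall and friends
    return False

secret = b"npm_" + b"d34db33f"                # plant a "token" in our own heap
print(scan_self_memory(b"npm_d34db33f"))      # True: recoverable straight from memory
```

No elevated privileges are needed to read your own process's memory, which is exactly the position a lifecycle script is in.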

This matters most on downstream machines. When a developer or another CI system ran npm install on a compromised @tanstack/* package, router_init.js executed in that environment. Any GitHub Actions runner that had called getIDToken() earlier in its workflow — a common pattern in release pipelines — still held that OIDC token in process memory. The payload extracted it before it expired.

In the second phase, it used any npm tokens it found to enumerate all packages published by the compromised maintainer, injected the payload into each one, and published new malicious versions. Where the victim's CI environment used OIDC publishing, the worm had already extracted a live token from memory. Where it used a stored npm token, the worm used that directly. Either way, the publish was authorized under the victim's own identity. This is the worm's self-replication mechanism. It does not require a human operator to identify new targets. Each infected CI run becomes a new publish authority.

In the third phase, it exfiltrated collected credentials through two channels simultaneously: a public GitHub repository created under the victim's account, named "Shai-Hulud," and the Session messenger peer-to-peer network. The Session channel is significant because Session traffic is end-to-end encrypted and routed through an onion network. From a network monitoring perspective, it is indistinguishable from a user sending an encrypted message. Egress filtering on domain or IP reputation does not block it.


Why existing controls did not catch this

Two-factor authentication for the maintainer accounts was not bypassed. It was irrelevant. The publish credential was obtained by the workflow, not by any human authenticating to npm. OIDC publishing is specifically designed to remove human authentication from the publish path — and it did.

SLSA provenance was not falsified. The attestation accurately records the build. It cannot detect whether the source that entered the build was authorized by the maintainers, only that a specific workflow ran on a specific commit. The commit that triggered the build was legitimate. The cache that the build restored was not.

Lockfile integrity checking would not have caught the orphaned commit dependency because it was inserted into the cache at the pnpm store level, not the project's top-level package.json or lockfile. The malicious optionalDependency lived inside a cached intermediate artifact, invisible to a static review of the repository's committed files.

Behavioral analysis at install time did catch it. Automated tooling from Aikido Security flagged anomalies in router_init.js within six minutes of publication. An external researcher filed a full technical report to TanStack within twenty minutes. Examining what a package does when it executes — rather than verifying where it came from — is the control that provenance cannot replace.


What the configuration fix looks like

The attack surface closes when OIDC trust is scoped to the minimum required context. Instead of trusting any workflow in a repository, the trusted publisher configuration should pin to a specific workflow file running from a specific protected branch:

# npm trusted publisher configuration (set in the npm registry, not in the workflow)
# Scoped: only release.yml running from refs/heads/main can request a publish token
Repository: tanstack/router
Workflow: .github/workflows/release.yml
Branch: refs/heads/main

With this configuration, an OIDC token minted for any other context no longer matches: a bundle-size.yml run handling a fork pull request, a manually dispatched workflow, or a run from an unprotected branch all fail the workflow and branch constraints. The publish is rejected.
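The effect of the narrowed scope can be sketched as a claim check. The claim names follow GitHub's documented OIDC claims, but this is an illustration of the policy, not npm's actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrustedPublisher:
    repository: str
    workflow: Optional[str] = None   # None = any workflow in the repo (broad trust)
    ref: Optional[str] = None        # None = any branch or tag

def accepts(cfg: TrustedPublisher, claims: dict) -> bool:
    """Would this trusted-publisher config mint a publish token for this OIDC token?"""
    if claims["repository"] != cfg.repository:
        return False
    if cfg.workflow is not None:
        expected = f"{cfg.repository}/{cfg.workflow}@"
        if not claims["workflow_ref"].startswith(expected):
            return False
    if cfg.ref is not None and claims["ref"] != cfg.ref:
        return False
    return True

# A token minted for some other workflow in the same repository:
other = {
    "repository": "tanstack/router",
    "workflow_ref": "tanstack/router/.github/workflows/bundle-size.yml@refs/heads/main",
    "ref": "refs/heads/main",
}
broad = TrustedPublisher("tanstack/router")
scoped = TrustedPublisher("tanstack/router",
                          ".github/workflows/release.yml", "refs/heads/main")
print(accepts(broad, other))   # True:  repository-level trust publishes from anywhere
print(accepts(scoped, other))  # False: the workflow pin rejects everything but release.yml
```

The repository claim alone is what the broad configuration checked; the scoped configuration adds two more comparisons, and those two comparisons are the entire fix.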

The pull_request_target pattern can be made safe by separating the workflow that runs fork code from the workflow that holds repository permissions. A common pattern splits the job in two: an unprivileged workflow runs the untrusted code in a restricted context and uploads its results as artifacts, and a second workflow, triggered on workflow_run completion, operates exclusively on base-repository code and holds any credentials. The two never share a runner. Cache scope is narrowed by including the workflow run ID in cache keys, rather than content hashes that a fork-triggered run can predict and collide with.
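Sketched out, with illustrative workflow names and steps rather than TanStack's actual files:

```yaml
# bundle-size-measure.yml — runs fork code with no privileges
name: bundle-size-measure
on: pull_request               # fork code, read-only token, no secrets
jobs:
  measure:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
      - uses: actions/upload-artifact@v4
        with: { name: size-report, path: size.json }
---
# bundle-size-comment.yml — consumes only the artifact, never fork code
name: bundle-size-comment
on:
  workflow_run:
    workflows: [bundle-size-measure]
    types: [completed]
jobs:
  comment:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: size-report
          run-id: ${{ github.event.workflow_run.id }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
```

The privileged workflow treats the artifact as data, never as code: it is parsed and posted, not executed.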

StepSecurity's Harden-Runner tool can enforce that no unexpected outbound network connections occur during a workflow run, which would have blocked both the orphaned commit fetch and the Session exfiltration channel in this attack.


What this attack changed

Mini Shai-Hulud is not the first supply chain attack and not the most destructive by raw credential count. What it demonstrated is that the current state-of-the-art defense for open-source package publishing — OIDC trusted publishing with SLSA provenance — does not close the attack surface. It relocates it.

The old attack surface was: can an attacker obtain a long-lived publish token? The new attack surface is: can an attacker trigger a workflow run with broad enough OIDC scope?

Provenance tells downstream users where a package was built. It does not tell them whether the build was authorized to include what it included. That is the gap Mini Shai-Hulud went through. Closing it requires treating OIDC scope as a security control with the same rigor as credential access, and behavioral analysis at install time as a required layer rather than an optional supplement.

The TanStack postmortem says it plainly: the chain only works because each vulnerability bridges the trust boundary the others assumed. Each control was assembled with a reasonable scope in mind. What the attack required was that none of those scopes were made explicit — not to the tools, and not to the teams operating them.
