BLOG by PhantomCorgi Team

On April 1, 2026 — a date that will not age well for Drift Protocol — $285 million left the Solana DeFi platform in 12 minutes. Thirty-one withdrawal transactions. Devices wiped clean. Attackers already gone.

The post-mortems will argue about governance thresholds, timelock configurations, and multisig design. Those are real failures. But the root cause is older and harder to patch: humans trusted people they should not have trusted, and opened files they should not have opened.

This is the story of how DPRK’s Lazarus Group (tracked as UNC4736) ran the most patient, most technically layered heist in DeFi history — and what every developer team building on-chain infrastructure should take from it.


The Setup: Six Months of Being a Good Client

The attackers did not start with malware. They started with a wire transfer.

In October 2025, a new “quant trading firm” opened an account with Drift Protocol. They were professional. They communicated clearly. They deposited over $1 million USD — real capital, real trades, building real trust. Over the following months, they engaged with the development team the way any legitimate institutional client would: feature requests, bug reports, the occasional technical question that required screen-sharing sessions to resolve.

This is the part that most security post-mortems skim past. Six months is a long time to maintain a cover. The Lazarus Group is not a gang of opportunistic script kiddies. UNC4736 is a state-level threat actor with the resources and patience to build relationships that would pass any KYC review. By the time the malicious payload arrived, the people delivering it were known, trusted counterparties.

That trust was the vulnerability. Everything after it was execution.


The Delivery: A Folder You Should Not Have Opened

The attack was delivered through two vectors simultaneously. The first was a malicious TestFlight application — a fake iOS app sent through the established relationship channel. The second, and more technically interesting, was a GitHub repository.

The repository appeared to be a legitimate code project. It contained a .vscode/tasks.json file with a single unusual configuration:

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "init",
      "type": "shell",
      "command": "<encoded payload>",
      "runOptions": {
        "runOn": "folderOpen"
      }
    }
  ]
}
```

runOn: folderOpen is a VS Code feature that executes a task automatically when a workspace folder is opened. No prompt. No confirmation. The moment a developer opened the folder in their editor, code ran on their machine.
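The trigger is easy to check for before you open anything. A minimal pre-open scan, sketched here in Python, looks for exactly this pattern in a cloned repository (note the assumption of plain JSON; real tasks.json files may contain comments, which the standard json module rejects):

```python
import json
from pathlib import Path

AUTORUN_TRIGGERS = {"folderOpen"}  # VS Code's silent-execution trigger

def find_autorun_tasks(repo_root: str) -> list[str]:
    """Return labels of tasks that run automatically on folder open.
    Assumes plain JSON; VS Code also accepts JSONC (comments), which
    json.loads would reject."""
    tasks_file = Path(repo_root) / ".vscode" / "tasks.json"
    if not tasks_file.exists():
        return []
    config = json.loads(tasks_file.read_text())
    flagged = []
    for task in config.get("tasks", []):
        run_on = task.get("runOptions", {}).get("runOn")
        if run_on in AUTORUN_TRIGGERS:
            flagged.append(task.get("label", "<unnamed>"))
    return flagged
```

Run this against a freshly cloned repository before opening it in any editor; a non-empty result is a reason to stop.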

Here is where Cursor enters the story.


The Cursor Connection: Workspace Trust Was Not What You Think

VS Code handles this attack vector with a security layer called Workspace Trust. When you open a folder from an untrusted source, VS Code restricts what that folder can do — including blocking automatic task execution — until you explicitly grant it trust. The prompt is annoying. Most developers dismiss it. But it exists.

Cursor disables Workspace Trust by default.

Cursor is built on the VS Code codebase and inherits most of its architecture. But in the interest of reducing friction for AI-assisted development, the Workspace Trust prompting was turned off. Open a folder, tasks run. No warning.

This is not the only Cursor vulnerability in the picture. The April 2026 disclosure window has been unkind to the IDE:

| CVE | Description | Impact |
|---|---|---|
| CVE-2025-59944 | Case-sensitivity bypass on file protection rules | Malicious files escape protection filters |
| CVE-2025-54135 (CurXecute) | MCP config rewrite via prompt injection | Attacker rewrites AI tool configuration |
| CVE-2025-54136 (MCPoison) | One-time MCP approval exploited for persistent access | Escalated AI agent permissions |
| CVE-2026-22708 | Terminal allowlist bypass via shell built-ins | Code execution despite explicit restrictions |
| Inherited Chromium vulns | 94 unpatched vulnerabilities from bundled Chromium | Browser-layer attack surface |

More than 3,200 developers were infected via malicious npm packages specifically targeting Cursor users in the weeks before the Drift attack. The IDE had become a target. The Drift developers were running it. And the repository they opened was waiting for exactly that combination.


The Compromise: Two Keys Were Enough

Drift Protocol’s treasury was protected by a 2-of-5 multisig. Five keyholders. Any two needed to sign a withdrawal. In principle, this means an attacker needs to compromise two independent machines operated by two independent people — a significant barrier.

The attackers compromised two.

One developer opened the malicious repository. The other installed the fake TestFlight application. Both machines were now under attacker control. The attackers had access to private keys, session tokens, and everything else stored on those machines.

But they did not move immediately.


The Pre-Signed Trap: Durable Nonces and the 12-Minute Window

Solana’s durable nonce mechanism allows transactions to be signed in advance and executed later. Unlike standard Solana transactions that expire quickly, durable nonce transactions remain valid indefinitely — they are “pre-signed” and can sit idle until the moment they are needed.

The attackers used this to prepare the entire operation before executing a single withdrawal. With access to two compromised signers, they pre-signed 31 withdrawal transactions across the previous weeks. When the execution window came, there was no interactive authorization required. No second chance. No human in the loop.

The 12 minutes were not a race. They were a scheduled delivery.
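The difference between the two validity models can be sketched with a toy Python model. This is an illustration of the concept, not the Solana runtime; the 150-slot window approximates real blockhash expiry:

```python
# Simplified model of Solana transaction validity (illustration only):
# a normal transaction references a recent blockhash and expires after
# roughly 150 slots; a durable nonce transaction references a stored
# nonce value and stays valid until that nonce is advanced on-chain.

RECENT_BLOCKHASH_WINDOW = 150  # slots a blockhash stays valid (approx.)

def is_valid(tx: dict, current_slot: int, nonce_account_value: str) -> bool:
    if tx["kind"] == "recent_blockhash":
        return current_slot - tx["signed_at_slot"] <= RECENT_BLOCKHASH_WINDOW
    if tx["kind"] == "durable_nonce":
        # Valid whenever the nonce it was signed against is still current.
        return tx["nonce"] == nonce_account_value
    return False

normal = {"kind": "recent_blockhash", "signed_at_slot": 1_000}
durable = {"kind": "durable_nonce", "nonce": "example-nonce"}
```

Weeks later, hundreds of thousands of slots on, the normal transaction is dead, but the pre-signed durable nonce transaction still executes the moment it is submitted.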

There was one more layer. In the months prior, the governance configuration of Drift Protocol had been quietly weakened — the multisig threshold reduced to 2-of-5, and the timelock (a mandatory delay between proposal and execution) reduced to zero. Whether this happened through the attacker’s social influence during their six-month access period, or through separate legitimate governance decisions, remains under investigation. The result was the same: when the 31 transactions fired, there was no delay, no circuit breaker, no window to intervene.


The Aftermath: $285M and a Circle Controversy

The 31 transactions drained approximately $285 million. The funds moved quickly across chains — Solana to Ethereum bridges, token swaps, the standard mixer playbook that Lazarus Group has refined across a decade of DeFi heists.

The secondary controversy involves $232 million in USDC that was bridged to Ethereum. Circle, the USDC issuer, has the technical ability to freeze specific addresses — a capability it has exercised in previous high-profile thefts. Whether Circle froze those funds, how quickly, and what proportion was recovered remains a contested point in the post-incident reporting. The broader question — whether centralized freeze capabilities in “decentralized” finance undermine the threat model — is a longer conversation this post will not attempt to resolve.

What is clear: once the transactions were submitted, the defense window had already closed. The meaningful interventions all had to happen earlier.


The Attack Timeline

| Phase | Timeframe | What Happened |
|---|---|---|
| Account establishment | Oct 2025 | Fake quant firm deposits $1M+, builds trust |
| Relationship cultivation | Oct 2025 – Mar 2026 | Regular client interactions, feature discussions |
| Payload delivery | Mar 2026 | Malicious repo + TestFlight app sent to developers |
| Developer machine compromise | Mar 2026 | folderOpen task executes; TestFlight app installs |
| Credential harvesting | Mar – Apr 2026 | Keys, tokens, and session data exfiltrated |
| Transaction pre-signing | Ongoing | 31 durable nonce transactions prepared silently |
| Governance weakening | Prior to attack | Threshold lowered to 2-of-5, timelock zeroed |
| Execution | April 1, 2026 (12 min) | 31 transactions submitted and confirmed |
| Device wipe | April 1, 2026 | Both compromised machines wiped remotely |
| Cross-chain movement | April 1–2, 2026 | Funds bridged, swapped, dispersed |

What Would Have Changed: A Product-by-Product Analysis

This section is where we are direct about what PhantomCorgi builds, why we build it, and how it maps to what just happened. We do not think any single tool would have stopped a nation-state actor with six months of patient access. But defense-in-depth is not a myth — it is how you raise the cost of an attack until even Lazarus Group looks for an easier target.

Stage 1: The Malicious Repository — Code Corgi

The tasks.json autorun payload is not novel. It is a known supply chain attack vector. What makes it dangerous is that most development teams have no tooling scanning the configuration files and workspace settings of external repositories before a developer opens them.

Code Corgi performs static analysis on repository contents during PR review and pre-merge gates, with specific detection rules for:

  • .vscode/tasks.json, .vscode/launch.json, and similar workspace configuration files with autorun behaviors
  • runOn: folderOpen and equivalent silent-execution triggers across supported IDEs
  • Unicode homoglyph substitution and obfuscated payloads in shell commands
  • Semantic AST analysis detecting suspicious execution patterns in configuration rather than source files

If the Drift developers had had Code Corgi scanning incoming repositories, the weaponized tasks.json would have been flagged before anyone opened the folder. The flag would not have been ambiguous. runOn: folderOpen with an encoded shell command is not a configuration any legitimate project needs.
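The homoglyph rule in particular is cheap to approximate. A minimal heuristic, sketched below in Python (our sketch, not Code Corgi's actual ruleset), just walks a command string for non-ASCII lookalike characters:

```python
import unicodedata

def suspicious_command(cmd: str) -> list[str]:
    """Flag non-ASCII lookalike characters in a shell command string.
    Heuristic sketch only; real detection would also normalize and
    compare against known confusable tables."""
    findings = []
    for ch in cmd:
        if ord(ch) > 127:
            name = unicodedata.name(ch, "UNKNOWN CHARACTER")
            findings.append(f"non-ASCII char U+{ord(ch):04X} ({name})")
    return findings
```

A command like "еcho" with a Cyrillic "е" looks identical in a diff view but trips the check immediately; legitimate non-ASCII (say, a UTF-8 path) would need an allowlist on top of this.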

Stage 2: The Protocol-Level Anomalies — API Phantom

The attack required several actions at the protocol layer that were anomalous in isolation, let alone in combination: a novel token (CVT) listed as collateral with manufactured price history and no liquidity depth, governance parameter changes reducing threshold and zeroing the timelock, and eventually 31 rapid-fire withdrawals in under a quarter-hour.

API Phantom’s red-team agent is designed to probe exactly these patterns before attackers find them:

  • Oracle manipulation and collateral listing anomaly detection
  • Governance parameter change surveillance with alert thresholds
  • Rapid sequential transaction pattern flagging
  • Durable nonce abuse simulation as a standard red-team test case

The governance weakening is the hardest to catch — changes made through legitimate channels by what appear to be legitimate parties are the definition of an insider threat. But the CVT listing anomaly and the 31-transaction burst are the kind of behavioral signatures that API Phantom’s continuous monitoring would have surfaced. You cannot stop a transaction that is already signed. You can notice that the signing conditions were created by an unusual set of events.
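The burst rule is the simplest of those signatures to sketch. A sliding-window counter like the one below (illustrative thresholds, not API Phantom's tuned defaults) flags the 31-in-12-minutes pattern on the eleventh withdrawal:

```python
from collections import deque

class BurstDetector:
    """Flag when withdrawals within a sliding time window exceed a
    threshold. Sketch of a rapid-sequential-transaction rule; the
    defaults here are assumptions for illustration."""

    def __init__(self, max_tx: int = 10, window_secs: float = 720.0):
        self.max_tx = max_tx          # alert above this many in window
        self.window = window_secs     # 720 s = 12 minutes
        self.times: deque = deque()

    def record(self, ts: float) -> bool:
        """Record a withdrawal at time ts; return True if the burst
        threshold is breached."""
        self.times.append(ts)
        while self.times and ts - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) > self.max_tx
```

Feeding it 31 withdrawals spaced 20 seconds apart, the detector stays quiet for ten events and fires on every one after that; the alert lands with most of the drain still unexecuted.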

Stage 3: The Social Engineering Channel — Calendar Sentry

The malicious repository and TestFlight application were delivered through communication channels that the targeted developers trusted. In AI-assisted developer workflows — where an AI agent might automatically process incoming links, preview repository contents, or summarize attached documents — a malicious payload in a trusted message can execute before a human ever reviews it.

Calendar Sentry provides input sanitization for developer communication workflows, with prompt injection detection tuned specifically for the pattern of “trusted sender, malicious payload.” When attackers send project links through established channels, Calendar Sentry’s sanitization layer strips executable payloads and flags encoded commands before they reach an AI agent or an IDE’s auto-open behavior.

This does not stop a developer who chooses to open a folder manually. It reduces the blast radius of AI-assisted workflows where the human is one step further removed from the open action.
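One piece of that sanitization layer can be sketched in a few lines of Python. This is a heuristic illustration with made-up patterns, not Calendar Sentry's actual filter:

```python
import re

ENCODED_BLOB = re.compile(r"[A-Za-z0-9+/]{60,}={0,2}")  # long base64-like run
DATA_URI = re.compile(r"data:[^,]+,[^\s]+")             # inline data: payload

def sanitize_message(text: str) -> tuple[str, list[str]]:
    """Strip encoded payloads from an inbound message before it reaches
    an AI agent or an auto-open workflow. Returns the cleaned text and
    a list of what was removed."""
    flags = []
    if DATA_URI.search(text):
        flags.append("inline data: URI removed")
        text = DATA_URI.sub("[removed data URI]", text)
    if ENCODED_BLOB.search(text):
        flags.append("base64-like blob removed")
        text = ENCODED_BLOB.sub("[removed encoded blob]", text)
    return text, flags
```

The point is placement, not sophistication: the filter sits between the trusted channel and the automated consumer, so "trusted sender" stops implying "trusted payload".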

Stage 4: Credential Persistence — SURADAR Auth Engine

This is the layer where the math changes most dramatically.

The Drift attack succeeded because two developers’ machines contained persistent credentials: private keys, session tokens, or signing material that the attacker could harvest and hold indefinitely to pre-sign the 31 transactions. The credentials did not expire. They did not self-destruct when the device was later wiped. They were already out.

SURADAR — PhantomCorgi’s per-request authentication engine — is designed around the assumption that machines will be compromised. Its threat model starts with “an attacker has your device” and asks what they can do from there.

The answer, with SURADAR, is: not much.

  • Time-banded tokens with 30-second validity windows: A harvested token is useless after 30 seconds. There is no window to pre-sign transactions with stolen material and hold them for later.
  • Per-request HMAC chains: Each authentication event produces a unique token derived from the previous request’s state. Replaying a captured token against a different request context fails cryptographic verification.
  • Immediate zeroing on logout: Session material is zeroed in memory on session end. A post-logout forensic image of the device yields nothing reusable.
  • Bloom filter replay prevention: Every authentication event is recorded. Pre-signed transactions that attempt to reuse authentication context are rejected at submission.

The durable nonce attack required persistent signing authority on the compromised machines. SURADAR’s model eliminates persistent signing authority as a concept. You cannot pre-sign 31 future transactions with material that expires in 30 seconds.
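The pattern is straightforward to illustrate. The sketch below is our simplified model; the TTL handling, chaining details, and key choices are assumptions for illustration, not SURADAR's implementation. It shows why a harvested token cannot be held for later:

```python
import hashlib
import hmac

class ChainedAuth:
    """Per-request HMAC chaining with a short validity window.
    Illustrative sketch only."""

    TTL = 30.0  # seconds a token stays valid

    def __init__(self, secret: bytes):
        self.secret = secret
        self.state = b"genesis"  # evolves with every verified request

    def issue(self, request_id: bytes, now: float) -> tuple[bytes, float]:
        token = hmac.new(self.secret, self.state + request_id,
                         hashlib.sha256).digest()
        return token, now

    def verify(self, request_id: bytes, token: bytes,
               issued_at: float, now: float) -> bool:
        if now - issued_at > self.TTL:
            return False  # expired: holding material for later is useless
        expected = hmac.new(self.secret, self.state + request_id,
                            hashlib.sha256).digest()
        if not hmac.compare_digest(token, expected):
            return False
        self.state = expected  # chain forward; this token is now spent
        return True
```

A stolen token fails twice over: after 30 seconds it is expired, and after a single use the chain state has moved past it, so a replay fails cryptographic verification even inside the window.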


The Uncomfortable Truth

The Drift hack will generate recommendations that are both correct and insufficient. Increase the multisig threshold to 3-of-5. Restore the timelock to 48 hours minimum. Audit all external repositories before opening. Rotate keys after any suspected device compromise.

These are all right. They are also all reactive — they fix the configuration that was exploited, not the class of vulnerability that made the exploitation possible.

The class of vulnerability is this: developers are trusted humans, trusted humans make decisions under social pressure and time constraints, and those decisions become the attack surface.

No governance parameter change makes a developer immune to a six-month relationship built by a state actor with operational budget in the hundreds of millions. No multisig threshold prevents credential theft from a machine that was compromised weeks ago. The threshold is a speed bump. The credential persistence is the door.

The security tools that matter are the ones that reduce the consequence of a human making exactly the mistake a human would make — opening a folder from a trusted contact, installing an app sent by a long-term client, accepting a governance change that seems reasonable in isolation.

That is the problem we are building for.


A Note on Sources

This analysis draws from Drift Protocol’s official incident disclosure, on-chain transaction data from Solana explorers, Mandiant/Google Cloud threat intelligence on UNC4736, the April 2026 Cursor CVE disclosures from the HackerOne program, and independent security research on durable nonce abuse patterns. The $285M figure reflects total protocol TVL drained across the 31 transactions; final recovery figures via Circle USDC freeze are pending official confirmation. We distinguish throughout between verified technical facts and our analytical interpretation of attacker intent.


Talk to Us

If you are building on-chain infrastructure, managing developer teams with access to signing keys, or running DeFi protocols that have not stress-tested their governance attack surface — we want to talk.

Code Corgi, API Phantom, Calendar Sentry, and the SURADAR auth engine are all available for private beta access. Each product can be evaluated independently; they are also designed to work as a layered stack.

Request a demo at phantomcorgi.com or reach out directly through our security contact page.

The next Lazarus Group campaign is already six months in. Somewhere, a developer is about to open a folder.