In March 2026, Resolv Labs lost $25M. The protocol had undergone extensive audits from top-tier security firms. The code worked exactly as intended.
The problem was governance, not code. The SERVICE_ROLE — a critical privilege that could mint stablecoins — was held by a single EOA. When that key was compromised (likely through AWS credential theft), the attacker minted $80M worth of tokens. There was no on-chain cap, no multisig requirement, nothing to stop it. The audits never flagged this as the primary risk.
But Resolv is not an isolated case. I've seen this pattern repeatedly: companies ship with audits in place, hire security engineers, run bug bounties, and still never articulate which risks actually matter. They know they need security. They don't know what they're securing against, or in what order.
At OP Labs, we solved this with threat modeling. Not as an afterthought to audits, but as the foundation for all security decisions. It answered a simple question: **Where are our risks, and which ones matter most right now?** The answers weren't always "audit everything" or "use a multisig." They were often simpler, more specific, and tied directly to our product and our stage. They came with reasons. And they came with a prioritization that made the difference between security theater and actual protection.
There Is More to Security Than Audits
An audit answers a specific question: Does this code do what it claims to do? A good audit firm will read your contracts, run tests, look for reentrancy bugs, integer overflows, and logic errors. They will verify that the code matches the spec. Then they will write a report and move on.
This is valuable work. Audits catch code bugs. But code bugs are no longer where most of the losses happen. Resolv's code was doing exactly what it was supposed to do. The minting logic worked. The SERVICE_ROLE permission model worked. No auditor would flag either one, because neither was broken. The problem was that the role was held by a single key, stored in AWS, with no backup plan. That's not a code problem. That's a governance and operational security problem.
Here's the pattern: audits assume that deployment is correct, that governance operates as documented, that off-chain infrastructure is secure, that operators follow procedures, and that features don't interact unexpectedly. In practice, those assumptions fail repeatedly. When they do, no amount of code review would have caught it. A carefully audited smart contract can still lose money if the key that controls it is stored poorly. A bug-free minting mechanism can still mint unlimited tokens if there's no on-chain cap to stop it.
That's why companies that hire security engineers often discover, too late, that they still don't know where their real risks are. They have audits. They have security hires. But nobody has asked the harder question: How does this system actually fail?
Threat Modeling Includes the Whole Company
Security is layered. A complete security posture includes unit testing, fuzzing, formal verification, audits, bug bounties, and operational controls. Each layer catches different things. Code audits are one important layer.
But here's the problem: most companies don't know which layers matter most for them, right now. They know they need audits, so they get audited. They know they need bug bounties, so they launch one. They know they need security engineers, so they hire them. But they have no framework for deciding: Which of these actually protects us against the risks that will kill us? Which can we defer? Which should we do first?
Threat modeling answers this by assessing the entire company — code, infrastructure, processes, people — and identifying the paths that lead to catastrophic loss. Not just code bugs. A compromised key. A governance process that doesn't work in practice. A feature that interacts with another feature in an unexpected way. Infrastructure that gets breached. A signer who doesn't understand what they're approving.
Once you've mapped these paths and assigned likelihoods, you can prioritize. Maybe your top risk is that your multisig signers are signing blindly, so you implement transaction verification before they sign. Maybe it's that your infrastructure keys are stored poorly, so you move them to MPC. Maybe it's that you've never thought about what happens if a database gets corrupted, so you add monitoring. Or maybe you do need a full code audit first — but now you know why, and you know it's not the only security work that matters.
The result is a roadmap. Not "do all security measures," but "given what we are right now, here's what we do first, here's what we do next, and here's when we revisit this as we scale." This lets you apply security at the right pace: not too early, where expensive controls slow you down before you're big enough to need them. Not too late, where you get hacked because a critical control was missing.
This is what Resolv missed. They had audits. They had the code reviewed repeatedly. But nobody modeled what happens if the SERVICE_ROLE is compromised, so nobody decided that moving it from an EOA to a multisig was the top priority. They had all the layers. They just didn't know which one would have saved them.
How to Model Threats
This is the process we ran at OP Labs, and it's straightforward enough to apply to any crypto company. Start with outcomes: what's the worst thing that could happen? Stolen funds, unauthorized minting, frozen withdrawals. Write them down. Rank them by damage.
Then build a tree for each outcome. The root is the bad thing. The branches are all the ways it could happen. Some paths involve code bugs. Some involve compromised keys. Some involve governance processes that don't work as documented. Some involve infrastructure failures. Keep branching until you get to concrete failures: "the SERVICE_ROLE key is stolen" or "the multisig signer doesn't understand what they're signing."
Now assign a likelihood to each branch. Use four levels: purple (certain very soon), red (very likely within one year), yellow (might happen within two years), green (unlikely for several years). Combine likelihoods up the tree: if two things need to fail for a bad outcome (AND logic), the likelihood is the lower one. If either could cause it (OR logic), it's the higher one.
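The propagation rule above can be sketched in a few lines of Python. This is a hypothetical helper written for illustration, not a tool from OP Labs; the `Node` structure and level ordering are assumptions:

```python
from dataclasses import dataclass, field
from enum import IntEnum
from typing import List

class Likelihood(IntEnum):
    """Ordered so that a higher value means more likely."""
    GREEN = 0   # unlikely for several years
    YELLOW = 1  # might happen within two years
    RED = 2     # very likely within one year
    PURPLE = 3  # certain very soon

@dataclass
class Node:
    name: str
    gate: str = "OR"                           # "AND" or "OR" over children
    likelihood: Likelihood = Likelihood.GREEN  # used only for leaves
    children: List["Node"] = field(default_factory=list)

    def combined(self) -> Likelihood:
        if not self.children:
            return self.likelihood
        child_levels = [c.combined() for c in self.children]
        # AND: every branch must fail, so the path is only as likely
        # as its least likely branch. OR: any branch suffices, so
        # the path is as likely as its most likely branch.
        return min(child_levels) if self.gate == "AND" else max(child_levels)

# A tree shaped like the Resolv example from this article:
root = Node("Attacker mints unlimited USR", gate="OR", children=[
    Node("Audit fails to find a code bug", likelihood=Likelihood.GREEN),
    Node("SERVICE_ROLE is compromised", gate="OR", children=[
        Node("AWS credentials stolen", likelihood=Likelihood.RED),
    ]),
])
print(root.combined().name)  # RED
```

The root comes out red because OR takes the higher likelihood: the green audit branch doesn't help when the key-compromise branch is red.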
Finally, prioritize mitigations. Target the high-likelihood paths first. For each one, ask: What's the cheapest way to reduce the likelihood? Maybe it's adding monitoring. Maybe it's moving a key to a multisig. Maybe it's a code audit. Maybe it's training your signers. Some mitigations are expensive and slow you down — defer those until you're bigger and the risk justifies the cost.
The result is a roadmap. Do this first because it's high-likelihood and high-impact. Do this second because it's cheaper and unblocks the first. Defer this until you have more TVL because the cost doesn't match the risk yet. This becomes your security roadmap: a living document that evolves as your company grows and your threat landscape changes.
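As a toy illustration of the ordering step, here is how scored threats might be turned into a roadmap: highest likelihood first, and among equals, cheapest mitigation first. The threats, scores, and cost labels below are invented for the example:

```python
# Hypothetical threats: (description, likelihood 0=green..3=purple, mitigation cost)
threats = [
    ("Signers approve transactions blindly", 2, "low"),
    ("Infra key stored in plaintext on AWS", 2, "medium"),
    ("Database corruption goes unnoticed", 1, "low"),
    ("Subtle logic bug in new vault feature", 1, "high"),
]

COST_RANK = {"low": 0, "medium": 1, "high": 2}

# Sort by descending likelihood, then ascending mitigation cost.
roadmap = sorted(threats, key=lambda t: (-t[1], COST_RANK[t[2]]))

for name, likelihood, cost in roadmap:
    print(f"likelihood={likelihood} cost={cost:6} {name}")
```

Real prioritization involves more judgment than a sort key, but the shape is the same: the expensive, lower-likelihood items naturally fall to the bottom, which is exactly the "defer until you're bigger" bucket.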
What a Threat Model Would Have Shown Resolv
Let's trace through what a threat model of Resolv would have revealed. Start with the outcome the team wanted to prevent: An attacker mints unlimited unbacked USR tokens.
That outcome can happen if either of two things occur. First, if the code itself allows unlimited minting — but audits checked that, so it's not the problem. Second, if the SERVICE_ROLE (the permission that authorizes mints) is compromised and there's no on-chain bound to stop it.
Now trace the SERVICE_ROLE compromise. It's held by a single externally owned account, which means a single key is all that stands between the attacker and the vault. How likely is that key to be stolen? AWS credential compromise is a known threat — not theoretical, but common. The team could have assigned this a red likelihood: very likely within one year. A single point of failure, no redundancy, and a key stored in cloud infrastructure.
Finally, what if that key is stolen? The code will do whatever the SERVICE_ROLE asks it to do. There's no on-chain cap. There's no per-transaction limit. There's no circuit breaker. The code was audited and works perfectly. An attacker who compromises the key can mint $80M worth of tokens, and the contract will happily execute it.
A threat model would have made this visible in a simple tree:
```
Outcome: Attacker mints unlimited USR      [RED: very likely within 1 year]
│
├── Audit fails to find a code bug         [GREEN: unlikely]
│
└── SERVICE_ROLE is compromised            [RED: very likely within 1 year]
    │
    └── AWS credentials stolen             [RED: very likely within 1 year]
```
The mitigations are straightforward. Move SERVICE_ROLE from a single EOA to a 3-of-5 multisig — now an attacker needs three keys, not one. Add an on-chain minting cap per block — now even if all five keys are stolen, the damage is bounded. Add monitoring that alerts if minting approaches the cap — now the team knows before users do.
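To make the cap concrete, here is a minimal Python model of the bounded-minting idea. It sketches the logic only; a real protocol would enforce this in the contract itself, and the cap and alert threshold are invented numbers, not Resolv's interface:

```python
class CappedMinter:
    """Toy model of a per-block minting cap with an alert hook.

    MAX_MINT_PER_BLOCK and ALERT_THRESHOLD are illustrative values.
    """
    MAX_MINT_PER_BLOCK = 1_000_000   # tokens mintable per block, total
    ALERT_THRESHOLD = 0.8            # alert once 80% of the cap is used

    def __init__(self):
        self.minted_in_block = {}    # block number -> tokens minted so far

    def mint(self, block: int, amount: int) -> bool:
        used = self.minted_in_block.get(block, 0)
        if used + amount > self.MAX_MINT_PER_BLOCK:
            # Revert: even an attacker holding every key is bounded.
            return False
        self.minted_in_block[block] = used + amount
        if used + amount >= self.ALERT_THRESHOLD * self.MAX_MINT_PER_BLOCK:
            # In production this would page the team, not print.
            print(f"ALERT: block {block} minted {used + amount} tokens")
        return True
```

Even with every signer key stolen, an attacker draining through this path is limited to the per-block cap, which buys time for the alert to fire and the team to respond.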
These aren't insights that require a security researcher. They're obvious once you ask the question: What happens if the SERVICE_ROLE is compromised? Resolv's leadership was probably not aware of that risk. An audit firm wouldn't catch it because the code is fine. But a threat modeling exercise would have surfaced it immediately, and the team could have prioritized those three mitigations before launch.
The pattern extends beyond Resolv. Bybit lost $1.5B in February 2025 when attackers compromised Safe{Wallet}'s AWS infrastructure and injected malicious JavaScript into the UI, swapping transaction data at signing time. A threat model would have asked "what if the UI we sign through gets compromised?" and led to offline transaction verification: prepare transactions off-chain, keep their hashes in a secure location, and verify them against the hardware wallet screen before signing. GMX V1 lost $42M in July 2025 when a bugfix introduced a reentrancy vulnerability that was never re-audited; a threat model would have identified feature interaction and patch review as ongoing risks. WazirX lost $235M in July 2024 when signers of a Gnosis Safe multisig approved a transaction that appeared benign but actually contained a malicious contract upgrade; a threat model would have flagged governance blind spots and required strict transaction verification practices.
Start Early, Maintain Always
The best time to threat model is before launch. You have the most flexibility to change architecture. You haven't yet committed to a design that's hard to fix. You can ask: "If we structure it this way, what are the failure modes?" and then design around them from the start.
But threat modeling is not a one-time exercise. It's a living document. As your company grows, your threats change. As you add new features, new interactions emerge. As your TVL grows, old risks become critical. A threat model that made sense at $10M TVL may need revision at $100M TVL. The mitigations you deferred become urgent. New threats surface that didn't matter when you were small.
The maintenance is straightforward. Every time you make a significant change — a new feature, a new integration, a change in governance — revisit the threat model. Does this change affect any of the trees? Does it introduce new paths to bad outcomes? Does it change the likelihood of existing threats? Update the model, reassess your mitigations, adjust the roadmap.
This is what separates companies that get hacked from companies that don't. The ones that get hacked don't threat model at all, or they threat model once and then forget about it. The ones that survive threat model, implement mitigations, and then threat model again when circumstances change. The roadmap evolves. The risks stay visible. The team always knows where the gaps are.
Conclusion
Most crypto companies do audits. Many do bug bounties. Some hire security engineers. Almost none of them know where their actual risks are, or which security work matters most. They spray security measures across the company without understanding which ones will save them if things go wrong.
Threat modeling answers the question you should have asked before launch: Do you know where your risks are? It identifies the paths from normal operation to catastrophic loss. It prioritizes which risks matter most, right now, given your product and your stage. It produces a roadmap that evolves with your company so security scales with your growth instead of either choking you or leaving you exposed.
The hacks of 2024 and 2025 — Resolv, Bybit, GMX, WazirX — were not primarily code bugs. They were failures of threat modeling. Companies with extensive audits, security engineers, and governance structures still lost hundreds of millions because nobody had systematically asked: How does this fail? and What are we doing about it?
You can start tomorrow. It doesn't require hiring. It requires time, honesty, and someone who understands your product well enough to trace the paths that lead to loss. The first threat model takes a few weeks. Maintaining it takes hours per month. The cost is trivial relative to the risk.
Know where your risks are. Protect yourself.
