Cardano’s 14-Hour Stress Test: How the Network Took a Hit and Healed Itself

Written by sundaeswap | Published 2025/12/02

TL;DR: Cardano suffered a 14-hour, self-repairing chain fork on November 21st, 2025. This is the largest degradation of service for Cardano in its 8 years of operation. A serialization bug caused a unidirectional soft-fork.

On November 21st, 2025, Cardano suffered a 14-hour, self-repairing chain fork. This is the largest degradation of service for Cardano in its 8 years of operation, and as a key developer within the Cardano ecosystem, I felt it was a good opportunity to reflect on what went well and what we can learn to improve Cardano's robustness even further. Whether you're a maximalist or a hater, I think there's something to be learned from the objective facts.

I've chosen to build a career and a company on Cardano. When something like this happens, I don't have the luxury of beating my chest on Twitter or engaging in collective dunking. I need to engage in serious soul searching to determine if my bet is still sound.

The answer I came to was yes, absolutely, with some homework.

**What happened**

A serialization bug caused a unidirectional soft-fork: one portion of the nodes rejected a transaction that the rest didn't. This was initially triggered on testnet, likely by accident, and a fix was identified and released quickly. Unfortunately, someone with deep familiarity with Cardano was able to reverse engineer how the transaction was constructed and submitted it to mainnet. (You may see claims this was "vibe-coded"; that appears to refer to using AI to set firewall rules in an attempt to quarantine the transaction, not the attack itself.)

This happened before the fix had achieved widespread adoption, so a majority of nodes (those on versions with the bug) accepted the transaction, while key infrastructure like wallets, chain explorers, and exchanges rejected it.

As node operators upgraded to the fixed version, the chain that rejected the transaction began to grow more quickly than the one that had accepted it, and ultimately overtook it, leading to a reorg that repaired the chain.
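To make the mechanics concrete, here is a heavily simplified Haskell sketch of the two ideas above: nodes on different versions disagreeing about whether a transaction is acceptable, and a longest-chain-style preference that lets the fixed chain eventually overtake the faulty one. The names, the size limit, and the fork-choice rule are illustrative assumptions, not the real node's logic.

```haskell
-- A minimal, illustrative sketch (not the cardano-node code) of a
-- unidirectional soft-fork and the self-healing reorg that follows.
import Data.List (maximumBy)
import Data.Ord (comparing)

data NodeVersion = Buggy | Fixed deriving (Eq, Show)

-- Hypothetical stand-in for the real deserializer: the buggy version
-- accepts a length field that the fixed version rejects.
acceptsTx :: NodeVersion -> Int -> Bool
acceptsTx Buggy _declaredLen = True
acceptsTx Fixed declaredLen  = declaredLen <= 65535

-- Fork choice simplified to "prefer the longer chain"; Ouroboros adds
-- further rules, but the healing follows the same shape: once most block
-- producers reject the bad transaction, their chain outgrows the faulty
-- one and triggers a reorg on the nodes that had accepted it.
preferChain :: [[block]] -> [block]
preferChain = maximumBy (comparing length)

main :: IO ()
main = do
  let oversized = 1000000
  print (acceptsTx Buggy oversized, acceptsTx Fixed oversized)     -- (True,False)
  print (length (preferChain [["a","b","c"], ["a","b","c","d"]]))  -- 4
```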

As a small point of pride, the diagnostic tools built quickly to triage the issue used code from Amaru, an alternative node implementation written in Rust that the Sundae Labs team contributes to. This was a good validation of our plan to bring implementation diversity to Cardano.
**Real Impact**

In practice, the impact of this chain fork was severe, though not as severe as you might have assumed. The chain continued to produce blocks, and a majority of transactions made it into the surviving fork, though delayed. The monitoring infrastructure run by the Cardano Foundation detected a spike in transaction delays of up to 5 minutes, but other users may have seen delays as long as 16-30 minutes, the longest gap between blocks. Some subset of users may also have been unable to submit transactions entirely, though this was due to faulty third-party infrastructure that was unable to follow either fork.

A small percentage of transactions (3.3%, or 479 out of 14,401) made it into the faulty chain and did not make it into the surviving chain. These transactions are still being analyzed, but they might represent missed economic opportunities or risks of double spends.

**How I think about Blockchain Outages**

I've developed a personal taxonomy for categorizing large outages, from most serious to least:

1. Sovereignty violations, where the core promises and integrity (such as cryptographic signatures) of a blockchain get violated
2. Ledger bugs, where the economic principles (such as monetary policy) of a blockchain are broken
3. Unrecoverable consensus violation, where a network permanently forks
4. Recoverable consensus violation, where a network has a long lived fork but recovers
5. Severe smart contract exploit, where user funds are lost due to a bug in the contract
6. Full consensus halt, where the chain must be stopped and restarted, coordinated through a central authority
7. Degradation of service, where transactions are delayed or the wrong information is displayed to users

The incident Cardano faced qualifies as a 4: serious, but recoverable. In my full blog post, I give examples of each.

**What went well**

This incident put Cardano's Ouroboros consensus through its paces: long forks like this are supposed to be exceedingly rare black swan events, but the design of the consensus protocol and networking stack anticipates and accounts for them. For example, the ability to self-heal is built into the protocol, and the way time is handled, via a self-regulating Lamport clock, gave the stake pool operators time to upgrade their nodes.

Additionally, the reporting and communication infrastructure maintained by the founding entities really shone, as we were able to quickly get eyes on the problem and communicate it widely.

Finally, it was great validation for Cardano's choice of language. The particular error was related to some faulty bounds checking on a buffer of untrusted input. In languages like C, this type of bug (if not this one specifically) could very easily have led to a sovereignty violation through remote code execution or similar. Haskell's strong memory safety guarantees mean that kind of escalation is never on the table.
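For readers who want the contrast spelled out, here is a hedged sketch of that bug class; the decoder shape is an assumption for illustration, not the code that actually contained the bug. The point is that in Haskell a bad length on untrusted input surfaces as a rejected parse rather than an out-of-bounds memory access.

```haskell
-- Illustrative only: a length prefix read from untrusted bytes, then used
-- to slice the buffer. A wrong length becomes a recoverable failure, never
-- silent memory corruption as it could in C.
import qualified Data.ByteString as BS

-- Safe slice over untrusted input: refuse lengths the buffer cannot satisfy.
takeExact :: Int -> BS.ByteString -> Maybe BS.ByteString
takeExact n bs
  | n < 0 || n > BS.length bs = Nothing            -- explicit bounds check
  | otherwise                 = Just (BS.take n bs)

main :: IO ()
main = do
  let payload = BS.pack [1, 2, 3]
  print (takeExact 2 payload)       -- Just "\SOH\STX"
  print (takeExact 100000 payload)  -- Nothing: rejected, not a buffer over-read
```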

**What broke down**

It became clear from the incident that we need better infrastructure around some wallets, dApps, and chain explorers. Many were unable to follow either fork and introduced extra delays in user transactions. In some cases this may have been a safety consideration, but in others it was just a lack of defensive programming that anticipated this scenario.
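As a sketch of the defensive pattern I have in mind (the event type below is illustrative, not the real chain-sync API), a follower should treat roll-backs as a normal event and truncate its local view rather than stalling:

```haskell
-- A chain follower that expects roll-backs, which any long fork will
-- eventually produce once the losing branch is abandoned.
data ChainEvent block
  = RollForward block   -- a new block extends the view we are tracking
  | RollBackward Int    -- the chain reorganised; keep only the first n blocks

applyEvent :: [block] -> ChainEvent block -> [block]
applyEvent view (RollForward b)  = view ++ [b]
applyEvent view (RollBackward n) = take n view   -- drop orphaned blocks

main :: IO ()
main = do
  let events = [ RollForward "a", RollForward "b", RollForward "c"
               , RollBackward 1, RollForward "b'" ]
  print (foldl applyEvent [] events)  -- ["a","b'"]
```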

Similarly, especially as Cardano enters an era of client diversity, it's clear we need to improve our already rigorous testing criteria. A single bug might lead us all to a bit of survivorship bias, as the level of testing across the current node implementation is phenomenal, but that same rigor needs to be improved and standardized across all implementations of the node.
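One shape this could take, sketched below with hypothetical decoders rather than the real implementations, is a conformance check that feeds identical encoded inputs to two independent node implementations and flags any input on which they disagree, since those are exactly the inputs that can split the network:

```haskell
-- Cross-implementation conformance sketch: both decoders are stand-ins.
import qualified Data.ByteString as BS

type Decoder = BS.ByteString -> Either String ()

decoderA, decoderB :: Decoder
decoderA bs = if BS.length bs <= 16384 then Right () else Left "too long"
decoderB _  = Right ()   -- a deliberately divergent stand-in

-- Inputs where one decoder accepts and the other rejects are the ones
-- that could split the network into two chains.
disagreements :: [BS.ByteString] -> [BS.ByteString]
disagreements = filter (\bs -> accepts decoderA bs /= accepts decoderB bs)
  where accepts d = either (const False) (const True) . d

main :: IO ()
main = print (map BS.length (disagreements [BS.replicate n 0 | n <- [0, 8192 .. 32768]]))
-- prints [24576,32768]: the inputs the two decoders judge differently
```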

**Conclusion**

Blockchains are not immune to the same kinds of bugs that are rampant across most software. It's usually safe to assume that all software is one network packet away from catastrophic meltdown, if only you can find the right incantation.

Luckily, most (but not all) of these are found by conscientious security researchers and fixed before they can cause widespread impact.

This incident was an exception and highlighted areas where Cardano can improve while also demonstrating its strengths.

By Pi Lanningham, Chief Technology Officer at SundaeSwap Labs.

