Exploiting a Solidity Optimizer Bug That Removes Storage Writes
The green checkmark on the Contract tab on Etherscan is a sign of quality for most web3 users and cryptocurrency investigators. Most bad actors would not verify their source code for the world to see: backdoors, drains and other malicious functions are far easier to spot in Solidity source than in deployed bytecode. But what if an attacker could provide "clean" source code that only acts maliciously when deployed on-chain, and get that source code verified?
Let's break that down:
What Etherscan actually verifies (and signals with the green checkmark) is reproducibility. The explorer takes the source code the user submits, compiles it with the compiler settings the user hands over, and checks whether the bytecode produced by that specific compiler matches the bytecode of the on-chain contract. If the two match, the green checkmark is applied.
This only confirms that the contract is reproducible. It does not confirm that the compiler output is a correct representation of the Solidity code. The verification process validates the pipeline, not the semantics of the build.
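The comparison behind the checkmark can be pictured in a few lines. This is a simplified model of my own, not Etherscan's actual implementation: real verification also has to deal with the metadata hash appended to the bytecode, constructor arguments and immutable references.

```python
def _norm(h: str) -> str:
    return h.lower().removeprefix("0x")

def is_verified(compiled_runtime_hex: str, onchain_runtime_hex: str) -> bool:
    """Simplified Etherscan-style check: the locally compiled runtime bytecode
    must match the on-chain runtime bytecode byte for byte. Note that this
    says nothing about whether the bytecode faithfully implements the source."""
    return _norm(compiled_runtime_hex) == _norm(onchain_runtime_hex)

# Matching bytecode -> green checkmark, regardless of what the bytecode does.
assert is_verified("0x6001600101", "0x6001600101")
assert not is_verified("0x6001600101", "0x6001600201")
```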
In most cases, this is only a semantic subtlety. The compiler usually just works and translates the Solidity code to bytecode and the green checkmark effectively has the meaning most users apply to it.
That is, if it weren't for compiler bugs. During my review of known Solidity compiler bugs, I found a report from 2022 that describes exactly this kind of misalignment between Solidity and bytecode. I researched the bug, built a proof-of-concept contract around it, deployed it on the Sepolia testnet and verified it on Etherscan. This post explores that gap.
I got a green checkmark on a contract where:
- The source code shows a storage write
- The compiled bytecode on-chain does not contain that write
Everything I describe here can be verified on-chain.
The bug
In September 2022, the Solidity team disclosed SOL-2022-7, a bug discovered through differential fuzzing.
This bug affects solc (Solidity compiler) versions 0.8.13 through 0.8.16, but only when both of the following are enabled:
- The optimizer
- The viaIR compilation pipeline (the Solidity code is compiled through the Yul intermediate representation)
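These preconditions can be encoded as a quick triage helper. This is a sketch with simplified version parsing; pre-release suffixes such as nightly builds are not handled:

```python
# SOL-2022-7 preconditions: solc in [0.8.13, 0.8.17) with optimizer AND viaIR.
AFFECTED_FROM = (0, 8, 13)
FIXED_IN = (0, 8, 17)

def is_affected(version: str, optimizer: bool, via_ir: bool) -> bool:
    """True if the given compiler configuration is exposed to SOL-2022-7."""
    parts = tuple(int(p) for p in version.split("."))
    return optimizer and via_ir and AFFECTED_FROM <= parts < FIXED_IN

assert is_affected("0.8.15", optimizer=True, via_ir=True)       # the PoC settings
assert not is_affected("0.8.15", optimizer=True, via_ir=False)  # legacy pipeline is safe
assert not is_affected("0.8.17", optimizer=True, via_ir=True)   # fixed release
```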
The bug was fixed in solc version 0.8.17. In the affected versions, it occurs in the optimizer pass called the "Redundant Store Eliminator", which removes storage writes that appear unnecessary.
One example: if a value is written to a slot and then immediately overwritten, the first write has no observable effect and can be removed safely. The point of the optimizer is to reduce gas costs. Storage writes are among the most expensive operations on the EVM, so removing redundant writes can save a significant amount of gas on every function call.
That is why the Redundant Store Eliminator exists. The problem is that sometimes it judges incorrectly.
An example: between two storage writes, there might be inline assembly, and that assembly might terminate execution under certain conditions, using return(0, 0) or stop(). The optimizer only sees the two storage writes, concludes the first one is redundant, and removes it. But if the deployed contract hits the terminating path on-chain, the second write is never reached. Since the first write has been eliminated by the optimizer, no write is executed at all in this case.
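To make the misjudgment concrete, here is a toy model of the pass. This is my own illustrative simplification, not the real Yul optimizer: operations are reduced to ("sstore", slot) and ("terminate",), and the buggy variant fails to treat a possible termination as a barrier between the two writes.

```python
def eliminate_redundant_stores(ops, buggy=False):
    """Toy redundant-store eliminator. ops is a list of ("sstore", slot) or
    ("terminate",). Returns the ops that are kept. A correct pass may only
    drop an earlier sstore if the later sstore to the same slot is reached on
    every path; the buggy variant ignores that "terminate" ends execution."""
    kept = []
    pending = {}  # slot -> index into kept of a droppable earlier sstore
    for op in ops:
        if op[0] == "sstore":
            if op[1] in pending:
                kept[pending[op[1]]] = None  # earlier write shadowed: drop it
            kept.append(op)
            pending[op[1]] = len(kept) - 1
        else:  # "terminate": control flow may end here
            if not buggy:
                pending.clear()  # correct pass: earlier writes must survive
            kept.append(op)
    return [op for op in kept if op is not None]

ops = [("sstore", 0), ("terminate",), ("sstore", 0)]
assert eliminate_redundant_stores(ops) == ops                    # correct: both writes kept
assert eliminate_redundant_stores(ops, buggy=True) == [("terminate",), ("sstore", 0)]
```

In the buggy variant the first write is gone, and when the terminate path fires on-chain, the slot is never written at all.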
A minimal example looks like this:
function set(uint256 newVal) external {
    value = newVal; // Write 1: removed by optimizer
    assembly {
        function validate() {
            if gt(calldatasize(), 68) { leave }
            return(0, 0) // always fires (calldatasize = 36)
        }
        validate()
    }
    value = newVal; // Write 2: never reached
}
When this example is compiled with solc 0.8.15 using the affected settings, the optimizer produces bytecode that works like this:
usr$validate()              // terminates the transaction
sstore(_1, calldataload(4)) // never reached
The function call succeeds on-chain, but the storage write is never applied.
I verified this by compiling such a contract myself and inspecting the generated Yul IR: the first write really was missing from the optimized output. More on that specific contract in the next section.
Looking for a vector
What I had found so far was an interesting artifact. But given my background, I thought it was worth investigating in more depth, to see whether the bug opened up attack vectors against Etherscan verification and against smart contracts affected by it.
The problem an attacker faces is that the bug is heavily constrained.
I tried to build several contracts intended to use it to conceal malicious code, and most attempts failed.
The reason is that the optimizer is quite conservative. It only removes the first write if no external call occurs between the two writes, because an external call might read the slot, in which case the first write would not be redundant.
The bug also does not affect mappings. The optimizer is unable to determine whether two keccak256-derived slot expressions refer to the same storage location. And placing logic after the termination makes no sense either, because that code is never reached.
Many obvious constructions are ruled out by these constraints: reentrancy attacks and token-transfer redirects are blocked by the external-call restriction, balance manipulations are blocked by the mapping restriction, and post-assembly logic was already covered. I tested the common attack vectors, though I cannot rule out that a smart or creative attacker might find a way around these constraints.
However, one attack vector remained: governance parameters stored as plain uint256 variables, whose setter function is visible in the Solidity source but compiles to a no-op because the optimizer removes the write.
Examples include:
- fee caps
- mint limits
- timelock delays
These functions often follow a simple pattern: validate input, write to storage, emit an event.
To check if the bug really applied and to demonstrate it working, I created a small but somewhat realistic staking vault contract with deposits, withdrawals, staking rewards and governance-controlled fee settings.
This proof of concept vault contains two variables:
- maxFee – the safety cap
- withdrawalFee – the current withdrawal fee
The function setWithdrawalFee() behaves normally and checks:
require(newFee <= maxFee)
The function setMaxFee() contains the assembly pattern that triggers the compiler bug.
The result is subtle but significant: setMaxFee() compiles to a no-op. The transaction succeeds on-chain, but maxFee never changes. It also emits no event, which is one thing an experienced Ethereum user might notice.
Since maxFee remains stuck at its initial value of 10000 (100%), the owner can set the withdrawal fee to any value, regardless of the intended cap.
The reasoning a bad actor could offer for this setup is simple: some kind of initial campaign that comes with a lockup phase and 100% fees. After the lockup, the owner sets maxFee to something like 3%, which never actually gets applied, and lowers the real fee to 3%, which works. Once more users have joined the vault, the admin calls the function to set withdrawalFee back to 100%, and all the money in the pool is lost. This is admittedly a fabricated scenario, but as a proof of concept it works. In a real-world setting, the maxFee setter would typically require that the new value be lower than the current one, making the cap irreversible; this proof of concept omits that constraint for simplicity.
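The payout arithmetic behind that scenario is plain basis-points math. The function below is an assumption mirroring common vault implementations; the PoC's exact formula may differ in details:

```python
BPS = 10_000  # fees expressed in basis points; 10000 = 100%

def withdrawal_payout(amount_wei: int, withdrawal_fee_bps: int) -> tuple[int, int]:
    """Split a withdrawal into (fee kept by the owner, payout sent to the user)."""
    fee = amount_wei * withdrawal_fee_bps // BPS
    return fee, amount_wei - fee

# With maxFee stuck at 10000, nothing stops the owner from taking everything:
fee, payout = withdrawal_payout(10**16, 10_000)  # 0.01 ETH withdrawal at 100% fee
assert (fee, payout) == (10**16, 0)              # the user receives nothing
```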
The Solidity disclosure also notes that the terminating assembly does not need to appear directly in the affected function and that it can be in nested internal calls, which would make the pattern considerably harder to notice during review.
For clarity, the proof-of-concept contract keeps the assembly visible, but I also tested a nested variant, which worked the same way. I did not deploy that one to the testnet.
On-chain proof
To check if a contract constructed with this particular bug not only works in theory but also in practice, I had to actually deploy it on-chain. For obvious reasons, I could only do that on the Sepolia testnet, not on mainnet. Therefore, there were never real funds in danger of getting locked up.
The contract I constructed is deployed and verified on Sepolia:
0x5d320F229d878fFb8c6b7a171c785016ECb81287
(solc 0.8.15, optimizer enabled, 200 runs, viaIR: true)
For testing and a bit of realism, two addresses were used, one for the owner and one as stand-in for a user:
Owner:
0x6D425dB7729510D0EfC0035EAF81739e4a881579
User:
0x0AF62cec112E9990FADFB71D9cFd2Bc0e193020d
The contract worked, got verified, and I was able to trigger the bug on-chain. I performed the following transactions:
First, the Owner calls setMaxFee(300). The transaction succeeds, but the MaxFeeUpdated event is never emitted and maxFee() still returns 10000. The function effectively did nothing, yet everyone can see on-chain that the owner called setMaxFee(), which according to the Solidity source should have updated maxFee.
Next, the Owner calls setWithdrawalFee(300), and later setWithdrawalFee(10000). The second call should have reverted if the fee cap had actually been lowered to 3%. It succeeds because maxFee was never updated.
To show that this results in real value transfer, I executed a full deposit-withdraw sequence.
The Owner first seeded the vault with 0.005 ETH in rewards, and the User deposited 0.01 ETH. When the User withdrew, the entire amount was collected as fees by the Owner; the User received nothing. Staking rewards continued to function normally, which makes the vault appear operational at first glance.
With larger deposits and smaller reward seeds, the fee extraction approaches 100%.
The entire sequence is publicly visible on Sepolia.
What this demonstrates (and what it does not)
My intention with this setup was to show that the bug is still relevant on-chain: old compiler configurations can still be used to deploy contracts, and source-to-bytecode verification succeeds without any warnings.
I constructed a relatively simple contract and used one specific bug for this test. And as said before, there are limiting factors for the bug's real-world applicability.
A good auditor, and even an experienced blockchain user, would probably notice the assembly, and the missing event emission should raise alarm. My example is not meant to be a stealth exploit, although a more sophisticated attacker might find a stealthier vector.
The thing that I was trying to confirm with this setup was the following:
In this contract, a verified source file contains the line:
maxFee = newMax
Yet the deployed bytecode never performs that write; the function compiles to a no-op.
Etherscan source code verification still succeeds, because the source code really does produce exactly this bytecode with my compiler settings. That is the only thing Etherscan verifies. It does not check whether the bytecode contains all the functionality of the source code, or whether the bytecode's semantics are correct.
There is an easy fix available: the Solidity team already maintains machine-readable bug data in bugs.json and bugs_by_version.json in the solidity repository.
These files include severity levels, affected version ranges, and patterns that can help detect vulnerable source code.
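A tooling-side lookup against that data could look like the sketch below. The embedded sample is hand-written to mirror the shape of bugs_by_version.json (version mapped to a list of bug names); the field names and the exact bug identifier are assumptions that should be checked against the repository.

```python
import json

# Hand-written sample mimicking bugs_by_version.json; NOT the real file.
SAMPLE = json.loads("""
{
  "0.8.15": {"bugs": ["StorageWriteRemovalBeforeConditionalTermination"]},
  "0.8.17": {"bugs": []}
}
""")

def known_bugs(bugs_by_version: dict, version: str) -> list[str]:
    """Return the list of known compiler bugs for a given solc version."""
    entry = bugs_by_version.get(version)
    return entry["bugs"] if entry else []

assert "StorageWriteRemovalBeforeConditionalTermination" in known_bugs(SAMPLE, "0.8.15")
assert known_bugs(SAMPLE, "0.8.17") == []
```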
Etherscan already references this data in the UI: on the verified-contract page, a yellow warning rectangle opens a popup when clicked and lists known compiler bugs for that contract.
The problem is that it is unclear what this information means and how it could affect the contract in question. The warning also appears on almost every contract, because known bugs exist for most solc versions; the USDT contract shows it, for example.
So while the information is there, it provides little value, because it is hard to judge the severity of a bug or to determine whether the contract is actually affected by it.
What if it were not a bug?
SOL-2022-7 is an accidental bug, the kind that inevitably happens in software development. Verification currently rests on a trust assumption most people never think about: that the compiler is honest. As long as that holds, everything works as intended. But the bug described above shows that the compiler is not always honest; in the case of SOL-2022-7, by accident.
Now imagine this being done on purpose.
An attacker who deliberately compromised the Solidity compiler and distributed that version through a supply-chain attack on npm or a GitHub release could introduce changes that are far harder to spot and far more harmful.
In simple tests with the solc sources I was able to alter the compiler in a way so that all transfer calls would reroute to an address under my own control. I will not go into detail on how I did this and will not publish the modified compiler code, for obvious reasons.
If such a malicious solc build were distributed through a supply-chain attack, every smart contract compiled with that compiler version would hide a backdoor that is, from the source code's perspective, completely invisible. It would still pass verification as long as the build was treated as official.
That idea is not new.
Ken Thompson introduced exactly this concept in his 1984 Turing Award lecture "Reflections on Trusting Trust": if the compiler cannot be trusted, neither can the programs it builds. That was 40 years ago, and it is still relevant today.
The Solidity project already has strong safeguards in place, and the conservative EVM execution model protects the blockchain against many bugs. None of these would protect contracts against a supply chain Thompson compiler attack.
My proof of concept shows the gap at the scale of a single function.
While there is, as far as I am aware, no proof that anyone has ever mounted a supply-chain attack on solc, constant monitoring and updating are needed to close gaps as soon as they are recognized.
As recently as February 2026, a related issue was reported in the very component my PoC was built on: the optimizer.
What now?
Well, there is nothing you absolutely need to do right now. But if you are auditing contracts, or investigating smart contract misbehaviour, and the contract was compiled with solc 0.8.13 through 0.8.16, be cautious.
Look for seemingly unnecessary patterns or constructions. Check the bytecode against the source code and verify that all functions are present in the compiled bytecode. Don't assume the source code is the single source of truth.
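One cheap bytecode-side check along those lines: solc's function dispatcher typically pushes each function selector with PUSH4 (opcode 0x63) before comparing it against the calldata. A selector that never appears that way in the runtime bytecode is a red flag. This is a heuristic of my own, not a reliable decompiler; optimized dispatchers can encode selectors differently.

```python
def selector_in_dispatcher(runtime_hex: str, selector_hex: str) -> bool:
    """Heuristic: look for PUSH4 <selector> (hex '63' + 4 selector bytes) in
    the runtime bytecode. Absence suggests the function may have been optimized
    away; presence proves nothing about its behavior."""
    code = runtime_hex.lower().removeprefix("0x")
    return "63" + selector_hex.lower() in code

# Synthetic dispatcher fragment containing the well-known ERC-20 transfer()
# selector a9059cbb (the surrounding bytes are made up for illustration):
runtime = "0x600463a9059cbb1461001c57"
assert selector_in_dispatcher(runtime, "a9059cbb")
assert not selector_in_dispatcher(runtime, "deadbeef")
```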
If you are building verification or analysis tools, it might be worth integrating bugs_by_version.json into the pipeline, to warn users about version-specific bugs they might encounter and should keep an eye open for.
If you are a smart contract user and see the green checkmark, don't take it as a sign of quality, and don't trust an LLM that has analyzed only the source code, since the bytecode might behave differently. Take the green checkmark for what it is: a confirmation that this source code, with these specific compiler settings, produces this bytecode. Not more, not less.
I have submitted a suggestion along these lines to Etherscan.
PoC Contract (Sepolia): 0x5d320F229d878fFb8c6b7a171c785016ECb81287
All transactions are independently verifiable on sepolia.etherscan.io.
References:
- SOL-2022-7 Disclosure
- Solidity bugs.json
- GitHub Issue #13478
- GitHub Issue #16458 (related 2026 bug in same component)
- Thompson, K. (1984). "Reflections on Trusting Trust." Communications of the ACM 27(8), 761-763.
