Solana’s high throughput and low fees come from its parallel execution model. Unlike most blockchains that process transactions one by one, Solana’s Sealevel runtime can schedule many transactions at the same time.
To make this possible, every transaction must declare which accounts it will read or write. The runtime uses that information to run non-overlapping transactions in parallel.
Issues arise when multiple transactions touch the same account. If at least one of them writes, the runtime applies a write lock and runs them sequentially. That’s what developers call a “hot account”.
Hot accounts aren’t a rare edge case; they show up often in real apps: NFT mints that increment the same counter, DeFi pools where each swap updates the same liquidity account, or MEV bundles competing for shared state.
This guide looks at how write locks work, why hot accounts appear, and how to avoid them in your code.
Parallel execution in Solana
In most chains, all state lives in a single global tree. Solana uses an account model instead. Solana’s global state can be seen as a large database where each record is an account.
Each account has a few core fields:
- Address – the public key of the account
- Owner – the program that is allowed to modify its data
- Data – arbitrary bytes, such as token balances, NFT metadata, or protocol config
- Lamports – the SOL balance of the account
- Executable – a flag that marks whether the account is a program or just data
Programs in Solana don’t hold state directly; they operate on external accounts. This separation of code and data makes parallel execution possible: programs don’t share memory, they only work with isolated accounts.
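Taken together, these fields can be modeled as a small data type. The sketch below is illustrative only: the types are simplified (real addresses are 32-byte ed25519 public keys, not strings), and the program and addresses named in it are made up:

```typescript
// Simplified model of a Solana account. Illustrative only: the real
// runtime stores addresses as 32-byte keys and enforces ownership rules.
interface Account {
  address: string;     // public key of the account (base58 in this sketch)
  owner: string;       // program allowed to modify `data`
  data: Uint8Array;    // arbitrary bytes interpreted by the owner program
  lamports: number;    // SOL balance, in lamports
  executable: boolean; // true if the account is a program, not data
}

// A plain data account owned by a hypothetical AMM program.
const poolConfig: Account = {
  address: "Poo1Conf1g11111111111111111111111111111111",
  owner: "MyAmmProgram111111111111111111111111111111",
  data: new Uint8Array([1, 0, 0, 0]),
  lamports: 1_000_000, // example balance
  executable: false,
};
```

Note how the account that holds the data (`poolConfig`) is distinct from the program that owns it; the owner program is the only one allowed to mutate `data`.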
The other part of the model is how transactions are built. Every transaction must declare all the accounts it will use, and whether each one is read-only or writable.
- Read-only accounts can be used by multiple transactions in parallel.
- Writable accounts require an exclusive lock while the transaction runs.
When a validator receives a batch of transactions, the Sealevel runtime schedules them based on these declarations.
The rules are straightforward:
- Transactions with disjoint accounts run in parallel.
- Transactions that only read the same account also run in parallel.
- If at least one transaction writes to an account, all transactions that touch that account run sequentially.
Parallel execution in Solana is based on running non-overlapping sets of accounts at the same time.
Conflicts
In Solana, a conflict happens when two or more transactions declare the same account in a way that prevents parallel execution:
- Both mark the account as writable. Example: Tx1 and Tx2 both try to update a liquidity pool balance.
- One marks the account as writable and another marks it as read-only. Example: Tx1 writes to a pool account while Tx2 only wants to read it; this still blocks.
Transactions can only run in parallel if all shared accounts are read-only. Declaring an account as writable requests an exclusive lock for the duration of that transaction. This is called a write lock.
Each transaction declares the accounts it will touch:
Tx1: [UserA (w), Pool (w)]
Tx2: [UserB (w), Pool (w)]
Tx3: [UserC (w), OrderBook (w)]
Tx4: [UserD (w), Pool (r)]
(w) = writable, (r) = read-only
The scheduler checks for overlaps:
- Tx1 and Tx2 both write to Pool → conflict.
- Tx3 touches different accounts → no conflict.
- Tx4 only reads Pool, but because Tx1/Tx2 write it → conflict.
Result:
- Tx1 and Tx2 must run sequentially.
- Tx3 can run in parallel.
- Tx4 waits because of the write lock on Pool.
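The scheduling outcome above can be reproduced with a greedy batcher, where each transaction joins the first batch it doesn’t conflict with. This is a toy model of the idea, not how Agave or Firedancer actually implement scheduling:

```typescript
type Access = "r" | "w";

interface Tx {
  name: string;
  accounts: Map<string, Access>; // declared account set: address -> access mode
}

// Two transactions conflict if they share an account that at least one writes.
function conflicts(a: Tx, b: Tx): boolean {
  for (const [addr, access] of a.accounts) {
    const other = b.accounts.get(addr);
    if (other !== undefined && (access === "w" || other === "w")) return true;
  }
  return false;
}

// Greedy batching: each tx joins the first batch where it conflicts with nothing.
// Transactions inside a batch can run in parallel; batches run one after another.
function schedule(txs: Tx[]): Tx[][] {
  const batches: Tx[][] = [];
  for (const tx of txs) {
    const batch = batches.find((b) => b.every((t) => !conflicts(t, tx)));
    if (batch) batch.push(tx);
    else batches.push([tx]);
  }
  return batches;
}

const txs: Tx[] = [
  { name: "Tx1", accounts: new Map<string, Access>([["UserA", "w"], ["Pool", "w"]]) },
  { name: "Tx2", accounts: new Map<string, Access>([["UserB", "w"], ["Pool", "w"]]) },
  { name: "Tx3", accounts: new Map<string, Access>([["UserC", "w"], ["OrderBook", "w"]]) },
  { name: "Tx4", accounts: new Map<string, Access>([["UserD", "w"], ["Pool", "r"]]) },
];

const plan = schedule(txs).map((batch) => batch.map((t) => t.name));
// plan → [["Tx1", "Tx3"], ["Tx2"], ["Tx4"]]
```

Tx1 and Tx3 land in the first batch; Tx2 and Tx4 are pushed into later batches because of the write lock on Pool.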
The cost of a conflict is more than just lost parallelism:
- Higher latency – users wait longer for confirmation.
- Block space pressure – validators prefer transactions that can run in parallel, since that maximizes throughput. Conflicting ones are less attractive.
- Fee escalation – to push their transactions through, users start paying higher priority fees, creating a local auction for the write lock.
Validator clients and scheduling
It’s also worth noting that conflicts are handled differently depending on the validator client. Validators in Solana run client software that handles transaction execution and consensus. The primary validator client, Agave, is written in Rust. A newer high-performance client, Firedancer, written in C by Jump Crypto, is being developed as an independent implementation. The key difference between them is how they schedule transactions and handle conflicts: Agave uses a simple, conservative scheduling strategy, while Firedancer was built from scratch to push modern server hardware to its limits and takes a more aggressive approach.

Either way, hot accounts don’t go away: if every transaction writes to the same account, there’s nothing to parallelize. The only way around that is better account design in your program.
Avoiding conflicts
Understanding how conflicts work and how validator clients schedule transactions is the foundation. But for a developer the real question is how to write code that doesn’t turn into a bottleneck. The goal is to maximize parallel writes so your dApp doesn’t degrade into a queue.
In practice, developers rely on three main techniques to avoid hot accounts:
- transaction-level optimization
- state sharding
- using PDAs for data isolation
Transaction-level optimization
It’s essential to understand how to form transactions. The fewer accounts marked as writable, the higher the chance the scheduler can run them in parallel. Review each instruction carefully to determine whether each account actually needs to be marked writable. A smaller writable set usually means fewer conflicts.
Local fee markets also help. When an account gets hot, competition for access raises the fee pressure on those specific transactions. This doesn’t remove conflicts, but it encourages the load to spread out. For transactions that touch hot accounts, like NFT mints or DEX swaps, query recent priority fees (for example via the getRecentPrioritizationFees RPC method) and set the value dynamically:
```typescript
import {
  ComputeBudgetProgram,
  SystemProgram,
  Transaction,
} from "@solana/web3.js";

// Cap how many compute units the transaction may consume.
const modifyComputeUnits = ComputeBudgetProgram.setComputeUnitLimit({
  units: 300,
});

// Price per compute unit, in micro-lamports; this is the priority fee.
const addPriorityFee = ComputeBudgetProgram.setComputeUnitPrice({
  microLamports: 20000,
});

// `payer` and `toAccount` are assumed to be defined elsewhere.
const transaction = new Transaction()
  .add(modifyComputeUnits)
  .add(addPriorityFee)
  .add(
    SystemProgram.transfer({
      fromPubkey: payer.publicKey,
      toPubkey: toAccount,
      lamports: 10000000,
    }),
  );
```
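When setting microLamports dynamically, one common heuristic is to take a percentile of recently observed priority fees (in a real client, the samples would come from the getRecentPrioritizationFees RPC call). The helper below is a sketch of that heuristic, not an official recommendation:

```typescript
// Pick a priority fee at a given percentile of recent fee samples.
// In a real client, `samples` would be the prioritizationFee values returned
// by getRecentPrioritizationFees; here it is a plain array of micro-lamport prices.
function pickPriorityFee(samples: number[], percentile = 0.75): number {
  if (samples.length === 0) return 0; // no data: fall back to no priority fee
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.floor(percentile * sorted.length));
  return sorted[idx];
}

// Most recent slots were cheap, one was an outlier; the 75th percentile
// keeps you competitive without chasing the most expensive observation.
const fee = pickPriorityFee([0, 100, 200, 300, 10000]);
// fee → 300
```

A percentile is a reasonable default because the maximum observed fee is often set by a single aggressive bot, and matching it would overpay on every transaction.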
State sharding
State sharding is the strongest technique against hot accounts. The core idea is to split state across multiple accounts when a single account becomes overloaded.
Instead of a single global account updated on every action, create a set of shard accounts and distribute transactions between them.
For example:
- a liquidity pool can use multiple shard accounts (by token type, by price range, etc.)
- a game might split world state into separate accounts for each zone, area, or resource
❌ The naive approach would be to create a single counter account:
```rust
// lib.rs
use anchor_lang::prelude::*;

declare_id!("Fg6PaFpoGXkYsidMpWTK6W2BeZ7FEfcYkg476zPFsLnS");

#[program]
pub mod hot_counter {
    use super::*;

    pub fn increment(ctx: Context<Increment>) -> Result<()> {
        ctx.accounts.global_counter.count += 1;
        Ok(())
    }
}

#[derive(Accounts)]
pub struct Increment<'info> {
    #[account(mut, seeds = [b"counter"], bump)]
    pub global_counter: Account<'info, GlobalCounter>,
}

#[account]
pub struct GlobalCounter {
    pub count: u64,
}
```
This account will be locked on every increment, creating a queue.
✅A better approach is to create multiple counter accounts. The client can randomly choose which shard to send the transaction to:
```rust
// lib.rs
use anchor_lang::prelude::*;

declare_id!("Fg6PaFpoGXkYsidMpWTK6W2BeZ7FEfcYkg476zPFsLnS");

const NUM_SHARDS: u16 = 8;

#[program]
pub mod sharded_counter {
    use super::*;

    pub fn increment(ctx: Context<Increment>, shard_id: u16) -> Result<()> {
        require!(shard_id < NUM_SHARDS, MyError::InvalidShardId);
        let counter_shard = &mut ctx.accounts.counter_shard;
        counter_shard.count += 1;
        Ok(())
    }
}

#[derive(Accounts)]
#[instruction(shard_id: u16)]
pub struct Increment<'info> {
    #[account(
        mut,
        seeds = [b"counter_shard", &shard_id.to_le_bytes()],
        bump
    )]
    pub counter_shard: Account<'info, CounterShard>,
}

#[account]
pub struct CounterShard {
    pub count: u64,
}

#[error_code]
pub enum MyError {
    #[msg("Invalid shard ID provided.")]
    InvalidShardId,
}
```
How it works:
On the client side (TypeScript/JavaScript), you generate a random number from 0 to 7 (`const shardId = Math.floor(Math.random() * NUM_SHARDS);`) and pass it to the instruction. On the program side, the `shard_id` is used to look up the right PDA counter. Now 8 users can all call `increment` at the same time, and their transactions will most likely land in different shards and run in parallel.
To read the total, you fetch all 8 accounts and sum them on the client. This is a read-only operation, so it doesn’t cause locks and stays efficient.
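The client-side pieces are all small pure functions. This sketch mirrors the Rust program above; note that the shard seed uses the u16 little-endian encoding to match `shard_id.to_le_bytes()` (the PDA derivation itself, via PublicKey.findProgramAddressSync, is only mentioned in comments):

```typescript
const NUM_SHARDS = 8;

// Pick a random shard so concurrent writers spread across the PDA counters.
function pickShard(): number {
  return Math.floor(Math.random() * NUM_SHARDS);
}

// Little-endian u16 encoding, matching `shard_id.to_le_bytes()` in the Rust
// program. These bytes would be appended to the "counter_shard" seed when
// deriving the shard PDA (e.g. with PublicKey.findProgramAddressSync).
function shardSeedBytes(shardId: number): Uint8Array {
  return new Uint8Array([shardId & 0xff, (shardId >> 8) & 0xff]);
}

// Reading the total: fetch all shard accounts and sum their counts client-side.
function totalCount(shardCounts: number[]): number {
  return shardCounts.reduce((sum, c) => sum + c, 0);
}
```

For example, if the eight shards currently hold `[3, 0, 5, 1, 0, 0, 2, 0]`, `totalCount` returns 11, and no write lock is taken at any point.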
**Jito case**
Jito lets users (mostly MEV searchers and trading bots) pay validators tips to get their bundles included in a block in a specific order and at high speed. These payments happen thousands of times per block. If Jito had only one global account for tips, it would instantly become the hottest account in Solana and a bottleneck for their own service.
To avoid that, Jito uses sharding. Instead of one central wallet, they maintain a set of accounts for receiving tips. When a searcher or trader builds a bundle, the last transaction is usually a tip transfer. The Jito SDK doesn’t always send to the same address. Instead it:
- calls getTipAccounts(), which returns an array of 8 pubkeys
- picks one at random
- builds a SystemProgram::Transfer to that address
This spreads out the write load so the scheduler can process multiple tip payments in parallel.
Using PDAs for data isolation
Another common mistake is to store all user data in a single big account, for example with a `BTreeMap<Pubkey, UserData>`. This immediately creates a hot account, since any update for one user write-locks the whole map.
A better approach is to give each user their own account using a PDA (Program Derived Address). A PDA is a deterministic account address derived from a user’s key and other seeds. With this pattern, updating one user’s state doesn’t block reads or writes for others.
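To see why per-user PDAs restore parallelism, compare the write sets under the two designs. The sketch below uses a plain string as a stand-in for real PDA derivation (a real client would call PublicKey.findProgramAddressSync with the user’s key as a seed); the account names are hypothetical:

```typescript
// Stand-in for PDA derivation. The only property that matters here is that
// each user maps to a distinct, deterministic address.
function userStateAddress(userPubkey: string): string {
  return `user_state:${userPubkey}`;
}

// Writable account set of an "update my data" transaction under each design.
function sharedMapWrites(_user: string): string[] {
  return ["GlobalUserMap"]; // every user writes the same big map account
}
function pdaWrites(user: string): string[] {
  return [userStateAddress(user)]; // each user writes only their own PDA
}

// Parallel execution is possible only if the writable sets are disjoint.
function canRunInParallel(writesA: string[], writesB: string[]): boolean {
  return !writesA.some((a) => writesB.includes(a));
}

const mapParallel = canRunInParallel(sharedMapWrites("Alice"), sharedMapWrites("Bob"));
// mapParallel → false: both updates contend for GlobalUserMap
const pdaParallel = canRunInParallel(pdaWrites("Alice"), pdaWrites("Bob"));
// pdaParallel → true: disjoint accounts, so the scheduler can run them together
```

The program logic is the same in both designs; only the account layout changes, and that alone decides whether two users block each other.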
Conclusion
Hot accounts in Solana aren’t an oversight. They’re what you get as a tradeoff for parallel execution. Where other blockchains line everything up in one slow queue, Solana lets developers build ultra-fast applications.
The takeaway is that scalability in Solana depends not only on code but also on how state is designed. Well-structured accounts let your program take full advantage of parallelism. Poor account design leads to hot accounts and wasted throughput.
To be effective on Solana you have to think in parallel. That means seeing not only the logic of your app, but also how its data lives in a global state and how transactions will contend for it.
TL;DR
Solana runs transactions in parallel, but write locks turn shared state into hot accounts. When too many transactions touch the same account, they queue up, raise latency, and drive up fees. Validator clients like Agave and Firedancer differ in how they schedule conflicts, but neither removes the problem. The only real fix is in program design. Common techniques are:
- keeping write sets as small as possible and using local fee markets
- sharding state across multiple accounts
- isolating user data with PDAs
Hot accounts don’t disappear, but with the right patterns you can keep your app scalable.