A Solana orderbook where quote updates cost 212 CU
The fastest orderbook on Solana today is Phoenix. A market maker updating quotes costs ~59,000 CU. Manifest — our own project — costs ~50,000 CU. Both use Red-Black trees to store orders, and both pay the O(log n) tax on every insert and cancel.
Meanwhile, proprietary AMMs like HumidiFi update their pricing curves for 143 CU. They’re winning the Jupiter routing war on major pairs because they can refresh quotes every block at negligible cost. But they’re closed-source, single-maker, and opaque — Jupiter can see a price, but nobody can see the book.
We wanted both: prop AMM-level compute costs with full orderbook transparency. The result is a tick-based orderbook written in raw sBPF assembly where a market maker can declare their entire book — 10 levels per side — for 212 CU flat.
Why trees are the wrong data structure
Phoenix and Manifest both store orders in self-balancing binary trees. This is the textbook approach — O(log n) insert, O(log n) cancel, O(log n) lookup. It works, but the constant factors are brutal on-chain.
A single tree insertion on Solana involves: traversing parent/child pointers through account data, comparing keys, rotating nodes, updating balance factors, and writing back multiple modified nodes. Each node touch is a load + compare + store sequence. For a tree with 1,000 orders, that’s ~10 pointer chases, each hitting different cache lines in a 100KB+ account.
Traditional exchanges — CME, NYSE, NASDAQ — don't use trees. They use tick arrays: a flat array where index i holds the total quantity at price level i. Insert is O(1). Cancel is O(1). Lookup is O(1). The only downside is a fixed price range, which is fine for a specific market like SOL/USDC.
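The idea fits in a few lines. A minimal sketch (struct and method names are ours, not the on-chain layout): each insert or cancel is a single indexed add or subtract, regardless of how many orders rest on the book.

```rust
const NUM_TICKS: usize = 6_000; // $60.00..$119.99 at $0.01 ticks

struct TickArray {
    qty: [u32; NUM_TICKS], // total lots resting at each price level
}

impl TickArray {
    /// Price in cents -> tick index. Tick 0 = $60.00, tick 5999 = $119.99.
    fn tick_for_price(price_cents: u32) -> usize {
        (price_cents - 6_000) as usize
    }
    fn insert(&mut self, tick: usize, lots: u32) {
        self.qty[tick] += lots; // O(1): one load, one add, one store
    }
    fn cancel(&mut self, tick: usize, lots: u32) {
        self.qty[tick] -= lots; // O(1), no tree rebalancing
    }
}
```

Compare this with a red-black tree insert: no pointer chasing, no rotations, no touching neighboring nodes — the constant factor is a handful of instructions.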
The layout
One account. Everything inside. No external PDAs, no separate order accounts, no cranks.
SOL/USDC · $60.00–$119.99 · $0.01 ticks · 6,000 price levels
┌──────────────────────────────────────────────┐
│ HEADER 200 B │
│ mints, vaults, authority, best_bid/ask, │
│ sequence_number, maker_count │
├──────────────────────────────────────────────┤
│ BID BITMAP [u64; 94] 752 B │
│ ASK BITMAP [u64; 94] 752 B │
├──────────────────────────────────────────────┤
│ BID TICKS [u32; 6000] 24,000 B │ ← aggregate liquidity
│ ASK TICKS [u32; 6000] 24,000 B │
├──────────────────────────────────────────────┤
│ MAKER 0 240 B │
│ MAKER 1 240 B │ ← per-maker compact state
│ ... │
│ MAKER N 240 B │
└──────────────────────────────────────────────┘
Base size with no makers: ~50 KB. Each maker adds 240 bytes. With 32 makers the account is 57 KB — 0.40 SOL in rent. With 128 makers, 80 KB — 0.56 SOL.
The bitmaps tell you which ticks have liquidity without scanning 6,000 entries. One u64 covers 64 ticks. Finding the best bid is a single backward scan through 94 words — in practice, one or two iterations because liquidity clusters near mid.
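The scan is cheap enough to write out in full. A sketch of the bid-side lookup (function name assumed), using a hardware count-leading-zeros to find the highest set bit within a word:

```rust
const WORDS: usize = 94; // ceil(6_000 / 64) bitmap words per side

/// Highest tick with liquidity, or None if the side is empty.
fn best_bid(bitmap: &[u64; WORDS]) -> Option<usize> {
    // Scan from the top word down. Liquidity clusters near mid,
    // so in practice this exits after one or two iterations.
    for w in (0..WORDS).rev() {
        if bitmap[w] != 0 {
            // Highest set bit = best price within this 64-tick span.
            let bit = 63 - bitmap[w].leading_zeros() as usize;
            return Some(w * 64 + bit);
        }
    }
    None
}
```

The ask-side scan is the mirror image: forward through the words, `trailing_zeros` for the lowest set bit.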
Two layers of state
The aggregate tick arrays are the public book — they hold the sum of all makers’ liquidity at each price. This is what Jupiter reads for routing quotes and what takers hit during swaps.
But we also need to know whose liquidity got filled. That’s what the per-maker compact state is for.
Each maker gets 240 bytes:
pubkey        32 B
base_free      8 B   withdrawable SOL
base_locked    8 B   locked in ask orders
quote_free     8 B   withdrawable USDC
quote_locked   8 B   locked in bid orders
mid_tick       2 B   center price
num_levels     1 B   levels per side (max 20)
spacing        1 B   tick gap between levels
padding        4 B   alignment for expire_slot
expire_slot    8 B   quotes valid until this slot
bid_lots      80 B   [u32; 20] lots at each bid level
ask_lots      80 B   [u32; 20] lots at each ask level
A maker’s liquidity at any tick can be derived arithmetically — for the ask side:
offset = tick - maker.mid
level = offset / spacing - 1
lots = maker.ask_lots[level]
No pointers. No linked lists. No tree traversal. Just division and an array lookup. ~10 CU per maker per tick.
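Fleshed out with the boundary checks the real program would need, the derivation looks like this (a sketch; field names follow the layout above, but this is not the on-chain code):

```rust
struct Maker {
    mid_tick: u16,
    num_levels: u8,
    spacing: u8,
    ask_lots: [u32; 20], // lots per level; level 0 sits at mid + spacing
}

/// Lots this maker quotes at `tick` on the ask side;
/// 0 if the tick doesn't land on one of their levels.
fn ask_lots_at(m: &Maker, tick: u16) -> u32 {
    if tick <= m.mid_tick {
        return 0; // at or below mid: no ask here
    }
    let offset = (tick - m.mid_tick) as u32;
    let spacing = m.spacing as u32;
    if offset % spacing != 0 {
        return 0; // falls between this maker's levels
    }
    let level = offset / spacing - 1;
    if level >= m.num_levels as u32 {
        return 0; // beyond the maker's outermost level
    }
    m.ask_lots[level as usize]
}
```

The bid side is symmetric with `mid_tick - tick`. Everything is a compare, a subtraction, a division, and one array read.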
SetLevels: the 212 CU hot path
This is the instruction market makers call every block. Instead of cancel-old-orders-then-place-new-orders, the maker declares their entire book state in one shot:
SetLevels(
maker_index = 3, // my slot in the maker table
mid_tick = 2550, // $85.50
num_levels = 10, // 10 per side
spacing = 1, // every tick
expire_slot = current + 10,
bid_sizes = [100, 90, 81, 73, 66, 59, 53, 48, 43, 39],
ask_sizes = [100, 90, 81, 73, 66, 59, 53, 48, 43, 39],
)
The program:
- Subtracts the maker’s old contribution from the aggregate tick arrays
- Writes the new maker state
- Adds the new contribution to the aggregate tick arrays
- Updates bitmaps and best bid/ask
That’s it. The entire MM integration is ~50 lines of code. No order IDs, no cancel lists, no state tracking on the client side. Just “here’s my new book” every 400ms.
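The subtract-then-add core of those four steps can be sketched in a few lines. This is a simplified model — asks only, no bitmaps or balance checks, and `MakerRow` is an assumed name, not the on-chain struct:

```rust
struct MakerRow {
    mid_tick: usize,
    spacing: usize,
    ask_lots: Vec<u32>, // lots per level; level 0 = mid + spacing
}

fn set_ask_levels(
    agg: &mut [u32],        // aggregate ask tick array
    row: &mut MakerRow,     // this maker's compact state
    mid_tick: usize,
    spacing: usize,
    new_lots: Vec<u32>,
) {
    // 1. Subtract the maker's old contribution from the aggregate book.
    for (level, &lots) in row.ask_lots.iter().enumerate() {
        agg[row.mid_tick + (level + 1) * row.spacing] -= lots;
    }
    // 2. Overwrite the maker's compact state.
    row.mid_tick = mid_tick;
    row.spacing = spacing;
    row.ask_lots = new_lots;
    // 3. Add the new contribution back.
    for (level, &lots) in row.ask_lots.iter().enumerate() {
        agg[row.mid_tick + (level + 1) * row.spacing] += lots;
    }
}
```

Because the old levels are fully described by the maker's own 240-byte row, no cancel list or order IDs are needed to undo them.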
The 212 CU breaks down as:
| Step | CU |
|---|---|
| Verify maker pubkey at index | 12 |
| Subtract old from aggregate | 40 (10 levels × 4 ops) |
| Clear stale bitmap bits | 30 |
| Write new maker state | 25 |
| Add new to aggregate | 40 |
| Set new bitmap bits | 30 |
| Update best_bid / best_ask | 15 |
| Expire check + sequence bump | 8 |
| Balance verification | 12 |
| **Total** | **212** |
This number is flat. It doesn’t scale with the number of makers in the market. Whether there are 4 or 128 other makers, your SetLevels costs 212 CU — because you only touch your own 240-byte row and the specific aggregate ticks your levels map to.
Why sBPF assembly
In Rust, using Anchor, a single msg!() log costs more CU than our entire SetLevels instruction. The Solana runtime charges 1 CU per sBPF instruction executed plus fixed costs for syscalls and CPI. Every layer of abstraction — Anchor’s account deserialization, borsh encoding, Rust’s bounds checking, allocator setup — adds hundreds or thousands of CU before your program logic even begins.
Raw sBPF eliminates all of it. The program starts at the entrypoint with r1 pointing to the serialized input buffer. Account data is right there — fixed offsets from r1. No deserialization. No heap. No framework.
The entire SetLevels handler is ~100 sBPF instructions. At 1 CU per instruction plus overhead for memory access patterns, that’s where the 212 CU comes from. The equivalent Rust program would be 5,000–10,000 CU minimum, even with Pinocchio’s zero-copy approach.
Swaps: where the compute goes
Takers don’t know makers exist. They pass the market account, specify a side and size, and the program walks the aggregate tick arrays from best price until filled:
Swap(side=buy, max_lots=500, limit_tick=2560)   // pay up to $85.60
At each tick, the program needs to attribute fills to individual makers. It iterates the maker table and derives each maker’s contribution at that tick using the arithmetic from their compact state. Pro-rata fill, update balances, move to the next tick.
The formula: ~27 + ticks_crossed × num_makers × 14 + 8,000 CU (the 8,000 is two SPL Token CPI transfers).
| Makers | Ticks | CU |
|---|---|---|
| 8 | 3 | 8,400 |
| 32 | 5 | 10,300 |
| 128 | 5 | 18,000 |
| 128 | 10 | 28,000 |
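The formula drops straight into code (constants from the post; note the published table rounds the exact values up with some margin):

```rust
/// Rough CU estimate for a swap: fixed overhead, per-maker attribution
/// at each tick crossed, plus two SPL Token CPI transfers if funds move.
fn swap_cu(ticks_crossed: u64, num_makers: u64, with_cpi: bool) -> u64 {
    27 + ticks_crossed * num_makers * 14 + if with_cpi { 8_000 } else { 0 }
}
```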
For takers with pre-deposited balances (the Jupiter path), SwapWithFreeFunds skips the CPI entirely — subtract 8,000 CU from every row above. An 8-maker, 3-tick swap becomes 400 CU.
Swaps being more expensive than quote updates is the correct tradeoff. On any orderbook, there are orders of magnitude more quote updates than fills. Phoenix processes roughly 50 quote updates per fill on SOL/USDC. Optimizing the hot path — the thing that happens 50x more often — is where CU savings actually compound.
Quote expiry
Each maker sets an expire_slot with every SetLevels call. During a swap, the program checks each maker’s expiry before attributing fills. Expired makers are skipped — their liquidity is effectively invisible.
SetLevels(expire_slot = current_slot + 10) → valid ~4 seconds
SetLevels(expire_slot = current_slot + 25) → valid ~10 seconds
If a maker’s bot goes down, their quotes expire automatically. No stale liquidity sitting on the book for hours. Cleanup is lazy — the swap instruction fixes aggregate tick arrays as a side effect when it encounters expired makers.
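The liveness check itself is one comparison per maker (a sketch; the function name is ours):

```rust
/// A maker's liquidity counts only while the current slot is at or
/// before their declared expire_slot. With ~400 ms slots,
/// `current + 10` keeps quotes live for roughly four seconds.
fn is_live(expire_slot: u64, current_slot: u64) -> bool {
    current_slot <= expire_slot
}
```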
The comparison
| | Tick Book | Phoenix | Manifest | Prop AMM |
|---|---|---|---|---|
| MM quote update | 212 CU | 59,000 | 50,000 | 143 |
| Swap (with CPI) | 9,000–28,000 | 38,000 | 58,000 | 158,000 |
| Transparency | full | full | full | none |
| Multi-maker | yes | yes | yes | no |
| Composable | yes | yes | yes | no |
| MM integration | ~50 lines | 2,000+ | 1,000+ | ~50 lines |
The tick book sits in the gap between CLOBs and prop AMMs. It has the transparency and multi-maker support of an orderbook, with compute costs approaching a prop AMM's. The tradeoff is a fixed price range — this design covers $60–$120 for SOL/USDC, not arbitrary pairs at arbitrary prices.
For the dominant trading pair on Solana, that’s not a limitation. SOL/USDC lives in a known price range, and the market that matters most for Jupiter routing is the one with the tightest spread and cheapest execution, not the most flexible architecture.
What’s next
The design is complete. The sBPF assembly implementation is in progress — set_levels.s and batch_update.s are written, with swap and vault operations next. The test harness uses solana-program-test in Rust, calling into the assembled sBPF binary.
If you’re a market maker running on Solana and 212 CU quote updates sound interesting, get in touch.