feat(l1): replace trie layer cache with LRU cache for full sync#6492
azteca1998 wants to merge 3 commits into main from
Conversation
… mode (#6480)

During full sync (add_blocks_in_batch), the TrieLayerCache's diff-layer chain, bloom filter, and RCU cloning are pure overhead, since full sync never reorgs. This commit adds a FlatTrieCache backed by an LRU that bypasses all of that machinery:

- New FlatTrieCache in layering.rs: a simple LRU keyed by trie node path, with no layers, no bloom filter, and no parent pointers.
- New TrieCacheRef enum: lets TrieWrapper use either the layered cache (normal CL processing) or the flat LRU cache (batch mode) transparently.
- In batch mode, apply_trie_updates writes trie nodes directly to disk and populates the LRU, instead of accumulating diff layers and doing periodic commits with bloom rebuilds.
- add_blocks_in_batch enables the flat cache before execution and disables it after, so normal block processing is completely unchanged.
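The two cache flavors and the dispatch enum described above could be sketched roughly as follows. The names `FlatTrieCache` and `TrieCacheRef` come from the PR text, but the internals here (a hand-rolled recency-tick LRU over a `HashMap`) are purely illustrative and not the real ethrex implementation:

```rust
use std::collections::HashMap;

/// Minimal LRU keyed by trie node path: no diff layers, no bloom filter,
/// no parent pointers, just capacity-bounded node storage.
struct FlatTrieCache {
    capacity: usize,
    // path -> (last-use tick, node bytes)
    map: HashMap<Vec<u8>, (u64, Vec<u8>)>,
    tick: u64,
}

impl FlatTrieCache {
    fn new(capacity: usize) -> Self {
        Self { capacity, map: HashMap::new(), tick: 0 }
    }

    fn get(&mut self, path: &[u8]) -> Option<Vec<u8>> {
        self.tick += 1;
        let tick = self.tick;
        self.map.get_mut(path).map(|entry| {
            entry.0 = tick; // refresh recency on hit
            entry.1.clone()
        })
    }

    fn put(&mut self, path: Vec<u8>, node: Vec<u8>) {
        self.tick += 1;
        if self.map.len() >= self.capacity && !self.map.contains_key(&path) {
            // Evict the least recently used entry. A linear scan keeps the
            // sketch short; a real LRU would maintain an ordered list.
            let oldest = self
                .map
                .iter()
                .min_by_key(|(_, (t, _))| *t)
                .map(|(k, _)| k.clone());
            if let Some(k) = oldest {
                self.map.remove(&k);
            }
        }
        self.map.insert(path, (self.tick, node));
    }
}

/// Dispatch enum so the trie wrapper can use either cache transparently.
#[allow(dead_code)]
enum TrieCacheRef {
    Layered, // placeholder for the existing diff-layer cache
    Flat(FlatTrieCache),
}
```

Because full sync never reorgs, a plain recency-evicting map like this is safe: there is never a need to roll back to an earlier layer, which is exactly why the diff-layer chain can be skipped.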
🤖 Kimi Code Review

Overall Assessment: The PR introduces a well-structured optimization for full sync batch processing. The abstraction using

Issues and Suggestions:

1. Magic numbers for trie key lengths (
2. Early success signal before disk persistence (
3. Unused method (
4. Poisoned lock handling (
5. Unnecessary fallback in
Automated review by Kimi (Moonshot AI) · kimi-k2.5 · custom prompt
🤖 Codex Code Review
No other EVM opcode or gas-accounting issues stood out in this diff. I couldn’t run

Automated review by OpenAI Codex · gpt-5.4 · custom prompt
🤖 Claude Code Review

Here is the full review of PR #6492:
Summary
- FlatTrieCache: an LRU-based trie node cache (2M entries) for full sync batch mode, bypassing the diff-layer chain, bloom filter rebuild, and RCU overhead
- TrieCacheRef enum so TrieWrapper can dispatch to either cache transparently
- apply_trie_updates takes a fast path: writes nodes directly to disk and populates the LRU, skipping layer accumulation entirely
- add_blocks_in_batch enables the flat cache before execution and disables it after
- trie_cache_ref() returns Layered when no batch cache is active

Closes #6480
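The fast path in the summary can be illustrated with a small sketch. The function name `apply_trie_updates` mirrors the PR; the `DiskStore` and `FlatCache` types are stand-ins for the real store and LRU, used only to show the shape of the batch-mode write path (direct disk write plus cache warm-up, no layer accumulation):

```rust
use std::collections::HashMap;

type Path = Vec<u8>;
type Node = Vec<u8>;

/// Stand-in for the on-disk trie node store (hypothetical, not the ethrex API).
#[derive(Default)]
struct DiskStore {
    nodes: HashMap<Path, Node>,
}

/// Stand-in for the flat LRU cache (recency tracking elided for brevity).
#[derive(Default)]
struct FlatCache {
    nodes: HashMap<Path, Node>,
}

/// Batch-mode apply: write updates straight to disk and mirror them in the
/// flat cache, skipping diff-layer accumulation and bloom rebuilds entirely.
fn apply_trie_updates_batch(
    store: &mut DiskStore,
    cache: &mut FlatCache,
    updates: Vec<(Path, Node)>,
) {
    for (path, node) in updates {
        store.nodes.insert(path.clone(), node.clone()); // direct disk write
        cache.nodes.insert(path, node); // warm the LRU for later lookups
    }
}
```

The design trade-off is that nodes become durable immediately per batch instead of at periodic commit points, which is acceptable here because batch mode is only active during full sync.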
Test plan
- FULL_SYNC_BLOCK_LIMIT=50000 and verify completion
- [METRICS] logs)