feat(l1): use pipelined execution for full sync instead of sequential #6493
azteca1998 wants to merge 1 commit into main
Conversation
…#6484) Replace the sequential batch execution path in full sync with pipelined execution. Instead of running all 1024 blocks sequentially and then performing one giant merkleization at the end, blocks are now processed through the existing block pipeline (`add_block_pipeline`), which provides:

- Concurrent execution + merkleization per block (3-thread pipeline)
- 16 parallel shard workers for trie computation
- Cache warming / prefetching via a warmer thread

Blocks are grouped into configurable sub-batches (default 64 blocks, configurable via the `PIPELINE_SUB_BATCH_SIZE` env var with the `sync-test` feature) for progress logging and async runtime yielding. The trade-off is per-block merkleization instead of one collapsed batch, but the pipeline overlap (merkleization runs concurrently with execution) should more than compensate, with an estimated +20-40% throughput improvement.
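A minimal sketch of the sub-batch grouping described above. `DEFAULT_SUB_BATCH_SIZE` matches the default stated in the PR (64); `split_into_sub_batches` is a hypothetical helper for illustration, not the PR's actual code.

```rust
// Hypothetical sketch of the sub-batch grouping described in the PR.
const DEFAULT_SUB_BATCH_SIZE: usize = 64;

/// Split a full-sync batch (e.g. 1024 blocks) into sub-batches, used for
/// progress logging and for yielding back to the async runtime in between.
fn split_into_sub_batches<T>(blocks: &[T], size: usize) -> Vec<&[T]> {
    blocks.chunks(size.max(1)).collect()
}

fn main() {
    let blocks: Vec<u32> = (0..1024).collect();
    let subs = split_into_sub_batches(&blocks, DEFAULT_SUB_BATCH_SIZE);
    // 1024 blocks / 64 per sub-batch = 16 sub-batches
    println!("{} sub-batches", subs.len());
}
```

With the default size of 64, a 1024-block full-sync batch yields 16 sub-batches, each processed through the pipeline before control is yielded back.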
🤖 Kimi Code Review

Critical Issue: Blocking Async Runtime
Fix: Wrap the sub-batch processing in `tokio::task::spawn_blocking`:

```rust
// Check cancellation before spawning
if cancellation_token.is_cancelled() { ... }

let blockchain = self.clone();
let sub_batch = sub_batch.to_vec(); // Necessary to move into the blocking task
tokio::task::spawn_blocking(move || {
    for block in sub_batch {
        // ... existing processing logic ...
        blockchain.add_block_pipeline(block, None).map_err(...)?;
    }
    Ok::<_, (ChainError, Option<BatchBlockProcessingFailure>)>(())
})
.await
.map_err(|e| (ChainError::Custom(e.to_string()), None))??;
```

Issue: Unused Parameter
Automated review by Kimi (Moonshot AI) · kimi-k2.5 · custom prompt
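The fix above hinges on moving owned data into the blocking task, which is why it clones the sub-batch with `to_vec()` before spawning. A self-contained analogue using plain std threads (illustrative only; the real fix uses `tokio::task::spawn_blocking`, and `process_sub_batch` is a hypothetical name):

```rust
use std::thread;

// Std-thread analogue of the spawn_blocking pattern: the borrowed sub-batch
// is copied into an owned Vec so it can be moved into the worker thread,
// which is exactly why the fix calls `sub_batch.to_vec()` before spawning.
fn process_sub_batch(sub_batch: &[u32]) -> Result<u32, String> {
    let owned: Vec<u32> = sub_batch.to_vec();
    thread::spawn(move || {
        let mut processed = 0u32;
        for _block in owned {
            // stand-in for blockchain.add_block_pipeline(block, None)
            processed += 1;
        }
        Ok::<u32, String>(processed)
    })
    .join()
    .map_err(|_| "worker thread panicked".to_string())?
}
```

Running the heavy loop off the executor thread keeps the async runtime responsive, at the cost of one clone per sub-batch.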
🤖 Claude Code Review

Now I have a full picture of the PR. Here is the review: PR #6493 —
| Severity | Item |
|---|---|
| Critical | add_block_pipeline blocks the Tokio executor thread — needs spawn_blocking |
| Medium | _final_batch is dead code — remove the parameter and its computation |
| Medium | Stale doc comment on add_blocks |
| Low | block.clone() per iteration — avoidable with the spawn_blocking refactor |
| Low | Metrics division safety is implicit — make it explicit |
| Nit | blocks_done shadow variable |
The core concept is sound and the pipeline approach for non-final batches is a real improvement. The spawn_blocking issue is the blocker; everything else can be polished before merge.
Automated review by Claude (Anthropic) · sonnet · custom prompt
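The "metrics division safety" item in the table above can be made explicit. A sketch under the assumption that the metric in question is a blocks-per-second rate; the helper name is hypothetical:

```rust
// Illustrative guard for the implicit division the review flags:
// computing a blocks/sec rate must not divide by a zero elapsed time.
fn blocks_per_second(blocks_done: u64, elapsed_secs: f64) -> f64 {
    if elapsed_secs > 0.0 {
        blocks_done as f64 / elapsed_secs
    } else {
        0.0 // report zero rather than inf/NaN when no time has elapsed
    }
}
```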
🤖 Codex Code Review

I couldn’t run

Automated review by OpenAI Codex · gpt-5.4 · custom prompt
Summary
- Adds `add_blocks_in_pipeline_batches()` to `Blockchain`, which processes full sync blocks through the existing pipeline (`add_block_pipeline`) instead of the sequential `execute_block_from_state` path
- Sub-batch size is configurable via the `PIPELINE_SUB_BATCH_SIZE` env var (with the `sync-test` feature)
- Replaces `add_blocks()` with a single unified pipeline path

Closes #6484
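The `PIPELINE_SUB_BATCH_SIZE` override could be parsed as sketched below, falling back to the default of 64 stated in the commit message. The helper name and the fallback behavior for unparsable or zero values are assumptions; the PR gates the override behind the `sync-test` feature.

```rust
const DEFAULT_SUB_BATCH_SIZE: usize = 64;

/// Parse a PIPELINE_SUB_BATCH_SIZE override, falling back to the default
/// of 64 when the value is missing, unparsable, or zero.
fn sub_batch_size(raw: Option<&str>) -> usize {
    raw.and_then(|v| v.parse::<usize>().ok())
        .filter(|&n| n > 0)
        .unwrap_or(DEFAULT_SUB_BATCH_SIZE)
}

fn main() {
    // Read the override from the environment, if present.
    let raw = std::env::var("PIPELINE_SUB_BATCH_SIZE").ok();
    println!("sub-batch size: {}", sub_batch_size(raw.as_deref()));
}
```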
Test plan
- `FULL_SYNC_BLOCK_LIMIT=50000` and verify completion
- `PIPELINE_SUB_BATCH_SIZE=32`, `64`, `128`