Merged
1 change: 1 addition & 0 deletions docs/src/SUMMARY.md
@@ -18,6 +18,7 @@
- [Differences from RISC-V](./differences-from-risc-v.md)
- [Technical Overview](./technical-overview.md)
- [Architecture Overview](./architecture-overview.md)
- [Memory Continuation Checks](./memory-continuation-checks.md)
- [Prover Workflow](./prover-workflow.md)
- [Optimizations](./optimizations.md)
- [Appendix](./appendix.md)
8 changes: 7 additions & 1 deletion docs/src/architecture-overview.md
@@ -102,6 +102,13 @@ compose into a single consistent execution:
$\text{shard}_i.\text{end\_pc} = \text{shard}_{i+1}.\text{init\_pc}$
(and likewise for other `GlobalState` fields) at every shard
boundary.
- **Dynamic heap/hint continuity.** Shards also expose the start
address and length of the dynamic heap and hint init regions. The
verifier checks that these segments stay within the platform's heap
and hint windows and that each shard starts exactly where the
previous shard ended, so dynamic init data cannot skip, overlap, or
move out of range across shard boundaries. See
[Memory Continuation Checks](./memory-continuation-checks.md).
- **Cross-shard memory consistency.** Memory written in shard $i$
must remain readable in any later shard $j > i$. The within-shard
RAM grand product $\prod R = \prod W$ cannot be reused across
@@ -154,4 +161,3 @@ compose into a single consistent execution:
$\sum_{\text{shard}} \Sigma_{\text{shard}} = \mathcal{O}$
(the point at infinity) directly. The number of shards is at most
in the hundreds, so this is a few hundred EC additions.
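The aggregation step above can be sketched as follows. This is a toy illustration only: it uses affine short-Weierstrass arithmetic on the tutorial curve $y^2 = x^3 + 7$ over a tiny prime field, not Ceno's actual curve or digest construction, and the names (`ec_add`, `shards_cancel`) are hypothetical.

```python
# Toy sketch of the final check: the per-shard EC sums must add up to
# the identity (the point at infinity, represented here as None).
# Curve and prime are stand-ins for illustration, not Ceno's actual ones.
P = 17  # toy field modulus; real deployments use a cryptographic prime

def ec_add(a, b, p=P):
    """Add two affine points on y^2 = x^3 + 7 (mod p); None is the identity."""
    if a is None:
        return b
    if b is None:
        return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # a point plus its inverse is the identity
    if a == b:
        s = (3 * x1 * x1) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p      # chord slope
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def ec_neg(pt, p=P):
    """Inverse of a point (identity maps to itself)."""
    return None if pt is None else (pt[0], (-pt[1]) % p)

def shards_cancel(shard_sums):
    """Fold the per-shard sums; accept iff the total is the identity."""
    total = None
    for s in shard_sums:  # at most a few hundred additions in practice
        total = ec_add(total, s)
    return total is None
```

A pair of shards whose sums are inverses cancels to the identity, while unbalanced sums are rejected, which is exactly the shape of the $\sum_{\text{shard}} \Sigma_{\text{shard}} = \mathcal{O}$ check.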

94 changes: 94 additions & 0 deletions docs/src/memory-continuation-checks.md
@@ -0,0 +1,94 @@
# Memory Continuation Checks
Collaborator

@copilot does this doc imply the main intuition/idea that the heap/hint init does not need to be initialized in the first shard? We can defer loading it until it is first accessed. Then we have to require that no address is initialized twice.

Contributor

Yes—that is the intended model. I updated the doc to state lazy dynamic init explicitly: early shards can have zero-length heap/hint segments until first access, and continuation is append-only/non-overlapping so an address cannot be initialized twice across shards. Fixed in fc8fc48.


When execution is split into multiple shards, Ceno must check not only
that control flow continues correctly, but also that the dynamic memory
regions exposed through public values remain consistent across shard
boundaries.

The two dynamic regions are:

- **Heap**
- **Hints**

Each shard exposes the start address and length of its current heap and
hint segment. The verifier then checks that these segments form one
continuous sequence over the full trace.

Intuitively, this supports lazy dynamic init: early shards can expose a
zero-length heap/hint segment, and the segment only extends once those
addresses are first accessed. Because each segment must start exactly at
the previous segment's end, the exposed ranges are append-only and
non-overlapping, so no address can be initialized twice across shards.

<p align="center">

```text
heap / hint address space

shard 0 shard 1 shard 2
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ start = s0 │ │ start = e0 │ │ start = e1 │
│ len = l0 │ │ len = l1 │ │ len = l2 │
│ end = e0 │───▶│ end = e1 │───▶│ end = e2 │
└──────────────┘ └──────────────┘ └──────────────┘

Requirements:
- every start/end stays inside the allowed platform range
- next shard starts exactly at previous shard end
- segments are append-only (no overlaps or rewinds)
- proof table length matches the public length
```

</p>

## What Is Being Checked

For both heap and hints, full-trace verification enforces:

- **Range correctness**: each shard's start and end must stay inside the
configured memory window
- **Continuation**: shard `i + 1` must start exactly where shard `i`
ended
- **Length consistency**: the dynamic init table proved inside the shard
must have the same length as the public value

These checks rule out three classes of errors:

- a shard claiming memory outside the allowed heap or hint range
- a gap or overlap between consecutive shards
- a mismatch between the public memory length and the table actually
proved
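The three checks above can be sketched as a single pass over the per-shard public values. This is an illustrative sketch, not Ceno's actual verifier code: `Segment`, `check_continuation`, and the window bounds are hypothetical names standing in for the real public-value layout.

```python
# Hypothetical sketch of the full-trace continuation checks for one
# dynamic region (heap or hints). Names are illustrative, not Ceno's API.
from dataclasses import dataclass

@dataclass
class Segment:
    start: int   # public start address of this shard's dynamic segment
    length: int  # public length; the segment covers [start, start + length)

def check_continuation(segments, window_lo, window_hi, table_lengths):
    """Return True iff the shard segments chain correctly over the trace."""
    prev_end = None
    for seg, table_len in zip(segments, table_lengths):
        end = seg.start + seg.length
        # Range correctness: the segment stays inside the platform window.
        if not (window_lo <= seg.start <= end <= window_hi):
            return False
        # Continuation: each shard starts exactly where the previous ended.
        if prev_end is not None and seg.start != prev_end:
            return False
        # Length consistency: the init table proved in-shard matches the
        # public length.
        if table_len != seg.length:
            return False
        prev_end = end
    return True
```

Note that a zero-length segment (lazy init in early shards) passes trivially, while a shard that re-claims an already-initialized address fails the continuation check, matching the append-only property described earlier.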

## How This Fits with the Other Memory Checks

These continuation checks are different from the cross-shard RAM
consistency check.

- **RAM consistency** checks that reads and writes across shards compose
to one valid memory history
- **Memory continuation** checks that the dynamic heap and hint segments
themselves are chained correctly from shard to shard

Both are needed:

- RAM consistency protects the memory contents
- continuation checks protect the public memory layout

## Full-Trace vs Single-Shard Verification

Continuation is a **full-trace** property.

- In full-trace verification, heap and hint segments are chained across
all shards
- In single-shard debug verification, Ceno only checks that the selected
shard is internally valid; it does not claim that the shard forms a
complete continuation of the whole execution

## Where This Lives in the System

- the platform defines the allowed heap and hint ranges
- the native verifier checks shard-by-shard continuation
- the recursion verifier enforces the same bounds in aggregation

This keeps the memory-state invariant aligned across both native and
recursive verification.