Merged

59 commits
- c3e665f pull non legends list changes from other branch (chrisnojima-zoom, Apr 7, 2026)
- 87649da WIP (chrisnojima-zoom, Apr 7, 2026)
- 4085ac0 Merge branch 'nojima/HOTPOT-next-670-clean-2' into nojima/ZCLIENT-sma… (chrisnojima, Apr 7, 2026)
- 0a0f058 WIP (chrisnojima, Apr 7, 2026)
- 9ad69ff WIP (chrisnojima-zoom, Apr 7, 2026)
- 480dbc5 WIP (chrisnojima, Apr 7, 2026)
- 4b6bc14 WIP (chrisnojima-zoom, Apr 7, 2026)
- 3bba9a2 WIP (chrisnojima-zoom, Apr 7, 2026)
- 08e096b WIP (chrisnojima-zoom, Apr 8, 2026)
- e9f2d84 WIP (chrisnojima-zoom, Apr 8, 2026)
- ae335ff WIP (chrisnojima-zoom, Apr 8, 2026)
- f7006f0 WIP (chrisnojima-zoom, Apr 8, 2026)
- 260cc3e WIP (chrisnojima-zoom, Apr 8, 2026)
- 999504e Merge branch 'nojima/HOTPOT-next-670-clean-2' into nojima/ZCLIENT-sma… (chrisnojima, Apr 8, 2026)
- 0d11b91 Merge branch 'nojima/HOTPOT-next-670-clean-2' into nojima/ZCLIENT-sma… (chrisnojima, Apr 8, 2026)
- a39f89b Merge branch 'nojima/HOTPOT-next-670-clean-2' into nojima/ZCLIENT-sma… (chrisnojima, Apr 8, 2026)
- 9275d73 WIP (chrisnojima-zoom, Apr 8, 2026)
- 2ba43f9 WIP (chrisnojima-zoom, Apr 8, 2026)
- 83984f7 WIP (chrisnojima-zoom, Apr 8, 2026)
- 2b3f975 WIP (chrisnojima, Apr 8, 2026)
- 4203c03 WIP (chrisnojima-zoom, Apr 8, 2026)
- 1d62edf WIP (chrisnojima, Apr 8, 2026)
- 608dfab WIP (chrisnojima-zoom, Apr 8, 2026)
- d5ddf81 WIP (chrisnojima-zoom, Apr 8, 2026)
- fba9ece WIP (chrisnojima-zoom, Apr 8, 2026)
- f20d79e WIP (chrisnojima-zoom, Apr 8, 2026)
- e5bd8b5 WIP (chrisnojima-zoom, Apr 8, 2026)
- ead9819 WIP (chrisnojima-zoom, Apr 8, 2026)
- af72dd9 WIP (chrisnojima-zoom, Apr 8, 2026)
- b14b0ea WIP (chrisnojima-zoom, Apr 8, 2026)
- 2ef186d WIP (chrisnojima-zoom, Apr 8, 2026)
- db173c7 WIP (chrisnojima-zoom, Apr 8, 2026)
- c3c9a9b WIP (chrisnojima-zoom, Apr 8, 2026)
- 95caec2 WIP (chrisnojima-zoom, Apr 8, 2026)
- b794435 WIP (chrisnojima-zoom, Apr 8, 2026)
- efe229f WIP (chrisnojima-zoom, Apr 8, 2026)
- 11aa690 WIP (chrisnojima-zoom, Apr 8, 2026)
- 1afa564 WIP (chrisnojima-zoom, Apr 8, 2026)
- 102d0de WIP (chrisnojima-zoom, Apr 8, 2026)
- a03a2df WIP (chrisnojima-zoom, Apr 8, 2026)
- c398d7a WIP (chrisnojima-zoom, Apr 8, 2026)
- aed19b3 WIP (chrisnojima-zoom, Apr 8, 2026)
- 913f3bd WIP (chrisnojima-zoom, Apr 8, 2026)
- ceec493 WIP (chrisnojima-zoom, Apr 8, 2026)
- 6de065a WIP (chrisnojima-zoom, Apr 9, 2026)
- 5832799 WIP (chrisnojima, Apr 9, 2026)
- 0d9855b WIP (chrisnojima-zoom, Apr 9, 2026)
- 505416f WIP (chrisnojima-zoom, Apr 9, 2026)
- 0e4cf0d WIP (chrisnojima-zoom, Apr 9, 2026)
- 697e33e WIP (chrisnojima-zoom, Apr 9, 2026)
- f4be1fa WIP (chrisnojima-zoom, Apr 9, 2026)
- 7ed5ad9 WIP (chrisnojima-zoom, Apr 9, 2026)
- b8cdba3 WIP (chrisnojima, Apr 9, 2026)
- b3b0d20 WIP (chrisnojima-zoom, Apr 9, 2026)
- a5dfc21 WIP (chrisnojima-zoom, Apr 9, 2026)
- 6e6b92d WIP (chrisnojima, Apr 9, 2026)
- bca7921 WIP (chrisnojima-zoom, Apr 9, 2026)
- 4907c02 WIP (chrisnojima-zoom, Apr 9, 2026)
- 09a4fe9 WIP (chrisnojima-zoom, Apr 9, 2026)
1 change: 1 addition & 0 deletions AGENTS.md
@@ -12,3 +12,4 @@
- Components must not mutate Zustand stores directly with `useXState.setState`, `getState()`-based writes, or similar ad hoc store mutation. If a component needs to affect store state, route it through a store dispatch action or move the state out of the store.
- During refactors, do not delete existing guards, conditionals, or platform/test-specific behavior unless you have proven they are dead and the user asked for that behavior change. Port checks like `androidIsTestDevice` forward into the new code path instead of silently dropping them.
- When addressing PR or review feedback, including bot or lint-style suggestions, do not apply it mechanically. Verify that the reported issue is real in this codebase and that the proposed fix is consistent with repo rules and improves correctness, behavior, or maintainability before making changes.
- When working from a repo plan or checklist such as `PLAN.md`, update the checklist in the same change and mark implemented items done before you finish.
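The store-mutation rule above can be sketched as follows. This is a minimal stand-in, not the repo's real `useXState` store: the state shape, `createConvoStore`, and `markRead` are hypothetical names used only to show mutation routed through a named dispatch action rather than ad hoc `setState` writes from a component.

```typescript
// Minimal sketch: components call a named dispatch action; the mutation
// itself lives inside the store. All names here are hypothetical.
type ConvoState = {
  unreadCount: number
  dispatch: {
    markRead: () => void
  }
}

// A tiny stand-in for a Zustand-style store.
const createConvoStore = (): ConvoState => {
  const state: ConvoState = {
    unreadCount: 3,
    dispatch: {
      // The write happens here, behind a named action, never in a component.
      markRead: () => {
        state.unreadCount = 0
      },
    },
  }
  return state
}

const store = createConvoStore()
// A component calls the action instead of mutating `store.unreadCount` itself.
store.dispatch.markRead()
```

The point of the indirection is that every write site is named and greppable, so refactors can move state ownership without hunting for scattered `setState` calls.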
37 changes: 27 additions & 10 deletions go/chat/localizer.go
@@ -369,7 +369,10 @@ func (s *localizerPipeline) suspend(ctx context.Context) bool {
if !s.started {
return false
}
prevSuspendCount := s.suspendCount
s.suspendCount++
s.Debug(ctx, "suspend: count %d -> %d waiters: %d cancelChs: %d queued: %d",
prevSuspendCount, s.suspendCount, len(s.suspendWaiters), len(s.cancelChs), len(s.jobQueue))

> Contributor comment: this adds some logging. I was seeing inbox unboxing hang every once in a while. If I restarted the UI I could 100% repro it, and I spent a ton of time trying to see where things were going wrong (see plans/inbox-load-fail.md). After I restarted the service the problem went away and couldn't be reproduced, so now I think it's something in the go layer.
if len(s.cancelChs) == 0 {
return false
}
@@ -405,8 +408,12 @@ func (s *localizerPipeline) resume(ctx context.Context) bool {
s.Debug(ctx, "resume: spurious resume call without suspend")
return false
}
prevSuspendCount := s.suspendCount
s.suspendCount--
s.Debug(ctx, "resume: count %d -> %d waiters: %d cancelChs: %d queued: %d",
prevSuspendCount, s.suspendCount, len(s.suspendWaiters), len(s.cancelChs), len(s.jobQueue))
if s.suspendCount == 0 {
s.Debug(ctx, "resume: releasing waiters: %d", len(s.suspendWaiters))
for _, cb := range s.suspendWaiters {
close(cb)
}
@@ -415,6 +422,12 @@
return false
}

func (s *localizerPipeline) suspendStats() (suspendCount, waiters, cancelChs, queued int) {
s.Lock()
defer s.Unlock()
return s.suspendCount, len(s.suspendWaiters), len(s.cancelChs), len(s.jobQueue)
}

func (s *localizerPipeline) registerWaiter() chan struct{} {
s.Lock()
defer s.Unlock()
@@ -430,23 +443,27 @@ func (s *localizerPipeline) registerWaiter() chan struct{} {
func (s *localizerPipeline) localizeJobPulled(job *localizerPipelineJob, stopCh chan struct{}) {
id, cancelCh := s.registerJobPull(job.ctx)
defer s.finishJobPull(id)
- s.Debug(job.ctx, "localizeJobPulled: pulling job: pending: %d completed: %d", job.numPending(),
- job.numCompleted())
+ s.Debug(job.ctx, "localizeJobPulled[%s]: pulling job: pending: %d completed: %d", id,
+ job.numPending(), job.numCompleted())
waitCh := make(chan struct{})
if !globals.IsLocalizerCancelableCtx(job.ctx) {
close(waitCh)
} else {
- s.Debug(job.ctx, "localizeJobPulled: waiting for resume")
+ suspendCount, waiters, cancelChs, queued := s.suspendStats()
+ s.Debug(job.ctx, "localizeJobPulled[%s]: waiting for resume suspendCount: %d waiters: %d cancelChs: %d queued: %d",
+ id, suspendCount, waiters, cancelChs, queued)
go func() {
<-s.registerWaiter()
close(waitCh)
}()
}
select {
case <-waitCh:
- s.Debug(job.ctx, "localizeJobPulled: resume, proceeding")
+ suspendCount, waiters, cancelChs, queued := s.suspendStats()
+ s.Debug(job.ctx, "localizeJobPulled[%s]: resume, proceeding suspendCount: %d waiters: %d cancelChs: %d queued: %d",
+ id, suspendCount, waiters, cancelChs, queued)
case <-stopCh:
- s.Debug(job.ctx, "localizeJobPulled: shutting down")
+ s.Debug(job.ctx, "localizeJobPulled[%s]: shutting down", id)
return
}
s.jobPulled(job.ctx, job)
@@ -455,25 +472,25 @@ func (s *localizerPipeline) localizeJobPulled(job *localizerPipelineJob, stopCh
defer close(doneCh)
if err := s.localizeConversations(job); err == context.Canceled {
// just put this right back if we canceled it
- s.Debug(job.ctx, "localizeJobPulled: re-enqueuing canceled job")
+ s.Debug(job.ctx, "localizeJobPulled[%s]: re-enqueuing canceled job", id)
s.jobQueue <- job.retry(s.G())
}
if job.closeIfDone() {
- s.Debug(job.ctx, "localizeJobPulled: all job tasks complete")
+ s.Debug(job.ctx, "localizeJobPulled[%s]: all job tasks complete", id)
}
}()
select {
case <-doneCh:
job.cancelFn()
case <-cancelCh:
- s.Debug(job.ctx, "localizeJobPulled: canceled a live job")
+ s.Debug(job.ctx, "localizeJobPulled[%s]: canceled a live job", id)
job.cancelFn()
case <-stopCh:
- s.Debug(job.ctx, "localizeJobPulled: shutting down")
+ s.Debug(job.ctx, "localizeJobPulled[%s]: shutting down", id)
job.cancelFn()
return
}
- s.Debug(job.ctx, "localizeJobPulled: job pass complete")
+ s.Debug(job.ctx, "localizeJobPulled[%s]: job pass complete", id)
}

func (s *localizerPipeline) localizeLoop(stopCh chan struct{}) {
19 changes: 18 additions & 1 deletion go/chat/server.go
@@ -171,8 +171,13 @@ func (h *Server) RequestInboxLayout(ctx context.Context, reselectMode chat1.Inbo
func (h *Server) RequestInboxUnbox(ctx context.Context, convIDs []chat1.ConversationID) (err error) {
ctx = globals.ChatCtx(ctx, h.G(), keybase1.TLFIdentifyBehavior_CHAT_GUI, nil, nil)
ctx = globals.CtxAddLocalizerCancelable(ctx)
reqID := libkb.RandStringB64(3)
defer h.Trace(ctx, &err, "RequestInboxUnbox")()
defer h.PerfTrace(ctx, &err, "RequestInboxUnbox")()
h.Debug(ctx, "RequestInboxUnbox[%s]: begin convs: %d", reqID, len(convIDs))
defer func() {
h.Debug(ctx, "RequestInboxUnbox[%s]: return err: %v", reqID, err)
}()
for _, convID := range convIDs {
h.GetPerfLog().CDebugf(ctx, "RequestInboxUnbox: queuing unbox for: %s", convID)
h.Debug(ctx, "RequestInboxUnbox: queuing unbox for: %s", convID)
@@ -398,14 +403,26 @@ func (h *Server) GetUnreadline(ctx context.Context, arg chat1.GetUnreadlineArg)
func (h *Server) GetThreadNonblock(ctx context.Context, arg chat1.GetThreadNonblockArg) (res chat1.NonblockFetchRes, err error) {
var identBreaks []keybase1.TLFIdentifyFailure
ctx = globals.ChatCtx(ctx, h.G(), arg.IdentifyBehavior, &identBreaks, h.identNotifier)
reqID := libkb.RandStringB64(3)
defer h.Trace(ctx, &err,
"GetThreadNonblock(%s,%v,%v)", arg.ConversationID, arg.CbMode, arg.Reason)()
defer h.PerfTrace(ctx, &err,
"GetThreadNonblock(%s,%v,%v)", arg.ConversationID, arg.CbMode, arg.Reason)()
defer func() { h.setResultRateLimit(ctx, &res) }()
defer func() { err = h.handleOfflineError(ctx, err, &res) }()
defer func() {
h.Debug(ctx, "GetThreadNonblock[%s]: return convID: %s err: %v", reqID, arg.ConversationID, err)
}()
defer h.suspendBgConvLoads(ctx)()
- defer h.suspendInboxSource(ctx)()
+ h.Debug(ctx, "GetThreadNonblock[%s]: suspend inbox source begin convID: %s", reqID, arg.ConversationID)
+ resumeInboxSource := h.suspendInboxSource(ctx)
+ h.Debug(ctx, "GetThreadNonblock[%s]: suspend inbox source done convID: %s", reqID, arg.ConversationID)
+ defer func() {
+ h.Debug(ctx, "GetThreadNonblock[%s]: resume inbox source begin convID: %s", reqID, arg.ConversationID)
+ resumeInboxSource()
+ h.Debug(ctx, "GetThreadNonblock[%s]: resume inbox source done convID: %s", reqID, arg.ConversationID)
+ }()
+ h.Debug(ctx, "GetThreadNonblock[%s]: begin convID: %s sessionID: %d", reqID, arg.ConversationID, arg.SessionID)
uid, err := utils.AssertLoggedInUID(ctx, h.G())
if err != nil {
return chat1.NonblockFetchRes{}, err
30 changes: 20 additions & 10 deletions go/chat/uithreadloader.go
@@ -501,7 +501,14 @@ func (t *UIThreadLoader) LoadNonblock(ctx context.Context, chatUI libkb.ChatUI,
) (err error) {
var pagination, resultPagination *chat1.Pagination
var fullErr error
reqID := libkb.RandStringB64(3)
fullSent := false
defer t.Trace(ctx, &err, "LoadNonblock")()
t.Debug(ctx, "LoadNonblock[%s]: begin convID: %s reason: %v", reqID, convID, reason)
defer func() {
t.Debug(ctx, "LoadNonblock[%s]: return convID: %s err: %v fullErr: %v fullSent: %v",
reqID, convID, err, fullErr, fullSent)
}()
defer func() {
// Detect any problem loading the thread, and queue it up in the retrier if there is a problem.
// Otherwise, send notice that we successfully loaded the conversation.
@@ -539,7 +546,7 @@ func (t *UIThreadLoader) LoadNonblock(ctx context.Context, chatUI libkb.ChatUI,
return err
}
defer t.G().ConvSource.ReleaseConversationLock(ctx, uid, convID)
- t.Debug(ctx, "LoadNonblock: conversation lock obtained")
+ t.Debug(ctx, "LoadNonblock[%s]: conversation lock obtained convID: %s", reqID, convID)

// Enable delete placeholders for supersede transform
if query == nil {
@@ -648,11 +655,11 @@ func (t *UIThreadLoader) LoadNonblock(ctx context.Context, chatUI libkb.ChatUI,
} else {
t.Debug(ctx, "LoadNonblock: sending nil cached response")
}
- start := time.Now()
+ t.Debug(ctx, "LoadNonblock[%s]: cached send begin convID: %s", reqID, convID)
if err := chatUI.ChatThreadCached(ctx, pthread); err != nil {
t.Debug(ctx, "LoadNonblock: failed to send cached thread: %s", err)
}
- t.Debug(ctx, "LoadNonblock: cached response send time: %v", time.Since(start))
+ t.Debug(ctx, "LoadNonblock[%s]: cached send done convID: %s", reqID, convID)
}(localCtx)

startTime := t.clock.Now()
@@ -708,23 +715,25 @@ func (t *UIThreadLoader) LoadNonblock(ctx context.Context, chatUI libkb.ChatUI,
}
resultPagination = rthread.Pagination
t.applyPagerModeOutgoing(ctx, convID, rthread.Pagination, pagination, pgmode)
- start = time.Now()
- if fullErr = chatUI.ChatThreadFull(ctx, string(jsonUIRes)); err != nil {
- t.Debug(ctx, "LoadNonblock: failed to send full result to UI: %s", err)
+ t.Debug(ctx, "LoadNonblock[%s]: full send begin convID: %s", reqID, convID)
+ if fullErr = chatUI.ChatThreadFull(ctx, string(jsonUIRes)); fullErr != nil {
+ t.Debug(ctx, "LoadNonblock: failed to send full result to UI: %s", fullErr)
return
}
- t.Debug(ctx, "LoadNonblock: full response send time: %v", time.Since(start))
+ fullSent = true
+ t.Debug(ctx, "LoadNonblock[%s]: full send done convID: %s", reqID, convID)

// This means we transmitted with success, so cancel local thread
cancel()
}()
wg.Wait()

- t.Debug(ctx, "LoadNonblock: thread payloads transferred, checking for resolve")
+ t.Debug(ctx, "LoadNonblock[%s]: payload transfer complete convID: %s fullSent: %v", reqID, convID, fullSent)
// Resolve any messages we didn't cache and get full information about
if fullErr == nil {
fullErr = func() error {
skips := globals.CtxMessageCacheSkips(ctx)
t.Debug(ctx, "LoadNonblock[%s]: post-send resolve begin convID: %s skips: %d", reqID, convID, len(skips))
cancelUIStatus := t.setUIStatus(ctx, chatUI, chat1.NewUIChatThreadStatusWithValidating(0),
getDelay())
defer func() {
@@ -797,13 +806,14 @@ func (t *UIThreadLoader) LoadNonblock(ctx context.Context, chatUI libkb.ChatUI,
t.G().ActivityNotifier.Activity(ctx, uid, chat1.TopicType_CHAT,
&act, chat1.ChatActivitySource_LOCAL)
}
t.Debug(ctx, "LoadNonblock[%s]: post-send resolve done convID: %s", reqID, convID)
return nil
}()
}

// Clean up context and set final loading status
if getDisplayedStatus() {
- t.Debug(ctx, "LoadNonblock: status displayed, clearing")
+ t.Debug(ctx, "LoadNonblock[%s]: final status clear begin convID: %s", reqID, convID)
t.clock.Sleep(t.validatedDelay)
// use a background context here in case our context has been canceled, we don't want to not
// get this banner off the screen.
@@ -820,7 +830,7 @@ func (t *UIThreadLoader) LoadNonblock(ctx context.Context, chatUI libkb.ChatUI,
t.Debug(ctx, "LoadNonblock: failed to set status: %s", err)
}
}
- t.Debug(ctx, "LoadNonblock: clear complete")
+ t.Debug(ctx, "LoadNonblock[%s]: final status clear done convID: %s", reqID, convID)
} else {
t.Debug(ctx, "LoadNonblock: no status displayed, not clearing")
}
130 changes: 130 additions & 0 deletions plans/chat-refactor.md
@@ -0,0 +1,130 @@
# Chat Message Perf Cleanup Plan

## Goal

Reduce chat conversation mount cost, cut per-row Zustand subscription fan-out, and remove render thrash in the message list without changing behavior.

## Constraints

- Preserve existing chat behavior and platform-specific handling.
- Prefer small, reviewable patches with one clear ownership boundary each.
- This machine does not have `node_modules` for this repo, so this plan assumes pure code work unless validation happens elsewhere.

## Working Rules

- Use one clean context per workstream below.
- Do not mix store-shape changes and row rendering changes in the same patch unless one directly unblocks the other.
- Keep desktop and native paths aligned unless there is a platform-specific reason not to.
- Treat each workstream as independently landable where possible.
- Do not preserve proxy dispatch APIs solely to avoid touching callers when state ownership changes; migrate callers to the new owner in the same workstream.
- When a checklist item is implemented, update this plan in the same change and mark that item done.

## Workstreams

### 1. Row Renderer Boundary

- [x] Introduce a single row entry point that takes `ordinal` and resolves render type inside the row.
- [x] Remove list-level render dispatch from `messageTypeMap` where possible.
- [x] Delete the native `extraData` / `forceListRedraw` placeholder escape hatch if the new row boundary makes it unnecessary.
- [x] Keep placeholder-to-real-message transitions stable on both native and desktop.

Primary files:

- `shared/chat/conversation/list-area/index.native.tsx`
- `shared/chat/conversation/list-area/index.desktop.tsx`
- `shared/chat/conversation/messages/wrapper/index.tsx`
- `shared/chat/conversation/messages/placeholder/wrapper.tsx`
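The row boundary described above can be sketched roughly as follows. The message shape, `messageMap`, and row names are hypothetical stand-ins: the point is only that the list hands each row an `ordinal` and the row resolves its own render type, so a placeholder becoming a real message re-renders one row instead of changing the list's item structure.

```typescript
// Hypothetical message shape; the real repo types differ.
type Message = {type: 'text' | 'attachment' | 'placeholder'; text?: string}

const messageMap = new Map<number, Message>([
  [1, {type: 'text', text: 'hi'}],
  [2, {type: 'placeholder'}],
])

// Single row entry point: the list renders one row per ordinal and never
// dispatches on message type itself. Render type is resolved inside.
const renderRow = (ordinal: number): string => {
  const message = messageMap.get(ordinal)
  switch (message?.type) {
    case 'text':
      return `TextRow(${message.text})`
    case 'attachment':
      return 'AttachmentRow'
    default:
      // Missing or placeholder messages share one fallback row.
      return 'PlaceholderRow'
  }
}
```

With this boundary, the native list no longer needs an `extraData` escape hatch to force redraws when placeholders resolve, since the row subscribes to its own message.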

### 2. Incremental Derived Message Metadata

- [x] Stop rebuilding whole-thread derived maps on every `messagesAdd`.
- [x] Update separator, username-grouping, and reaction-order metadata only for changed ordinals and any affected neighbors.
- [x] Avoid rebuilding and resorting `messageOrdinals` unless thread membership actually changed.
- [x] Re-evaluate whether some derived metadata should live in store state at all.
- [ ] Audit per-message render-time computation and decide whether values that are only consumed by one caller should be stored in derived message state instead of recomputed during render.

Primary files:

- `shared/stores/convostate.tsx`
- `shared/chat/conversation/messages/separator.tsx`
- `shared/chat/conversation/messages/reactions-rows.tsx`
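The incremental-update idea above can be sketched like this, using username-grouping metadata as the example. The data shapes (`authorOf`, `Meta`, `applyAdd`) are hypothetical: the real derived state lives in `convostate.tsx`. What matters is that an added message recomputes metadata only for itself and the neighbor whose grouping it can affect, not the whole thread.

```typescript
// Hypothetical derived metadata: show the author header only when the
// author differs from the previous ordinal's author.
type Meta = {showAuthor: boolean}

const authorOf = new Map<number, string>([
  [1, 'alice'],
  [2, 'alice'],
  [3, 'bob'],
])

const computeMeta = (ordinal: number): Meta => ({
  showAuthor: authorOf.get(ordinal) !== authorOf.get(ordinal - 1),
})

// Full build happens once; afterwards we never rebuild the whole map.
const metaMap = new Map<number, Meta>()
for (const ordinal of authorOf.keys()) metaMap.set(ordinal, computeMeta(ordinal))

// On messagesAdd, touch only the changed ordinal plus the next ordinal,
// whose grouping depends on its predecessor.
const applyAdd = (ordinal: number, author: string) => {
  authorOf.set(ordinal, author)
  for (const o of [ordinal, ordinal + 1]) {
    if (authorOf.has(o)) metaMap.set(o, computeMeta(o))
  }
}

applyAdd(4, 'bob') // appended by the same author as ordinal 3
```

The same neighbor-window pattern applies to separators and reaction ordering; only the definition of "affected neighbors" changes per metadata kind.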

### 3. Row Subscription Consolidation

- [x] Move toward one main convo-store subscription per mounted row.
- [x] Push row data down as props instead of reopening store subscriptions in reply, reactions, emoji, send-indicator, exploding-meta, and similar children.
- [x] Audit attachment and unfurl helpers for repeated `messageMap.get(ordinal)` selectors.
- [x] Keep selectors narrow and stable when a child still needs to subscribe directly.

Decision note:

- Avoid override/fallback component modes when a parent can supply concrete row data.
- Prefer separate components for distinct behaviors, such as a real reaction chip versus an add-reaction button, rather than one component that mixes controlled, connected, and fallback paths.

Primary files:

- `shared/chat/conversation/messages/wrapper/wrapper.tsx`
- `shared/chat/conversation/messages/text/wrapper.tsx`
- `shared/chat/conversation/messages/text/reply.tsx`
- `shared/chat/conversation/messages/reactions-rows.tsx`
- `shared/chat/conversation/messages/emoji-row.tsx`
- `shared/chat/conversation/messages/wrapper/send-indicator.tsx`
- `shared/chat/conversation/messages/wrapper/exploding-meta.tsx`
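The consolidation target can be sketched as follows. The store shape, `selectRowData`, and the child renderers are hypothetical: the sketch shows one narrow selector at the row wrapper feeding plain props to children, instead of each child reopening the store with its own `messageMap.get(ordinal)` call.

```typescript
// Hypothetical store shape; the real convo store is richer.
type StoreState = {
  messageMap: Map<number, {text: string; reactions: string[]}>
}

const state: StoreState = {
  messageMap: new Map([[7, {text: 'hello', reactions: ['+1']}]]),
}

// One narrow, stable selector per mounted row. A single messageMap lookup
// serves every child of the row.
const selectRowData = (s: StoreState, ordinal: number) => {
  const m = s.messageMap.get(ordinal)
  return {text: m?.text ?? '', reactions: m?.reactions ?? []}
}

// Children are plain functions of props; none of them read the store.
const renderReactions = (reactions: string[]) => reactions.join(',')
const renderRow = (ordinal: number) => {
  const {text, reactions} = selectRowData(state, ordinal)
  return `${text}|${renderReactions(reactions)}`
}
```

This is also where the decision note bites: a reaction chip that always receives concrete data can stay a dumb component, while the add-reaction button, which has different behavior, is a separate component rather than a fallback mode of the chip.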

### 4. Split Volatile UI State From Message Data

- [x] Inventory convo-store fields that are transient UI state rather than message graph state.
- [x] Move thread-search visibility and search request/results state out of `convostate` into route params plus screen-local UI state.
- [x] Move route-local or composer-local state out of the main convo message store.
- [x] Keep dispatch call sites readable and avoid direct component store mutation.
- [x] Minimize unrelated selector recalculation when typing/search/composer state changes.

Primary files:

- `shared/stores/convostate.tsx`
- `shared/chat/conversation/*`
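The split can be sketched like this; the shapes and the `toggleSearch` helper are hypothetical. The payoff is that flipping a volatile flag (thread-search visibility here) never touches the message store, so message selectors are not invalidated by typing or search toggles.

```typescript
// Shared message-graph state: stable, shared across screens.
type MessageStore = {messageOrdinals: number[]}
const messageStore: MessageStore = {messageOrdinals: [1, 2, 3]}

// Volatile UI state lives with the screen, not in the shared store.
type ScreenUIState = {threadSearchOpen: boolean}
const screenState: ScreenUIState = {threadSearchOpen: false}

// Count selector invocations to make the isolation observable.
let messageSelectorRuns = 0
const selectOrdinals = (s: MessageStore) => {
  messageSelectorRuns++
  return s.messageOrdinals
}

// Toggling search mutates only screen-local state; no message selector
// needs to re-run because the message store did not change.
const toggleSearch = () => {
  screenState.threadSearchOpen = !screenState.threadSearchOpen
}

selectOrdinals(messageStore) // initial render
toggleSearch()
```
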

### 5. List Data Stability And Recycling

- [ ] Remove avoidable array cloning / reversing in the hottest list path.
- [x] Replace effect-driven recycle subtype reporting with data available before or during row render.
- [ ] Re-check list item type stability after workstreams 1 and 3 land.
- [ ] Keep scroll position and centered-message behavior unchanged.

Primary files:

- `shared/chat/conversation/list-area/index.native.tsx`
- `shared/chat/conversation/messages/text/wrapper.tsx`
- `shared/chat/conversation/recycle-type-context.tsx`
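The first checklist item above can be sketched as follows; `itemAt` is a hypothetical helper, not an existing API. An inverted list wants newest-first ordering, and the hot path can derive it by index arithmetic instead of allocating `[...ordinals].reverse()` on every render, which keeps the backing array referentially stable for the list.

```typescript
// Stable backing data, oldest-first as the store keeps it.
const messageOrdinals = [101, 102, 103, 104]

// Newest-first access without cloning or reversing: compute the source
// index from the list index on each item lookup.
const itemAt = (index: number): number =>
  messageOrdinals[messageOrdinals.length - 1 - index]
```
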

### 6. Measurement And Regression Guardrails

- [ ] Add or improve lightweight profiling hooks where they help compare before/after behavior.
- [ ] Define a manual verification checklist for initial thread mount, new incoming message, placeholder resolution, reactions, edits, and centered jumps.
- [ ] Capture follow-up profiling notes after each landed workstream.

Primary files:

- `shared/chat/conversation/list-area/index.native.tsx`
- `shared/chat/conversation/list-area/index.desktop.tsx`
- `shared/perf/*`
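A lightweight profiling hook of the kind this workstream calls for might look like the sketch below. `markStart`/`markEnd` are hypothetical names, not an existing `shared/perf` API; the idea is just paired marks producing durations that can be compared before and after each landed workstream.

```typescript
// Hypothetical paired-mark timing helper for before/after comparisons.
const marks = new Map<string, number>()
const timings: Record<string, number> = {}

const markStart = (label: string) => {
  marks.set(label, Date.now())
}

const markEnd = (label: string) => {
  const start = marks.get(label)
  // Ignore unmatched ends so stray calls cannot corrupt the numbers.
  if (start !== undefined) timings[label] = Date.now() - start
}

markStart('threadMount')
// ...initial thread mount work would happen here...
markEnd('threadMount')
```

Durations recorded this way can feed the manual verification checklist (initial mount, incoming message, placeholder resolution, reactions, edits, centered jumps) without needing a full profiler attached.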

## Recommended Order

1. Workstream 1: Row Renderer Boundary
2. Workstream 2: Incremental Derived Message Metadata
3. Workstream 3: Row Subscription Consolidation
4. Workstream 4: Split Volatile UI State From Message Data
5. Workstream 5: List Data Stability And Recycling
6. Workstream 6: Measurement And Regression Guardrails

## Clean Context Prompts

Use these as narrow follow-up task starts:

1. "Implement Workstream 1 from `PLAN.md`: introduce a row-level renderer boundary and remove the native placeholder redraw hack."
2. "Implement Workstream 2 from `PLAN.md`: make convo-store derived message metadata incremental instead of full-thread recompute."
3. "Implement Workstream 3 from `PLAN.md`: consolidate message row subscriptions so row children mostly receive props instead of subscribing directly."
4. "Implement Workstream 4 from `PLAN.md`: split volatile convo UI state from message graph state."
5. "Implement Workstream 5 from `PLAN.md`: stabilize list data and recycling after the earlier refactors."
6. "Implement Workstream 6 from `PLAN.md`: add measurement hooks and a regression checklist for the chat message perf cleanup."