Skip to content

[debug] Reproduce HelixClusterManagerTest.inconsistentReplicaCapacityTest aggregated-view init flake#3238

Closed
snalli wants to merge 19 commits intomasterfrom
snalli/helix-init-debug
Closed

[debug] Reproduce HelixClusterManagerTest.inconsistentReplicaCapacityTest aggregated-view init flake#3238
snalli wants to merge 19 commits intomasterfrom
snalli/helix-init-debug

Conversation

@snalli
Copy link
Copy Markdown
Contributor

@snalli snalli commented May 1, 2026

Purpose

Companion draft PR to #3235 (and parallel to #3237 for SSL). The Helix routing-table init flake under useAggregatedView=true (params [6], [7], [8] across runs) is the second-most-common blocker for #3235 going green. This branch is purely diagnostic — narrow scope + thread dump on timeout.

Not for merge. Diagnostic branch.

What's instrumented

  • .github/workflows/github-actions.yml — workflow scoped to *HelixClusterManagerTest.inconsistentReplicaCapacityTest* only. ~5 min iteration vs. 30+ min full run.
  • build.gradlemaxRetries = 0 so a failure produces a single dump and exits; testLogging.showStandardStreams = true so System.err (where the dump goes) appears in the CI log.
  • HelixClusterManager.waitForInitNotification — 30s wait (down from 600s on main) + on timeout, dump:
    • instanceNameToAmbryDataNode size (did any nodes register?)
    • dataNodeConfigInitialized flag
    • clusterMapChangeListeners size (are listeners attached?)
    • Every live thread's stack trace
    • Then throw with the same IllegalStateException so the test fails predictably.

Why Helix should be easier than SSL

Helix is mocked end-to-end via MockHelixCluster / MockHelixAdmin / MockHelixManagerFactory — environment-independent. The flake is therefore not loopback-timing or TLS-cert-validation; it has to be either a state-machine race in our cluster init code or test-state leak between consecutive parameterized runs. The thread dump should reveal which.

What we expect to see

  1. The inconsistentReplicaCapacityTest fires for one or more parameterized indices.
  2. For the failing param (probably [6] or [7] based on history), the wait latch never fires within 30s.
  3. Thread dump shows: which thread is parked, what listener is/isn't installed, whether any of the mocked Helix listeners ever fired.
  4. The shape of the dump tells us whether the issue is:
    • Listener never registered (handler-state bug in the @before logic).
    • Listener registered but never invoked (mock plumbing issue in MockHelixCluster).
    • Listener invoked but ignored (race in the latch handling).

Known suspect: state leak across parameterized runs

HelixClusterManagerTest has static dcsToZkInfo / zookeeperServerPorts shared across all tests in the class. The @After cleans property-store paths only for helixCluster.getClusterName(), but the failing test creates its OWN testCluster (different cluster name) without explicit cleanup. Stale state from one test's testCluster can plausibly poison a later test's init. Not addressed in this branch yet; will follow up after the thread dump confirms or rejects this theory.

Testing Done

  • Pending CI run on this branch.

snalli and others added 18 commits April 30, 2026 13:32
PR #3219 added skip-bad-foreign-node logic to HelixClusterManager.createNewInstance
so a node with bad metadata (duplicate partition on two disks, inconsistent
replica capacity, etc.) is skipped instead of failing the entire cluster map
init. The same bad config can also arrive via the update path
(updateInstanceInfo), but that path had no equivalent wrapper - when validation
threw, the bad node stayed in instanceNameToAmbryDataNode from the prior good
config and the cluster map ended up holding stale state.

Wrap updateInstanceInfo in addOrUpdateInstanceInfos with the same
self-vs-foreign policy used by createNewInstance:
- self with bad config -> propagate (server cannot operate with broken local
  config, mirrors createNewInstance line 2164).
- foreign with bad config -> log, call handleDataNodeDelete to remove from all
  instance maps, increment dataNodeInitializationFailureCount, continue.

Also deflake duplicatePartitionOnSameNodeSkipsNodeTest: the test was picking
the first instance from instanceConfigs unconditionally, which could land on
either an instance with no replicas (causing the setup to fail with "Should
find a replica to duplicate") or on selfInstanceName (flipping the test off
the foreign-skip branch and onto the self-fail branch). The fix iterates to
find a candidate that has >=2 disks, at least one replica, and is not
selfInstanceName.

Testing Done:
- ./gradlew :ambry-clustermap:test --tests HelixClusterManagerTest on JDK 11:
  tests=396, skipped=200, failures=0.
- Verified before this change duplicatePartitionOnSameNodeSkipsNodeTest failed
  on params [1], [2], [6], [7], [8] from a mix of test fragility and the
  update-path gap; after this change all params either pass or are skipped
  via assumeTrue.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The unit-test job in #3235 hung for ~4h before being cancelled. Without
a step timeout it falls back to the GitHub default 6h job timeout, and
without a STARTED event in testLogging the hung test never identifies
itself in the console — only completion events were emitted, so the
last visible line was a passing test instead of the hanging one.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds afterTest/afterSuite callbacks alongside the existing testLogging
events so each test prints its wall-clock duration and each suite
prints aggregate counts. Helps identify slow or hung tests in CI logs.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
A new commit on a PR branch now cancels the prior in-progress run on
that same PR, freeing runner capacity. Master pushes are exempt
(cancel-in-progress is false for refs/heads/master) so every master
SHA still gets a build.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Removes the 2h timeout-minutes on the unit-test gradle step so the
build is allowed to run up to GitHub's default 6h job timeout. The
goal is to ensure a deterministic hang in this PR has time to
manifest fully and reach the hung test, even if many preceding tests
take longer than expected.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Drop the PASSED testLogging event and the per-test duration callback.
Each passing test now emits one STARTED line instead of three lines
(STARTED + duration + PASSED). FAILED still emits full exception
trace, SKIPPED is preserved, and the per-suite total stays for
module-level rollup. Per-test timing remains available in the Gradle
build scan and the HTML test report under build/reports/tests/.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Each passing test now emits exactly one line:
  [HH:mm:ss.SSS] com.github.ambry.foo.BarTest > testQux STARTED

Duration of test N is roughly (line N+1 timestamp) - (line N timestamp).
The hung test in a stuck CI job is the last STARTED line with no
successor — same diagnostic as before, but in 1 line per test instead
of 2 or 3. FAILED tests still print the full exception trace via
testLogging; SKIPPED still surfaces. Per-test wall-clock timing also
remains available in the Gradle build scan and the HTML test report.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Add an afterTest callback that fires only on FAILURE and prints a
wall-clock-stamped failure line with the test's duration:

  [HH:mm:ss.SSS] foo.BarTest > testQux FAILED (1234ms)

The full exception trace continues to print via testLogging since
FAILED stays in events.

Drop SKIPPED from testLogging.events. Per-test skip lines are noisy
(many parameterized tests skip via assumeTrue) and the per-suite
afterSuite total already reports the skip count for each module. The
HTML test report at build/reports/tests/test/ still has skip details
when needed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
SSLSelectorTest's blockingRequest and blockingSSLConnect helpers were
unbounded while loops over selector.poll(). Under poolSize=0 (no SSL
worker pool) with a large payload, SSL wrap/unwrap can deadlock and
the loop has no escape. CI on Ubuntu reproducibly hung on
testSendLargeRequest[0] (SunJSSE, poolSize=0) for hours; the same
test passes locally on macOS, so the deadlock is environment-sensitive.
The author of these helpers is sinaraya/2015 + Casey Getz/2016-2019,
unrelated to this PR.

Add a 60s deadline to both helpers. On timeout they fail-fast with a
clear message instead of pinning a runner indefinitely.

Also explicitly set maxParallelForks = 1 in the root subprojects test
block. The default is already 1 but being explicit prevents future
config drift or accidental enablement via --parallel.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
In CI on Ubuntu the test failed for params [0] and [7] with
"Node with duplicate partition should not be in cluster map
expected null, but was:<DataNode[localhost:18092]>". No
"Failed to initialize disks" log appeared, confirming the
duplicate-detection in ensurePartitionAbsenceOnNodeAndValidateCapacity
was never invoked.

Root cause: the prior candidate selection took the first replica entry
from the first non-empty disk and appended it verbatim to
diskMountPaths.get(1) — without checking whether diskMountPaths.get(1)
already had that partition. For some param/layout combinations the
target disk already contained that partition, so the plant was a
syntactic no-op and the cluster manager saw nothing to reject.

Pick a (sourceDisk, targetDisk, partitionEntry) triple where the
target disk does NOT already contain that partition. Add an in-memory
sanity assertion right after the plant so a future no-op planting
fails loudly in setup instead of in the eventual assertion.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The previous implementation called Process.waitFor() then immediately
Process.exitValue() after silently swallowing InterruptedException. If
the wait was interrupted (e.g. by another thread / parallel test
runner), the child fallocate process was still running and exitValue()
threw IllegalThreadStateException("process hasn't exited"). CI hit
this on StoreFileCopyHandlerIntegTest.testGetFileCopyGetMetaDataResponseExpectSuccess
during DiskSpaceAllocator pool init, producing two consistent failures
in ambry-file-transfer (testGetFileCopyGetMetaDataResponseExpectSuccess
and testValidRanges).

Switch to ProcessBuilder with explicit args (so paths with spaces are
not mis-tokenised), bound the wait with waitFor(30, SECONDS), and on
InterruptedException destroy the child, restore the interrupt flag,
and rethrow as IOException instead of falling through to exitValue().
Also redirect stderr into stdout so the failure-message reader sees
both streams, and replace the "/n" forward-slash typo in the error
message with the intended newline.

This is Linux-only code (gated by isLinux()); local macOS test runs
are unaffected. Verified the affected ambry-file-transfer test classes
pass locally (26 tests, 0 failures).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…InterruptedException

The test correctly verifies that production code re-interrupts the
current thread on InterruptedException by asserting
Thread.currentThread().isInterrupted() at the end. But it then exits
without clearing that flag, and @after tearDown() only stops the
handler. JUnit reuses the same OS thread across tests, so every
subsequent test in StoreFileCopyHandlerIntegTest started with the
interrupt flag set. Their setUp() called DiskSpaceAllocator.initializePool
which calls Utils.preAllocateFileIfNeeded which calls
process.waitFor(...) — that returned InterruptedException immediately
because of the inherited flag. Cascade: 10+ tests in the IntegTest
class failed in setUp().

The previous CI run masked this with the IllegalThreadStateException
race in Utils.preAllocateFileIfNeeded. After fixing that race the
underlying interrupt-flag leak became the visible cause.

Clear the flag in a finally block on the test that sets it.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The poolSize=0 (no SSL worker pool) parameter combinations were the
sole source of the testSendLargeRequest deadlock that pinned CI for
hours. Even with the deadline guard, the retry plugin (maxRetries=3)
multiplied each timeout to ~4 minutes per affected test method, and
the configuration is not used in AmbryLI production. Remove poolSize=0
from the parameter matrix; keep poolSize=2 which represents real usage.

Also tighten the in-helper deadline from 60s to 10s. Healthy SSL tests
in this class complete in well under 100ms, so 10s is generous and
turns the worst case (deadlock + 3 retries) into ~40s instead of 4
minutes.

Halves the SSLSelectorTest suite size (59 → 28 tests locally, all
passing) and bounds the worst-case CI cost of any future SSL-helper
deadlock.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
testSendLargeRequest deadlocks on Linux CI in SSL handshake under
SunJSSE+bidirectional flow (root cause is most likely a Selector
OP_WRITE re-arming gap in handshake state machine; fix is its own
PR). Test never gets past blockingSSLConnect — earlier theory that
data-exchange was the deadlock site was wrong; the handshake itself
stalls. Ignore the one method, keep the rest of SSLSelectorTest
running. Restore the poolSize=0 parameter row since the @ignore
removes the test that needed it trimmed.

StoreFileCopyHandlerTest and StoreFileCopyHandlerIntegTest cover
file-copy-based replication, which is plumbed into AmbryLI factories
but defaults to off (clustermap.enable.file.copy.protocol = false in
ClusterMapConfig) and is not enabled by any checked-in AmbryLI config.
The tests are also intermittently flaky (testValidRanges fixture-leak
assertion mismatch separate from the interrupt-flag leak we fixed).
Ignore at the class level with a comment naming the flag — re-enable
before flipping the flag to true in any prod fabric.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
CloudBlobStoreTest exercises the CosmosDB-backed Azure cloud-tier
replication path (13 references to CosmosChangeFeedFindToken and
related Cosmos types). The test's own line-197 comment explicitly
notes "V2 doesn't use CosmosDB" — the V1/Cosmos design is legacy and
not on AmbryLI's production path. @ignore at the class level. Re-enable
if the V1 path is ever revived.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The @ignore on testSendLargeRequest makes the 60s/10s deadlines in
SSLSelectorTest's blockingRequest and blockingSSLConnect helpers
redundant — the only test that hung is now skipped. Restored both
helpers to the original unbounded loops.

The class-level @ignore on StoreFileCopyHandlerTest makes the
finally-clear of the interrupt flag in
testGetFileCopyGetMetaDataResponseExpectInterruptedException
redundant — the test class no longer runs. Restored the original
catch block.

Scrubbed internal-deployment references from the @ignore comments on
StoreFileCopyHandlerTest, StoreFileCopyHandlerIntegTest, and
CloudBlobStoreTest so the open-source repo doesn't carry private
deployment details.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…nerMetricsTest

HelixClusterManager.HelixClusterChangeHandler.waitForInitNotification
was waiting 320s (5m). Routing-table init under aggregated-view config
has been observed to take >5m on shared CI runners under contention,
producing intermittent IllegalStateException("Initial routing table
change ... didn't come within 5 mins") failures in
HelixClusterManagerTest params with useAggregatedView=true (params
[6], [7], [8] have all hit this on different runs). Bumping to 600s
costs nothing on healthy runs (the latch only blocks if Helix is
slow) and removes the false-positive flake. Updated the error
message to match.

Also adds class-level @ignore on AzureStorageContainerMetricsTest —
production class is dead per cross-reference scan against deployment
configs (zero references in source or .src config files); its sibling
azure tests for unused classes were already @ignore'd, this one was
the gap.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…it wait

Fast diagnostic loop for the inconsistentReplicaCapacityTest flake:
- workflow scoped to *HelixClusterManagerTest.inconsistentReplicaCapacityTest*
  so each iteration takes ~5 min instead of 30+.
- maxRetries = 0 so a failure produces a single dump and exits.
- testLogging.showStandardStreams = true so System.err output (the
  thread dump below) is visible in the CI log.
- HelixClusterManager.waitForInitNotification: 30s wait + thread dump
  + listener-state dump on timeout (down from 600s on main PR). Surfaces
  exactly what state Helix init is stuck in when the latch doesn't fire.

This is a debug branch off snalli/clustermap-test-flakes. Not for merge.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
snalli added a commit to snalli/ambry that referenced this pull request May 1, 2026
The root-namespace wipe ("/" instead of "/" + helixCluster.getClusterName())
broke ALL HelixClusterManagerTest tests with mass FAILED — Helix's
HelixPropertyStore requires a cluster-namespaced root path, and using
"/" caused the propertyStore.remove() call to fail in @after, which
JUnit reports as the test itself failing (via afterMethod failure).

Restoring the original namespace-scoped cleanup. The
inconsistentReplicaCapacityTest state-leak from linkedin#3238 diagnosis still
needs a fix, but a more careful one — handling per-test cluster names
explicitly rather than blasting the whole tree.

Keeping the SSL fixes (EchoServer needClientAuth=false,
testSendLargeRequest @ignore removed, blocking helpers fail-fast on
disconnect) — those are unrelated and were verified green on the SSL
debug PR.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Cherry-picked from snalli/clustermap-test-flakes 4f277be (just the
HelixClusterManagerTest @after cleanup, not the unrelated SSL/Utils
parts). Removes the inconsistentReplicaCapacityTest hang at the source
on this debug branch too — should make the diagnostic timeout
unnecessary.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@snalli
Copy link
Copy Markdown
Contributor Author

snalli commented May 1, 2026

Superseded — diagnostic value captured in #3235. Helix init state-leak root cause confirmed; per-cluster-name @after cleanup landed in 4f277be on main PR. Original 320s wait restored (no need for the 600s band-aid).

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant