RC1 Push #179
⛔ Important: Review skipped
Review was skipped due to path filters.
Files ignored due to path filters (2)
CodeRabbit blocks several paths by default. You can override this behavior by explicitly including those paths in the path filters.
📝 Walkthrough
Added Fabric bech32 payment encoding and BOLT12 semantics modules; extended the Lightning service with JSON-RPC helpers and static exports; introduced multiple native C translation units and crypto/security primitives (SHA-2, HMAC/PBKDF2, secure memory, RNG); updated tx hashing, Chain proposal deduplication, envelope normalization, tests, and .gitignore to unignore native sources.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 3 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Up to standards ✅
🟢 Issues

| Metric | Results |
|---|---|
| Complexity | 82 |
| Duplication | 2 |
Codecov Report
❌ Patch coverage is
Additional details and impacted files

@@ Coverage Diff @@
## master #179 +/- ##
==========================================
+ Coverage 86.31% 86.50% +0.18%
==========================================
Files 85 85
Lines 24298 24542 +244
Branches 1 1
==========================================
+ Hits 20973 21230 +257
+ Misses 3325 3312 -13

☔ View full report in Codecov by Sentry.
Actionable comments posted: 14
🧹 Nitpick comments (4)
tests/fabric.chain.js (1)
90-104: LGTM — dedup coverage is appropriate; consider one extra assertion.
The test correctly pins down the new de-dup behavior: same payload → same `tx.id`, single mempool entry. Optionally also assert `tx1 === tx2` (same instance) to lock in that `proposeTransaction` returns the existing `Transaction` object rather than a freshly wrapped one — this is the cheaper path your `chain.js` changes now take.
♻️ Optional extra assertion

  assert.strictEqual(tx1.id, tx2.id);
+ assert.strictEqual(tx1, tx2, 'duplicate proposal should return the same Transaction instance');
  assert.strictEqual(chain.mempool.filter((id) => id === tx1.id).length, 1);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/fabric.chain.js` around lines 90 - 104: Add an extra assertion to the test to ensure proposeTransaction returns the same Transaction instance for duplicate payloads: after creating tx1 and tx2 via Chain.prototype.proposeTransaction (and existing assertions), assert that tx1 === tx2 so the test verifies both identical ids and identical object identity (same instance) rather than two separate wrappers.

types/chain.js (1)
138-141: Dead `new Transaction(prior)` branch.
The constructor (lines 56–59) already normalizes every entry in `_state.transactions` to a `Transaction` instance, and `proposeTransaction` below only ever stores `actor` (also a `Transaction`). The `prior instanceof Transaction ? prior : new Transaction(prior)` ternary's false branch is therefore unreachable in practice — you can simplify to `return prior;`.
♻️ Proposed simplification

- const prior = this._state.transactions[actor.id];
- if (prior) {
-   return prior instanceof Transaction ? prior : new Transaction(prior);
- }
+ const prior = this._state.transactions[actor.id];
+ if (prior) return prior;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@types/chain.js` around lines 138 - 141: The branch creating new Transaction instances is dead because the constructor and proposeTransaction already ensure every entry in _state.transactions is a Transaction; replace the ternary return (prior instanceof Transaction ? prior : new Transaction(prior)) with a simple return prior in the accessor so it always returns the stored Transaction instance (update the code around references to _state.transactions and the Transaction type accordingly).

types/bitcoin/transaction.js (1)
14-19: Remove unreachable `catch` block.
Node's `Buffer.from(string, 'hex')` does not throw on malformed hex — it silently truncates at the first invalid nibble and returns the partial result. The `try/catch` here is dead code. For stricter validation, replace the catch block with an additional guard: `!/^[0-9a-fA-F]+$/.test(s)` to reject strings with non-hex characters before decoding.
♻️ Proposed simplification

- const s = String(raw).replace(/\s+/g, '');
- if (!s.length || s.length % 2 !== 0) return Buffer.alloc(0);
- try {
-   return Buffer.from(s, 'hex');
- } catch {
-   return Buffer.alloc(0);
- }
+ const s = String(raw).replace(/\s+/g, '');
+ if (!s.length || s.length % 2 !== 0 || !/^[0-9a-fA-F]+$/.test(s)) return Buffer.alloc(0);
+ return Buffer.from(s, 'hex');

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@types/bitcoin/transaction.js` around lines 14 - 19: The try/catch around Buffer.from(s, 'hex') is unreachable; remove the catch and instead validate the input string first — if s is falsy or fails the hex regex /^[0-9a-fA-F]+$/ then return Buffer.alloc(0), otherwise call Buffer.from(s, 'hex') to decode; update the code that currently uses Buffer.from(s, 'hex') and Buffer.alloc(0) so the non-hex guard is applied before decoding.

src/sha2.h (1)
1-17: Consider adding `extern "C"` guards to the header for portability.
While `binding.cc` wraps its C header includes in an `extern "C"` block (lines 4–12), including `sha2.h`, the header itself lacks these guards. This is a best-practice portability measure: if another C++ file includes `sha2.h` without wrapping it, function declarations would be name-mangled, causing link failures. Adding guards to the header makes it reusable without requiring consumers to manage linkage externally.
🔧 Proposed header fix

  #include <stddef.h>
  #include <stdint.h>

+ #ifdef __cplusplus
+ extern "C" {
+ #endif

  int fabric_sha256(const uint8_t *data, size_t len, uint8_t out32[32]);
  int fabric_sha512(const uint8_t *data, size_t len, uint8_t out64[64]);
  int fabric_hmac_sha512(const uint8_t *key, size_t key_len, const uint8_t *data, size_t data_len, uint8_t out64[64]);
  int fabric_pbkdf2_hmac_sha512(const uint8_t *password, size_t password_len, const uint8_t *salt, size_t salt_len, uint32_t iterations, uint8_t *out, size_t out_len);

+ #ifdef __cplusplus
+ }
+ #endif

  #endif

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/sha2.h` around lines 1 - 17: The header lacks C++ linkage guards which can cause name-mangling when included from C++ directly; wrap the declarations in an extern "C" block guarded by #ifdef __cplusplus to preserve C linkage. Modify src/sha2.h to add the standard pattern (e.g., #ifdef __cplusplus extern "C" { before the declarations and } #endif after) so all functions like fabric_sha256, fabric_sha512, fabric_hmac_sha512, and fabric_pbkdf2_hmac_sha512 retain C linkage for C++ consumers.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@functions/bolt12Semantics.js`:
- Around line 23-57: Rename the exported constant BOLT12_TLV to CLN_BOLT12_TLV
to explicitly mark Core Lightning extensions (keep all existing recurrence
entries so tests and summarizeBolt12RecurrenceFromDecode() continue to work),
and augment the constant with the missing BOLT #12 spec TLV ids: INVREQ_FEATURES
(84), INVREQ_PAYER_NOTE (89), INVREQ_PATHS (90), INVREQ_BIP353_NAME (91),
INVOICE_RELATIVE_EXPIRY (166), INVOICE_FALLBACKS (172), and INVOICE_FEATURES
(174); then update any imports/usages in tests and code (e.g., change
bs.BOLT12_TLV to bs.CLN_BOLT12_TLV) to match the new name.
In `@functions/fabricPaymentBech32.js`:
- Around line 53-56: The array-to-buffer conversion for o.hash32 can silently
accept invalid numbers; update the Array.isArray(o.hash32) branch to validate
each element before calling Buffer.from: ensure every element is a finite
integer and between 0 and 255 (reject negatives, >255, NaN, and non-integers)
and throw a TypeError if any element fails; keep existing handling for Buffer
and Uint8Array as-is but consider throwing if their lengths are unexpected
elsewhere in the function (use the unique symbol o.hash32 to locate the checks).
In `@services/lightning.js`:
- Around line 999-1001: The static exports Lightning.Bolt12 and
Lightning.FabricPayment reference undefined symbols (lightningBolt12,
fabricPaymentBech32); import or require the bridge modules that define those
symbols before assigning the exports. Locate the assignments to Lightning.Bolt12
and Lightning.FabricPayment and add explicit imports/require calls that produce
lightningBolt12 and fabricPaymentBech32 (or ensure those variables are declared
earlier), then assign the imported values to Lightning.Bolt12 and
Lightning.FabricPayment so requiring the module does not throw.
- Around line 290-351: The RPC methods (createOffer, fetchInvoice, pay,
listOffers, disableOffer, createInvoiceRequest, listInvoiceRequests,
disableInvoiceRequest, sendInvoice) are wrapping parameter objects in arrays
which sends them as positional params; change calls to _makeRPCRequest to pass
named-parameter objects directly (e.g. _makeRPCRequest('offer', params) or
_makeRPCRequest('listoffers', filter) instead of [params]) and for variants like
fetchInvoice that accept either an offer string or an object, pass the single
argument directly (or merge invoiceParams into an object when both are provided)
so Core Lightning receives named params; keep using [] or null only for RPCs
that truly take no params and ensure method signatures still validate shapes
before calling _makeRPCRequest.
In `@src/security.c`:
- Around line 36-37: The allocation tracking logic can fail-open: when creating
an allocation_record_t (see allocations and memory_initialized) inside
fabric_secure_malloc (and the other record-creation sites), the code currently
returns the secure buffer even if malloc(sizeof(allocation_record_t)) fails, so
later fabric_secure_free cannot find the record to zero memory; change this to
fail-closed by checking the result of malloc for the allocation_record_t and on
failure free the allocated secure buffer and return NULL (do not expose an
untracked allocation). Apply the same pattern to every record-creation site (the
blocks that manipulate allocation_record_t around fabric_secure_malloc, the
other two similar blocks referenced) so that on any tracking-memory allocation
failure you clean up any previously allocated resources and return NULL.
- Around line 568-585: The loop computing the longest repeated-byte run can miss
a trailing run because max_consecutive is only updated when a change is seen;
after the for loop ends you must compare consecutive_count with max_consecutive
and update max_consecutive if needed so the final run (at buffer end) is
considered before the threshold check; update symbols: test_buffer,
consecutive_count, max_consecutive, test_size.
- Around line 382-395: The getrandom() call in the __linux__ block (using buffer
and length) should treat EINTR by retrying instead of immediately returning
FABRIC_ERROR_SYSTEM_CALL_FAILED; modify the logic around the getrandom result so
that if result < 0 and errno == EINTR you loop and call getrandom again, if
errno == ENOSYS fall through to the /dev/urandom fallback, and for any other
errno return FABRIC_ERROR_SYSTEM_CALL_FAILED; ensure the loop stops on success
(result == (ssize_t)length) or any non-retryable error.
- Around line 186-210: In fabric_secure_realloc, recover the real tracked old
size before copying when old_size may be 0: call the tracked-size lookup variant
of fabric_secure_free (e.g. fabric_secure_free(ptr, 0) or an equivalent helper)
to obtain actual_old_size, use that actual_old_size to compute copy_size and
memcpy, then call fabric_secure_free(ptr, actual_old_size) to securely free the
original buffer; keep the rest of the logic (allocating new_ptr via
fabric_secure_malloc and handling failures) unchanged.
In `@src/sha2.c`:
- Around line 288-320: Sensitive stack buffers (key_block, key_hash, o_key_pad,
i_key_pad, inner and similar per-function temporaries like u, t, sbuf) are left
on the stack on both success and early-return failure paths; wipe them
explicitly before every return. Update the HMAC/PBKDF2 routines (the block using
key_block/key_hash/... and the other block referenced at 328-358) to call a
secure zeroing routine (e.g., secure_memzero/OPENSSL_cleanse or an internal
constant-time memset) on those local buffers immediately after they are no
longer needed and also just before any early return, ensuring no sensitive
material remains on the stack. Ensure the zeroing covers the full sizeof(...) of
each buffer and is not optimized away by the compiler.
- Around line 264-276: Update the input validation in fabric_sha256 and
fabric_sha512 (and any other functions that currently do if (!ptr || !out)
return 0) to allow NULL data when len == 0 by changing the check to fail only
when (ptr == NULL && len != 0) or when output pointer is NULL; ensure
PBKDF2/HMAC entry points use the same logic. After any use of stack buffers that
hold sensitive material (key_block, o_key_pad, i_key_pad, inner in HMAC; u, t,
sbuf in PBKDF2), securely wipe them with a cryptographic secure zeroing function
(e.g., explicit_bzero/OPENSSL_cleanse/secure_memzero wrapper used by the
project) rather than plain memset, and call it in all return paths (including
error paths) so sensitive data is always cleared; reference functions:
fabric_sha256, fabric_sha512, HMAC-related functions that allocate
key_block/o_key_pad/i_key_pad/inner, and PBKDF2 implementation that uses
u/t/sbuf.
In `@tests/lightning/lightning.service.unit.js`:
- Around line 248-258: The tests currently expect and assert that methods like
createOffer, fetchInvoice, pay, listOffers, createInvoiceRequest,
listInvoiceRequests, and sendInvoice wrap their parameters in a positional array
when calling _makeRPCRequest; change each test to assert that _makeRPCRequest
receives the params as a dictionary/object (e.g., assert.deepStrictEqual(params,
p) rather than [p]) and update the stubbed _makeRPCRequest implementations used
in these specs so they check the named-parameter style (and return the same
mocked response), ensuring each test calls the corresponding method
(createOffer, fetchInvoice, pay, listOffers, createInvoiceRequest,
listInvoiceRequests, sendInvoice) and verifies the returned value as before.
In `@types/bitcoin/transaction.js`:
- Around line 47-71: The stored state field _state.content.raw is set once from
settings.raw in the constructor but the getters hash, txid and id call _rawBuf()
which reads from this.settings.raw, allowing divergence if settings.raw is
mutated; change the implementation so the canonical raw buffer is owned by state
(or normalized once in the constructor) and have _rawBuf() read from
this._state.content.raw (or store a frozen Buffer on construction and use that
for both _state.content.raw and _rawBuf()), and optionally memoize the computed
doubleSha256Hex value used by hash to avoid repeated work.
- Around line 73-77: The signAsHolder function is using doubleSha256Hex and
passing a hex string into this.holder.sign (which converts it as UTF-8),
producing a signature over ASCII bytes instead of the 32-byte hash; replace that
by computing the binary double-SHA256 of this._rawBuf() (a 32-byte Buffer) and
call this.holder.signSchnorrHash(digestBuf) to produce a correct Schnorr
signature (update references in signAsHolder from doubleSha256Hex /
this.holder.sign to the binary hash computation and
this.holder.signSchnorrHash).
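The repeated-byte-run finding above (around lines 568–585 of `src/security.c`) is a classic trailing-run off-by-one. A minimal JavaScript sketch of the corrected scan, for illustration only (the actual code is C, and `longestRun` is an illustrative name, not a symbol from the codebase):

```javascript
// Longest run of identical bytes in a buffer. The bug the review flags:
// if the maximum is only updated when the byte value changes, a run that
// ends at the buffer's final byte is never counted, so the maximum must
// be reconciled once more after the loop.
function longestRun (buf) {
  if (!buf.length) return 0;
  let max = 1;
  let count = 1;
  for (let i = 1; i < buf.length; i++) {
    if (buf[i] === buf[i - 1]) {
      count++;
    } else {
      if (count > max) max = count;
      count = 1;
    }
  }
  // Final reconciliation: without this, a trailing run is missed.
  if (count > max) max = count;
  return max;
}
```

Without the final reconciliation, `longestRun(Buffer.from([1, 2, 2, 2, 2]))` would report 1 instead of 4, which is exactly the miss described in the finding.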
---
Nitpick comments:
In `@src/sha2.h`:
- Around line 1-17: The header lacks C++ linkage guards which can cause
name-mangling when included from C++ directly; wrap the declarations in an
extern "C" block guarded by #ifdef __cplusplus to preserve C linkage. Modify
src/sha2.h to add the standard pattern (e.g., #ifdef __cplusplus extern "C" {
before the declarations and } #endif after) so all functions like
fabric_sha256, fabric_sha512, fabric_hmac_sha512, and fabric_pbkdf2_hmac_sha512
retain C linkage for C++ consumers.
In `@tests/fabric.chain.js`:
- Around line 90-104: Add an extra assertion to the test to ensure
proposeTransaction returns the same Transaction instance for duplicate payloads:
after creating tx1 and tx2 via Chain.prototype.proposeTransaction (and existing
assertions), assert that tx1 === tx2 so the test verifies both identical ids and
identical object identity (same instance) rather than two separate wrappers.
In `@types/bitcoin/transaction.js`:
- Around line 14-19: The try/catch around Buffer.from(s, 'hex') is unreachable;
remove the catch and instead validate the input string first—if s is falsy or
fails the hex regex /^[0-9a-fA-F]+$/ then return Buffer.alloc(0), otherwise call
Buffer.from(s, 'hex') to decode; update the code that currently uses
Buffer.from(s, 'hex') and Buffer.alloc(0) so the non-hex guard is applied before
decoding.
In `@types/chain.js`:
- Around line 138-141: The branch creating new Transaction instances is dead
because the constructor and proposeTransaction already ensure every entry in
_state.transactions is a Transaction; replace the ternary return (prior
instanceof Transaction ? prior : new Transaction(prior)) with a simple return
prior in the accessor so it always returns the stored Transaction instance
(update the code around references to _state.transactions and the Transaction
type accordingly).
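The hex-decoding nitpick above hinges on a Node behavior worth seeing directly: `Buffer.from(s, 'hex')` never throws on malformed input, it truncates at the first invalid byte pair. A minimal sketch of the guard-first approach from the proposed simplification (`hexToBuf` is an illustrative name, not the module's actual function):

```javascript
// Node's Buffer.from(string, 'hex') stops decoding at the first invalid
// nibble instead of throwing, so a try/catch around it is dead code.
// Validate the string up front and decode only well-formed hex.
function hexToBuf (raw) {
  const s = String(raw).replace(/\s+/g, '');
  if (!s.length || s.length % 2 !== 0 || !/^[0-9a-fA-F]+$/.test(s)) {
    return Buffer.alloc(0);
  }
  return Buffer.from(s, 'hex');
}
```

With the guard in place, inputs like `'xyz'` (non-hex) or `'abc'` (odd length) yield an empty buffer instead of a silently truncated partial decode.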
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 84cf117a-9054-418f-8c7a-6f6135057f85
⛔ Files ignored due to path filters (2)
`package-lock.json` is excluded by `!**/package-lock.json`
`reports/install.log` is excluded by `!**/*.log`
📒 Files selected for processing (19)
.gitignore, functions/bolt12Semantics.js, functions/fabricPaymentBech32.js, functions/lightningBolt12.js, services/lightning.js, src/crypto.c, src/p2p.c, src/protocol.c, src/security.c, src/sha2.c, src/sha2.h, tests/bitcoin/transaction.js, tests/bolt12.semantics.unit.js, tests/fabric.chain.js, tests/fabricPaymentBech32.unit.js, tests/lightning/bolt12.classify.unit.js, tests/lightning/lightning.service.unit.js, types/bitcoin/transaction.js, types/chain.js
  it('createOffer calls offer with object', async function () {
    const ln = new Lightning();
    const p = { amount_msat: '1000', description: 'd' };
    ln._makeRPCRequest = async (method, params) => {
      assert.strictEqual(method, 'offer');
      assert.deepStrictEqual(params, [p]);
      return { bolt12: 'lno1...' };
    };
    const out = await ln.createOffer(p);
    assert.strictEqual(out.bolt12, 'lno1...');
  });
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Inspect Lightning wrapper param forwarding for object/named Core Lightning RPC params.
# Expected: object-argument branches call _makeRPCRequest(method, paramsObject),
# not _makeRPCRequest(method, [paramsObject]) or _makeRPCRequest(method, ['offer', paramsObject]).
rg -n -C4 '\b(createOffer|fetchInvoice|pay|listOffers|createInvoiceRequest|listInvoiceRequests|sendInvoice)\s*\(|\b_makeRPCRequest\s*\(' --type=js

Repository: FabricLabs/fabric
Length of output: 50373
🏁 Script executed:
#!/bin/bash
# Find and read the Lightning service implementation to understand how params are forwarded
fd -n "lightning.js" --path "*/services/*" --type f

Repository: FabricLabs/fabric
Length of output: 291
🏁 Script executed:
#!/bin/bash
# Read the specific test file at the mentioned line ranges to see current assertions
sed -n '248,260p; 265,280p; 287,302p; 302,318p; 318,330p; 330,346p; 356,368p' tests/lightning/lightning.service.unit.js

Repository: FabricLabs/fabric
Length of output: 4526
🏁 Script executed:
#!/bin/bash
# Inspect the Lightning service implementation for createOffer, fetchInvoice, pay methods
rg -A15 "^\s*(createOffer|fetchInvoice|pay|listOffers|createInvoiceRequest|listInvoiceRequests|sendInvoice)\s*\(" services/lightning.js

Repository: FabricLabs/fabric
Length of output: 43
🏁 Script executed:
#!/bin/bash
# Find the Lightning service file
find . -name "*.js" -path "*/services/*" | grep -i lightning | head -5

Repository: FabricLabs/fabric
Length of output: 84
🏁 Script executed:
#!/bin/bash
# List services directory to understand structure
ls -la services/ 2>/dev/null | head -20

Repository: FabricLabs/fabric
Length of output: 736
🏁 Script executed:
#!/bin/bash
# Find all lightning-related JS files
fd "lightning" --type f -e js | grep -E "(service|index)" | head -10

Repository: FabricLabs/fabric
Length of output: 161
🏁 Script executed:
#!/bin/bash
# Read the Lightning service file to understand how parameters are forwarded
wc -l services/lightning.js

Repository: FabricLabs/fabric
Length of output: 87
🏁 Script executed:
#!/bin/bash
# Extract the createOffer method implementation
sed -n '/^\s*createOffer\s*(/,/^\s*}/p' services/lightning.js | head -30

Repository: FabricLabs/fabric
Length of output: 43
🏁 Script executed:
#!/bin/bash
# Search for how createOffer, fetchInvoice, pay methods call _makeRPCRequest
rg -B2 -A8 "createOffer|fetchInvoice|pay\s*\(" services/lightning.js | head -100

Repository: FabricLabs/fabric
Length of output: 1464
🏁 Script executed:
#!/bin/bash
# Continue reading the rest of createInvoiceRequest, listInvoiceRequests, sendInvoice methods
rg -B2 -A8 "createInvoiceRequest|listInvoiceRequests|sendInvoice" services/lightning.js | head -80

Repository: FabricLabs/fabric
Length of output: 1449
🏁 Script executed:
#!/bin/bash
# Check if there's any documentation about Lightning RPC params in the repo
find . -name "*.md" -type f | xargs grep -l -i "offer\|fetchinvoice\|lightning.*rpc" | head -5

Repository: FabricLabs/fabric
Length of output: 128
🏁 Script executed:
#!/bin/bash
# Look for any Core Lightning documentation or examples in the codebase
find . -name "*.md" -o -name "*.txt" | xargs grep -l "amount_msat\|lightningd" 2>/dev/null | head -5

Repository: FabricLabs/fabric
Length of output: 99
🏁 Script executed:
#!/bin/bash
# Check if there's a Lightning compatibility documentation
cat docs/LIGHTNING_COMPAT.md 2>/dev/null | head -100

Repository: FabricLabs/fabric
Length of output: 43
🏁 Script executed:
#!/bin/bash
# Look for any comments or documentation about RPC params style in the Lightning service
rg -B5 -A5 "_makeRPCRequest\s*\(" services/lightning.js | grep -E "^|--$|//|/\*" | head -50

Repository: FabricLabs/fabric
Length of output: 2116
🏁 Script executed:
#!/bin/bash
# Check if there's any reference to Core Lightning JSON-RPC conventions in comments
rg "JSON-RPC|named.*param|dict.*param|position.*param" services/lightning.js

Repository: FabricLabs/fabric
Length of output: 213
🏁 Script executed:
#!/bin/bash
# Look at the _makeRPCRequest implementation to see how it handles params
rg -B5 -A15 "^\s*async _makeRPCRequest\s*\(" services/lightning.js

Repository: FabricLabs/fabric
Length of output: 1056
🏁 Script executed:
#!/bin/bash
# Continue reading _makeRPCRequest to see how it constructs the JSON-RPC request
rg -A50 "async _makeRPCRequest" services/lightning.js | head -80

Repository: FabricLabs/fabric
Length of output: 1945
🏁 Script executed:
#!/bin/bash
# Look for how the request is actually sent - look for JSON.stringify or similar
rg -B5 -A10 "JSON-RPC|jsonrpc|stringify.*method" services/lightning.js | head -100

Repository: FabricLabs/fabric
Length of output: 867
🏁 Script executed:
#!/bin/bash
# Verify the current behavior - the implementation does wrap params in arrays
# Let's check one more time the exact issue by looking at a specific call pattern
sed -n '400,420p' services/lightning.js

Repository: FabricLabs/fabric
Length of output: 940
🏁 Script executed:
#!/bin/bash
# Search for any reference to how Core Lightning expects params in documentation
cat docs/LIGHTNING_COMPAT.md 2>/dev/null | head -50

Repository: FabricLabs/fabric
Length of output: 43
🏁 Script executed:
#!/bin/bash
# Check if LIGHTNING_COMPAT.md exists anywhere
find . -name "*COMPAT*" -o -name "*LIGHT*" -type f | head -10

Repository: FabricLabs/fabric
Length of output: 43
🏁 Script executed:
#!/bin/bash
# Search for any comments in the code referencing Core Lightning RPC conventions
rg -i "core lightning|cln|offer.*amount|amount_msat" . --type md | head -30

Repository: FabricLabs/fabric
Length of output: 1093
🏁 Script executed:
#!/bin/bash
# Look at the actual test implementation more carefully
# Check if there are integration tests that show how _makeRPCRequest is actually used
sed -n '1,50p' tests/lightning/lightning.service.unit.js

Repository: FabricLabs/fabric
Length of output: 3013
🏁 Script executed:
#!/bin/bash
# Look for LIGHTNING_COMPAT.md file
find . -name "LIGHTNING_COMPAT.md"

Repository: FabricLabs/fabric
Length of output: 43
🏁 Script executed:
#!/bin/bash
# Check what the test file references about Lightning RPC surface
sed -n '1,30p' tests/lightning/lightning.cln.rpc.surface.test.js

Repository: FabricLabs/fabric
Length of output: 1270
🏁 Script executed:
#!/bin/bash
# Look for any existing issues or comments about param formatting
rg -i "array.*param|named.*param|positional" services/lightning.js tests/lightning/

Repository: FabricLabs/fabric
Length of output: 713
🏁 Script executed:
#!/bin/bash
# Let me search for any test output or CI logs that might show actual Core Lightning RPC behavior
find . -name "*.log" -o -name "*.txt" -type f | xargs grep -l "offer\|fetchinvoice" 2>/dev/null | head -5

Repository: FabricLabs/fabric
Length of output: 126
🏁 Script executed:
#!/bin/bash
# Check if there are any real integration tests that show how Core Lightning responds
sed -n '1,100p' tests/lightning/lightning.service.js | grep -A10 -B10 "createOffer\|fetchInvoice\|offer"

Repository: FabricLabs/fabric
Length of output: 43
🏁 Script executed:
#!/bin/bash
# Look for any actual RPC calls made during tests
rg -i "amount.*msat|offer.*lno" tests/lightning/ | head -20

Repository: FabricLabs/fabric
Length of output: 1912
🏁 Script executed:
#!/bin/bash
# Check if the tests actually run and if there are any clues about what Core Lightning expects
sed -n '240,280p' tests/lightning/lightning.service.unit.js

Repository: FabricLabs/fabric
Length of output: 1733
🏁 Script executed:
#!/bin/bash
# Also check if there's any discussion of this issue in DEVELOPERS.md or other docs
cat DEVELOPERS.md 2>/dev/null | grep -A10 -B10 -i "lightning\|rpc\|offer"

Repository: FabricLabs/fabric
Length of output: 9049
🏁 Script executed:
#!/bin/bash
# Check the actual parameter names for offer command - see if amount_msat is even correct
rg "amount_msat|amount.*msat" services/lightning.js | head -20

Repository: FabricLabs/fabric
Length of output: 443
Pass dictionary-style params to RPC methods instead of wrapping in positional arrays.
The createOffer, fetchInvoice, pay, listOffers, createInvoiceRequest, listInvoiceRequests, and sendInvoice methods all wrap their parameters in arrays before forwarding to _makeRPCRequest. Core Lightning's JSON-RPC interface accepts both positional (array) and named (object/dictionary) parameter styles, but wrapping objects in arrays sends them as positional arguments. This breaks commands expecting named parameters—for example, offer should receive { amount_msat: '...', description: '...' } directly, not [{ amount_msat: '...', description: '...' }].
The tests reinforce this incorrect pattern by asserting deepStrictEqual(params, [p]) instead of deepStrictEqual(params, p). These assertions must be corrected to expect the proper format.
Proposed test fixes

- const p = { amount_msat: '1000', description: 'd' };
+ const p = { amount_msat: '1000', description: 'd' };
  ln._makeRPCRequest = async (method, params) => {
    assert.strictEqual(method, 'offer');
-   assert.deepStrictEqual(params, [p]);
+   assert.deepStrictEqual(params, p);
    return { bolt12: 'lno1...' };
  };

- if (calls === 1) assert.deepStrictEqual(params, ['lno1a']);
- else if (calls === 2) assert.deepStrictEqual(params, ['lno1b', { amount_msat: '5000' }]);
- else assert.deepStrictEqual(params, [{ offer: 'lno1c', amount_msat: '1000', payer_note: 'hi' }]);
+ if (calls === 1) assert.deepStrictEqual(params, 'lno1a');
+ else if (calls === 2) assert.deepStrictEqual(params, { offer: 'lno1b', amount_msat: '5000' });
+ else assert.deepStrictEqual(params, { offer: 'lno1c', amount_msat: '1000', payer_note: 'hi' });

- if (n === 1) assert.deepStrictEqual(params, ['lnbc1']);
- else assert.deepStrictEqual(params, [{ bolt11: 'lnbc1', maxfeepercent: 0.5 }]);
+ if (n === 1) assert.deepStrictEqual(params, 'lnbc1');
+ else assert.deepStrictEqual(params, { bolt11: 'lnbc1', maxfeepercent: 0.5 });

Also applies to: 265–279, 287–300, 302–316, 318–328, 330–344, 356–366
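The distinction the review draws can be sketched in isolation. Note that `buildRequest` below is a hypothetical helper, not the service's `_makeRPCRequest`; it only shows how wrapping a params object in an array changes the JSON-RPC request shape from named to positional:

```javascript
// JSON-RPC 2.0 allows `params` to be either an object (named parameters)
// or an array (positional parameters). Wrapping a named-params object in
// an array sends it as a single positional argument instead.
function buildRequest (method, params) {
  return { jsonrpc: '2.0', id: 1, method, params };
}

const offerParams = { amount_msat: '1000', description: 'd' };

// Named style: the server sees amount_msat and description by name.
const named = buildRequest('offer', offerParams);

// Positional style: the server sees one argument that happens to be an object.
const positional = buildRequest('offer', [offerParams]);
```

Under the review's reading, the wrapper methods should produce the `named` shape, which is why the test assertions above change from `[p]` to `p`.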
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/lightning/lightning.service.unit.js` around lines 248 - 258, The tests
currently expect and assert that methods like createOffer, fetchInvoice, pay,
listOffers, createInvoiceRequest, listInvoiceRequests, and sendInvoice wrap
their parameters in a positional array when calling _makeRPCRequest; change each
test to assert that _makeRPCRequest receives the params as a dictionary/object
(e.g., assert.deepStrictEqual(params, p) rather than [p]) and update the stubbed
_makeRPCRequest implementations used in these specs so they check the
named-parameter style (and return the same mocked response), ensuring each test
calls the corresponding method (createOffer, fetchInvoice, pay, listOffers,
createInvoiceRequest, listInvoiceRequests, sendInvoice) and verifies the
returned value as before.
♻️ Duplicate comments (1)
functions/fabricPaymentBech32.js (1)
42-45: ⚠️ Potential issue | 🟠 Major
Validate `number[]` bytes before `Buffer.from`.
`Buffer.from(o.hash32)` still silently coerces invalid array elements (negatives, >255, non-integers, `NaN`), producing a corrupt 32-byte hash rather than rejecting bad input. Same concern as prior review — add an element range/integer check in the `Array.isArray` branch before conversion.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/fabricPaymentBech32.js` around lines 42 - 45, The Array.isArray(o.hash32) branch must validate elements before calling Buffer.from: ensure o.hash32.length === 32 and every element is a finite integer between 0 and 255 (Number.isInteger(elem) && elem >= 0 && elem <= 255); if validation fails throw a TypeError('hash32 must be Buffer, Uint8Array, or byte array of 32 bytes with values 0-255'); only then call Buffer.from(o.hash32) to create buf. This targets the Array.isArray branch handling o.hash32 and replaces the silent coercion by Buffer.from with explicit checks and a clear error.
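A sketch of the validation this prompt asks for, assuming a standalone helper (`toHash32` is an illustrative name, not the module's actual API):

```javascript
// Buffer.from([...]) coerces out-of-range or non-integer elements
// (e.g. -1, 300, NaN) instead of rejecting them, so validate every
// element before converting a plain number[] into the 32-byte hash.
function toHash32 (value) {
  if (Buffer.isBuffer(value) || value instanceof Uint8Array) {
    return Buffer.from(value);
  }
  if (Array.isArray(value)) {
    const ok = value.length === 32 && value.every(
      (b) => Number.isInteger(b) && b >= 0 && b <= 255
    );
    if (!ok) throw new TypeError('hash32 must be 32 bytes with values 0-255');
    return Buffer.from(value);
  }
  throw new TypeError('hash32 must be Buffer, Uint8Array, or byte array');
}
```

The key point is the `every` check: without it, `Buffer.from` would quietly produce a different 32-byte value from malformed input instead of failing loudly.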
🧹 Nitpick comments (2)
functions/fabricPaymentBech32.js (2)
100-100: Pass the normalized string to the delegate.
`classifyPaymentEncodingString` normalizes input into `t` but then forwards the raw, un-trimmed `s` to `lb.classifyLightningEncodedString`. Not incorrect (the delegate re-normalizes), but forwarding `t` would be consistent and avoids a second trim.

- return lb.classifyLightningEncodedString(s);
+ return lb.classifyLightningEncodedString(t);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/fabricPaymentBech32.js` at line 100, classifyPaymentEncodingString normalizes the input into variable t but currently calls lb.classifyLightningEncodedString(s); change the call to pass the normalized t (i.e. lb.classifyLightningEncodedString(t)) so the delegate receives the already-trimmed/normalized string and avoids redundant re-normalization; update references in classifyPaymentEncodingString to use t where the normalized value is needed.
76-81: Simplify redundant length checks.
Line 76 guards `raw.length < 34` and line 81 rechecks `raw.length !== 34`; the first check is subsumed by the second, and interleaving the slice between them is confusing. Consider collapsing to a single `if (raw.length !== 34) return null;` up front before reading `version`/`routeType`/`hash32`.
♻️ Proposed simplification
- if (raw.length < 2 + 32) return null;
+ if (raw.length !== 2 + 32) return null;
  const version = raw[0];
  const routeType = raw[1];
  const hash32 = Buffer.from(raw.slice(2, 34));
  if (version !== FABRIC_PAYMENT_VERSION) return null;
- if (raw.length !== 34) return null;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/fabricPaymentBech32.js` around lines 76 - 81, Collapse the redundant length guards by checking raw.length === 34 once before slicing: replace the two checks with a single if (raw.length !== 34) return null; placed before reading version, routeType and hash32, then proceed to set version = raw[0], routeType = raw[1], and hash32 = Buffer.from(raw.slice(2,34)) and keep the version comparison against FABRIC_PAYMENT_VERSION as-is; remove the earlier raw.length < 34 check and the later raw.length !== 34 check to avoid confusion and redundant work.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@functions/fabricPaymentBech32.js`:
- Around line 42-45: The Array.isArray(o.hash32) branch must validate elements
before calling Buffer.from: ensure o.hash32.length === 32 and every element is a
finite integer between 0 and 255 (Number.isInteger(elem) && elem >= 0 && elem <=
255); if validation fails throw a TypeError('hash32 must be Buffer, Uint8Array,
or byte array of 32 bytes with values 0-255'); only then call
Buffer.from(o.hash32) to create buf. This targets the Array.isArray branch
handling o.hash32 and replaces the silent coercion by Buffer.from with explicit
checks and a clear error.
---
Nitpick comments:
In `@functions/fabricPaymentBech32.js`:
- Line 100: classifyPaymentEncodingString normalizes the input into variable t
but currently calls lb.classifyLightningEncodedString(s); change the call to
pass the normalized t (i.e. lb.classifyLightningEncodedString(t)) so the
delegate receives the already-trimmed/normalized string and avoids redundant
re-normalization; update references in classifyPaymentEncodingString to use t
where the normalized value is needed.
- Around line 76-81: Collapse the redundant length guards by checking raw.length
=== 34 once before slicing: replace the two checks with a single if (raw.length
!== 34) return null; placed before reading version, routeType and hash32, then
proceed to set version = raw[0], routeType = raw[1], and hash32 =
Buffer.from(raw.slice(2,34)) and keep the version comparison against
FABRIC_PAYMENT_VERSION as-is; remove the earlier raw.length < 34 check and the
later raw.length !== 34 check to avoid confusion and redundant work.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 18986645-f1ad-4713-82ec-6d91b5ddb143
📒 Files selected for processing (3)
- functions/fabricPaymentBech32.js
- services/lightning.js
- tests/lightning/lightning.service.unit.js
✅ Files skipped from review due to trivial changes (2)
- tests/lightning/lightning.service.unit.js
- services/lightning.js
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/security.c (1)
205-218: ⚠️ Potential issue | 🟠 Major
`fabric_secure_realloc` still silently drops data when the pointer is not tracked.
The previous review suggested returning `NULL` when `fabric_secure_malloc_size(ptr)` returns `0`. The current fix records the result but does not guard on it — if `ptr` was never tracked (or tracking was lost), `old_size` stays `0`, a new buffer is allocated, `memcpy` copies 0 bytes, and `fabric_secure_free(ptr, 0)` is then called. The caller receives a zero-initialized buffer and the old contents are irretrievably gone with no error signaled.
🛡️ Proposed fix
  if (old_size == 0)
  {
    old_size = fabric_secure_malloc_size(ptr);
+   if (old_size == 0)
+   {
+     /* Pointer is not tracked; refuse to silently lose data. */
+     return NULL;
+   }
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/security.c` around lines 205 - 218, fabric_secure_realloc must not silently drop the caller's data when fabric_secure_malloc_size(ptr) returns 0; in fabric_secure_realloc, after calling fabric_secure_malloc_size(ptr) into old_size, if old_size == 0 treat this as an error/unknown pointer and immediately return NULL without allocating a new buffer, without calling memcpy or fabric_secure_free(ptr, 0), and preserve the original pointer; update the code paths around fabric_secure_malloc(new_size), memcpy(new_ptr, ptr, copy_size) and fabric_secure_free(ptr, old_size) accordingly so that the function returns NULL on untracked pointers instead of returning a zero-length/new buffer.
🧹 Nitpick comments (5)
tests/lightning/bolt12.semantics.unit.js (1)
30-40: Consider asserting `invoice_recurrence_basetime` extraction.
The decoded fixture sets `invoice_recurrence_basetime: 1700000000`, but the test never asserts it ends up on the summary. Since `summarizeBolt12RecurrenceFromDecode` has a dedicated branch for this key (see functions/bolt12Semantics.js lines 135-137), adding an explicit check would close a small coverage gap.
🧪 Proposed addition
  assert.ok(sum);
  assert.strictEqual(sum.invreq_recurrence_counter, 2);
  assert.ok(sum.offer_recurrence);
+ assert.strictEqual(sum.invoice_recurrence_basetime, 1700000000);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/lightning/bolt12.semantics.unit.js` around lines 30 - 40, The test for summarizeBolt12RecurrenceFromDecode is missing an assertion that invoice_recurrence_basetime is preserved; update the test in tests/lightning/bolt12.semantics.unit.js (the it block calling bs.summarizeBolt12RecurrenceFromDecode and assigning to sum) to include an assertion that sum.invoice_recurrence_basetime === 1700000000 so the function's branch handling invoice_recurrence_basetime is covered.
functions/fabricPaymentBech32.js (2)
80-107: Minor: decoder accepts mixed-case `fa1…` input that `bech32.decode` may reject.
`isFabricRoutedPaymentString` is case-insensitive (line 48), but bech32/bech32m forbids mixed-case strings, so a caller who passes e.g. `Fa1…` or an otherwise mixed-case value that `isFabricRoutedPaymentString` happily recognizes will fall into the `catch` and yield `null` here rather than being normalized. If you want symmetry between recognition and decode, consider lowercasing `str.trim()` before calling `bech32.decode`, or documenting that decode requires a fully-lowercase (or fully-uppercase) input. Not a correctness bug — just a usability/consistency nit.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/fabricPaymentBech32.js` around lines 80 - 107, The decoder decodeFabricRoutedPayment currently calls bech32.decode on str.trim(), which rejects mixed-case bech32m input while isFabricRoutedPaymentString is case-insensitive; normalize the input by lowercasing the trimmed string before calling bech32.decode (e.g., use a local const normalized = str.trim().toLowerCase()), use normalized for the decode call but keep using the original trimmed value where you need to preserve casing if any, and ensure subsequent checks (d.hrp, d.spec, and version) operate on the decoded result as before so the function remains symmetric with isFabricRoutedPaymentString.
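A minimal sketch of the casing rule discussed above, with the `bech32.decode` call left out so only the normalization logic is shown. The function name is hypothetical; per the bech32/bech32m spec, all-lowercase and all-uppercase are valid but mixed case is not.

```javascript
// Normalize a candidate bech32 string for decoding: accept all-lower or
// all-upper input, reject genuinely mixed case, and decode in lowercase.
function normalizeBech32Input (str) {
  const trimmed = String(str || '').trim();
  const lower = trimmed.toLowerCase();
  const upper = trimmed.toUpperCase();
  if (trimmed !== lower && trimmed !== upper) return null; // mixed case
  return lower;
}
```

Feeding the normalized value into the decoder keeps recognition (`isFabricRoutedPaymentString`, case-insensitive) and decoding symmetric.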
114-119: Consider handling non-string inputs consistently with `isFabricRoutedPaymentString`.
`classifyPaymentEncodingString` coerces via `String(s || '')`, so `isFabricRoutedPaymentString(t)` is called on an already-trimmed, stringified value — fine. However, if `s` is a non-string truthy object (e.g. `{}`), `String(s)` becomes `"[object Object]"` which then falls through to `lb.classifyLightningEncodedString` and returns `'unknown'`. That matches current behavior but may be worth an explicit `typeof s !== 'string'` short-circuit returning `'empty'` or `'unknown'` to avoid surprising delegations. Optional.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@functions/fabricPaymentBech32.js` around lines 114 - 119, classifyPaymentEncodingString currently stringifies non-string inputs which can produce misleading results (e.g. "[object Object]"); update classifyPaymentEncodingString to first check typeof s !== 'string' and short-circuit (return 'unknown') before coercing/trim so non-string objects don't get passed to isFabricRoutedPaymentString or lb.classifyLightningEncodedString; reference the function classifyPaymentEncodingString and the helper isFabricRoutedPaymentString and lb.classifyLightningEncodedString when applying the change.
src/sha2.c (1)
298-304: Function-local `#define` leaks to file scope.
`FABRIC_HMAC_SHA512_WIPE` (and `FABRIC_PBKDF2_WIPE` at line 350) are written inside function bodies, but `#define` is always file-scope in C and is not `#undef`'d. If any later translation unit included into src/crypto.c defines a same-named helper, you'll get redefinition/shadowing surprises. Either `#undef` them at the end of each function, or hoist them to a single static inline helper per buffer set.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/sha2.c` around lines 298 - 304, The file-scope macros FABRIC_HMAC_SHA512_WIPE and FABRIC_PBKDF2_WIPE are being defined inside function bodies but remain visible for the whole translation unit; fix by removing these function-local `#define` usages and either (A) replace each macro with a static inline helper function (e.g., static inline void fabric_hmac_sha512_wipe(void *key_block, size_t kb_len, void *key_hash, size_t kh_len, ...) and similarly for pbkdf2) and call that helper from the function, or (B) if you prefer to keep the macro form, add an immediate `#undef` FABRIC_HMAC_SHA512_WIPE (and `#undef` FABRIC_PBKDF2_WIPE) at the end of the function that used it so the macro does not leak into file scope; update all call sites to use the chosen approach and keep the helper/macro names matching FABRIC_HMAC_SHA512_WIPE and FABRIC_PBKDF2_WIPE for easy location.
src/security.c (1)
226-244: `fabric_secure_memcmp` return value doesn't match `memcmp` semantics.
The function name and signature (`int`) strongly evoke `memcmp`, but the returned value is a folded `uint8_t` that is only ever `0` (equal) or positive (not equal). Callers porting code like `if (memcmp(...) < 0)` or sorting with this would silently misbehave. Either rename to something like `fabric_secure_memeq` returning an `int` boolean, or document the contract explicitly at the declaration.
Also note the `if (!a || !b)` short-circuit at lines 228-231 is not constant-time — fine for the NULL-guard case, but worth a comment so future readers don't reintroduce data-dependent branches.
Verify each finding against the current code and only fix it if needed. In `@src/security.c` around lines 226 - 244, The function fabric_secure_memcmp currently returns 0 for equal and a non-zero uint8_t for not-equal, which violates memcmp semantics; rename it to fabric_secure_memeq (or change its declaration/documentation) and make it a boolean-style int that returns 0 for equal and 1 for not-equal (or true/false semantics consistent with its new name), update its prototype/comment accordingly, and add a note above the NULL-guard in fabric_secure_memeq that the early NULL check is intentionally not constant-time to prevent future accidental data-dependent branching; adjust all callers to use the new name/contract.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/security.c`:
- Around line 36-37: The global allocation registry and init flags (allocations,
memory_initialized, random_initialized, urandom_fd, hCryptProv) are not
thread-safe and must be protected to avoid races in track_allocation,
untrack_allocation, fabric_secure_malloc, fabric_secure_free, and
fabric_secure_random_init; add a platform-appropriate mutex (e.g.,
pthread_mutex_t on POSIX / CRITICAL_SECTION on Windows) to serialize access to
the linked list and one-shot init flags (lock around reads/writes in the listed
functions), initialize/destroy the mutex in the module init/shutdown, and use
the mutex to protect both the registry updates and any lazy initialization
checks (or alternatively add debug-only single-thread assertions documenting
single-thread usage).
---
Duplicate comments:
In `@src/security.c`:
- Around line 205-218: fabric_secure_realloc must not silently drop the caller's
data when fabric_secure_malloc_size(ptr) returns 0; in fabric_secure_realloc,
after calling fabric_secure_malloc_size(ptr) into old_size, if old_size == 0
treat this as an error/unknown pointer and immediately return NULL without
allocating a new buffer, without calling memcpy or fabric_secure_free(ptr, 0),
and preserve the original pointer; update the code paths around
fabric_secure_malloc(new_size), memcpy(new_ptr, ptr, copy_size) and
fabric_secure_free(ptr, old_size) accordingly so that the function returns NULL
on untracked pointers instead of returning a zero-length/new buffer.
---
Nitpick comments:
In `@functions/fabricPaymentBech32.js`:
- Around line 80-107: The decoder decodeFabricRoutedPayment currently calls
bech32.decode on str.trim(), which rejects mixed-case bech32m input while
isFabricRoutedPaymentString is case-insensitive; normalize the input by
lowercasing the trimmed string before calling bech32.decode (e.g., use a local
const normalized = str.trim().toLowerCase()), use normalized for the decode call
but keep using the original trimmed value where you need to preserve casing if
any, and ensure subsequent checks (d.hrp, d.spec, and version) operate on the
decoded result as before so the function remains symmetric with
isFabricRoutedPaymentString.
- Around line 114-119: classifyPaymentEncodingString currently stringifies
non-string inputs which can produce misleading results (e.g. "[object Object]");
update classifyPaymentEncodingString to first check typeof s !== 'string' and
short-circuit (return 'unknown') before coercing/trim so non-string objects
don't get passed to isFabricRoutedPaymentString or
lb.classifyLightningEncodedString; reference the function
classifyPaymentEncodingString and the helper isFabricRoutedPaymentString and
lb.classifyLightningEncodedString when applying the change.
In `@src/security.c`:
- Around line 226-244: The function fabric_secure_memcmp currently returns 0 for
equal and a non-zero uint8_t for not-equal, which violates memcmp semantics;
rename it to fabric_secure_memeq (or change its declaration/documentation) and
make it a boolean-style int that returns 0 for equal and 1 for not-equal (or
true/false semantics consistent with its new name), update its prototype/comment
accordingly, and add a note above the NULL-guard in fabric_secure_memeq that the
early NULL check is intentionally not constant-time to prevent future accidental
data-dependent branching; adjust all callers to use the new name/contract.
In `@src/sha2.c`:
- Around line 298-304: The file-scope macros FABRIC_HMAC_SHA512_WIPE and
FABRIC_PBKDF2_WIPE are being defined inside function bodies but remain visible
for the whole translation unit; fix by removing these function-local `#define`
usages and either (A) replace each macro with a static inline helper function
(e.g., static inline void fabric_hmac_sha512_wipe(void *key_block, size_t
kb_len, void *key_hash, size_t kh_len, ...) and similarly for pbkdf2) and call
that helper from the function, or (B) if you prefer to keep the macro form, add
an immediate `#undef` FABRIC_HMAC_SHA512_WIPE (and `#undef` FABRIC_PBKDF2_WIPE) at
the end of the function that used it so the macro does not leak into file scope;
update all call sites to use the chosen approach and keep the helper/macro names
matching FABRIC_HMAC_SHA512_WIPE and FABRIC_PBKDF2_WIPE for easy location.
In `@tests/lightning/bolt12.semantics.unit.js`:
- Around line 30-40: The test for summarizeBolt12RecurrenceFromDecode is missing
an assertion that invoice_recurrence_basetime is preserved; update the test in
tests/lightning/bolt12.semantics.unit.js (the it block calling
bs.summarizeBolt12RecurrenceFromDecode and assigning to sum) to include an
assertion that sum.invoice_recurrence_basetime === 1700000000 so the function's
branch handling invoice_recurrence_basetime is covered.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 11f0e167-62e5-475b-a574-cd9f4b063b79
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (9)
- functions/fabricPaymentBech32.js
- src/security.c
- src/sha2.c
- src/sha2.h
- tests/bitcoin/transaction.js
- tests/fabricPaymentBech32.unit.js
- tests/lightning/bolt12.semantics.unit.js
- tests/lightning/lightning.service.unit.js
- types/bitcoin/transaction.js
✅ Files skipped from review due to trivial changes (1)
- tests/fabricPaymentBech32.unit.js
🚧 Files skipped from review as they are similar to previous changes (4)
- tests/bitcoin/transaction.js
- tests/lightning/lightning.service.unit.js
- src/sha2.h
- types/bitcoin/transaction.js
static allocation_record_t *allocations = NULL;
static int memory_initialized = 0;
Allocation registry is not thread-safe.
allocations, memory_initialized, random_initialized, urandom_fd, and hCryptProv are plain globals mutated by track_allocation/untrack_allocation/fabric_secure_malloc/fabric_secure_free/fabric_secure_random_init. If any Fabric consumer calls these from multiple threads (likely, given this is wired into a Node native addon per the PR summary), the linked-list updates will race and corrupt the tracking structure, which in turn breaks size-aware fabric_secure_free and leaks unzeroed memory.
Consider guarding the allocation registry and one-shot init flags with a mutex (pthread / CRITICAL_SECTION on Windows), or document that these APIs must be called from a single thread and assert it in debug builds.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/security.c` around lines 36 - 37, The global allocation registry and init
flags (allocations, memory_initialized, random_initialized, urandom_fd,
hCryptProv) are not thread-safe and must be protected to avoid races in
track_allocation, untrack_allocation, fabric_secure_malloc, fabric_secure_free,
and fabric_secure_random_init; add a platform-appropriate mutex (e.g.,
pthread_mutex_t on POSIX / CRITICAL_SECTION on Windows) to serialize access to
the linked list and one-shot init flags (lock around reads/writes in the listed
functions), initialize/destroy the mutex in the module init/shutdown, and use
the mutex to protect both the registry updates and any lazy initialization
checks (or alternatively add debug-only single-thread assertions documenting
single-thread usage).
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/fabric.environment.js`:
- Around line 34-45: The test must isolate and restore environment variables
used by Environment so external FABRIC_* values don't affect the assertion:
before creating the temp home (near variables prev, home, store, walletPath)
capture the current values of FABRIC_SEED, FABRIC_XPRV, and FABRIC_XPUB and
delete them from process.env (or set to undefined) for the duration of the test,
then create and start the Environment and assert on environment.wallet as
before; in the finally block, restore process.env.NODE_ENV to prev but if prev
is undefined delete process.env.NODE_ENV (do not set the string "undefined"),
and restore the original FABRIC_SEED, FABRIC_XPRV, and FABRIC_XPUB values
(deleting them if they were originally undefined) so subsequent tests are not
polluted.
In `@types/environment.js`:
- Around line 86-94: The explicit-seed selection currently prepends FIXTURE_SEED
in test mode causing FIXTURE_SEED to override configured seeds and also omits
this.local; fix by building the explicit array to include this.settings.seed,
this['FABRIC_SEED'], this.readVariable('FABRIC_SEED') and this.local (in that
order) and only append FIXTURE_SEED when process.env.NODE_ENV === 'test' (do not
unshift), then return explicit.find(any) so configured seeds take precedence
over the fixture.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: d5141206-78b7-4901-817a-0fe395064269
📒 Files selected for processing (2)
- tests/fabric.environment.js
- types/environment.js
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@services/lightning.js`:
- Around line 294-305: In fetchInvoice, the object-form branch (fetchInvoice
when offerOrParams is an object) returns early and discards the separate
invoiceParams argument; update fetchInvoice to either merge invoiceParams into
the offer object or explicitly reject the combination—preferably merge
invoiceParams into offerOrParams (shallow copy) and pass the merged object to
_makeRPCRequest('fetchinvoice', [mergedParams]) so callers like
fetchInvoice({offer: '...'}, {amount_msat, payer_note}) are preserved; modify
the branch that checks typeof offerOrParams === 'object' in the fetchInvoice
method to perform the merge (or throw a clear Error mentioning both offer and
invoiceParams) before calling _makeRPCRequest.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 0b49ecfd-23bd-446c-9cdd-5fab2e4c7c74
📒 Files selected for processing (8)
- functions/bolt12Semantics.js
- services/lightning.js
- tests/fabric.chain.js
- tests/fabric.environment.js
- tests/lightning/bolt12.semantics.unit.js
- types/bitcoin/transaction.js
- types/chain.js
- types/environment.js
✅ Files skipped from review due to trivial changes (2)
- tests/fabric.environment.js
- tests/lightning/bolt12.semantics.unit.js
🚧 Files skipped from review as they are similar to previous changes (4)
- tests/fabric.chain.js
- types/environment.js
- types/bitcoin/transaction.js
- functions/bolt12Semantics.js
🧹 Nitpick comments (2)
tests/lightning/lightning.service.unit.js (2)
23-36: Avoid source-text assertions for module wiring.
Line 35 already proves `Lightning.Bolt12Semantics` is the exact required module. The source scan on Lines 26-31 can fail after harmless service refactors.
Trim the redundant assertion
  const bolt12Semantics = require('../../functions/bolt12Semantics');
- const lightningPath = require.resolve('../../services/lightning');
- const src = fs.readFileSync(lightningPath, 'utf8');
- assert.ok(
-   /\bBolt12Semantics\b/.test(src) && src.includes('bolt12Semantics'),
-   `expected ${path.relative(process.cwd(), lightningPath)} to wire Bolt12Semantics`
- );
  assert.ok(Lightning.Bolt12, 'Lightning.Bolt12');
  assert.ok(Lightning.FabricPayment, 'Lightning.FabricPayment');
  assert.ok(Lightning.Bolt12Semantics, 'Lightning.Bolt12Semantics');
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/lightning/lightning.service.unit.js` around lines 23 - 36, Remove the brittle source-text check that reads the service file and tests for a literal "Bolt12Semantics" export; specifically delete the require.resolve/lightningPath, fs.readFileSync(src), the /\bBolt12Semantics\b/ regex test and its corresponding assert.ok with the path-based error message, leaving the existing runtime assertions (assert.ok(Lightning.Bolt12), assert.ok(Lightning.FabricPayment), assert.ok(Lightning.Bolt12Semantics), and assert.strictEqual(Lightning.Bolt12Semantics, bolt12Semantics)) intact so the test relies on actual module wiring rather than source-text scanning.
13-20: Assert the exported doc paths resolve.
These assertions validate the string literals, but not whether the files are actually present in the package root. Adding an existence check would catch stale `Lightning.DOCS` entries before release.
Test hardening
  assert.strictEqual(Lightning.DOCS.fabricPaymentBech32, 'docs/FABRIC_PAYMENT_BECH32.md');
  assert.strictEqual(Lightning.DOCS.lightningCompat, 'docs/LIGHTNING_COMPAT.md');
+ for (const docPath of new Set(Object.values(Lightning.DOCS))) {
+   assert.ok(
+     fs.existsSync(path.resolve(__dirname, '..', '..', docPath)),
+     `missing Lightning.DOCS file: ${docPath}`
+   );
+ }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/lightning/lightning.service.unit.js` around lines 13 - 20, The test currently only asserts string values on Lightning.DOCS; enhance it to verify the referenced files actually exist by resolving each Lightning.DOCS entry against the package root and asserting file existence (e.g., use path.resolve/package root and fs.existsSync or fs.promises.access) for keys like Lightning.DOCS.boltCompatibility, Lightning.DOCS.fabricLightningOffers, Lightning.DOCS.fabricPaymentBech32 and Lightning.DOCS.lightningCompat in the same describe block so stale doc paths fail the unit test.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@tests/lightning/lightning.service.unit.js`:
- Around line 23-36: Remove the brittle source-text check that reads the service
file and tests for a literal "Bolt12Semantics" export; specifically delete the
require.resolve/lightningPath, fs.readFileSync(src), the /\bBolt12Semantics\b/
regex test and its corresponding assert.ok with the path-based error message,
leaving the existing runtime assertions (assert.ok(Lightning.Bolt12),
assert.ok(Lightning.FabricPayment), assert.ok(Lightning.Bolt12Semantics), and
assert.strictEqual(Lightning.Bolt12Semantics, bolt12Semantics)) intact so the
test relies on actual module wiring rather than source-text scanning.
- Around line 13-20: The test currently only asserts string values on
Lightning.DOCS; enhance it to verify the referenced files actually exist by
resolving each Lightning.DOCS entry against the package root and asserting file
existence (e.g., use path.resolve/package root and fs.existsSync or
fs.promises.access) for keys like Lightning.DOCS.boltCompatibility,
Lightning.DOCS.fabricLightningOffers, Lightning.DOCS.fabricPaymentBech32 and
Lightning.DOCS.lightningCompat in the same describe block so stale doc paths
fail the unit test.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 3514eb16-7447-4c56-bcac-b288f01ef783
⛔ Files ignored due to path filters (1)
`reports/install.log` is excluded by `!**/*.log`
📒 Files selected for processing (9)
- functions/bolt12Semantics.js
- services/lightning.js
- tests/fabric.chain.js
- tests/fabric.environment.js
- tests/lightning/bolt12.semantics.unit.js
- tests/lightning/lightning.service.unit.js
- types/bitcoin/transaction.js
- types/chain.js
- types/environment.js
✅ Files skipped from review due to trivial changes (1)
- tests/lightning/bolt12.semantics.unit.js
🚧 Files skipped from review as they are similar to previous changes (4)
- types/environment.js
- types/chain.js
- types/bitcoin/transaction.js
- services/lightning.js
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@package.json`:
- Around line 170-172: The dependency override pins jayson -> uuid@14.0.0 which
is ESM-only and incompatible with jayson (which requires a CJS uuid); update the
override in package.json for the "jayson" entry to use a CJS-compatible uuid
(e.g., set the uuid version to "=13.0.0" or any 8–13.x that satisfies
jayson@4.3.0's ^8.3.2), or remove the uuid override entirely if not needed, then
reinstall to ensure jayson can require('uuid') at runtime.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 11314c67-d156-4a39-a8a6-111d5a44cc49
⛔ Files ignored due to path filters (2)
- `package-lock.json` is excluded by `!**/package-lock.json`
- `reports/install.log` is excluded by `!**/*.log`
📒 Files selected for processing (2)
- examples/http.js
- package.json
✅ Files skipped from review due to trivial changes (1)
- examples/http.js
"jayson": {
  "uuid": "14.0.0"
}
🧩 Analysis chain
🌐 Web query:
Does jayson 4.3.0 use uuid via CommonJS require(), and is it known to be compatible with uuid v14's ESM-only / exports layout?
💡 Result:
Yes, jayson 4.3.0 uses uuid via a CommonJS require, specifically in its browser client code with `const uuid = require('uuid/v4');`. This is shown in the source of older versions (e.g., 3.0.1), and given the Dec 2025 release date with a uuid ^8.3.2 dependency (which supports CommonJS), it likely remains unchanged. Is it known to be compatible with uuid v14's ESM-only layout? No direct evidence of incompatibility was found and no issues are reported, but that is because jayson's Node.js server/client use uuid ^8 (CommonJS-compatible) while the browser build bundles its own copy. uuid v14 drops CommonJS support per its docs, so standard installs of jayson (which pins ^8.3.2) remain on a compatible version unless the dependency is manually overridden to v14.
Citations:
- 1: https://www.npmjs.com/package/jayson
- 2: https://github.com/tedeh/jayson
- 3: https://www.npmjs.com/package/uuid
- 4: https://jayson.tedeh.net/3.0.1/lib_client_browser.js.html
🏁 Script executed:
#!/bin/bash
# Confirm jayson's declared uuid dependency range and how it imports uuid,
# so we can reason about whether forcing uuid@14.0.0 is safe.
set -euo pipefail
# 1) What does jayson itself declare as its uuid dep range?
curl -s https://registry.npmjs.org/jayson/4.3.0 | jq '.dependencies.uuid, .devDependencies.uuid'
# 2) Check uuid@14.0.0 package.json exports/main to confirm CJS entrypoint still exists.
curl -s https://registry.npmjs.org/uuid/14.0.0 | jq '{main: .main, exports: .exports, type: .type, engines: .engines}'
Repository: FabricLabs/fabric
Length of output: 283
Do not pin jayson to uuid@14.0.0 — it is incompatible and will break at runtime.
The override as written will fail. jayson@4.3.0 declares uuid@^8.3.2 as a dependency and uses CommonJS require('uuid/v4') to import it. However, uuid@14.0.0 is ESM-only (with "type": "module", "main": null, and no CommonJS entrypoint). When jayson tries to require('uuid'), Node.js will not find a CommonJS export and the request will fail at runtime.
To fix jayson, pin to a uuid version that still supports CommonJS — versions 8 through 13 are all compatible with jayson's ^8.3.2 requirement. For example, "uuid": "=13.0.0" (the last CJS-compatible major).
If you have tested this override with jayson in your environment and it works, please share that evidence; otherwise, change the pin immediately.
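To see why the ESM-only layout breaks `require()`, here is a small standalone sketch. The manifest objects below are illustrative approximations of the registry data quoted above (uuid@13 shape vs. uuid@14 shape), not live fetches, and `hasCommonJsEntry` is a hypothetical helper, not part of npm or jayson:

```javascript
// Sketch: decide whether a package manifest still exposes a CommonJS entrypoint.
// A package is require()-able if it has a "main", a root "require" condition in
// "exports", or a string entry while not being declared "type": "module".
function hasCommonJsEntry (manifest) {
  if (manifest.main) return true;
  const exp = manifest.exports;
  if (!exp) return manifest.type !== 'module';
  const root = typeof exp === 'object' ? (exp['.'] ?? exp) : exp;
  if (typeof root === 'string') return manifest.type !== 'module';
  return Boolean(root && (root.require || (root.default && manifest.type !== 'module')));
}

// Illustrative manifests mirroring the shapes discussed above:
const uuid13 = { main: 'dist/index.js', exports: { '.': { require: './dist/index.js', import: './dist/esm/index.js' } } };
const uuid14 = { type: 'module', exports: { '.': { import: './dist/index.js' } } };

console.log(hasCommonJsEntry(uuid13)); // true — safe for jayson's require('uuid')
console.log(hasCommonJsEntry(uuid14)); // false — require('uuid') would fail
```

This is the same check the script above performs against the registry, just expressed as a predicate over the manifest fields.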
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@package.json` around lines 170 - 172, The dependency override pins jayson ->
uuid@14.0.0 which is ESM-only and incompatible with jayson (which requires a CJS
uuid); update the override in package.json for the "jayson" entry to use a
CJS-compatible uuid (e.g., set the uuid version to "=13.0.0" or any 8–13.x that
satisfies jayson@4.3.0's ^8.3.2), or remove the uuid override entirely if not
needed, then reinstall to ensure jayson can require('uuid') at runtime.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@types/peer.js`:
- Around line 2014-2026: The function _sendLocalInventoryDocumentsWireResponse
currently returns false when items is empty, causing no reply for
kind:'documents'; change the logic to allow sending an empty inventory: remove
the items.length check from the early-return so the function proceeds when items
is an empty array, construct the obj with items (possibly empty) and send it via
conn._writeFabric as currently done (retain Message.fromVector,
m.signWithKey(this.key), conn._writeFabric(m.toBuffer())); apply the same change
to the similar documents-response path around the other occurrence referenced
(lines ~2058-2062) so an explicit empty catalog is returned instead of no
response.
- Around line 1697-1702: The declaration const relay under the case
'P2P_CHAT_MESSAGE' shares the switch lexical scope and triggers
noSwitchDeclarations; fix it by wrapping the case body in its own block (add {
... } after case 'P2P_CHAT_MESSAGE':) so the
Message.fromVector/relay.signWithKey/this.relayFrom calls (and any commented
debug) are inside that block, isolating the binding for relay.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 6c4f61bd-ddd1-4338-8691-8d62a9e59faf
⛔ Files ignored due to path filters (2)
- `package-lock.json` is excluded by `!**/package-lock.json`
- `reports/install.log` is excluded by `!**/*.log`
📒 Files selected for processing (3)
- `functions/fabricDocumentOfferEnvelope.js`
- `tests/fabric.peer.js`
- `types/peer.js`
```diff
       case 'P2P_CHAT_MESSAGE':
-        this.emit('chat', message);
-        const relay = Message.fromVector(['ChatMessage', JSON.stringify(message)]);
+        this.emit('chat', msg);
+        const relay = Message.fromVector(['ChatMessage', JSON.stringify(msg)]);
         relay.signWithKey(this.key);
         // this.emit('debug', `Relayed chat message: ${JSON.stringify(relay.toGenericMessage())}`);
         this.relayFrom(origin.name, relay);
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
python - <<'PY'
from pathlib import Path
p = Path("types/peer.js")
lines = p.read_text().splitlines()
for n in range(1697, 1703):
    print(f"{n}: {lines[n-1]}")
PY
```

Repository: FabricLabs/fabric

Length of output: 421
Wrap the chat relay clause in braces.
const relay is declared directly under the case 'P2P_CHAT_MESSAGE': label without braces, triggering Biome's noSwitchDeclarations rule. The binding currently shares the switch's lexical scope. Wrapping the statement block in braces will isolate the binding in its own block scope.
♻️ Proposed fix

```diff
       case 'P2P_CHAT_MESSAGE':
-        this.emit('chat', msg);
-        const relay = Message.fromVector(['ChatMessage', JSON.stringify(msg)]);
-        relay.signWithKey(this.key);
-        // this.emit('debug', `Relayed chat message: ${JSON.stringify(relay.toGenericMessage())}`);
-        this.relayFrom(origin.name, relay);
-        break;
+        {
+          this.emit('chat', msg);
+          const relay = Message.fromVector(['ChatMessage', JSON.stringify(msg)]);
+          relay.signWithKey(this.key);
+          // this.emit('debug', `Relayed chat message: ${JSON.stringify(relay.toGenericMessage())}`);
+          this.relayFrom(origin.name, relay);
+          break;
+        }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```javascript
      case 'P2P_CHAT_MESSAGE':
        {
          this.emit('chat', msg);
          const relay = Message.fromVector(['ChatMessage', JSON.stringify(msg)]);
          relay.signWithKey(this.key);
          // this.emit('debug', `Relayed chat message: ${JSON.stringify(relay.toGenericMessage())}`);
          this.relayFrom(origin.name, relay);
          break;
        }
```
🧰 Tools
🪛 Biome (2.4.13)
[error] 1699-1699: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
(lint/correctness/noSwitchDeclarations)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@types/peer.js` around lines 1697 - 1702, The declaration const relay under
the case 'P2P_CHAT_MESSAGE' shares the switch lexical scope and triggers
noSwitchDeclarations; fix it by wrapping the case body in its own block (add {
... } after case 'P2P_CHAT_MESSAGE':) so the
Message.fromVector/relay.signWithKey/this.relayFrom calls (and any commented
debug) are inside that block, isolating the binding for relay.
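The hazard behind Biome's noSwitchDeclarations can be reproduced in isolation. This is a minimal standalone sketch, not code from `types/peer.js`: without braces, a `const` declared under one case label is scoped to the entire switch, so other clauses see it in its temporal dead zone.

```javascript
// Unbraced: `relay` leaks into the shared switch scope.
function unbraced (kind) {
  switch (kind) {
    case 'a':
      const relay = 'bound in a'; // visible to every clause below
      return relay;
    case 'b':
      return typeof relay; // throws a TDZ ReferenceError, even though 'a' never ran
    default:
      return null;
  }
}

// Braced: the binding is confined to its own clause.
function braced (kind) {
  switch (kind) {
    case 'a': {
      const relay = 'bound in a'; // scoped to this block only
      return relay;
    }
    case 'b':
      return typeof relay; // 'undefined' — no binding is visible here
    default:
      return null;
  }
}

try {
  unbraced('b');
} catch (err) {
  console.log(err.constructor.name); // ReferenceError
}
console.log(braced('b')); // undefined
```

This is why the fix is purely mechanical: wrapping the clause body in braces changes scoping, not behavior, for the clause that declares the binding.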
```diff
   _sendLocalInventoryDocumentsWireResponse (originName, items) {
     if (!originName || !items || !items.length) return false;
     const conn = this.connections[originName];
     if (!conn || !conn._writeFabric) return false;
-    const payload = {
-      type: 'INVENTORY_RESPONSE',
-      object: { items }
+    const obj = {
+      kind: 'documents',
+      items,
+      created: Date.now()
     };
-    const m = Message.fromVector(['P2P_INVENTORY_RESPONSE', JSON.stringify(payload.object || {})]);
+    const m = Message.fromVector(['P2P_INVENTORY_RESPONSE', JSON.stringify(obj)]);
     m.signWithKey(this.key);
     conn._writeFabric(m.toBuffer());
     return true;
```
Return an empty catalog for kind: 'documents' requests.
Right now _sendLocalInventoryDocumentsWireResponse() treats items.length === 0 as failure. In the kind: 'documents' path that means a peer with serveLocalDocumentInventory enabled but no local docs sends no reply at all, because this request type is not relayed later. Callers then have to time out instead of receiving a deterministic empty inventory.
🐛 Proposed fix

```diff
-  _sendLocalInventoryDocumentsWireResponse (originName, items) {
-    if (!originName || !items || !items.length) return false;
+  _sendLocalInventoryDocumentsWireResponse (originName, items, { allowEmpty = false } = {}) {
+    if (!originName || !Array.isArray(items)) return false;
+    if (!allowEmpty && items.length === 0) return false;
     const conn = this.connections[originName];
     if (!conn || !conn._writeFabric) return false;
@@
-    const items = this._collectDocumentCatalogInventoryItems({});
-    if (!items.length) return false;
-    return this._sendLocalInventoryDocumentsWireResponse(origin.name, items);
+    const items = this._collectDocumentCatalogInventoryItems({});
+    return this._sendLocalInventoryDocumentsWireResponse(origin.name, items, { allowEmpty: true });
   }
```

Also applies to: 2058-2062
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@types/peer.js` around lines 2014 - 2026, The function
_sendLocalInventoryDocumentsWireResponse currently returns false when items is
empty, causing no reply for kind:'documents'; change the logic to allow sending
an empty inventory: remove the items.length check from the early-return so the
function proceeds when items is an empty array, construct the obj with items
(possibly empty) and send it via conn._writeFabric as currently done (retain
Message.fromVector, m.signWithKey(this.key), conn._writeFabric(m.toBuffer()));
apply the same change to the similar documents-response path around the other
occurrence referenced (lines ~2058-2062) so an explicit empty catalog is
returned instead of no response.
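The allowEmpty contract from the proposed fix can be sketched as a standalone function. `buildDocumentsResponse` below is a hypothetical extraction for illustration, not the actual peer method; it only models the guard-and-payload shape:

```javascript
// Sketch: deterministic inventory reply vs. silent drop.
// With allowEmpty, an empty catalog produces an explicit { items: [] }
// payload instead of no response (which forces the requester to time out).
function buildDocumentsResponse (items, { allowEmpty = false } = {}) {
  if (!Array.isArray(items)) return null;           // malformed input: no reply
  if (!allowEmpty && items.length === 0) return null; // legacy behavior: silent drop
  return { kind: 'documents', items, created: Date.now() };
}

console.log(buildDocumentsResponse([]));                       // null — peer would time out
console.log(buildDocumentsResponse([], { allowEmpty: true })); // explicit empty catalog
console.log(buildDocumentsResponse([{ id: 'doc-1' }]));        // normal non-empty reply
```

The design point is that "no documents" is a valid answer for a peer serving local inventory, so it should be encoded as data rather than as an absent message.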
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@types/environment.js`:
- Around line 603-617: The code calls readWallet() before entering the try block
so a failing read will throw and bypass the recoverable path; move the call to
readWallet() (and the subsequent text normalization) inside the existing try
that handles tryParsePersistedJson so any read errors are caught, then on catch
set this.wallet = false and emit the same warning via this.emit('warning', ...)
(preserve the message text and use this.settings.path) so the code falls back
safely; refer to walletExists(), readWallet(), tryParsePersistedJson(),
this.emit and this.wallet when making the change.
- Around line 86-95: The seed selection builds the explicit array (explicit)
from candidates like this.settings.seed, this['FABRIC_SEED'],
this.readVariable('FABRIC_SEED'), this.local and FIXTURE_SEED but does not
normalize whitespace, so this.local (read from .FABRIC_SEED) can include
trailing newlines and be chosen raw; update the selection to normalize each
candidate before checking with any by mapping/trimming strings (e.g., for each
candidate from explicit call .trim() and treat empty/undefined results as falsy)
so that the .find(any) operates on trimmed seed values (apply trimming to the
explicit array generation or immediately before .find).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 1349e7a5-4575-427a-902c-9aa5cbb6295e
⛔ Files ignored due to path filters (1)
- `package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (2)
- `tests/fabric.environment.js`
- `types/environment.js`
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/fabric.environment.js
```diff
     const explicit = [
       this.settings.seed,
       this['FABRIC_SEED'],
-      this.readVariable('FABRIC_SEED')
-    ].find(any);
+      this.readVariable('FABRIC_SEED'),
+      this.local
+    ];
+    if (process.env.NODE_ENV === 'test') {
+      explicit.push(FIXTURE_SEED);
+    }
+    return explicit.find(any);
```
There was a problem hiding this comment.
Normalize seed candidates before selection.
this.local is read from .FABRIC_SEED as raw text and can include trailing newline/whitespace; in the current order it can be selected as-is and break wallet derivation.
🐛 Proposed fix
```diff
   get seed () {
     const explicit = [
       this.settings.seed,
       this['FABRIC_SEED'],
       this.readVariable('FABRIC_SEED'),
       this.local
-    ];
+    ].map((candidate) => (typeof candidate === 'string' ? candidate.trim() : candidate));
     if (process.env.NODE_ENV === 'test') {
       explicit.push(FIXTURE_SEED);
     }
     return explicit.find(any);
   }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```javascript
    const explicit = [
      this.settings.seed,
      this['FABRIC_SEED'],
      this.readVariable('FABRIC_SEED'),
      this.local
    ].map((candidate) => (typeof candidate === 'string' ? candidate.trim() : candidate));
    if (process.env.NODE_ENV === 'test') {
      explicit.push(FIXTURE_SEED);
    }
    return explicit.find(any);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@types/environment.js` around lines 86 - 95, The seed selection builds the
explicit array (explicit) from candidates like this.settings.seed,
this['FABRIC_SEED'], this.readVariable('FABRIC_SEED'), this.local and
FIXTURE_SEED but does not normalize whitespace, so this.local (read from
.FABRIC_SEED) can include trailing newlines and be chosen raw; update the
selection to normalize each candidate before checking with any by
mapping/trimming strings (e.g., for each candidate from explicit call .trim()
and treat empty/undefined results as falsy) so that the .find(any) operates on
trimmed seed values (apply trimming to the explicit array generation or
immediately before .find).
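The trimming step can be shown in isolation. In this sketch `any` is assumed to be a simple truthiness predicate, as the review implies, and `pickSeed` is a hypothetical standalone version of the getter's selection logic:

```javascript
// Sketch: normalize seed candidates before selection so a raw file read
// (e.g. ".FABRIC_SEED" with a trailing newline) cannot win as-is.
const any = (x) => Boolean(x);

function pickSeed (candidates) {
  return candidates
    .map((c) => (typeof c === 'string' ? c.trim() : c)) // trim strings; pass others through
    .find(any);                                         // first truthy candidate wins
}

console.log(JSON.stringify(pickSeed([undefined, 'abandon abandon ability\n', 'fixture'])));
// "abandon abandon ability" — the newline is stripped before selection
```

Note that trimming also demotes whitespace-only candidates to falsy, so a file containing only a newline falls through to the next candidate instead of being selected.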
```diff
     } else if (this.walletExists()) {
       const data = this.readWallet();
+      const text = typeof data === 'string' ? data : String(data ?? '');
+
+      if (text.trim() === '') {
+        if (this.emit) {
+          this.emit('warning', `[FABRIC:KEYGEN] Wallet file is empty (${this.settings.path}); remove it or regenerate with fabric setup`);
+        }
+        this.wallet = false;
+        return this;
+      }
+
       try {
-        const text = typeof data === 'string' ? data : String(data ?? '');
         const pr = tryParsePersistedJson(text);
         if (!pr.ok) throw pr.error;
```
Wrap wallet file read inside the recoverable try path.
Line 604 runs readWallet() before try. If the file disappears or becomes unreadable after walletExists(), startup throws instead of falling back to wallet = false.
🐛 Proposed fix
```diff
     } else if (this.walletExists()) {
-      const data = this.readWallet();
-      const text = typeof data === 'string' ? data : String(data ?? '');
-
-      if (text.trim() === '') {
-        if (this.emit) {
-          this.emit('warning', `[FABRIC:KEYGEN] Wallet file is empty (${this.settings.path}); remove it or regenerate with fabric setup`);
-        }
-        this.wallet = false;
-        return this;
-      }
-
       try {
+        const data = this.readWallet();
+        const text = typeof data === 'string' ? data : String(data ?? '');
+
+        if (text.trim() === '') {
+          if (this.emit) {
+            this.emit('warning', `[FABRIC:KEYGEN] Wallet file is empty (${this.settings.path}); remove it or regenerate with fabric setup`);
+          }
+          this.wallet = false;
+          return this;
+        }
+
         const pr = tryParsePersistedJson(text);
         if (!pr.ok) throw pr.error;
         const input = pr.value;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```javascript
    } else if (this.walletExists()) {
      try {
        const data = this.readWallet();
        const text = typeof data === 'string' ? data : String(data ?? '');

        if (text.trim() === '') {
          if (this.emit) {
            this.emit('warning', `[FABRIC:KEYGEN] Wallet file is empty (${this.settings.path}); remove it or regenerate with fabric setup`);
          }
          this.wallet = false;
          return this;
        }

        const pr = tryParsePersistedJson(text);
        if (!pr.ok) throw pr.error;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@types/environment.js` around lines 603 - 617, The code calls readWallet()
before entering the try block so a failing read will throw and bypass the
recoverable path; move the call to readWallet() (and the subsequent text
normalization) inside the existing try that handles tryParsePersistedJson so any
read errors are caught, then on catch set this.wallet = false and emit the same
warning via this.emit('warning', ...) (preserve the message text and use
this.settings.path) so the code falls back safely; refer to walletExists(),
readWallet(), tryParsePersistedJson(), this.emit and this.wallet when making the
change.
Further cleanup for 0.1.0-RC1.
Summary by CodeRabbit
New Features
Bug Fixes / Improvements
Chores
Tests