feat(tenants): dual-control export/delete requests + safer status transitions (#19497)
BrianCLong wants to merge 1 commit into main from
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request hardens tenant lifecycle controls by preventing unsafe status transitions and gating high-sensitivity operations behind a dual-control approval mechanism. The changes ensure that all sensitive actions are auditable, governed by provenance entries, and recorded within bounded per-tenant governance request records. This improves the security posture and compliance readiness of the hosted SaaS platform by enforcing stricter controls and providing clear visibility into critical tenant-related operations.

Highlights
Walkthrough

Implements tenant status hardening (Sprint +3) with new lifecycle controls via a PATCH endpoint, dual-control-gated sensitive operation requests (export/delete), transactional status transitions, provenance logging, and corresponding task specification, documentation, prompts, and test coverage across routes, services, and governance configurations.

Changes
Sequence Diagram(s)

sequenceDiagram
actor Client
participant API as PATCH /:id<br/>Endpoint
participant Schema as Zod<br/>Validator
participant Service as TenantService
participant ProvenanceLedger
participant Database
Client->>API: PATCH /api/tenants/{id}<br/>(status, reason, actor)
API->>Schema: Validate status schema
Schema-->>API: Valid
API->>Service: updateStatus(tenantId,<br/>status, actorId, reason)
Service->>Database: BEGIN TRANSACTION
Service->>Service: Validate transition<br/>(active ↔ suspended)
Service->>Database: Update tenant.lifecycle<br/>.statusHistory
Service->>ProvenanceLedger: Log provenance event<br/>(actor, status, reason)
ProvenanceLedger-->>Service: Event recorded
Service->>Database: COMMIT
Database-->>Service: Transaction complete
Service-->>API: Updated Tenant +<br/>Receipt
API->>Schema: Format response
API-->>Client: 200 OK<br/>{data, receipt}
sequenceDiagram
actor Client
participant API as POST /:id/<br/>export-requests
participant Schema as Zod<br/>Validator
participant DualControl as Dual-Control<br/>Middleware
participant Service as TenantService
participant ProvenanceLedger
participant Database
Client->>API: POST /api/tenants/{id}<br/>/export-requests<br/>(reason, approvals)
API->>Schema: Validate sensitive<br/>operation schema
Schema-->>API: Valid
API->>DualControl: normalizeApprovalActors
DualControl-->>API: Normalized approvals
API->>DualControl: validateDualControlRequirement<br/>(approvals, roles,<br/>action: export)
alt Dual-Control Satisfied
DualControl-->>API: Valid (2 approvals,<br/>compliance-officer)
API->>Service: createSensitiveOperationRequest<br/>(tenantId, export,<br/>reason, actorId, approvals)
Service->>Database: BEGIN TRANSACTION
Service->>Database: Update tenant.config<br/>.governance.requests
Service->>ProvenanceLedger: Log operation with<br/>approvals metadata
Service->>Database: COMMIT
Service-->>API: TenantSensitiveRequest
API-->>Client: 202 Accepted<br/>{data, receipt}
else Dual-Control Not Satisfied
DualControl-->>API: Violations found
API-->>Client: 403 Forbidden<br/>{violations}
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: 1 passed, 2 failed
❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
Code Review
This pull request introduces significant enhancements to tenant lifecycle management, including dual-control for export and delete operations and safer status transitions. However, the dual-control mechanism in the routes has a high-severity security vulnerability: the current design allows a requester to spoof approvals in the request body, bypassing the control entirely. Remediating this requires a multi-step, server-side approval workflow. Additionally, the review identifies opportunities to improve maintainability by reducing code duplication in route handlers, to make error handling more robust using custom error types, and to enhance type safety and provenance data accuracy.
const parsed = sensitiveOperationSchema.parse(req.body);
const approvals = normalizeApprovalActors(parsed.approvals);

const dualControl = await validateDualControlRequirement({
  requestId: parsed.requestId || randomUUID(),
  action: 'tenant_delete_request',
  tenantId,
  requesterId: actorId,
  requiredApprovals: 2,
  requiredRoles: ['compliance-officer', 'security-admin'],
  approvals,
});
The delete-requests endpoint has a high-severity security flaw: it allows the requester to provide approvals in the request body, bypassing dual-control. The server does not verify these approvals, enabling a single user to spoof approval data. A server-side, state-based approval workflow is required where each approver logs in independently. Furthermore, the route handlers for export-requests and delete-requests exhibit significant code duplication, which increases maintenance overhead. Refactoring this shared logic into a single, parameterized helper function would improve maintainability.
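One possible shape for the suggested refactor is to isolate the per-action differences in a small policy table and let both route handlers share one code path. This is an illustrative sketch only; `SensitiveAction`, `DualControlPolicy`, and `buildDualControlPolicy` are hypothetical names, not identifiers from the actual codebase (the role lists mirror the two quoted snippets).

```typescript
// Hypothetical sketch of the parameterized helper the review suggests.
type SensitiveAction = 'export' | 'delete';

interface DualControlPolicy {
  action: string;
  requiredApprovals: number;
  requiredRoles: string[];
}

// Single source of truth for what differs between the two endpoints;
// validation, persistence, and response shaping can then live in one
// shared handler that takes a SensitiveAction.
function buildDualControlPolicy(action: SensitiveAction): DualControlPolicy {
  const policies: Record<SensitiveAction, DualControlPolicy> = {
    export: {
      action: 'tenant_export_request',
      requiredApprovals: 2,
      requiredRoles: ['compliance-officer'],
    },
    delete: {
      action: 'tenant_delete_request',
      requiredApprovals: 2,
      requiredRoles: ['compliance-officer', 'security-admin'],
    },
  };
  return policies[action];
}
```

With this shape, adding a third sensitive operation later means adding one policy entry rather than a third near-identical route handler.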
const parsed = sensitiveOperationSchema.parse(req.body);
const approvals = normalizeApprovalActors(parsed.approvals);

const dualControl = await validateDualControlRequirement({
  requestId: parsed.requestId || randomUUID(),
  action: 'tenant_export_request',
  tenantId,
  requesterId: actorId,
  requiredApprovals: 2,
  requiredRoles: ['compliance-officer'],
  approvals,
});
The dual-control implementation for sensitive operations (export/delete requests) is insecure because it allows the requester to provide the list of approvals directly in the request body. The validateDualControlRequirement function is called with these user-supplied approvals without any verification of their authenticity (e.g., cryptographic signatures or independent server-side approval records). An attacker can bypass the dual-control requirement by providing fake approval entries in the JSON payload, allowing a single user to authorize sensitive operations that are intended to require multiple independent approvals. To remediate this, implement a multi-step approval process where approvers must independently authenticate and approve a pending request record stored on the server.
if (
  error instanceof Error
  && (
    error.message.includes('Disabled tenants require dedicated reactivation workflow')
    || error.message.includes('Unsupported current status transition')
  )
) {
  return res.status(409).json({ success: false, error: error.message });
}
Relying on string matching for error messages is fragile. If the error message in TenantService is modified, this logic will fail silently. A more robust approach is to use custom error classes. You can define a specific error class (e.g., InvalidTenantStateTransitionError) in your service layer, throw it, and then check for it in the route handler using instanceof. This makes the contract between the service and route layers explicit and less prone to breaking.
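A minimal sketch of the custom-error approach, assuming the class name suggested above; the exact shape in `TenantService` may differ.

```typescript
// Illustrative custom error for invalid tenant status transitions.
class InvalidTenantStateTransitionError extends Error {
  constructor(
    public readonly currentStatus: string,
    public readonly requestedStatus: string,
  ) {
    super(`Cannot transition tenant from '${currentStatus}' to '${requestedStatus}'`);
    this.name = 'InvalidTenantStateTransitionError';
    // Restore the prototype chain so `instanceof` works when compiling to ES5.
    Object.setPrototypeOf(this, InvalidTenantStateTransitionError.prototype);
  }
}

// The route handler branches on the error type, not on message text.
function toHttpStatus(error: unknown): number {
  if (error instanceof InvalidTenantStateTransitionError) return 409;
  return 500;
}
```

The service can now reword its messages freely without silently breaking the 409 mapping, and the error carries structured fields (`currentStatus`, `requestedStatus`) the handler can echo back to clients.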
await provenanceLedger.appendEntry({
  action: 'TENANT_STATUS_UPDATED',
  actor: { id: actorId, role: 'admin' },
The actor's role is hardcoded as 'admin' when creating the provenance entry. This reduces the value of the audit log, as it doesn't capture the actual role of the user performing the action. The user's role is available in the route handler and should be passed to this service method.
Consider changing the method signature to accept an actor object with both id and role:
async updateStatus(tenantId: string, status: 'active' | 'suspended', actor: { id: string; role: string }, reason?: string)
This will allow you to record the correct role, making the provenance data more accurate and useful.
-  actor: { id: actorId, role: 'admin' },
+  actor: { id: actor.id, role: actor.role },
const governance = Array.isArray((current.config as any)?.governance?.requests)
  ? (current.config as any).governance.requests
  : [];
Using as any to access nested properties on current.config bypasses TypeScript's type safety and can lead to runtime errors if the object structure changes. To improve type safety and code clarity, it's better to define a more specific type for the Tenant['config'] object that includes expected structures like governance and lifecycle.
For example:
interface TenantConfig {
lifecycle?: {
statusHistory: any[];
};
governance?: {
requests: TenantSensitiveRequest[];
};
}
export interface Tenant {
// ...
config: TenantConfig;
}With a properly typed config object, you can safely access its properties without type casting.
const governance = current.config?.governance?.requests ?? [];
await provenanceLedger.appendEntry({
  action: action === 'export' ? 'TENANT_EXPORT_REQUESTED' : 'TENANT_DELETE_REQUESTED',
  actor: { id: actorId, role: 'admin' },
Similar to the updateStatus method, the actor's role is hardcoded as 'admin'. For more accurate and meaningful audit trails, the actual user role should be passed in from the route handler and used here.
Consider modifying the function signature to accept an actor object:
async createSensitiveOperationRequest(..., actor: { id: string; role: string }, ...)
This will ensure the provenance ledger contains precise information about who performed the action.
actor: { id: actor.id, role: actor.role },
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@server/src/routes/tenants.ts`:
- Around line 325-350: The code generates different request IDs for validation
and persistence when parsed.requestId is missing; instead, create a single
requestId variable (e.g., const requestId = parsed.requestId || randomUUID())
and pass that same requestId into validateDualControlRequirement (requestId:
requestId) and tenantService.createSensitiveOperationRequest (last arg) so both
use the identical ID; make the same change in the delete-request route where the
same pattern occurs.
- Around line 322-333: Do not trust approval.role from the request body: instead
of passing parsed.approvals (and the caller-supplied role strings) into
validateDualControlRequirement, resolve each approver identity and their roles
server-side and pass those verified values. Concretely, in the tenant export
endpoints (where sensitiveOperationSchema.parse and normalizeApprovalActors are
used) ignore any approval.role from req.body, use the approver actorId(s) from
normalized approvals to fetch the canonical actor record/roles (e.g., via your
existing user/actor lookup like getActorById or a roles service), build a new
approvals array containing the server-resolved actorId and role(s)/claims, and
pass that verified approvals array into validateDualControlRequirement (same in
the other endpoint noted). Ensure validateDualControlRequirement receives only
server-validated roles so callers cannot fabricate required roles like
"compliance-officer".
In `@server/src/services/TenantService.ts`:
- Around line 464-467: The current branch in TenantService that detects
no-change status (the if (current.status === status) block) should return an
explicit no-op result rather than the same tenant object so callers (e.g., the
PATCH handler that emits TENANT_STATUS_UPDATED) can distinguish true updates
from no-ops; modify the updateStatus/updateTenantStatus method to return a
structured response indicating noOp (for example { noOp: true, tenant: current }
or a result enum with NO_OP vs UPDATED) and keep existing behavior for actual
updates (e.g., { noOp: false, tenant: updated }), then update callers to check
noOp before emitting TENANT_STATUS_UPDATED.
- Around line 450-493: The tenant row is read and then rewritten without any
lock, allowing concurrent transactions to clobber lifecycle fields; modify the
mutator (e.g., updateStatus in TenantService) to SELECT the tenant using SELECT
* FROM tenants WHERE id = $1 FOR UPDATE inside the same transaction (client)
before mapping with mapRowToTenant and before computing nextConfig, then perform
the UPDATE and RETURNING based on that locked row; apply the same FOR UPDATE
locking (or equivalent optimistic version check using a version/updated_at WHERE
clause) to the other mutator that updates tenant config/status (the similar
block around the governance.requests change) so both paths serialize reads and
writes to prevent lost updates.
- Around line 497-509: The provenance append is executed outside the tenant DB
transaction because provenanceLedger.appendEntry(...) performs its own commit,
so move the provenance write into the same transaction by either (A) changing
appendEntry to accept a client/transaction parameter and call it with the
current transaction client in TenantService (so the ledger write uses the same
connection and does not commit independently), or (B) instead of calling
provenanceLedger.appendEntry(...) inside TenantService, insert a staged outbox
row (e.g. provenance_outbox) within the same transaction alongside the tenant
update and commit both together, letting a background worker/process pick up the
outbox and call provenanceLedger.appendEntry() afterwards; update calls at both
the shown block around client.query('COMMIT') and the other occurrence (lines
~566-579) to use the chosen approach.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 4bd389d0-d3f9-4334-af0a-fe54da2b2a1e
📒 Files selected for processing (7)
agents/examples/TENANT_STATUS_HARDENING_20260206.json
docs/roadmap/STATUS.json
prompts/registry.yaml
prompts/tenants/tenant-status-hardening@v1.md
server/src/routes/__tests__/tenants.test.ts
server/src/routes/tenants.ts
server/src/services/TenantService.ts
const parsed = sensitiveOperationSchema.parse(req.body);
const approvals = normalizeApprovalActors(parsed.approvals);

const dualControl = await validateDualControlRequirement({
  requestId: parsed.requestId || randomUUID(),
  action: 'tenant_export_request',
  tenantId,
  requesterId: actorId,
  requiredApprovals: 2,
  requiredRoles: ['compliance-officer'],
  approvals,
});
Do not trust approval.role from the request body.
These endpoints feed client-supplied role strings straight into dual-control validation. Since the validator only checks whether the required role names are present, a caller can invent compliance-officer / security-admin approvals and satisfy the gate without any real approver. Resolve approver identities and roles server-side before evaluating the requirement.
Also applies to: 384-392
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/routes/tenants.ts` around lines 322 - 333, Do not trust
approval.role from the request body: instead of passing parsed.approvals (and
the caller-supplied role strings) into validateDualControlRequirement, resolve
each approver identity and their roles server-side and pass those verified
values. Concretely, in the tenant export endpoints (where
sensitiveOperationSchema.parse and normalizeApprovalActors are used) ignore any
approval.role from req.body, use the approver actorId(s) from normalized
approvals to fetch the canonical actor record/roles (e.g., via your existing
user/actor lookup like getActorById or a roles service), build a new approvals
array containing the server-resolved actorId and role(s)/claims, and pass that
verified approvals array into validateDualControlRequirement (same in the other
endpoint noted). Ensure validateDualControlRequirement receives only
server-validated roles so callers cannot fabricate required roles like
"compliance-officer".
const dualControl = await validateDualControlRequirement({
  requestId: parsed.requestId || randomUUID(),
  action: 'tenant_export_request',
  tenantId,
  requesterId: actorId,
  requiredApprovals: 2,
  requiredRoles: ['compliance-officer'],
  approvals,
});

if (!dualControl.satisfied) {
  return res.status(403).json({
    success: false,
    error: 'Dual-control requirement not satisfied',
    violations: dualControl.violations,
  });
}

const requestRecord = await tenantService.createSensitiveOperationRequest(
  tenantId,
  'export',
  parsed.reason,
  actorId,
  approvals,
  parsed.requestId,
);
Use one requestId through validation and persistence.
When the body omits requestId, the dual-control check gets a fresh UUID, but createSensitiveOperationRequest() gets undefined and generates a different ID. That breaks audit correlation and any request-level dedupe keyed by the validator's ID.
🔁 Proposed fix
+ const requestId = parsed.requestId ?? randomUUID();
const dualControl = await validateDualControlRequirement({
- requestId: parsed.requestId || randomUUID(),
+ requestId,
action: 'tenant_export_request',
tenantId,
requesterId: actorId,
requiredApprovals: 2,
requiredRoles: ['compliance-officer'],
approvals,
});
const requestRecord = await tenantService.createSensitiveOperationRequest(
tenantId,
'export',
parsed.reason,
actorId,
approvals,
- parsed.requestId,
+ requestId,
);

Apply the same change in the delete-request route.
Also applies to: 384-409
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/routes/tenants.ts` around lines 325 - 350, The code generates
different request IDs for validation and persistence when parsed.requestId is
missing; instead, create a single requestId variable (e.g., const requestId =
parsed.requestId || randomUUID()) and pass that same requestId into
validateDualControlRequirement (requestId: requestId) and
tenantService.createSensitiveOperationRequest (last arg) so both use the
identical ID; make the same change in the delete-request route where the same
pattern occurs.
const existing = await client.query('SELECT * FROM tenants WHERE id = $1', [tenantId]);
if (!existing.rowCount) {
  throw new Error('Tenant not found');
}

const current = this.mapRowToTenant(existing.rows[0]);
if (current.status === 'disabled') {
  throw new Error('Disabled tenants require dedicated reactivation workflow');
}

if (!['active', 'suspended'].includes(current.status)) {
  throw new Error(`Unsupported current status transition from '${current.status}'`);
}

if (current.status === status) {
  await client.query('ROLLBACK');
  return current;
}

const statusHistory = Array.isArray(current.config?.lifecycle?.statusHistory)
  ? current.config.lifecycle.statusHistory
  : [];

const nextConfig = {
  ...(current.config || {}),
  lifecycle: {
    ...(current.config?.lifecycle || {}),
    statusHistory: [
      {
        previousStatus: current.status,
        nextStatus: status,
        actorId,
        reason,
        changedAt: new Date().toISOString(),
      },
      ...statusHistory,
    ].slice(0, 10),
  },
};

const result = await client.query(
  'UPDATE tenants SET status = $1, config = $2, updated_at = NOW() WHERE id = $3 RETURNING *',
  [status, nextConfig, tenantId],
);
Lock the tenant row before rewriting status and config.
Both methods read the tenant row, derive new state from that snapshot, and then write it back without any lock or version guard. A concurrent disableTenant, second status update, or another sensitive-request insert can overwrite lifecycle.statusHistory / governance.requests, and updateStatus() can still commit after another transaction has already disabled the tenant. Use SELECT ... FOR UPDATE or optimistic concurrency consistently across tenant mutators before calculating the next state.
Also applies to: 533-564
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/services/TenantService.ts` around lines 450 - 493, The tenant row
is read and then rewritten without any lock, allowing concurrent transactions to
clobber lifecycle fields; modify the mutator (e.g., updateStatus in
TenantService) to SELECT the tenant using SELECT * FROM tenants WHERE id = $1
FOR UPDATE inside the same transaction (client) before mapping with
mapRowToTenant and before computing nextConfig, then perform the UPDATE and
RETURNING based on that locked row; apply the same FOR UPDATE locking (or
equivalent optimistic version check using a version/updated_at WHERE clause) to
the other mutator that updates tenant config/status (the similar block around
the governance.requests change) so both paths serialize reads and writes to
prevent lost updates.
if (current.status === status) {
  await client.query('ROLLBACK');
  return current;
}
Surface no-op status requests explicitly.
This branch returns the current tenant as if the update succeeded. The PATCH route in server/src/routes/tenants.ts Lines 284-287 always emits a TENANT_STATUS_UPDATED receipt on success, so callers can get an "updated" receipt even though no row write or provenance entry happened. Return an explicit no-op result or separate status so the route can avoid issuing a false audit record.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/services/TenantService.ts` around lines 464 - 467, The current
branch in TenantService that detects no-change status (the if (current.status
=== status) block) should return an explicit no-op result rather than the same
tenant object so callers (e.g., the PATCH handler that emits
TENANT_STATUS_UPDATED) can distinguish true updates from no-ops; modify the
updateStatus/updateTenantStatus method to return a structured response
indicating noOp (for example { noOp: true, tenant: current } or a result enum
with NO_OP vs UPDATED) and keep existing behavior for actual updates (e.g., {
noOp: false, tenant: updated }), then update callers to check noOp before
emitting TENANT_STATUS_UPDATED.
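The structured no-op result suggested above can be modeled as a discriminated union; the types and function names here are illustrative, not the actual `TenantService` signatures.

```typescript
// Sketch of an explicit no-op result for status updates.
interface TenantSnapshot {
  id: string;
  status: string;
}

type UpdateStatusResult =
  | { noOp: true; tenant: TenantSnapshot }
  | { noOp: false; tenant: TenantSnapshot };

function applyStatus(current: TenantSnapshot, next: string): UpdateStatusResult {
  if (current.status === next) {
    // Nothing was written, so there is nothing to audit.
    return { noOp: true, tenant: current };
  }
  return { noOp: false, tenant: { ...current, status: next } };
}

// The PATCH handler emits TENANT_STATUS_UPDATED only for real updates.
function shouldEmitProvenance(result: UpdateStatusResult): boolean {
  return !result.noOp;
}
```

The route can still return 200 with the current tenant for a no-op request; it just stops attaching a receipt that claims a write happened.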
await provenanceLedger.appendEntry({
  action: 'TENANT_STATUS_UPDATED',
  actor: { id: actorId, role: 'admin' },
  metadata: {
    tenantId,
    previousStatus: current.status,
    nextStatus: status,
    reason,
  },
  artifacts: [],
});

await client.query('COMMIT');
The provenance write and tenant write are not atomic.
provenanceLedger.appendEntry() commits on its own database transaction in server/src/provenance/ledger.ts Lines 230-335, so a successful ledger append followed by an outer COMMIT failure leaves an audit record for a status/request change that never persisted. For governance flows, write the event in the same transaction or stage it in an outbox row that is committed with the tenant change.
Also applies to: 566-579
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/services/TenantService.ts` around lines 497 - 509, The provenance
append is executed outside the tenant DB transaction because
provenanceLedger.appendEntry(...) performs its own commit, so move the
provenance write into the same transaction by either (A) changing appendEntry to
accept a client/transaction parameter and call it with the current transaction
client in TenantService (so the ledger write uses the same connection and does
not commit independently), or (B) instead of calling
provenanceLedger.appendEntry(...) inside TenantService, insert a staged outbox
row (e.g. provenance_outbox) within the same transaction alongside the tenant
update and commit both together, letting a background worker/process pick up the
outbox and call provenanceLedger.appendEntry() afterwards; update calls at both
the shown block around client.query('COMMIT') and the other occurrence (lines
~566-579) to use the chosen approach.
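The staged-outbox variant (option B) could look roughly like this. The `provenance_outbox` table and its columns are assumptions for illustration, not the real schema, and `updateStatusWithOutbox` is a hypothetical name.

```typescript
// Illustrative transactional-outbox sketch: the tenant write and the
// staged provenance row commit (or roll back) together; a background
// worker later drains the outbox and calls provenanceLedger.appendEntry().
interface DbClient {
  query: (sql: string, params?: unknown[]) => Promise<unknown>;
}

async function updateStatusWithOutbox(
  client: DbClient,
  tenantId: string,
  status: string,
  event: Record<string, unknown>,
): Promise<void> {
  await client.query('BEGIN');
  try {
    await client.query(
      'UPDATE tenants SET status = $1, updated_at = NOW() WHERE id = $2',
      [status, tenantId],
    );
    // Staged in the same transaction, so a failed COMMIT leaves no
    // orphaned audit record and a successful one loses no event.
    await client.query(
      'INSERT INTO provenance_outbox (payload, created_at) VALUES ($1, NOW())',
      [JSON.stringify(event)],
    );
    await client.query('COMMIT');
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  }
}
```

The trade-off versus option A is that the ledger entry becomes eventually consistent (it appears after the worker runs) in exchange for never diverging from the tenant table.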
Temporarily closing to reduce Actions queue saturation and unblock #22241. Reopen after the golden-main convergence PR merges. |
Motivation
Description
- Updated `TenantService.updateStatus` to reject transitions from `disabled` and other unsupported states, to persist a capped `lifecycle.statusHistory`, and to emit `TENANT_STATUS_UPDATED` provenance entries.
- Added `POST /api/tenants/:id/export-requests` and `POST /api/tenants/:id/delete-requests` in `server/src/routes/tenants.ts`, including request validation, approval normalization, and dual-control validation via `middleware/dual-control`.
- Added `TenantService.createSensitiveOperationRequest` to persist a bounded governance `requests` array on the tenant `config` and to emit `TENANT_EXPORT_REQUESTED` / `TENANT_DELETE_REQUESTED` provenance entries.
- Extended `server/src/routes/__tests__/tenants.test.ts` to cover status-transition conflict handling, successful export requests when dual-control is satisfied, and rejected delete requests when dual-control fails.
- Updated `docs/roadmap/STATUS.json`, `prompts/tenants/tenant-status-hardening@v1.md`, `prompts/registry.yaml`, and `agents/examples/TENANT_STATUS_HARDENING_20260206.json` to reflect the expanded scope and new success criteria.

Testing
- Ran `node scripts/check-boundaries.cjs`, which succeeded (no parallelization/boundary violations).
- Added tests in `server/src/routes/__tests__/tenants.test.ts` (status patch, export/delete request flows), but the local Jest invocation (`pnpm --filter intelgraph-server exec jest --config jest.config.ts -- tenants.test.ts`) failed in this environment because the `jest` binary was not available; the test file is included and expected to run in CI, where the test runtime is provisioned.
- Mocked the `dual-control` middleware in tests and verified behavior for both satisfied and unsatisfied dual-control scenarios (test assertions added; local Jest execution must be performed in a CI/dev environment with Jest installed).

Codex Task
Summary by CodeRabbit
New Features
Documentation
Tests