25 changes: 25 additions & 0 deletions docs/pilot/buyable-demo/audit-artifact.json
@@ -0,0 +1,25 @@
{
"artifact_id": "audit-syn-ky-001-v1",
"generated_at": "2026-03-29T00:00:00Z",
"input_hash_sha256": "7b09ea3ac2ee63c1296b7402393fd18a11479ee877cc1f113e9477bea9039a22",
"run_hash_sha256": "342123e7c6788bbb90e8c49331fd052925fe6ce7160b7ee6e6870aca63a952ca",
"decision_id": "dec-2026-03-29-001",
"evidence_to_decision_linkage": [
{
"evidence_id": "evt-001",
"supports": "approval provenance"
},
{
"evidence_id": "evt-006",
"supports": "financial diversion"
},
{
"evidence_id": "evt-009",
"supports": "coordination signal"
}
],
"replay_contract": {
"command": "node scripts/pilot/verify-buyable-demo.mjs",
"expected_result": "Deterministic replay check passed"
}
}
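The artifact's two hashes can be recomputed by hand. As a minimal sketch mirroring the logic in `scripts/pilot/verify-buyable-demo.mjs`: the input hash is the SHA-256 of the dataset JSON after normalization (parse and re-stringify, which strips whitespace differences), and the run hash is the SHA-256 of `"<decision_id>:<input_hash>"`. The inline string below is an illustrative stand-in for the dataset file, not the real artifact input.

```javascript
// Sketch of the hash derivation used by the replay contract.
// The dataset string here is a stand-in, so the resulting hex
// values are illustrative, not the artifact's real hashes.
import { createHash } from 'node:crypto';

const sha256 = (text) => createHash('sha256').update(text).digest('hex');

// Input hash: SHA-256 of the normalized (minified) dataset JSON.
const datasetRaw = '{\n  "case_id": "SYN-KY-001"\n}'; // stand-in for the file
const normalized = JSON.stringify(JSON.parse(datasetRaw));
const inputHash = sha256(normalized);

// Run hash: SHA-256 over "<decision_id>:<input_hash>".
const decisionId = 'dec-2026-03-29-001';
const runHash = sha256(`${decisionId}:${inputHash}`);

console.log({ inputHash, runHash });
```

Note that `JSON.stringify(JSON.parse(...))` only normalizes whitespace; it preserves key order, so a re-serialized dataset with reordered keys would produce a different input hash.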
13 changes: 13 additions & 0 deletions docs/pilot/buyable-demo/demo-script.md
@@ -0,0 +1,13 @@
# Summit Demo Script: “Make It Buyable”

## Talk Track (7 minutes)
1. **Set context (30s):** “This is a synthetic procurement diversion case built for reproducible scrutiny.”
2. **Show ingest (60s):** Open the dataset JSON and point out explicit entities, edges, and evidence IDs.
3. **Show graph (90s):** Open saved graph state; trace the risk path from approver to secondary account.
4. **Aha moment (60s):** “Both key actors used the same mobile device one minute apart before the transfer.”
5. **Decision (60s):** Show `decision_record` and explain why escalation/freeze is justified.
6. **Proof layer (90s):** Open audit artifact; map each evidence item to decision rationale.
7. **Replay (90s):** Run deterministic replay command and confirm matching hash.

## Buyer Close
“If we can reduce investigation time by 30–50% while producing a defensible audit trail, is that enough to move forward with a 14-day pilot?”
23 changes: 23 additions & 0 deletions docs/pilot/buyable-demo/follow-up-email.md
@@ -0,0 +1,23 @@
Subject: Summit 14-Day Pilot Proposal — Defensible Analysis + Audit Trail

Hi {{name}},

Thank you for the session today. As discussed, Summit’s value is not just speed — it is defensibility.

## Proposed Pilot (14 Days)
- One use case
- Your real or semi-real data
- Outputs:
1. Faster analysis workflow
2. Defensible report
3. Exportable audit artifact

## Success Criteria
- 30–50% reduction in investigation cycle time
- Every major finding linked to source evidence
- Deterministic replay for auditability

If these criteria are met, are you comfortable moving to an expanded deployment plan?

Best,
{{sender}}
29 changes: 29 additions & 0 deletions docs/pilot/buyable-demo/graph-state.json
@@ -0,0 +1,29 @@
{
"graph_id": "graph-syn-ky-001-v1",
"created_at": "2026-03-29T00:00:00Z",
"node_count": 9,
"edge_count": 9,
"high_risk_path": [
"person:alex-mercer",
"invoice:inv-2137",
"org:blueaster-logistics",
"account:ba-7782",
"account:hc-4409"
],
"insight": {
"title": "Shared-device and post-invoice transfer convergence",
"summary": "Invoice approvals by internal procurement are followed by vendor payments and a near-immediate transfer to a secondary account linked to off-contract consulting activity.",
"confidence": 0.89,
"external_verification": [
"ERP-44791",
"BANK-ALERT-8821",
"MDM-LOG-2201"
]
},
"decision_record": {
"decision_id": "dec-2026-03-29-001",
"decision": "Escalate and freeze",
"linked_evidence_ids": ["evt-001", "evt-006", "evt-009"],
Comment on lines +24 to +26
⚠️ Potential issue | 🟡 Minor

Standardize decision text across artifacts.

decision_record.decision uses a different phrase than the dataset’s expected_outcome.decision, which can cause confusion in scripted validation/demo narration. Pick one canonical decision string and reuse it everywhere.

"audit_export": "audit-artifact.json"
}
}
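The minor finding above (decision-text drift between `graph-state.json` and the dataset's `expected_outcome`) is easy to catch mechanically. A minimal sketch of such a check, using inline excerpts of the two files in place of `readFileSync` + `JSON.parse`:

```javascript
// Checks that the decision wording is identical across artifacts,
// as the review note recommends. Inline excerpts stand in for the
// two JSON files; a real check would load and parse them from disk.

function decisionsMatch(graphState, dataset) {
  const graphDecision = graphState.decision_record?.decision;
  const datasetDecision = dataset.expected_outcome?.decision;
  return graphDecision === datasetDecision;
}

// Excerpts copied from graph-state.json and synthetic-case.dataset.json:
const graphState = {
  decision_record: { decision: 'Escalate and freeze' },
};
const dataset = {
  expected_outcome: {
    decision: 'Escalate to controlled forensic review and freeze settlement account.',
  },
};

// The two strings differ — exactly the drift the reviewer flagged.
console.log(decisionsMatch(graphState, dataset)); // → false
```

A check like this could run alongside the hash verification so scripted demos fail fast on wording drift rather than mid-narration.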
22 changes: 22 additions & 0 deletions docs/pilot/buyable-demo/one-pager.md
@@ -0,0 +1,22 @@
# Summit One-Pager (Pilot Close)

## Problem
Analysts can generate answers, but leadership cannot always defend those answers under audit and external review.

## What Summit Does
Summit converts mixed operational evidence into an explainable graph workflow that produces a decision and its audit chain in one flow.

## Why Summit Is Different
Summit enforces defensibility: **you cannot ship an answer that cannot be justified** through linked evidence and deterministic replay.

## 14-Day Pilot Offer
- **Duration:** 2 weeks
- **Scope:** one investigation use case
- **Input:** customer real or semi-real data
⚠️ Potential issue | 🟠 Major

Add explicit data-handling guardrails for pilot inputs.

The current wording allows real customer data without stating minimum controls (de-identification, approval path, or DPA/security review), which is a compliance/privacy risk in buyer-facing materials.

Suggested wording update
-- **Input:** customer real or semi-real data
+- **Input:** customer data that is de-identified/sanitized by default; use real data only with approved legal/security controls (e.g., DPA + data handling sign-off)

- **Output:**
- 30–50% faster analysis cycle time target
- defensible report package
- exportable audit artifact

## Success Question
“If we reduce investigation time by 30–50% and deliver a defensible audit trail, do we have approval to expand?”
150 changes: 150 additions & 0 deletions docs/pilot/buyable-demo/synthetic-case.dataset.json
@@ -0,0 +1,150 @@
{
"case_id": "SYN-KY-001",
"case_name": "Procurement Diversion Network",
"generated_at": "2026-03-29T00:00:00Z",
"entities": [
{
"id": "person:alex-mercer",
"type": "Person",
"name": "Alex Mercer",
"role": "Procurement Officer",
"department": "North Harbor Infrastructure"
},
{
"id": "person:rina-patel",
"type": "Person",
"name": "Rina Patel",
"role": "Vendor Account Lead",
"company": "BlueAster Logistics"
},
{
"id": "org:blueaster-logistics",
"type": "Organization",
"name": "BlueAster Logistics"
},
{
"id": "org:harborline-consulting",
"type": "Organization",
"name": "Harborline Consulting"
},
{
"id": "account:ba-7782",
"type": "BankAccount",
"name": "BlueAster Operating 7782"
},
{
"id": "account:hc-4409",
"type": "BankAccount",
"name": "Harborline Settlement 4409"
},
{
"id": "invoice:inv-2091",
"type": "Invoice",
"amount_usd": 184000,
"date": "2026-02-08"
},
{
"id": "invoice:inv-2137",
"type": "Invoice",
"amount_usd": 191500,
"date": "2026-02-19"
},
{
"id": "device:dx-55",
"type": "Device",
"name": "DX-55 Mobile"
}
],
"edges": [
{
"from": "person:alex-mercer",
"to": "invoice:inv-2091",
"type": "APPROVED",
"timestamp": "2026-02-08T10:12:00Z",
"evidence_ref": "evt-001"
},
{
"from": "person:alex-mercer",
"to": "invoice:inv-2137",
"type": "APPROVED",
"timestamp": "2026-02-19T09:57:00Z",
"evidence_ref": "evt-002"
},
{
"from": "invoice:inv-2091",
"to": "org:blueaster-logistics",
"type": "PAID_TO",
"timestamp": "2026-02-08T11:30:00Z",
"evidence_ref": "evt-003"
},
{
"from": "invoice:inv-2137",
"to": "org:blueaster-logistics",
"type": "PAID_TO",
"timestamp": "2026-02-19T11:04:00Z",
"evidence_ref": "evt-004"
},
{
"from": "org:blueaster-logistics",
"to": "account:ba-7782",
"type": "USES_ACCOUNT",
"timestamp": "2026-01-01T00:00:00Z",
"evidence_ref": "evt-005"
},
{
"from": "account:ba-7782",
"to": "account:hc-4409",
"type": "TRANSFERRED_TO",
"amount_usd": 153000,
"timestamp": "2026-02-20T02:10:00Z",
"evidence_ref": "evt-006"
},
{
"from": "person:rina-patel",
"to": "org:blueaster-logistics",
"type": "EMPLOYED_BY",
"timestamp": "2025-06-01T00:00:00Z",
"evidence_ref": "evt-007"
},
{
"from": "person:rina-patel",
"to": "device:dx-55",
"type": "USED_DEVICE",
"timestamp": "2026-02-20T02:08:00Z",
"evidence_ref": "evt-008"
},
{
"from": "person:alex-mercer",
"to": "device:dx-55",
"type": "USED_DEVICE",
"timestamp": "2026-02-20T02:09:00Z",
"evidence_ref": "evt-009"
}
],
"evidence": [
{
"id": "evt-001",
"source_type": "erp_approval_log",
"external_ref": "ERP-44791",
"checksum_sha256": "1c5a7d4f95f8c4eaa6a0cb3eb83f63e219b1e4a9f16bafba63e1be12d457bc0f"
},
{
"id": "evt-006",
"source_type": "bank_transfer_alert",
"external_ref": "BANK-ALERT-8821",
"checksum_sha256": "a0dcba7930f350f5d5fe7efe9a6b88dd875f6e8c3f86c5480464ae4c64d7318a"
},
{
"id": "evt-009",
"source_type": "mobile_device_login",
"external_ref": "MDM-LOG-2201",
"checksum_sha256": "ec1b6ec22f9d8ed4ed65dc7db0d4c344b2f5ac740e75ea8656cb0f53bcf52c4d"
}
],
Comment on lines +58 to +143
⚠️ Potential issue | 🟠 Major

Evidence references are not fully resolvable.

Multiple edges reference evt-002, evt-003, evt-004, evt-005, evt-007, and evt-008, but these IDs are missing from the evidence array. This breaks artifact referential integrity for downstream consumers that dereference evidence_ref.


"expected_outcome": {
"finding": "Likely collusive diversion between internal approver and vendor counterpart.",
"decision": "Escalate to controlled forensic review and freeze settlement account.",
⚠️ Potential issue | 🟡 Minor

Keep decision wording consistent with graph state.

expected_outcome.decision does not match docs/pilot/buyable-demo/graph-state.json decision text. Aligning this avoids ambiguity in demo scripts and validation narratives.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/pilot/buyable-demo/synthetic-case.dataset.json` at line 146, Update the
expected_outcome.decision value so it exactly matches the decision string used
in the graph-state.json decision text; locate the expected_outcome.decision
entry (currently "Escalate to controlled forensic review and freeze settlement
account.") and replace it with the identical wording from the graph-state.json
decision to keep demo scripts and validation narratives consistent.

"risk_score": 0.89,
"deterministic_run_key": "SYN-KY-001::v1"
}
}
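The referential-integrity finding above can also be automated. A minimal sketch that collects every `evidence_ref` on the edges and reports the ones with no matching entry in the `evidence` array (the dataset object here is a trimmed inline excerpt, not the full file):

```javascript
// Referential-integrity sketch: every edge's evidence_ref should
// resolve to an id in the evidence array. Returns the unresolved ids.

function findUnresolvedEvidenceRefs(dataset) {
  const known = new Set(dataset.evidence.map((e) => e.id));
  const missing = dataset.edges
    .map((edge) => edge.evidence_ref)
    .filter((ref) => ref && !known.has(ref));
  return [...new Set(missing)]; // de-duplicate repeated refs
}

// Trimmed excerpt of synthetic-case.dataset.json:
const excerpt = {
  edges: [
    { from: 'person:alex-mercer', to: 'invoice:inv-2091', evidence_ref: 'evt-001' },
    { from: 'person:alex-mercer', to: 'invoice:inv-2137', evidence_ref: 'evt-002' },
    { from: 'account:ba-7782', to: 'account:hc-4409', evidence_ref: 'evt-006' },
  ],
  evidence: [{ id: 'evt-001' }, { id: 'evt-006' }],
};

console.log(findUnresolvedEvidenceRefs(excerpt)); // → [ 'evt-002' ]
```

Run against the full dataset, this would surface all six dangling ids (`evt-002` through `evt-005`, `evt-007`, `evt-008`) in one pass.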
25 changes: 25 additions & 0 deletions docs/pilot/buyable-demo/walkthrough.md
@@ -0,0 +1,25 @@
# Buyable Demo Walkthrough (5–7 Steps)

## Objective
Show a complete, pressure-ready chain: raw inputs → graph insight → decision → verifiable audit trail.

## Step 1 — Ingest the raw case package
Load `synthetic-case.dataset.json` and confirm the deterministic run key `SYN-KY-001::v1`.

## Step 2 — Render graph state
Open `graph-state.json` and focus on the high-risk path from approver to settlement account.

## Step 3 — Reveal the “aha”
Highlight the shared device (`device:dx-55`) used by both internal approver and vendor lead within one minute.

## Step 4 — Link evidence to judgment
Show that decision `dec-2026-03-29-001` is backed by exactly three evidence items (`evt-001`, `evt-006`, `evt-009`).

## Step 5 — Export proof artifact
Export and present `audit-artifact.json` as the defensible report payload.

## Step 6 — Deterministic replay
Run `node scripts/pilot/verify-buyable-demo.mjs`; same input returns same run hash.

## Step 7 — External verification
Validate external references (`ERP-44791`, `BANK-ALERT-8821`, `MDM-LOG-2201`) with independent source systems.
33 changes: 33 additions & 0 deletions scripts/pilot/verify-buyable-demo.mjs
@@ -0,0 +1,33 @@
import { createHash } from 'node:crypto';
import { readFileSync } from 'node:fs';
import { resolve } from 'node:path';

const root = resolve(process.cwd());
const datasetPath = resolve(root, 'docs/pilot/buyable-demo/synthetic-case.dataset.json');
const artifactPath = resolve(root, 'docs/pilot/buyable-demo/audit-artifact.json');
Comment on lines +5 to +7
P2: Resolve demo file paths from script location

The verifier anchors root to process.cwd(), so running it from any directory other than the repo root (for example, cd /tmp && node /workspace/summit/scripts/pilot/verify-buyable-demo.mjs) fails with ENOENT because it looks for /tmp/docs/pilot/.... That makes the replay check fragile in CI wrappers or tooling that executes scripts from a different working directory; path resolution should be based on the script file location instead.

Useful? React with 👍 / 👎.


const datasetRaw = readFileSync(datasetPath, 'utf8');
const artifact = JSON.parse(readFileSync(artifactPath, 'utf8'));

const normalizedDataset = JSON.stringify(JSON.parse(datasetRaw));
const inputHash = createHash('sha256').update(normalizedDataset).digest('hex');

if (inputHash !== artifact.input_hash_sha256) {
console.error('Deterministic replay check failed: input hash mismatch');
console.error(`Expected: ${artifact.input_hash_sha256}`);
console.error(`Actual: ${inputHash}`);
process.exit(1);
}

const runHash = createHash('sha256')
.update(`${artifact.decision_id}:${inputHash}`)
.digest('hex');

if (runHash !== artifact.run_hash_sha256) {
console.error('Deterministic replay check failed: run hash mismatch');
console.error(`Expected: ${artifact.run_hash_sha256}`);
console.error(`Actual: ${runHash}`);
process.exit(1);
}

console.log('Deterministic replay check passed');
Comment on lines +1 to +33
Contributor

Severity: high

This script can be made more robust and maintainable with a couple of improvements:

  1. Robust Path Resolution: The script currently relies on process.cwd(), making it dependent on the directory from which it is run. Using import.meta.url to resolve paths relative to the script file makes it runnable from any directory.
  2. Comprehensive Error Handling: File I/O and JSON parsing operations can fail. Wrapping the script's logic in a try...catch block ensures that any error (e.g., file not found, invalid JSON) is caught and handled gracefully, providing a clear error message.

This refactoring improves the script's reliability and makes it easier to maintain.

import { createHash } from 'node:crypto';
import { readFileSync } from 'node:fs';
import { dirname, resolve } from 'node:path';
import { fileURLToPath } from 'node:url';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const root = resolve(__dirname, '../../..');

const datasetPath = resolve(root, 'docs/pilot/buyable-demo/synthetic-case.dataset.json');
const artifactPath = resolve(root, 'docs/pilot/buyable-demo/audit-artifact.json');

try {
  const datasetRaw = readFileSync(datasetPath, 'utf8');
  const artifact = JSON.parse(readFileSync(artifactPath, 'utf8'));

  const normalizedDataset = JSON.stringify(JSON.parse(datasetRaw));
  const inputHash = createHash('sha256').update(normalizedDataset).digest('hex');

  if (inputHash !== artifact.input_hash_sha256) {
    console.error('Deterministic replay check failed: input hash mismatch');
    console.error(`Expected: ${artifact.input_hash_sha256}`);
    console.error(`Actual:   ${inputHash}`);
    process.exit(1);
  }

  const runHash = createHash('sha256')
    .update(`${artifact.decision_id}:${inputHash}`)
    .digest('hex');

  if (runHash !== artifact.run_hash_sha256) {
    console.error('Deterministic replay check failed: run hash mismatch');
    console.error(`Expected: ${artifact.run_hash_sha256}`);
    console.error(`Actual:   ${runHash}`);
    process.exit(1);
  }

  console.log('Deterministic replay check passed');
} catch (error) {
  console.error(`An error occurred during verification: ${error.message}`);
  process.exit(1);
}

P2: Enforce replay contract fields in the verifier

The verifier parses audit-artifact.json but never validates artifact.replay_contract and instead hardcodes the pass message, so changes to the declared replay contract can silently drift without being caught as long as hashes remain internally consistent. This undermines the stated goal of enforcing the replay contract; add explicit checks for replay_contract.command and replay_contract.expected_result.

Useful? React with 👍 / 👎.

Comment on lines +10 to +33
⚠️ Potential issue | 🟠 Major

Replay contract fields are not currently enforced.

The script validates hashes, but it never checks that replay_contract.command and replay_contract.expected_result match the deterministic contract values.

Proposed fix
 import { createHash } from 'node:crypto';
 import { readFileSync } from 'node:fs';
 import { resolve } from 'node:path';

+const EXPECTED_COMMAND = 'node scripts/pilot/verify-buyable-demo.mjs';
+const EXPECTED_RESULT = 'Deterministic replay check passed';
+
 const root = resolve(process.cwd());
 const datasetPath = resolve(root, 'docs/pilot/buyable-demo/synthetic-case.dataset.json');
 const artifactPath = resolve(root, 'docs/pilot/buyable-demo/audit-artifact.json');

 const datasetRaw = readFileSync(datasetPath, 'utf8');
 const artifact = JSON.parse(readFileSync(artifactPath, 'utf8'));
+
+if (
+  artifact.replay_contract?.command !== EXPECTED_COMMAND ||
+  artifact.replay_contract?.expected_result !== EXPECTED_RESULT
+) {
+  console.error('Deterministic replay check failed: replay contract mismatch');
+  console.error(`Expected command: ${EXPECTED_COMMAND}`);
+  console.error(`Actual command:   ${artifact.replay_contract?.command}`);
+  console.error(`Expected result:  ${EXPECTED_RESULT}`);
+  console.error(`Actual result:    ${artifact.replay_contract?.expected_result}`);
+  process.exit(1);
+}

 const normalizedDataset = JSON.stringify(JSON.parse(datasetRaw));
 const inputHash = createHash('sha256').update(normalizedDataset).digest('hex');
@@
-console.log('Deterministic replay check passed');
+console.log(EXPECTED_RESULT);
