Detect user intent in under 5ms. A single binary that classifies text against customizable phrase sets — built for agent hooks that need to steer behavior without calling an LLM.
The problem: Your agent needs to know how the user feels about its work — are they correcting it, frustrated with it, or happy? You could prompt an LLM to classify every message, but that's slow, expensive, and overkill for pattern matching.
csn solves this: Embed text locally, compare against curated phrase sets, and get a category + confidence score in milliseconds. No prompts, no tokens, no latency — just a function call that returns {"category": "frustration", "confidence": 0.93}.
| Feature | Detail |
|---|---|
| ~5ms classification | Background daemon keeps the model warm via unix socket |
| Typo-robust | Character n-gram features handle "wtf", "wft", "wttf" the same way |
| Per-category scores | Softmax MLP gives probability distribution, not just match/no-match |
| Single binary | `cargo install computer-says-no` — no Python, no Docker, no model servers |
| Customizable | Define your own categories in TOML. Ship corrections/frustration/neutral out of the box |
| Agent-agnostic | Works with any agent that supports hooks — Claude Code, Cursor, Windsurf, Codex, or your own |
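The typo robustness comes from the character n-gram features. A minimal sketch of the general technique (feature hashing over character n-grams into a fixed-size vector; the exact scheme csn uses may differ):

```python
# Sketch: why "wtf" and "wft" land close together in feature space.
# This is the generic technique, not csn's actual implementation.
from collections import Counter
import math

def char_ngrams(text: str, sizes=(1, 2, 3)) -> Counter:
    """Count character n-grams of the given sizes."""
    text = text.lower()
    grams = Counter()
    for n in sizes:
        for i in range(len(text) - n + 1):
            grams[text[i:i + n]] += 1
    return grams

def hashed_vector(text: str, dim: int = 256) -> list[float]:
    """Hash n-gram counts into a fixed-size vector (feature hashing)."""
    vec = [0.0] * dim
    for gram, count in char_ngrams(text).items():
        vec[hash(gram) % dim] += count
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Typos share most short n-grams, so they stay close;
# unrelated text scores near zero:
print(cosine(hashed_vector("wtf"), hashed_vector("wft")))
print(cosine(hashed_vector("wtf"), hashed_vector("sounds good")))
```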
```bash
cargo install computer-says-no

csn classify "what the fuck" --set corrections --json
# → {"category": "frustration", "confidence": 0.93, ...}

csn classify "wrong file" --set corrections --json
# → {"category": "correction", "confidence": 0.98, ...}

csn classify "sounds good" --set corrections --json
# → {"category": "neutral", "confidence": 0.70, ...}
```

First call downloads the ONNX model and trains the MLP (~10s). After that: ~5ms via the background daemon.
```bash
curl -fsSL https://raw.githubusercontent.com/srobroek/computer-says-no/main/install.sh | bash
```

Installs the `csn` binary via `cargo install` and downloads the default reference sets. Requires Rust 1.92+.
```bash
cargo install computer-says-no
```

Then download reference sets (see Reference set locations).
Download a precompiled binary for your platform from Releases.
```bash
git clone https://github.com/srobroek/computer-says-no.git
cd computer-says-no
cargo build --release
csn classify "test" --set corrections --json
```

```mermaid
flowchart TD
    A["User message"] --> B{"Daemon\nrunning?"}
    B -- "Yes (~5ms)" --> C["Send via\nUnix socket"]
    B -- "No (~370ms)" --> D["Load model\nin-process"]
    C --> E
    D --> E
    E["ONNX Embedding\n384-dim vector"] --> F["Per-category\ncosine features\n(N × 3)"]
    E --> G["Character\nn-gram features\n(256-dim)"]
    F --> H["MLP Neural Network"]
    G --> H
    H --> I["Softmax"]
    I --> J["correction: 0.98\nfrustration: 0.01\nneutral: 0.01"]
```
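The final softmax step turns the MLP's raw scores into the per-category probability distribution shown above. A minimal sketch of that step (standard numerically stable softmax, not csn's actual code):

```python
# Standard softmax: raw MLP scores in, probabilities summing to 1 out.
import math

def softmax(logits: list[float]) -> list[float]:
    """Convert raw scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A large margin in logits becomes a confident probability:
print(softmax([4.0, -0.5, -0.6]))
```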
- **CLI**: `csn classify`, `csn embed`, `csn similarity` — auto-route through the daemon when warm
- **MCP server**: `csn mcp` — stdio transport, 4 tools (classify, list_sets, embed, similarity)
- **Daemon**: Lazy background process that keeps the embedding model and MLP weights in memory
The daemon is transparent — you never start or manage it manually.
- **First CLI call**: `csn classify` checks for a unix socket at `~/.cache/computer-says-no/csn.sock`
- **No socket found**: Spawns `csn daemon` as a detached background process, which loads the embedding model, trains/loads MLP weights, and listens on the socket
- **Socket found**: Sends the classify request over the socket, gets a response in ~5ms
- **Idle timeout**: After 5 minutes of no requests (configurable via `CSN_IDLE_TIMEOUT`), the daemon exits and cleans up its socket/PID files
- **Next CLI call**: Cycle repeats — daemon restarts on demand
The MCP server (csn mcp) is separate — it runs over stdio and is managed by your MCP client (Claude Code, Cursor, etc.). It does NOT use the daemon.
Hooks let your agent automatically detect and respond to user corrections, frustration, or any custom signal. See Installation first.
For the best experience, pair csn with Vestige — a spaced-repetition memory server for AI agents. When the hook detects a correction or frustration, the agent saves a lesson to vestige. On the next session, the agent retrieves those lessons and applies them as behavioral rules — so it doesn't repeat the same mistakes.
Without vestige, the hook still works — the agent steers its tone and can save lessons to file-based memory (MEMORY.md or similar). Vestige adds spaced repetition, semantic search, and automatic deduplication on top.
The core pattern: pipe the user message through csn classify --json, check the category, and inject context into the agent.
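That pattern, sketched in Python (the `steering_context` helper and its messages are illustrative, not part of csn; the JSON fields match the quickstart output):

```python
# Sketch of the core hook pattern: classify, check category,
# inject context. Requires `csn` on PATH for the subprocess call.
import json
import subprocess

def classify(message: str) -> dict:
    """Run csn and parse its JSON result."""
    out = subprocess.run(
        ["csn", "classify", message, "--set", "corrections", "--json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

def steering_context(result: dict, threshold: float = 0.70) -> str:
    """Map a classification result to context injected into the agent."""
    if result.get("confidence", 0.0) < threshold:
        return ""  # below the confidence threshold: stay silent
    return {
        "correction": "User is correcting you. Acknowledge and fix.",
        "frustration": "User is frustrated. Reflect on what went wrong.",
    }.get(result.get("category", ""), "")

# With a canned result (shape from the quickstart):
print(steering_context({"category": "frustration", "confidence": 0.93}))
# → User is frustrated. Reflect on what went wrong.
```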
Claude Code
Download the hook:
```bash
mkdir -p .claude/hooks
curl -fsSL -o .claude/hooks/user-frustration-check.sh \
  https://raw.githubusercontent.com/srobroek/computer-says-no/main/.claude/hooks/user-frustration-check.sh
chmod +x .claude/hooks/user-frustration-check.sh
```

Register in `.claude/settings.json`:
```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": ".claude/hooks/user-frustration-check.sh",
            "timeout": 5
          }
        ]
      }
    ]
  }
}
```

For lesson persistence, add to your CLAUDE.md (project or global):
```markdown
> **MANDATORY ON EVERY SESSION START**: Call `mcp__vestige__session_context`
> BEFORE responding to the user's first message. Treat retrieved lessons as
> behavioral rules. Do it silently.
```

This ensures the agent loads past lessons on every session start and applies them.
Cursor
Add a rule in `.cursorrules`:

```
Before responding to user messages, classify the input:
Run: csn classify "<user_message>" --set corrections --json
If category is "correction": acknowledge the mistake and adjust.
If category is "frustration": reflect on what went wrong, de-escalate.
```
Or use csn as an MCP server in Cursor's MCP config for tool-based classification.
Any agent with shell hooks
```bash
#!/usr/bin/env bash
USER_MESSAGE="$1"
RESULT=$(csn classify "$USER_MESSAGE" --set corrections --json 2>/dev/null)
CATEGORY=$(echo "$RESULT" | jq -r '.category // empty')
case "$CATEGORY" in
  correction)  echo "User is correcting you. Acknowledge and fix." ;;
  frustration) echo "User is frustrated. Reflect on what went wrong." ;;
  *) ;; # neutral — no action
esac
```

Adapt the output format to your agent's hook protocol.
Default: 70% confidence (tuned to avoid false positives on neutral text):
```bash
export CSN_FRUSTRATION_THRESHOLD=0.60  # more sensitive
export CSN_FRUSTRATION_THRESHOLD=0.85  # less sensitive
```

| Category | Examples | Suggested behavior |
|---|---|---|
| Correction | "wrong file", "revert that", "not what I asked" | Acknowledge mistake, confirm understanding, adjust |
| Frustration | "wtf", "are you kidding me", "I give up" | Reflect on what went wrong, de-escalate, save lesson |
| Neutral | "sounds good", "add error handling", "how does this work?" | No action needed |
Reference sets are TOML files that define classification patterns. csn ships with a corrections set (1600+ phrases for correction/frustration/neutral detection).
csn searches for reference sets in this order:
| Priority | Location | When to use |
|---|---|---|
| 1 | `--sets-dir` CLI flag | One-off testing |
| 2 | `CSN_SETS_DIR` env var | CI, hooks |
| 3 | `sets_dir` in `config.toml` | Permanent override |
| 4 | Platform config dir (default) | Normal use |
| 5 | Next to the binary | GitHub release downloads |
| 6 | `./reference-sets/` in CWD | Development (source builds) |
Platform config directories (via the directories crate):
| Platform | Path |
|---|---|
| macOS | `~/Library/Application Support/computer-says-no/reference-sets/` |
| Linux | `~/.config/computer-says-no/reference-sets/` |
| Windows | `%APPDATA%\computer-says-no\reference-sets\` |
The install script handles this automatically. For manual setup:
```bash
# macOS
mkdir -p ~/Library/Application\ Support/computer-says-no/reference-sets
curl -fsSL -o ~/Library/Application\ Support/computer-says-no/reference-sets/corrections.toml \
  https://raw.githubusercontent.com/srobroek/computer-says-no/main/reference-sets/corrections.toml

# Linux
mkdir -p ~/.config/computer-says-no/reference-sets
curl -fsSL -o ~/.config/computer-says-no/reference-sets/corrections.toml \
  https://raw.githubusercontent.com/srobroek/computer-says-no/main/reference-sets/corrections.toml
```

Create a `.toml` file in your reference sets directory:
Multi-category (recommended):
```toml
[metadata]
name = "my-classifier"
description = "Classify text into categories"
mode = "multi-category"
threshold = 0.5

[categories.positive]
phrases = ["example positive 1", "example positive 2"]

[categories.negative]
phrases = ["example negative 1", "example negative 2"]

[categories.neutral]
phrases = ["neutral phrase 1", "neutral phrase 2"]
```

Binary (simpler, for yes/no classification):
```toml
[metadata]
name = "my-pattern"
mode = "binary"
threshold = 0.5

[phrases]
positive = ["phrases that should match"]
negative = ["phrases that should NOT match"]
```

Classify against it: `csn classify "test" --set my-classifier --json`. The MLP trains automatically on first use and caches weights.
- Multi-category: 2+ phrases per category, 4+ total
- Binary: 1+ positive phrase (negatives optional but improve accuracy)
- More phrases = better accuracy. Aim for 50+ per category. The shipped `corrections` set has ~500 per category.
- Include near-misses. "holy shit that's amazing" in neutral helps the MLP distinguish it from "holy shit you broke everything" in frustration.
- Cover vocabulary range. Include formal ("that is incorrect"), informal ("nah"), profane ("wtf"), and abbreviated ("no") variants.
- Use intent for boundaries. Correction = directive ("wrong file"). Frustration = emotional ("I give up"). Sarcasm = classify by the underlying intent.
csn is a general-purpose text classifier — not limited to frustration detection. Any categorization problem where you can define example phrases works: binary (match/no-match) or multi-category (N categories with softmax probabilities). Define your categories in a TOML file, provide training phrases, and the MLP trains automatically.
Run benchmarks yourself:
```bash
csn benchmark run                                  # all 12 models
csn benchmark run --model nomic-embed-text-v1.5-Q  # specific model
csn benchmark run --json --output results.json     # save for comparison
csn benchmark run --compare old-results.json       # diff against previous
```

Results on the shipped corrections dataset (62 prompts, 3 categories):
| Model | Accuracy | p50 (ms) | Cold start | Notes |
|---|---|---|---|---|
| gte-large-en-v1.5 | 87.1% | 46.4 | 2.7s | Best accuracy |
| gte-large-en-v1.5-Q | 87.1% | 19.3 | 0.8s | Best accuracy, quantized |
| nomic-embed-text-v1.5 | 85.5% | 13.1 | 0.4s | |
| nomic-embed-text-v1.5-Q | 85.5% | 8.0 | 0.2s | Default model |
| bge-small-en-v1.5 | 82.3% | 9.0 | 0.3s | |
| bge-small-en-v1.5-Q | 82.3% | 14.4 | 0.2s | |
| all-MiniLM-L6-v2 | 80.6% | 3.0 | 0.2s | |
| all-MiniLM-L6-v2-Q | 79.0% | 2.5 | 0.1s | Fastest |
| snowflake-arctic-embed-s | 75.8% | 6.1 | 0.3s | |
| snowflake-arctic-embed-s-Q | 75.8% | 3.7 | 0.1s | |
Default: nomic-embed-text-v1.5-Q — best accuracy/speed balance (85.5%, 8ms). Switch to all-MiniLM-L6-v2-Q for minimum latency (2.5ms), or gte-large-en-v1.5-Q for maximum accuracy (87.1%).
csn mcp exposes 4 tools over stdio:
| Tool | Description |
|---|---|
| `classify` | Classify text against a reference set |
| `list_sets` | List sets with categories and phrase counts |
| `embed` | Generate embedding vector |
| `similarity` | Cosine similarity between two texts |
Add to your agent's MCP config (Claude Code, Cursor, or any MCP-compatible client):
```json
{
  "mcpServers": {
    "csn": {
      "command": "csn",
      "args": ["mcp"]
    }
  }
}
```

| Command | Description |
|---|---|
| `csn classify <text> --set <name> --json` | Classify text |
| `csn embed <text>` | Embedding vector |
| `csn similarity <a> <b>` | Cosine similarity |
| `csn mcp` | MCP server (stdio) |
| `csn stop` | Stop daemon |
| `csn models` | List models |
| `csn sets list` | List reference sets |
| `csn benchmark run` | Accuracy benchmark |
| `csn benchmark compare-strategies` | Strategy comparison |
| `csn benchmark generate-datasets` | Dataset scaffolds |
Config file location matches the platform config directory (macOS: `~/Library/Application Support/computer-says-no/config.toml`, Linux: `~/.config/computer-says-no/config.toml`):
```toml
# Embedding model — smaller = faster startup, larger = better accuracy
# See `csn models` for all options
model = "nomic-embed-text-v1.5-Q"

# Log verbosity: trace, debug, info, warn, error
log_level = "warn"

[mlp]
# If true, fall back to cosine-only scoring when MLP training fails
# (e.g., too few phrases). If false, return an error.
fallback = false

# Training hyperparameters — defaults work well for most reference sets.
# Increase max_epochs/patience for larger datasets.
learning_rate = 0.001  # Adam optimizer learning rate
weight_decay = 0.001   # L2 regularization strength
max_epochs = 500       # Maximum training iterations
patience = 10          # Stop early after N epochs without improvement

[daemon]
# Seconds of inactivity before the daemon self-exits.
# Lower = less memory use. Higher = fewer cold starts.
idle_timeout = 300
```

All config file settings can be overridden via environment variables:
| Variable | Default | Description |
|---|---|---|
| `CSN_MODEL` | `nomic-embed-text-v1.5-Q` | Embedding model (see `csn models` for options) |
| `CSN_LOG_LEVEL` | `warn` | Log verbosity: trace, debug, info, warn, error |
| `CSN_SETS_DIR` | Platform config dir | Path to reference sets directory |
| `CSN_CACHE_DIR` | Platform cache dir | Path to model/weight cache |
| `CSN_IDLE_TIMEOUT` | `300` | Daemon idle timeout in seconds before self-exit |
| `CSN_MLP_FALLBACK` | `false` | If true, fall back to cosine-only when MLP training fails |
| `CSN_FRUSTRATION_THRESHOLD` | `0.70` | Confidence threshold for the hook (0.0–1.0) |
| Metric | Value |
|---|---|
| Warm classify (daemon) | ~5ms |
| Cold classify (daemon starts) | ~370ms |
| First run (model download + train) | ~10s |
| Binary size (stripped) | ~25MB |
Apache-2.0