21 changes: 21 additions & 0 deletions skills/.curated/brooks-lint/LICENSE.txt
MIT License

Copyright (c) 2025 hyhmrright

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
186 changes: 186 additions & 0 deletions skills/.curated/brooks-lint/SKILL.md
---
name: brooks-lint
description: >
AI code reviews grounded in six classic engineering books (The Mythical Man-Month,
Code Complete, Refactoring, Clean Architecture, The Pragmatic Programmer, Domain-Driven
Design). Four analysis modes: PR review, architecture audit, tech debt assessment, test
quality review. Every finding follows Iron Law: Symptom, Source, Consequence, Remedy.
Use when: reviewing code, checking a PR, auditing architecture, assessing tech debt or
test quality, or discussing Brooks's Law, conceptual integrity, code smells, refactoring,
clean architecture, DDD, SOLID, test debt, flaky tests, mock abuse, or legacy code testability.
---

# Brooks-Lint

Code quality diagnosis using principles from six classic software engineering books.

> **Full skill with slash commands and Claude Code plugin support:**
> install from `--repo hyhmrright/brooks-lint --path skills/brooks-lint`

## The Iron Law

```
NEVER suggest fixes before completing risk diagnosis.
EVERY finding must follow: Symptom → Source → Consequence → Remedy.
```

Violating this law produces reviews that list rule violations without explaining why they
matter. A finding without a consequence and a remedy is not a finding — it is noise.
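The four-part structure can be enforced mechanically. A minimal sketch (the class name and field names are illustrative, not part of the skill) that makes an incomplete finding unrepresentable:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One review finding; every Iron Law field is mandatory by construction."""
    risk: str         # e.g. "Change Propagation"
    symptom: str      # exactly what was observed in the code
    source: str       # book and principle, e.g. "Refactoring — Shotgun Surgery"
    consequence: str  # what breaks or gets worse if this is not fixed
    remedy: str       # concrete, specific action

    def __post_init__(self) -> None:
        # Enforce the Iron Law: a finding with any empty field is noise.
        for name, value in vars(self).items():
            if not value.strip():
                raise ValueError(f"Iron Law violated: '{name}' is empty")
```

Constructing a `Finding` with a blank consequence or remedy fails immediately, which mirrors the rule that a fix may never be suggested before the diagnosis is complete.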

## When to Use

**Auto-triggers:**
- User asks to review code, check a PR, or assess code quality
- User shares code and asks "what do you think?" or "is this good?"
- User discusses architecture, module structure, or system design
- User asks why the codebase is hard to maintain, why velocity is declining
- User mentions: code smells, refactoring, clean architecture, DDD, SOLID, Brooks,
conceptual integrity, second system effect, tech debt, ubiquitous language,
test smells, test debt, unit testing quality, flaky tests, mock abuse,
legacy code testability, characterization tests

## Mode Detection

Read the context and pick ONE mode before doing anything else.

| Context | Mode |
|---------|------|
| Code diff, specific files/functions, PR description, "review this" | **Mode 1: PR Review** |
| Project directory structure, module questions, "audit the architecture" | **Mode 2: Architecture Audit** |
| "tech debt", "where to refactor", health check, systemic maintainability questions | **Mode 3: Tech Debt Assessment** |
| Test files shared, "are our tests good?", test debt, flaky tests, mock abuse, legacy code testability | **Mode 4: Test Quality Review** |

**If context is genuinely ambiguous after reading:** ask once — "Should I do a PR-level code
review, a broader architecture audit, or a tech debt assessment?" — then proceed without
further clarification questions.
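The table above is a judgment call, not an algorithm, but its shape can be sketched as a keyword heuristic. This is illustrative only — the keyword lists are assumptions distilled from the table, and real mode detection should read the whole context:

```python
def detect_mode(context: str) -> str:
    """Pick one analysis mode from free-text context (naive keyword heuristic)."""
    text = context.lower()
    # Order matters: the more specific signals are checked before generic ones.
    if any(k in text for k in ("test", "flaky", "mock")):
        return "Mode 4: Test Quality Review"
    if any(k in text for k in ("tech debt", "refactor", "health check")):
        return "Mode 3: Tech Debt Assessment"
    if any(k in text for k in ("architecture", "module structure", "audit")):
        return "Mode 2: Architecture Audit"
    if any(k in text for k in ("diff", "pull request", "review this")):
        return "Mode 1: PR Review"
    return "ambiguous"  # ask the one clarifying question, then proceed
```

The `"ambiguous"` branch corresponds to the single clarifying question allowed above.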

## The Six Decay Risks

(Full definitions, symptoms, sources, and severity guides are in `references/decay-risks.md` —
read it after selecting a mode.)

| Risk | Diagnostic Question |
|------|---------------------|
| Cognitive Overload | How much mental effort to understand this? |
| Change Propagation | How many unrelated things break on one change? |
| Knowledge Duplication | Is the same decision expressed in multiple places? |
| Accidental Complexity | Is the code more complex than the problem? |
| Dependency Disorder | Do dependencies flow in a consistent direction? |
| Domain Model Distortion | Does the code faithfully represent the domain? |

## Modes

### Mode 1: PR Review

1. Read `references/pr-review-guide.md` for the analysis process
2. Read `references/decay-risks.md` for symptom definitions and source attributions
3. Scan the diff or code for each decay risk in the order specified in the guide
4. Apply the Iron Law to every finding
5. Output using the Report Template below

### Mode 2: Architecture Audit

1. Read `references/architecture-guide.md` for the analysis process
2. Read `references/decay-risks.md` for symptom definitions and source attributions
3. Draw the module dependency graph as a Mermaid diagram (Step 1 of the guide)
4. Scan for each decay risk in the order specified in the guide
5. Assign node colors in the Mermaid diagram based on findings (red/yellow/green)
6. Run the Conway's Law check
7. Output using the Report Template below — Mermaid graph FIRST, then Findings

### Mode 3: Tech Debt Assessment

1. Read `references/debt-guide.md` for the analysis process
2. Read `references/decay-risks.md` for symptom definitions and source attributions
3. Scan for all six decay risks; list every finding before scoring any of them
4. Apply the Pain x Spread priority formula
5. Output using the Report Template below, plus the Debt Summary Table
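The priority formula itself is defined in `references/debt-guide.md`. As an illustration only — assuming Pain and Spread are each scored 1–5 and multiplied, which is a guess at the formula's shape, not its definition — ranking findings might look like:

```python
def debt_priority(pain: int, spread: int) -> int:
    """Pain × Spread priority score (assumed 1-5 scales; higher = fix sooner)."""
    assert 1 <= pain <= 5 and 1 <= spread <= 5, "scores assumed to be 1-5"
    return pain * spread

# Hypothetical findings: (title, pain, spread). List everything first, then score.
findings = [("God object in billing", 5, 4), ("Dead feature flag", 2, 1)]
ranked = sorted(findings, key=lambda f: debt_priority(f[1], f[2]), reverse=True)
```

Note the order of operations matches step 3 above: all findings are collected before any of them is scored.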

### Mode 4: Test Quality Review

1. Read `references/test-guide.md` for the analysis process
2. Read `references/test-decay-risks.md` for symptom definitions and source attributions
3. Build the test suite map (unit/integration/E2E counts and ratio)
4. Scan for each test decay risk in the order specified in the guide
5. Output using the Report Template below
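Step 3's suite map is just counts and a ratio. A minimal sketch, assuming each test is tagged with a layer name (the layer labels here are illustrative):

```python
from collections import Counter

def suite_map(tests: list[tuple[str, str]]) -> dict:
    """Build layer counts and the unit/integration/e2e ratio from (name, layer) pairs."""
    counts = Counter(layer for _name, layer in tests)
    total = sum(counts.values()) or 1  # avoid division by zero on an empty suite
    return {
        "counts": dict(counts),
        "ratio": {layer: round(n / total, 2) for layer, n in counts.items()},
    }
```

An inverted ratio (many E2E tests, few unit tests) is itself a symptom worth carrying into the risk scan.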

## Report Template

````
# Brooks-Lint Review

**Mode:** [PR Review / Architecture Audit / Tech Debt Assessment / Test Quality Review]
**Scope:** [file(s), directory, or description of what was reviewed]
**Health Score:** XX/100

[One sentence overall verdict]

---

## Module Dependency Graph

<!-- Mode 2 (Architecture Audit) ONLY — omit this section for other modes -->

```mermaid
graph TD
...
```

---

## Findings

<!-- Sort all findings by severity: Critical first, then Warning, then Suggestion -->

### 🔴 Critical

**[Risk Name] — [Short descriptive title]**
Symptom: [exactly what was observed in the code]
Source: [Book title — Principle or Smell name]
Consequence: [what breaks or gets worse if this is not fixed]
Remedy: [concrete, specific action]

### 🟡 Warning

**[Risk Name] — [Short descriptive title]**
Symptom: ...
Source: ...
Consequence: ...
Remedy: ...

### 🟢 Suggestion

**[Risk Name] — [Short descriptive title]**
Symptom: ...
Source: ...
Consequence: ...
Remedy: ...

---

## Summary

[2–3 sentences: what is the most important action, and what is the overall trend]
````

## Health Score Calculation

Base score: 100
Deductions:
- Each 🔴 Critical finding: −15
- Each 🟡 Warning finding: −5
- Each 🟢 Suggestion finding: −1
Floor: 0 (score cannot go below 0)
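The rules above reduce to one expression:

```python
def health_score(critical: int, warnings: int, suggestions: int) -> int:
    """Base 100, minus 15 per Critical, 5 per Warning, 1 per Suggestion; floor 0."""
    return max(0, 100 - 15 * critical - 5 * warnings - 1 * suggestions)
```

For example, 2 Criticals, 3 Warnings, and 4 Suggestions score 100 − 30 − 15 − 4 = 51, and 7 Criticals bottom out at the floor of 0.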

## Reference Files

Read on demand — do not preload all files:

| File | When to Read |
|------|-------------|
| `references/decay-risks.md` | After selecting a mode, before starting the review |
| `references/pr-review-guide.md` | At the start of every Mode 1 (PR Review) |
| `references/architecture-guide.md` | At the start of every Mode 2 (Architecture Audit) |
| `references/debt-guide.md` | At the start of every Mode 3 (Tech Debt Assessment) |
| `references/test-guide.md` | At the start of every Mode 4 (Test Quality Review) |
| `references/test-decay-risks.md` | After selecting Mode 4, before starting the review |
4 changes: 4 additions & 0 deletions skills/.curated/brooks-lint/agents/openai.yaml
interface:
display_name: "Brooks-Lint"
short_description: "Code reviews grounded in six classic engineering books"
default_prompt: "Review this code for decay risks using Brooks-Lint."
165 changes: 165 additions & 0 deletions skills/.curated/brooks-lint/references/architecture-guide.md
# Architecture Audit Guide — Mode 2

**Purpose:** Analyze the module and dependency structure of a system for decay risks that
operate at the architectural level. Every finding must follow the Iron Law:
Symptom → Source → Consequence → Remedy.

**Monorepo note:** Treat each deployable service or library as a top-level module. Draw
dependencies between services, not between their internal packages. Apply the Conway's Law
check at the service ownership level. Within a single service, apply standard module-level analysis.

---

## Analysis Process

Work through these five steps in order.

### Step 1: Draw the Module Dependency Graph (Mermaid)

Before evaluating any risk, map the dependencies as a Mermaid diagram. Use this format:

````mermaid
graph TD
subgraph UI
WebApp
MobileApp
end

subgraph Domain
AuthService
OrderService
PaymentService
end

subgraph Infrastructure
Database
MessageQueue
end

WebApp --> AuthService
WebApp --> OrderService
MobileApp --> AuthService
MobileApp --> OrderService
OrderService --> PaymentService
OrderService --> Database
OrderService --> MessageQueue
PaymentService --> Database
AuthService -.->|circular| OrderService

classDef critical fill:#ff6b6b,stroke:#c92a2a,color:#fff
classDef warning fill:#ffd43b,stroke:#e67700
classDef clean fill:#51cf66,stroke:#2b8a3e,color:#fff

class PaymentService critical
class OrderService warning
class Database,MessageQueue,AuthService,WebApp,MobileApp clean
````

**Phase A (during Step 1):** Generate the graph structure only — nodes, subgraphs, and edges.
Do NOT add `classDef` or `class` lines yet. You need findings from Steps 2-4 before coloring.

**Phase B (after Step 4):** Add `classDef` definitions and `class` assignments based on findings.
The example above shows the final output after both phases.

Rules:
1. **Nodes** — Use top-level directories or services as nodes, not individual files
2. **Grouping** — One `subgraph` per architectural layer or top-level directory (e.g., UI, Domain, Infrastructure)
3. **Edges** — Solid arrows (`-->`) point FROM the depending module TO the dependency; use dotted arrows with label (`-.->|circular|`) for circular dependencies. If no circular dependencies exist, use only solid arrows
4. **Node limit** — Keep the graph to ~50 nodes maximum; collapse low-risk leaf modules into their parent if needed
5. **Fan-out** — For any node with fan-out > 5, use a descriptive label: `HighFanOutModule["ModuleName (fan-out: 7)"]`
6. **Colors** — Apply `classDef` colors AFTER completing Steps 2-4: `critical` (red `#ff6b6b`) for nodes with Critical findings, `warning` (yellow `#ffd43b`) for Warning findings, `clean` (green `#51cf66`) for nodes with no findings or only Suggestions. If no findings at all, classify all nodes as `clean`
7. **Direction** — Default to `graph TD` (top-down); use `graph LR` only if the architecture is clearly a left-to-right pipeline

### Step 2: Scan for Dependency Disorder

*The most architecturally consequential risk — scan this first.*

Look for:
- Circular dependencies (any dotted `-.->|circular|` edge in the map above)
- Arrows flowing upward (high-level domain depending on low-level infrastructure)
- Stable, widely-depended-on modules that import from frequently-changing modules
- Modules with fan-out > 5
- Absence of a clear layering rule (no consistent answer to "what depends on what?")
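Two of these checks — fan-out and direct cycles — fall straight out of the edge list built in Step 1. A minimal sketch (module names are taken from the example graph; longer cycles would need a full graph traversal, which this deliberately omits):

```python
def fan_out(edges: list[tuple[str, str]]) -> dict[str, int]:
    """Count outgoing dependencies per module from (depender, dependency) edges."""
    out: dict[str, int] = {}
    for src, _dst in edges:
        out[src] = out.get(src, 0) + 1
    return out

def circular_pairs(edges: list[tuple[str, str]]) -> set[frozenset]:
    """Detect direct two-module cycles: A -> B together with B -> A."""
    edge_set = set(edges)
    return {frozenset((a, b)) for a, b in edge_set if (b, a) in edge_set}
```

Any module the `fan_out` map puts above 5, and any pair `circular_pairs` returns, feeds directly into a Dependency Disorder finding.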

### Step 3: Scan for Domain Model Distortion

Look for:
- Do module names match the business domain vocabulary?
- Is there a layer called "services" that contains all the business logic while domain objects
are pure data structures?
- Are there modules that cross bounded context boundaries (e.g., billing logic in the user module)?
- Is there an anti-corruption layer where external systems interface with the domain?

### Step 4: Scan for Remaining Four Risks

Check each in turn:

**Knowledge Duplication:**
- Are there multiple modules implementing the same concept independently?
- Does the same domain concept appear under different names in different modules?

**Accidental Complexity:**
- Are there entire layers in the architecture that do not add value?
- Are there modules whose responsibility cannot be stated in one sentence?

**Change Propagation:**
- Which modules are "blast radius hotspots"? (A change here requires changes in many other modules)
- Does the dependency map reveal why certain features are slow to develop?

**Cognitive Overload:**
- Can each module's responsibility be stated in one sentence from its name alone?
- Would a new developer know which module to add a new feature to?

### Step 5: Conway's Law Check

After the six-risk scan, assess the relationship between architecture and team structure:

- Does the module/service structure reflect the team structure?
(Conway's Law: "Organizations design systems that mirror their communication structure")
- If yes: is this intentional design or accidental coupling?
- A mismatch that causes cross-team coordination overhead for every feature is 🔴 Critical.
- A mismatch that is theoretical but not yet causing pain is 🟡 Warning.
- If team structure is unknown, note this as context missing and skip the check.

---

## Applying the Iron Law

For every finding identified above, write it in this format:

```
**[Risk Name] — [Short title]**
Symptom: [the exact structural evidence — reference module names from the dependency map]
Source: [Book title — Principle or Smell name]
Consequence: [what architectural consequence follows if this is not addressed]
Remedy: [concrete architectural action]
```

---

## Output

Use the standard Report Template from `SKILL.md`.
Mode: Architecture Audit
Scope: the project or directory audited.

Place the Mermaid dependency graph FIRST in the report, before the Findings section,
under the heading "Module Dependency Graph". In each finding, reference the relevant
node by name (e.g., "See the red node `PaymentService` in the graph above") so the
reader can cross-reference visually. The `classDef` color assignments must be added
LAST, after all findings have been identified and severity levels determined.

---

## Design Note: Analysis-Render Separation

The dependency graph follows a two-step conceptual model:

1. **Analysis** — Identify nodes (modules), edges (dependencies), groups (folders/layers),
and severity per node. This produces a logical dependency structure independent of
any diagram format.
2. **Render** — Convert the logical structure to Mermaid syntax (graph TD, subgraph,
classDef, etc.).

This separation means adding an alternative output format (D2, Graphviz, SVG) in the
future only requires a new renderer — the analysis logic stays the same.
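The separation can be sketched concretely. This is a minimal illustration of the two-step model, not the skill's implementation — the type names and field layout are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class DepGraph:
    """Step 1 (Analysis): a logical dependency structure, format-independent."""
    groups: dict[str, list[str]]            # layer name -> module names
    edges: list[tuple[str, str]]            # (depender, dependency)
    severity: dict[str, str] = field(default_factory=dict)  # node -> critical/warning/clean

def render_mermaid(g: DepGraph) -> str:
    """Step 2 (Render): convert the logical graph to Mermaid syntax.

    Severity class assignments are emitted last, matching Phase B of the
    architecture guide. classDef style lines are omitted here for brevity.
    """
    lines = ["graph TD"]
    for layer, nodes in g.groups.items():
        lines.append(f"    subgraph {layer}")
        lines += [f"        {n}" for n in nodes]
        lines.append("    end")
    lines += [f"    {a} --> {b}" for a, b in g.edges]
    for level, node in sorted((v, k) for k, v in g.severity.items()):
        lines.append(f"    class {node} {level}")
    return "\n".join(lines)
```

A D2 or Graphviz backend would be a second `render_*` function over the same `DepGraph`; nothing in the analysis step changes.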