Visual spec-driven development for VS Code. LLM-agnostic. Compatible with GitHub Spec Kit.
Features • Installation • Quick Start • Providers • Jira • Contributing
Spec Kit brings spec-driven development to AI coding — but it only works through slash commands in chat. Kiro offers a visual experience — but locks you into a single LLM.
Caramelo fills the gap: a visual UI for spec-driven development that works with any LLM — Claude, Ollama, OpenAI, Groq, LM Studio, or any OpenAI-compatible endpoint.
- Unified sidebar — providers, constitution, specs, progress, and task checklist in one panel
- Sequential phase flow with approval gates: Requirements → Design → Tasks
- Constitution editor — visual form with AI generation (describe your project, LLM suggests principles)
- Workflow DAG — interactive graph showing all specs and their phase statuses
- Progress ring — overall completion percentage (phases 50% + tasks 50%), 100% only when all tasks done
- Stale alerts — downstream phases flagged when upstream is regenerated
- Inline task checklist — toggle tasks directly in the sidebar with immediate file sync
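The progress-ring arithmetic above can be sketched in a few lines; the function name and signature are illustrative, not the extension's actual source:

```python
def progress_percent(phases_done: int, total_phases: int,
                     tasks_done: int, total_tasks: int) -> float:
    """Overall completion: approved phases contribute 50%,
    checked tasks the other 50%. Reaches 100 only when both are complete."""
    phase_part = 50 * phases_done / total_phases if total_phases else 0
    task_part = 50 * tasks_done / total_tasks if total_tasks else 0
    return phase_part + task_part

# All three phases approved, 4 of 10 tasks done:
print(progress_percent(3, 3, 4, 10))  # 70.0
```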
- Any provider: GitHub Copilot, Claude, OpenAI, Gemini, Ollama, Groq, LM Studio, or any OpenAI-compatible endpoint
- Multiple providers configured simultaneously — switch by clicking the dot indicator
- Auto-detect models — available models fetched from provider API, or enter manually
- Inline editing — click provider name, model, or auth settings to edit directly in the sidebar
- Custom auth headers — configurable header name and prefix for corporate proxies (e.g. Azure API Manager)
- Model validation — test request on model change, red indicator on failure
- Multiple instances — add several providers of the same type with custom aliases
- Secure credential storage via VS Code's native SecretStorage
- Streaming output — see documents being written in real time in the editor
- Output Channel — watch LLM reasoning during task execution
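Provider entries ultimately live in VS Code settings. A sketch of an entry using a custom auth header for a corporate proxy might look like the following; the `authHeader` and `authPrefix` keys are illustrative assumptions, not a documented schema, and the API key itself stays in SecretStorage rather than in settings:

```json
{
  "caramelo.providers": [
    {
      "id": "corp-azure",
      "name": "Corp (Azure APIM)",
      "type": "openai-compatible",
      "endpoint": "https://apim.example.com/openai/v1",
      "model": "gpt-4o",
      "authHeader": "Ocp-Apim-Subscription-Key",
      "authPrefix": ""
    }
  ]
}
```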
- Uses the `specs/` directory — fully interoperable with the Spec Kit CLI
- Auto-syncs templates from GitHub Spec Kit releases
- Generates intermediate artifacts: research.md, data-model.md, contracts/
- Constitution as LLM context — project principles included in every generation
- Offline-first — bundled fallback templates, no internet required
- CodeLens buttons — Approve, Regenerate, Next Phase persistent in documents
- Phase progress bar — visual step indicator at the top of every spec document
- Caramelo editor menu — grouped contextual actions under a single cat icon (adapts to dark/light themes)
- Task CodeLens — Run Task / Run All Tasks inline in tasks.md
- Parallel task execution — tasks marked `[P]` run concurrently
- Non-intrusive progress — status bar spinner instead of notification popups
- Auxiliary files — research.md, data-model.md, analysis.md, checklists shown under each phase
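For reference, parallel-eligible entries in `tasks.md` carry the `[P]` marker. The task IDs and descriptions below are made-up examples following Spec Kit's checklist convention:

```markdown
- [ ] T001 Set up project scaffolding
- [ ] T002 [P] Define the User data model
- [ ] T003 [P] Define the Order data model
```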
- Clarify — LLM identifies ambiguities, presents questions as QuickPick dialogs
- Analyze — cross-artifact consistency check with severity-coded findings
- Auto-fix — CodeLens buttons on analysis.md to fix individual or all findings with LLM
- Checklists — content-specific quality verification items per phase
- Import issues as specs — create specs directly from Jira Cloud issues
- Issue picker — QuickPick with dynamic search and issue preview
- Jira badge — spec cards show linked issue key with click-to-open
- Full context — issue title, description, acceptance criteria, and comments used for generation
Search for "Caramelo" in the Extensions panel, or install from the Marketplace page.
```shell
code --install-extension caramelo-0.0.8.vsix
```

To build from source:

```shell
git clone https://github.com/fsilvaortiz/caramelo.git
cd caramelo
npm install
npm run build
```

Press F5 in VS Code to launch the Extension Development Host.
1. Add a provider — Expand the Providers section. Click a preset (Ollama, Claude, OpenAI, Gemini, Groq, LM Studio, Copilot, Jira). Enter credentials if needed — models are fetched from the API or entered manually. For corporate proxies, expand "Custom auth header" to set the header name and prefix.
2. Set up your constitution — Click the Constitution bar in the Workflow panel. Describe your project and click "Generate with AI" to let the LLM suggest principles, or fill it in manually.
3. Create a spec — Expand "New Spec" in the Workflow panel, enter a name and description, and click "Create". Or click "From Jira" to import an issue.
4. Generate phases — Click "Generate" on each phase (Requirements → Design → Tasks). Watch the document stream in real time. Review and approve each phase before the next unlocks.
5. Execute tasks — Click "Implement" on the Tasks phase, or open `tasks.md` and use the "Run Task" / "Run All Tasks" buttons. Watch LLM reasoning in the Output Channel.
6. Quality checks — Use the Caramelo menu (cat icon in the editor toolbar) to Clarify ambiguities, Analyze consistency, Fix issues, or Generate checklists.
| Provider | Endpoint | Auth |
|---|---|---|
| GitHub Copilot | Via VS Code API | Copilot subscription |
| Ollama | http://localhost:11434/v1 | None |
| Claude | https://api.anthropic.com | API key |
| OpenAI | https://api.openai.com/v1 | API key |
| Gemini | https://generativelanguage.googleapis.com/v1beta/openai | API key |
| Groq | https://api.groq.com/openai/v1 | API key |
| LM Studio | http://localhost:1234/v1 | None |
| Custom | Any OpenAI-compatible endpoint | Optional |
| Provider | Details | Auth |
|---|---|---|
| Jira Cloud | Any Atlassian Cloud instance | Email + API token |
Expand the Providers section, click a preset button, enter credentials. Models are fetched automatically from the provider's API when available, or enter the model name manually. All editing (name, model, auth headers) is done inline in the sidebar — click any field to edit it.
```
Constitution (project principles)
│
├──→ [Feature 1]
│     ├── Requirements (spec.md)
│     ├── Design (plan.md + research.md + data-model.md + contracts/)
│     ├── Tasks (tasks.md)
│     └── Implementation (task execution)
│
└──→ [Feature 2]
      └── ...
```
Each phase must be approved before the next unlocks:
- Generate — LLM creates the document using templates + constitution + prior phases
- Approve — mark as complete, unlock the next phase
- Regenerate — re-run (marks downstream phases as stale)
- Edit manually — modify before approving
| File | Phase | Description |
|---|---|---|
| research.md | Design | Technical decisions with rationale |
| data-model.md | Design | Entities, attributes, relationships |
| contracts/ | Design | Interface definitions |
| analysis.md | Tasks | Consistency check findings |
| checklists/*.md | Any | Quality verification items |
| jira-context.md | Requirements | Imported Jira issue content |
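Putting the table together, the on-disk layout of a single spec might look like the sketch below. The numbered directory name is an assumption about naming, and which auxiliary files exist depends on the phases run:

```
specs/
└── 001-my-feature/
    ├── jira-context.md   (only for specs imported from Jira)
    ├── spec.md
    ├── plan.md
    ├── research.md
    ├── data-model.md
    ├── contracts/
    ├── tasks.md
    ├── analysis.md
    └── checklists/
```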
VS Code settings (`settings.json`):

```json
{
  "caramelo.providers": [
    {
      "id": "ollama",
      "name": "Ollama",
      "type": "openai-compatible",
      "endpoint": "http://localhost:11434/v1",
      "model": "llama3"
    }
  ],
  "caramelo.activeProvider": "ollama"
}
```

API keys and Jira tokens are stored securely in VS Code's SecretStorage, never in settings files.
See CONTRIBUTING.md for guidelines.