venslabs/vens

vens — Prioritize vulnerabilities by real risk, not just CVSS

Your scanner found 300 CVEs. Which ones actually matter? Vens takes a Trivy or Grype report, combines it with a description of your system (exposure, data sensitivity, compliance, security controls), and scores every CVE based on its real risk to you — not just its generic severity.

The output is a CycloneDX VEX file with OWASP Risk Rating scores.

vens — scanner report + SBOM + system context, scored by an LLM into a CycloneDX VEX with OWASP ratings

Why vens?

OWASP scoring (Risk = Likelihood × Impact, 0-81) reflects your system's exposure, data sensitivity, and controls — not just generic CVE severity:

| Scenario | CVSS (Generic) | OWASP (Contextual) | Why? |
| --- | --- | --- | --- |
| Generic RCE in a library whose vulnerable path is not executed | 8.8 HIGH | 10.0 LOW ⬇️ | Not reachable in your runtime |
| Info leak in a PII handler running under GDPR | 5.3 MEDIUM | 52.0 HIGH ⬆️ | PII leak + compliance impact |

Scores above are illustrative — actual scores depend on your config.yaml and the LLM model.
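To make the 0-81 scale concrete: likelihood is the average of the eight threat-agent and vulnerability factors, impact is the average of the eight technical- and business-impact factors (each factor 0-9), and risk is their product. The factor values below are made up for illustration:

```shell
# Illustrative OWASP Risk Rating arithmetic (factor values are invented).
# Likelihood = average of 8 threat-agent + vulnerability factors (0-9 each)
# Impact     = average of 8 technical + business impact factors (0-9 each)
# Risk       = Likelihood x Impact, so the scale runs 0-81.
likelihood=$(awk 'BEGIN { print (7+7+7+7+6+6+6+3)/8 }')
impact=$(awk 'BEGIN { print (7+7+7+7+7+7+7+7)/8 }')
risk=$(awk -v l="$likelihood" -v i="$impact" 'BEGIN { printf "%.3f", l * i }')
echo "likelihood=$likelihood impact=$impact risk=$risk"
```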

Installation

GitHub Action:

- uses: venslabs/vens-action@v0.1.0   # check the marketplace for the latest tag; pin by SHA in production
  with:
    version: v0.3.2                   # vens binary version
    config-file: .vens/config.yaml    # see docs/guides/configuration.md to author this file
    input-report: report.json
    sbom-serial-number: ${{ vars.SBOM_SERIAL }}
    llm-provider: openai
    llm-model: gpt-4o
    llm-api-key: ${{ secrets.OPENAI_API_KEY }}
    fail-on-severity: critical        # break the build on critical OWASP risk

See the GitHub Actions guide for the full input/output reference.

Run Vens locally:

# Go install
go install github.com/venslabs/vens/cmd/vens@latest

# Or as a Trivy plugin (https://trivy.dev/docs/latest/plugin/)
trivy plugin install github.com/venslabs/vens

Quick Example

# 1. Set up LLM (OpenAI shown — Anthropic, Google AI, or local Ollama also supported)
export OPENAI_API_KEY="sk-..."
export OPENAI_MODEL="gpt-4o"

# 2. Scan with Trivy or Grype
trivy image python:3.11-slim --format json --output report.json
# or
grype python:3.11-slim --output json --file report.json

# 3. Use your CycloneDX SBOM's serialNumber (or generate an ad-hoc UUID for this quickstart)
SBOM_SERIAL="urn:uuid:$(uuidgen | tr '[:upper:]' '[:lower:]')"

# 4. Generate contextual risk scores
vens generate --config-file config.yaml --sbom-serial-number "$SBOM_SERIAL" report.json output.vex.json

# 5. Optionally fold the OWASP ratings back into the Trivy report
vens enrich --vex output.vex.json report.json

Output is a CycloneDX VEX document; each vulnerability carries an OWASP rating:

{
  "vulnerabilities": [{
    "id": "CVE-XXXX-YYYY",
    "source": { "name": "NVD", "url": "https://nvd.nist.gov/vuln/detail/CVE-XXXX-YYYY" },
    "ratings": [{
      "method": "OWASP",
      "score": 52.0,
      "severity": "high",
      "vector": "SL:7/M:7/O:7/S:7/ED:6/EE:6/A:6/ID:3/LC:7/LI:7/LAV:7/LAC:7/FD:7/RD:7/NC:7/PV:7"
    }]
  }]
}

The per-CVE reasoning from the LLM is logged to stderr as the command runs, and is captured alongside prompts/responses when you pass --debug-dir <path>. It is intentionally not embedded in the VEX file to keep the document strictly CycloneDX-compliant.
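Because the output is plain CycloneDX JSON, it composes with standard tooling. As a sketch of post-processing with jq (the file contents and CVE ids here are invented; the filter assumes the rating shape shown above), this keeps only the high- and critical-rated findings:

```shell
# Minimal VEX-shaped sample so the filter runs stand-alone (ids invented)
cat > sample.vex.json <<'EOF'
{"vulnerabilities": [
  {"id": "CVE-0000-0001", "ratings": [{"method": "OWASP", "score": 52.0, "severity": "high"}]},
  {"id": "CVE-0000-0002", "ratings": [{"method": "OWASP", "score": 4.0, "severity": "low"}]}
]}
EOF

# Keep only ids whose OWASP rating is high or critical
jq -r '.vulnerabilities[]
       | select(any(.ratings[]; .method == "OWASP" and (.severity == "high" or .severity == "critical")))
       | .id' sample.vex.json
```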

Configuration

Create config.yaml:

project:
  name: "my-api"
  description: "Customer-facing REST API"

context:
  exposure: "internet"              # internal | private | internet
  data_sensitivity: "high"          # low | medium | high | critical
  business_criticality: "high"      # low | medium | high | critical
  compliance_requirements: ["GDPR", "SOC2"]
  controls:
    waf: true

LLM Providers:

| Provider | Environment Variable |
| --- | --- |
| OpenAI (recommended) | OPENAI_API_KEY |
| Anthropic | ANTHROPIC_API_KEY |
| Ollama (local) | OLLAMA_MODEL |
| Google AI | GOOGLE_API_KEY |

Command Reference

vens generate

Generate VEX with contextual OWASP scores:

vens generate --config-file config.yaml INPUT OUTPUT

Supported scanners:

  • Trivy - Auto-detected from JSON report format
  • Grype - Auto-detected from JSON report format

Key flags:

  • --config-file (required) - Path to config.yaml
  • --input-format - Scanner format: auto | trivy | grype (default: auto)
  • --llm - LLM provider: openai | anthropic | ollama | googleai (default: auto)
  • --llm-batch-size - CVEs per request (default: 10)
  • --debug-dir - Save prompts/responses for debugging
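The batch size is the main cost/latency knob: the number of LLM requests is the CVE count divided by --llm-batch-size, rounded up. For the 300 CVEs from the intro at the default batch size:

```shell
# ceil(300 / 10) LLM requests at the default batch size of 10
requests=$(awk 'BEGIN { n = 300; batch = 10; print int((n + batch - 1) / batch) }')
echo "$requests requests"
```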

vens enrich

Apply VEX scores to your Trivy report:

vens enrich --vex output.vex.json report.json


Contributing

Contributions are welcome. See CONTRIBUTING.md for the full contributor guide: development setup, coding standards, testing with the mock LLM, commit conventions, and the review process.

Quick start for contributors:

  • Good first issue? Look for the good first issue label.
  • Bug report? Open an issue with the exact command, your vens --version, and (if possible) redacted --debug-dir output.
  • Bigger change? Open an issue first so we can agree on scope before you write the code.
  • Security report? See SECURITY.md — please do not open public issues for vulnerabilities.

License

Apache License 2.0 - See LICENSE


Focus on what matters. Patch smarter, not harder.
