Omniperf — Performance AI Skills and Dashboards for Omniverse, Isaac Sim, and Isaac Lab

Omniperf bundles two things for performance work across Omniverse, Isaac Sim, and Isaac Lab:

  1. Performance AI Skills — a library of Claude Code agent skills that install, benchmark, profile, and diagnose Kit-based applications. See Agent Skills below.
  2. Performance Dashboards — a static site that tracks benchmark results (FPS, GPU utilization, memory, startup times) across GPUs and historical CI runs.

Live dashboard: https://nvidia.github.io/omniperf/

Overview

The dashboards are a lightweight, zero-dependency static site driven by JSON benchmark data generated by internal CI/CD pipelines and committed to this repo. The AI skills live under .agents/skills/ and encode the install, benchmark, and profile-analysis workflows used to produce and interpret that data.

Dashboard features

  • Multi-GPU comparison — switch between GPU datasets (e.g. NVIDIA L40, RTX PRO 6000)
  • Overview tab — sparkline grid of all benchmarks grouped by workflow (Non-RL, RL Games, RSL RL)
  • Detail tab — full time-series charts, per-run summaries, and an all-benchmarks overview table
  • Search & filter — autocomplete search to quickly find specific benchmarks
  • Metric selection — explore runtime metrics (FPS, GPU/CPU utilization, memory) and startup metrics

Getting Started

Local Development

git clone https://github.com/NVIDIA/omniperf.git
cd omniperf
./serve.sh
# Open http://localhost:8765

Requires Python 3 (for the built-in HTTP server). No build step or dependencies needed.

Note: The default clone skips the benchmark preview images (~1.5 GB) to keep clones lightweight. To also download them, run git lfs pull --exclude="" after cloning.

Data Format

Benchmark data lives in docs/data/. The manifest.json file lists available GPU datasets:

{
  "generated_at": "2026-03-04T10:47:26Z",
  "datasets": [
    {
      "file": "ada_l40.json",
      "gpu_display_name": "NVIDIA L40",
      "total_runs": 228
    }
  ]
}

Each GPU data file contains an array of benchmark runs with commit metadata, runtime metrics, and startup metrics.
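As a rough illustration of that shape, a single run entry might look like the sketch below. The field names here are illustrative assumptions based on the metrics the dashboard displays (FPS, GPU/CPU utilization, memory, startup times), not the repo's actual schema:

```json
{
  "commit": "abc1234",
  "timestamp": "2026-03-04T10:47:26Z",
  "benchmark": "isaaclab_rl_games_cartpole",
  "runtime": { "fps": 512.3, "gpu_util_pct": 87.1, "gpu_mem_mb": 10240 },
  "startup": { "app_start_s": 41.2 }
}
```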

Adding New Data

  1. Generate the benchmark JSON using the omniperf tooling
  2. Place the JSON file in docs/data/
  3. Update docs/data/manifest.json to reference the new file
  4. Commit and push — GitHub Pages will automatically deploy
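Steps 2 and 3 above can be sketched as a small helper. This is a hypothetical script, not part of the omniperf tooling; it assumes each data file is a JSON array of runs (as described above) and that manifest entries use the fields shown in the manifest.json example:

```python
import json
from pathlib import Path

DATA_DIR = Path("docs/data")

def register_dataset(data_file: str, gpu_display_name: str) -> dict:
    """Add (or refresh) a GPU data file entry in docs/data/manifest.json."""
    manifest_path = DATA_DIR / "manifest.json"
    manifest = json.loads(manifest_path.read_text())

    # Assumes the data file is a JSON array of benchmark runs.
    runs = json.loads((DATA_DIR / data_file).read_text())
    entry = {
        "file": data_file,
        "gpu_display_name": gpu_display_name,
        "total_runs": len(runs),
    }

    # Replace any existing entry for the same file, then append the new one.
    manifest["datasets"] = [
        d for d in manifest["datasets"] if d["file"] != data_file
    ]
    manifest["datasets"].append(entry)
    manifest_path.write_text(json.dumps(manifest, indent=2) + "\n")
    return entry
```

After running it, commit both the new data file and the updated manifest in one change so the deployed site never references a missing file.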

Deployment

The site is deployed automatically via GitHub Actions to GitHub Pages on every push to main. The workflow:

  1. Checks out the repo (with LFS)
  2. Uploads docs/ as a Pages artifact
  3. Deploys to GitHub Pages
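A minimal workflow covering those three steps might look like the following sketch. The action names are the standard GitHub-provided Pages actions, but the repo's actual workflow file may differ in names, versions, and triggers:

```yaml
name: deploy-pages
on:
  push:
    branches: [main]
permissions:
  contents: read
  pages: write
  id-token: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: github-pages
    steps:
      - uses: actions/checkout@v4
        with:
          lfs: true                       # step 1: check out with LFS
      - uses: actions/upload-pages-artifact@v3
        with:
          path: docs                      # step 2: upload docs/ as the Pages artifact
      - uses: actions/deploy-pages@v4     # step 3: deploy to GitHub Pages
```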

To enable GitHub Pages on a fork or new repository:

  1. Go to Settings → Pages in the GitHub repo
  2. Under Source, select GitHub Actions

Requirements

  • Data files are tracked with Git LFS — install it before cloning
  • Chart.js 4.4.7 is loaded from CDN (no npm install needed)
  • Python 3 for local development server

Agent Skills

This repo ships a set of Claude Code agent skills for working with Omniverse, Isaac Sim, Isaac Lab, and their profiling tools. They encode install steps, benchmark recipes, and profile-analysis workflows validated through live testing.

Security

  • Vulnerability disclosure: See SECURITY.md

Contributing

See CONTRIBUTING.md for guidelines, including the Developer Certificate of Origin (DCO) requirement.

License

This project is licensed under the Apache License 2.0.
