Graphon is a Python graph execution engine for agentic AI workflows.
The repository is still evolving, but it already contains a working execution engine, built-in workflow nodes, model runtime abstractions, integration protocols, and a runnable end-to-end example.
- Queue-based `GraphEngine` orchestration with event-driven execution
- Graph parsing, validation, and fluent graph building
- Shared runtime state, variable pool, and workflow execution domain models
- Built-in node implementations for common workflow patterns
- Pluggable model runtime interfaces, including a local `SlimRuntime`
- HTTP, file, tool, and human-input integration protocols
- Extensible engine layers and external command channels
Repository modules currently cover node types such as `start`, `end`, `answer`,
`llm`, `if-else`, `code`, `template-transform`, `question-classifier`,
`http-request`, `tool`, `variable-aggregator`, `variable-assigner`, `loop`,
`iteration`, `parameter-extractor`, `document-extractor`, `list-operator`, and
`human-input`.
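To give a feel for one of these node types, here is a minimal, hypothetical sketch of a template-transform step rendering against a pool of workflow variables. Graphon's real `VariablePool` and node APIs differ; only the pattern is illustrated:

```python
from string import Template

# A plain dict standing in for a workflow variable pool (assumption:
# Graphon's VariablePool is richer than this).
variable_pool = {"user_name": "Ada", "topic": "graphs"}

def template_transform(template: str, pool: dict) -> str:
    """Render a template against pooled variables, as a
    template-transform node conceptually does."""
    return Template(template).safe_substitute(pool)

rendered = template_transform(
    "Hello $user_name, let's talk about $topic.", variable_pool
)
```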
Graphon is currently easiest to evaluate from a source checkout.
- Python 3.12 or 3.13
- `uv`
- `make`

Python 3.14 is currently unsupported because `unstructured`, which backs part
of the document extraction stack, declares `Requires-Python: <3.14`.
```shell
make dev
source .venv/bin/activate
make test
```

`make dev` installs the project, syncs development dependencies, and sets up
prek Git hooks.
The repository includes a minimal runnable example at
`examples/graphon_openai_slim`.
It builds and executes this workflow:

```
start -> llm -> output
```
To run it:

```shell
make dev
source .venv/bin/activate
cd examples/graphon_openai_slim
cp .env.example .env
python3 workflow.py "Explain Graphon in one short sentence."
```

Before running the example, fill in the required values in `.env`.
The example currently expects:

- an `OPENAI_API_KEY`
- a `SLIM_PLUGIN_ID`
- a local `dify-plugin-daemon-slim` setup or equivalent Slim runtime
For the exact environment variables and runtime notes, see `examples/graphon_openai_slim/README.md`.
At a high level, Graphon usage looks like this:

- Build or load a graph and instantiate nodes into a `Graph`.
- Prepare `GraphRuntimeState` and seed the `VariablePool`.
- Configure model, file, HTTP, tool, or human-input adapters as needed.
- Run `GraphEngine` and consume emitted graph events.
- Read final outputs from runtime state.
The bundled example follows exactly that path. The execution loop is centered
on `GraphEngine.run()`:
```python
engine = GraphEngine(
    workflow_id="example-start-llm-output",
    graph=graph,
    graph_runtime_state=graph_runtime_state,
    command_channel=InMemoryChannel(),
)

for event in engine.run():
    ...
```

See `examples/graphon_openai_slim/workflow.py` for the full example, including
`SlimRuntime`, `SlimPreparedLLM`, graph construction, input seeding, and
streamed output handling.
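The event-consumption loop typically dispatches on event type. As a self-contained sketch (these dataclasses are hypothetical stand-ins, not Graphon's actual `graph_events` models):

```python
from dataclasses import dataclass

# Invented event types for illustration only.
@dataclass
class ChunkStreamed:
    text: str

@dataclass
class RunSucceeded:
    outputs: dict

def consume(events):
    """Collect streamed text chunks and the final outputs from a run."""
    streamed, final = [], None
    for event in events:
        if isinstance(event, ChunkStreamed):
            streamed.append(event.text)
        elif isinstance(event, RunSucceeded):
            final = event.outputs
    return "".join(streamed), final

text, outputs = consume([
    ChunkStreamed("Hello "),
    ChunkStreamed("world"),
    RunSucceeded({"answer": "Hello world"}),
])
```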
- `src/graphon/graph`: graph structures, parsing, validation, and builders
- `src/graphon/graph_engine`: orchestration, workers, command channels, and layers
- `src/graphon/runtime`: runtime state, read-only wrappers, and variable pool
- `src/graphon/nodes`: built-in workflow node implementations
- `src/graphon/model_runtime`: provider/model abstractions and Slim runtime
- `src/graphon/graph_events`: event models emitted during execution
- `src/graphon/http`: HTTP client abstractions and default implementation
- `src/graphon/file`: workflow file models and file runtime helpers
- `src/graphon/protocols`: public protocol re-exports for integrations
- `examples/`: runnable examples
- `tests/`: unit and integration-style coverage
- `CONTRIBUTING.md`: contributor workflow, CI, commit/PR rules
- `examples/graphon_openai_slim/README.md`: runnable example setup
- `src/graphon/model_runtime/README.md`: model runtime overview
- `src/graphon/graph_engine/layers/README.md`: engine layer extension points
- `src/graphon/graph_engine/command_channels/README.md`: local and distributed command channels
Contributor setup, tooling details, CLA notes, and commit/PR conventions live in `CONTRIBUTING.md`.
CI currently validates commit messages, pull request titles, formatting, lint,
and tests on Python 3.12 and 3.13. Python 3.14 is excluded because
`unstructured` does not yet support it.
Apache-2.0. See LICENSE.