Testable AI workflows for the rest of us.

Most AI tooling assumes you have a platform team and a Kubernetes cluster. Heddle assumes you have a laptop — or a few of them — and a problem to solve. Local-first, privacy-first, on-prem-friendly.

The trade-offs we've made on purpose

Solo and SMB first.

If a feature needs a Kubernetes administrator to operate, it doesn't belong here. You should be able to install a CLI, edit a YAML file, and run a workflow in a browser.

Local before cloud.

The local tier (LM Studio, Ollama) is a first-class citizen, not a fallback. Three model tiers let you pick cost and privacy per step.
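Per-step tier selection can be sketched like this. Everything here is illustrative: the step names, tier labels, and `tier_for()` helper are hypothetical stand-ins, not Heddle configuration keys or API.

```python
# Hypothetical sketch of per-step tier routing; step names, tier labels,
# and tier_for() are illustrative, not Heddle's actual API.
TIER_FOR_STEP = {
    "extract": "local",       # fast and private on LM Studio/Ollama
    "summarize": "local",
    "final_review": "cloud",  # spend on quality only where it matters
}

def tier_for(step: str) -> str:
    # Local-first: anything unlisted stays on your machine.
    return TIER_FOR_STEP.get(step, "local")
```

The point of the pattern: cost and privacy are decided per step, with local as the default rather than the fallback.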

Privacy as a default.

Workloads tagged private never leave your machines. Knowledge silos and blind audit patterns prevent the same model from reviewing its own work.
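The blind-audit idea reduces to one invariant: the model that produced an answer never reviews it. A minimal sketch, assuming nothing about Heddle's internals; `pick_reviewer()` and the model names are hypothetical.

```python
# Hedged illustration of the "blind audit" invariant: the author model
# is excluded from the reviewer pool. Names are hypothetical.
def pick_reviewer(author_model: str, available: list[str]) -> str:
    candidates = [m for m in available if m != author_model]
    if not candidates:
        raise ValueError("blind audit needs a second, different model")
    return candidates[0]

reviewer = pick_reviewer("local/llama-3", ["local/llama-3", "local/qwen-2.5"])
```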

Typed contracts everywhere.

Pydantic messages and per-worker JSON Schemas are the only safety net between actors. No untyped dicts on the wire, ever.
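The shape of a typed contract looks roughly like this. `TaskResult` and its fields are invented for illustration, not Heddle's real wire format; the pattern (a Pydantic model that validates on the way in and exports a JSON Schema) is the point.

```python
# Illustrative only: TaskResult and its fields are hypothetical, not
# Heddle's wire format. Requires Pydantic v2.
from pydantic import BaseModel, ValidationError

class TaskResult(BaseModel):
    task_id: str
    worker: str
    output: str

# A well-formed message parses into a typed object...
msg = TaskResult.model_validate_json(
    '{"task_id": "t-1", "worker": "echo", "output": "hi"}'
)

# ...a malformed one is rejected before any worker sees it...
try:
    TaskResult.model_validate_json('{"task_id": "t-2"}')
    reached_worker = True
except ValidationError:
    reached_worker = False

# ...and the same model exports a JSON Schema that foreign-language
# SDKs can validate against.
schema = TaskResult.model_json_schema()
```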

Stateless workers.

Every worker resets between tasks, which buys horizontal scaling without coordination, deterministic routing, and full auditability.
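Statelessness can be shown in a few lines. `EchoWorker` and `handle()` are illustrative names, not the Heddle worker API.

```python
# Minimal sketch of the stateless-worker idea; EchoWorker and handle()
# are hypothetical names, not Heddle's worker base class.
class EchoWorker:
    def handle(self, task: str) -> str:
        scratch = [task]          # all state lives inside one call
        return scratch[-1].upper()

# Nothing persists between calls, so any replica can take any task,
# and replaying a task always gives the same answer.
replicas = [EchoWorker() for _ in range(3)]
results = [w.handle("ping") for w in replicas]
```

Because no replica holds state, the router needs no coordination: send the task anywhere, log the input, and the run is reproducible.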

Apple Silicon is a real substrate.

Personal-axis compute deserves a real control plane. Apple Silicon and hyperscaler chips solve different problems; we treat both as first-class.

The Heddle project family

heddle v0.9.2 — active development

The runtime. Python actor-mesh over NATS.

Six shipped workers, a Workshop web UI for testing, RAG pipeline, multi-agent councils, MCP gateway, Kubernetes manifests. The canonical Python framework — wire-protocol source of truth for the entire family.

heddle-sdk — in development

Foreign-language SDKs. .NET and Swift, with NATS adapters.

Lets you write Heddle processor workers in C# or Swift against the same wire protocol. Vendored JSON Schemas, transport-agnostic worker bases, runnable echo examples. Adding a JVM SDK is on the roadmap.

warp-design — design phase

Vision and ADRs for a macOS-first cluster agent.

Pre-implementation design exploration for warp — a Swift daemon that turns Heddle into an ad-hoc personal/SMB cluster orchestrator. mDNS peer discovery, capacity reporting, policy-driven scheduling, cloud-burst, longitudinal hardware advisor. The warp production repository will appear when v0 is ready.

Get started in 60 seconds

1. Install

pip install "heddle-ai[workshop]"

2. Configure

heddle setup

Auto-detects LM Studio and Ollama.

3. Try it

heddle workshop

Opens the web UI at localhost:8080.

No servers to run. No configuration files to write by hand. Pick a worker in the browser, paste any text, click Run. When you're ready to scale, Heddle adds a NATS message bus that connects your workers across machines for production use.