
Pricing

Simple, transparent pricing

The CLI and engine are free and open-source today. Cloud tiers are fully designed and launching soon — join the waitlist to be notified at launch.

Warning: The cloud product is not yet live. The Starter, Professional, and Enterprise tiers are planned and will launch later this year. The Free tier — full CLI, DAG engine, MCP integration, and Mock provider — is available right now, with no account required. Get started on GitHub ↗

Free

$0 / month

Everything you need to evaluate ai-agencee locally. Zero API keys, zero cost.

Get started free
Tokens: n/a (mock only) · Concurrent runs: 1
  • ✓ Full CLI (ai-kit commands)
  • ✓ MCP integration (Claude Desktop + VS Code)
  • ✓ Mock provider — no API keys needed
  • ✓ DAG editor & Mermaid visualizer
  • ✓ Community support (GitHub)
  • ✓ Unlimited local runs (mock)
  • ✗ Real LLM providers (Anthropic / OpenAI)
  • ✗ Managed API keys
  • ✗ Cost dashboards
  • ✗ Audit logs
  • ✗ SLA
Cloud — Coming Soon

Starter

$29 / month

$319 billed annually (save $29)

For indie developers and freelancers who want to run real LLM workflows on their own API keys. Cloud launch coming soon — join the waitlist.

Join the waitlist
Tokens: 1M / month · Concurrent runs: 5
  • ✓ Everything in Free
  • ✓ Anthropic & OpenAI provider support (BYOK)
  • ✓ 1M tokens / month included
  • ✓ 5 concurrent agent runs
  • ✓ Basic cost tracking dashboard
  • ✓ Email support
  • ✓ 30-day audit log retention
  • ✗ Managed API keys
  • ✗ Custom agent templates
  • ✗ Private DAGs
  • ✗ Compliance exports
Most Popular — Coming Soon

Professional

$99 / month

$1,089 billed annually (save $99)

For product squads who need managed keys, compliance logs, and custom agents. Cloud launch coming soon — join the waitlist.

Join the waitlist
Tokens: 10M / month · Concurrent runs: 25
  • ✓ Everything in Starter
  • ✓ 10M tokens / month
  • ✓ 25 concurrent runs
  • ✓ Managed API keys — we handle billing & rate limits
  • ✓ Advanced cost dashboard + cost optimization tips
  • ✓ Custom agent templates (3 / month)
  • ✓ Private DAGs
  • ✓ 1-year audit log retention
  • ✓ Compliance export (CSV / JSON)
  • ✓ Priority support (< 24 h SLA)
  • ✓ 99% uptime SLA
  • ✗ Unlimited tokens
  • ✗ White-label
  • ✗ Dedicated infrastructure
  • ✗ Ollama / Bedrock / Gemini providers

Enterprise

Custom

Dedicated infrastructure, unlimited scale, and on-call engineering support for teams building AI-native products. Contact us to be a design partner.

Contact us
Tokens: Unlimited (custom tiers) · Concurrent runs: Unlimited
  • ✓ Everything in Professional
  • ✓ Unlimited tokens (custom allocation)
  • ✓ Unlimited concurrent runs
  • ✓ Dedicated infrastructure (single-tenant)
  • ✓ Custom model providers — Ollama, Bedrock, Gemini
  • ✓ White-label capabilities
  • ✓ Custom audit log retention
  • ✓ 99.9% uptime SLA
  • ✓ Dedicated account manager
  • ✓ On-site or remote onboarding
  • ✓ Custom integrations (Jira, Slack, Teams)

How token billing works

Tokens are counted across all LLM calls within a billing month and reset on your billing anniversary. The built-in model router automatically selects the cheapest model tier that satisfies each task, keeping token spend low without sacrificing output quality. The open-source CLI is always free. ↗
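As a rough illustration of the routing idea, "cheapest tier that satisfies the task" selection could look like the sketch below. The tier names, prices, and complexity scores are hypothetical examples, not ai-agencee's actual model list or implementation:

```typescript
// Hypothetical model tiers, cheapest first. Names and prices are
// illustrative only.
interface ModelTier {
  name: string;
  costPerMTok: number;   // blended USD price per 1M tokens (illustrative)
  maxComplexity: number; // highest task-complexity score this tier handles
}

const TIERS: ModelTier[] = [
  { name: "small",  costPerMTok: 0.25, maxComplexity: 1 },
  { name: "medium", costPerMTok: 3.0,  maxComplexity: 2 },
  { name: "large",  costPerMTok: 15.0, maxComplexity: 3 },
];

// Because TIERS is sorted by cost, the first tier whose capability covers
// the task is also the cheapest one that satisfies it.
function routeModel(taskComplexity: number): ModelTier {
  const tier = TIERS.find((t) => t.maxComplexity >= taskComplexity);
  if (tier === undefined) {
    throw new Error(`no tier handles complexity ${taskComplexity}`);
  }
  return tier;
}
```

Easy tasks land on the cheap tier while hard ones escalate, which is the mechanism that keeps a monthly token allocation stretching further.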

Frequently asked questions

Do I need an API key to try ai-agencee?
No. The Free tier uses the built-in Mock provider which produces realistic deterministic output at zero cost. You can run full multi-agent DAGs, trigger retries, test escalations, and integrate with your CI pipeline — all without spending anything.
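To illustrate how mock output can be both realistic and deterministic: a mock provider can derive its response entirely from the prompt, so identical prompts always produce identical output across runs. A minimal hypothetical sketch (not ai-agencee's actual Mock provider):

```typescript
import { createHash } from "node:crypto";

// Deterministic mock completion: the response is a pure function of the
// prompt, so repeated CI runs are reproducible at zero cost.
function mockComplete(prompt: string): string {
  const seed = createHash("sha256").update(prompt).digest("hex").slice(0, 8);
  return `[mock:${seed}] response for: ${prompt.slice(0, 40)}`;
}
```

Because the output is stable, downstream DAG nodes, retries, and escalation paths can be exercised in CI without flaky, nondeterministic LLM responses.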
What does "Managed API keys" mean on Professional?
We provision and rotate Anthropic/OpenAI keys on your behalf. You get predictable per-seat pricing; we absorb rate-limit complexity and per-token billing.
How is token usage counted?
We count input + output tokens across all LLM calls within a billing month. The mock provider does not consume tokens. Tokens reset on your billing anniversary date.
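The counting rule above amounts to summing input plus output tokens over every non-mock call in the billing month. A minimal sketch, with hypothetical field names:

```typescript
interface LlmCall {
  provider: "anthropic" | "openai" | "mock"; // hypothetical field names
  inputTokens: number;
  outputTokens: number;
}

// Sum input + output tokens across a month's calls; mock calls are free.
function billableTokens(calls: LlmCall[]): number {
  return calls
    .filter((c) => c.provider !== "mock")
    .reduce((sum, c) => sum + c.inputTokens + c.outputTokens, 0);
}
```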
When will the cloud product launch?
We are targeting a public cloud launch later in 2026. The architecture, pricing tiers, and feature set are finalized; we are completing the infrastructure and onboarding tooling. Join the waitlist via the contact form to be notified at launch.
Can I self-host?
Yes — the CLI, DAG engine, and MCP server are fully open source (MIT) and available today. Self-hosted deployments have no token limits or SLA. The SaaS tiers will add managed keys, persistent dashboards, and enterprise compliance features once the cloud product launches.

Ready to start?

Build your first multi-agent workflow in under 5 minutes

No API key, no credit card. Clone the repo and run pnpm demo to see DAG-supervised agents in action.