# What is Agentcy?
Agentcy is an AI-powered agent orchestration platform that connects to your infrastructure, builds a knowledge graph of everything, and gives you a conversational AI that can see and act across all your systems — from a single interface.
## The Problem
Modern engineering teams rely on dozens of tools: GitHub for code, AWS for compute, Kubernetes for orchestration, PostgreSQL for data, Vercel for deploys, and on and on. Each tool has its own dashboard, its own mental model, and its own API. When an incident fires at 3 AM, you flip between 10 tabs trying to piece together what went wrong. When a new engineer joins, they spend weeks just mapping out how things connect.
There is no single place to see, search, and act on your entire stack.
## The Solution
Agentcy ingests data from all your systems into a unified knowledge graph (powered by Neo4j), then exposes an AI agent that can:
- Search across repositories, cloud resources, database schemas, deployments, and teams in one query
- Visualize the relationships between services, repos, clusters, and infrastructure
- Execute live actions on any connected system through real-time connector tools
- Orchestrate multi-step workflows with sub-agents, triggers, and schedules
- Enforce zero-trust policies on every action with OPA/Rego and full audit logging
## Architecture at a Glance
```
┌───────────────────────────────────────────────────────────┐
│                   Frontend (Next.js 16)                   │
│       Chat · Graph Explorer · Connectors · Workflows      │
└──────────────────────────┬────────────────────────────────┘
                           │ REST / SSE
┌──────────────────────────▼────────────────────────────────┐
│                 Backend API (Rust / Axum)                 │
│      agentcy-chat · agentcy-rag · agentcy-ingest ·        │
│            agentcy-auth · agentcy-policy                  │
├───────────┬───────────┬───────────┬───────────────────────┤
│ PostgreSQL│   Neo4j   │   Redis   │  OpenFang (optional)  │
│   (SQL)   │  (Graph)  │  (Queue)  │     (Sub-agents)      │
└───────────┴───────────┴───────────┴───────────────────────┘
```

## Key Capabilities
### Universal Data Connectors
Connect to 15+ data sources, each implementing a dual-trait architecture: `IngestionSource` for ETL into the knowledge graph, and `ConnectorToolProvider` for live, real-time tool execution by the AI agent.
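In Rust terms, the dual-trait shape might look like the following sketch. All names here (`IngestionSource`, `ConnectorToolProvider`, `GraphNode`, `ToolSpec`, the toy connector) follow the prose above but are hypothetical; the real traits are likely async and carry richer types.

```rust
// Hypothetical sketch of the dual-trait connector architecture.
struct GraphNode {
    label: String,
    properties: Vec<(String, String)>,
}

struct ToolSpec {
    name: String,
    description: String,
}

/// ETL side: pull entities from an external system into the knowledge graph.
trait IngestionSource {
    fn fetch_nodes(&self) -> Vec<GraphNode>;
}

/// Live side: expose callable tools to the AI agent at chat time.
trait ConnectorToolProvider {
    fn tools(&self) -> Vec<ToolSpec>;
    fn invoke(&self, tool: &str, arg: &str) -> Result<String, String>;
}

/// A toy connector implementing both traits, mirroring how a real
/// GitHub or Kubernetes connector serves both ingestion and chat.
struct ToyConnector;

impl IngestionSource for ToyConnector {
    fn fetch_nodes(&self) -> Vec<GraphNode> {
        vec![GraphNode {
            label: "Repository".into(),
            properties: vec![("name".into(), "example-repo".into())],
        }]
    }
}

impl ConnectorToolProvider for ToyConnector {
    fn tools(&self) -> Vec<ToolSpec> {
        vec![ToolSpec {
            name: "list_repos".into(),
            description: "List repositories".into(),
        }]
    }

    fn invoke(&self, tool: &str, _arg: &str) -> Result<String, String> {
        match tool {
            "list_repos" => Ok("example-repo".into()),
            other => Err(format!("unknown tool: {other}")),
        }
    }
}
```

One type, two roles: the ingest pipeline batches `fetch_nodes` output into Neo4j, while the chat loop calls `invoke` on demand.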
| Category | Connectors | Live Tools |
|---|---|---|
| Code & DevOps | GitHub (PAT, OAuth, App) | 12 tools (repos, PRs, issues, commits, files, actions) |
| Cloud | AWS, GCP | 8 tools each (EC2, S3, Lambda, IAM / GCE, GCS, GKE, IAM) |
| Platform | Vercel, Supabase | 8-10 tools each (deployments, projects, domains / tables, auth, storage) |
| Databases | PostgreSQL, MySQL, SQL Server, MongoDB | 8-10 tools each (query, schema, stats, indexes) |
| Infrastructure | Kubernetes | 10 tools (pods, deployments, services, logs, namespaces) |
| API | OpenAPI, MCP (Model Context Protocol) | Dynamic (auto-discovered from spec / server) |
| Files | CSV, JSON | Schema inference, bulk import |
| Execution | Remote Execution | Command execution on remote hosts |
| BI | Power BI | Dataset queries, report metadata |
### Knowledge Graph
All ingested data lives in a Neo4j 5 graph database with:
- Full-text search across all nodes and properties
- Vector embeddings for semantic search via `fastembed` (local, no API calls)
- Interactive visualization — explore your infrastructure graph with a React Flow canvas
- Cypher queries — run arbitrary graph queries from the chat or graph explorer
- Multi-tenant isolation — label-based scoping ensures data separation between organizations
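To make the label-based isolation idea concrete, here is an illustrative query builder that scopes every match to one organization's tenant label. The `Tenant_<org>` label scheme and the naive quote escaping are assumptions for the sketch, not Agentcy's actual convention.

```rust
/// Build a Cypher query constrained to nodes carrying the org's tenant
/// label, so one organization's search can never traverse another's
/// subgraph. Illustrative only.
fn scoped_search(org_label: &str, term: &str) -> String {
    format!(
        "MATCH (n:`{org}`) WHERE n.name CONTAINS '{term}' RETURN n LIMIT 25",
        org = org_label,
        term = term.replace('\'', "\\'"), // naive escaping for the sketch
    )
}
```

In practice a production system would pass `term` as a query parameter rather than interpolate it, but the scoping-by-label shape is the same.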
### AI Chat with Tool Calling
A streaming conversational AI with access to every connected system:
- Graph tools — search nodes, traverse relationships, run Cypher queries
- Connector tools — execute live API calls to GitHub, AWS, K8s, databases, and more
- Tool catalog — meta-tools that let the LLM discover and invoke connector tools on demand, instead of loading all tools upfront
- Approval flow — the agent requests permission before executing sensitive operations, with configurable timeouts
- Policy enforcement — every tool invocation is checked against zero-trust policies before execution
- Streaming SSE — real-time token streaming with reasoning blocks, tool calls, and status updates
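A client consumes this kind of stream by splitting the SSE body into `event:` / `data:` blocks. The sketch below shows the generic parsing shape; the event names (`token`, `tool_call`, `status`) are illustrative, not Agentcy's actual wire protocol.

```rust
// Minimal SSE parser: each event is a block of `event:` / `data:` lines
// terminated by a blank line, per the Server-Sent Events format.
#[derive(Debug, PartialEq)]
struct SseEvent {
    event: String,
    data: String,
}

fn parse_sse(body: &str) -> Vec<SseEvent> {
    let mut events = Vec::new();
    let (mut event, mut data) = (String::new(), Vec::<String>::new());
    for line in body.lines() {
        if let Some(name) = line.strip_prefix("event:") {
            event = name.trim().to_string();
        } else if let Some(chunk) = line.strip_prefix("data:") {
            data.push(chunk.trim_start().to_string());
        } else if line.is_empty() && !(event.is_empty() && data.is_empty()) {
            // Blank line ends the current event block.
            events.push(SseEvent {
                event: std::mem::take(&mut event),
                data: data.join("\n"),
            });
            data.clear();
        }
    }
    events
}
```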
Supports multiple LLM providers:
| Provider | Models | Notes |
|---|---|---|
| Anthropic | Claude Sonnet 4, Claude Opus 4 | Recommended. Best tool-calling performance |
| OpenAI | GPT-4o, GPT-4.1 | Full support including streaming |
| Ollama | Llama 3, Mistral, Qwen, etc. | Local inference, no API key required |
| vLLM | Any supported model | Self-hosted, OpenAI-compatible |
| LM Studio | Any GGUF model | Desktop local inference |
| llama.cpp | Any GGUF model | Lightweight local inference |
| Vercel AI Gateway | Multiple providers | Unified gateway |
### Sub-Agent Orchestration
Build complex automation with the OpenFang sidecar:
- Visual workflow editor — drag-and-drop canvas with conditional branching, loops, and parallel execution
- Sub-agent spawning — create specialized agents with isolated tool sets and contexts
- Triggers — cron schedules, webhooks, and lifecycle events
- Gateway management — route requests to different AI providers with load balancing
- Templates — pre-built workflow templates for common automation patterns
### Zero-Trust Security
Enterprise-grade policy enforcement powered by OPA/Rego:
- In-process policy engine — uses `regorus` (Rego in Rust); no OPA sidecar needed
- Default-allow model — policies define `deny[msg]` rules; any denial blocks the action
- 15 granular permissions — from `connectors:read` to `policies:manage`
- Role-based access control — define custom roles mapping to permission sets
- Audit logging — every policy evaluation recorded with full context
- Policy sources — sync policies from GitHub repos or HTTP endpoints
- Policy simulator — test rules against sample inputs before deployment
- API and tool enforcement — middleware checks on every HTTP request, plus pre-execution checks in the agent loop
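Under this deny-rule model, a policy might look like the following sketch. The package name, the `input` fields, and the `infra:write` permission are all assumptions for illustration, not Agentcy's actual policy schema.

```rego
package agentcy.authz

# Hypothetical rule: block a destructive Kubernetes tool unless the
# caller holds an assumed "infra:write" permission.
deny[msg] {
    input.tool == "k8s_delete_pod"
    not has_infra_write
    msg := "k8s_delete_pod requires the infra:write permission"
}

has_infra_write {
    input.permissions[_] == "infra:write"
}
```

If the `deny` set comes back non-empty for a tool invocation, the agent loop refuses to execute it and records the messages in the audit log.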
### Distributed Workers
Run long-running jobs and remote execution across your infrastructure:
- Redis-based job queue — reliable task distribution with at-least-once delivery
- Heartbeat monitoring — track worker health and auto-reassign stalled jobs
- Remote execution — run commands on remote hosts through registered workers
- Horizontal scaling — add workers to handle increased load
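The requeue decision behind heartbeat monitoring reduces to a timestamp comparison. This is a conceptual sketch only — the real implementation sits on Redis, and the 30-second threshold and data shapes are assumptions:

```rust
use std::collections::HashMap;

const STALL_AFTER_SECS: u64 = 30; // assumed stall threshold

/// Given job assignments and seconds since each worker's last heartbeat,
/// return the jobs whose workers have gone silent and must be requeued.
/// A worker with no heartbeat at all is treated as stalled.
fn stalled_jobs(
    assignments: &HashMap<&str, &str>, // job id -> worker id
    last_seen: &HashMap<&str, u64>,    // worker id -> secs since heartbeat
) -> Vec<String> {
    let mut stalled: Vec<String> = assignments
        .iter()
        .filter(|(_, worker)| {
            last_seen
                .get(**worker)
                .map_or(true, |age| *age > STALL_AFTER_SECS)
        })
        .map(|(job, _)| job.to_string())
        .collect();
    stalled.sort(); // deterministic order for the sketch
    stalled
}
```

With at-least-once delivery, a requeued job may run twice if the worker was merely slow, so handlers are expected to be idempotent.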
## Deployment Options
Agentcy runs wherever you need it:
| Option | Best For | Setup Time |
|---|---|---|
| Docker Compose | Development, small teams | 5 minutes |
| Desktop App (macOS) | Individual use, demos | 2 minutes |
| Railway | Quick cloud deployment | 10 minutes |
| Fly.io | Edge deployment | 10 minutes |
| AWS (ECS/EKS) | Production, enterprise | 30 minutes |
| Kubernetes / Helm | Large-scale, self-hosted | 30 minutes |
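For the Docker Compose path, the stack in the Technology Stack table below suggests a layout roughly like this. Service names, build paths, and the omitted environment variables are hypothetical; the repo's own `docker-compose.yml` is authoritative.

```yaml
# Hypothetical minimal sketch of the Agentcy stack under Compose.
services:
  postgres:
    image: postgres:16
  neo4j:
    image: neo4j:5
  redis:
    image: redis:7
  api:
    build: ./backend        # Rust / Axum API
    depends_on: [postgres, neo4j, redis]
  web:
    build: ./frontend       # Next.js UI
    depends_on: [api]
```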
## Feature Comparison
How Agentcy compares to alternative approaches:
| Capability | Agentcy | Backstage | Port | Kubecost | Custom Scripts |
|---|---|---|---|---|---|
| Knowledge graph | Yes (Neo4j) | No | No | No | No |
| AI chat with tool calling | Yes (multi-LLM) | No | Limited | No | No |
| Live connector tools | 15+ sources, 100+ tools | Plugins (read-only) | Integrations | K8s only | Manual |
| Visual graph explorer | Yes | No | No | No | No |
| Sub-agent orchestration | Yes (OpenFang) | No | No | No | No |
| Zero-trust policies | OPA/Rego, 15 permissions | Basic RBAC | RBAC | N/A | N/A |
| Semantic search (RAG) | Local embeddings | No | No | No | No |
| Desktop app | Tauri v2 (macOS) | No | No | No | No |
| Self-hosted LLM support | Ollama, vLLM, llama.cpp | N/A | No | N/A | N/A |
| Deployment model | Self-hosted or Agentcy Cloud (PaaS) | Self-hosted | SaaS | Self-hosted | DIY |
## Technology Stack
| Layer | Technology |
|---|---|
| Backend | Rust, Axum, Tokio, sqlx |
| Frontend | Next.js 16, React 19, Tailwind CSS v4, shadcn/ui |
| Graph DB | Neo4j 5 (with APOC) |
| SQL DB | PostgreSQL 16 |
| Cache / Queue | Redis 7 |
| Desktop | Tauri v2 (macOS, WKWebView) |
| AI | Claude, GPT-4o, Ollama, vLLM, LM Studio, llama.cpp |
| Policies | OPA/Rego via regorus |
| Embeddings | fastembed (local, no API) |
| Orchestration | OpenFang (optional sidecar) |
## Who Is It For?
- Platform Engineering teams managing complex multi-cloud infrastructure
- DevOps / SRE teams needing unified visibility and incident response across tools
- Security teams enforcing access policies and auditing actions across systems
- Developers building AI-powered internal tools and automations
- Data Engineers exploring relationships across databases, APIs, and services
- Startup CTOs who want one tool instead of ten dashboards
## Next Steps
- Getting Started — set up Agentcy in under 5 minutes
- Core Concepts — understand the architecture
- Docker Quick Start — detailed Docker Compose setup
- Desktop App — install the macOS desktop app