Configuration
Agentcy supports four layers of configuration, applied in order of priority:
- CLI flags (highest priority)
- Environment variables
- TOML config file
- Built-in defaults (lowest priority)
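The layering can be sketched with ordinary shell parameter expansion. All values here are illustrative, using the bind address as the example setting:

```shell
# Minimal sketch of the layering: take the first value that is set,
# from highest priority (CLI flag) down to lowest (built-in default).
cli_bind=""                   # from --bind; not given in this example
env_bind=""                   # from BIND_ADDR; not set in this example
toml_bind="0.0.0.0:8080"      # read from config.toml
default_bind="0.0.0.0:18080"  # built-in default

# ${a:-b} falls through to b when a is empty, mirroring the priority chain.
effective="${cli_bind:-${env_bind:-${toml_bind:-$default_bind}}}"
echo "$effective"
```

With no CLI flag and no environment variable set, the TOML value wins, which is exactly the order listed above.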
Config File
On first run, Agentcy auto-generates ~/.agentcy/config.toml with commented defaults.
Search order: --config flag / AGENTCY_CONFIG env → ./agentcy.toml → ~/.agentcy/config.toml → auto-generate + use defaults.
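That search order amounts to a first-match scan over the candidate paths; a sketch in shell, using the same paths as above:

```shell
# Report the first config file that exists, in the documented search order;
# fall back to built-in defaults when none is found.
config=""
for candidate in "${AGENTCY_CONFIG:-}" ./agentcy.toml "$HOME/.agentcy/config.toml"; do
  [ -n "$candidate" ] && [ -f "$candidate" ] && { config="$candidate"; break; }
done
if [ -n "$config" ]; then
  echo "using $config"
else
  echo "no config file found; built-in defaults apply"
fi
```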
```toml
[server]
bind_addr = "0.0.0.0:8080"
log_level = "agentcy_api=debug,tower_http=debug"

[database]
postgres_url = "postgres://postgres:password@localhost:5432/kgp"
neo4j_uri = "bolt://localhost:7687"
neo4j_username = "neo4j"
neo4j_password = "password"
redis_url = "redis://localhost:6379"

[llm]
provider = "openai"
model = "gpt-4o"
base_url = "https://api.openai.com/v1"
# api_key = "" # prefer env vars for secrets

[auth]
provider = "local"
jwt_secret = "agentcy-dev-secret-change-in-production"
jwt_expiry_secs = 86400

[orchestrator]
url = ""
# api_key = ""

[features]
orchestrator = true
rag = true
workers = true
policies = true
whatsapp = true
connectors_cloud = true
connectors_data = true
connectors_dev = true
```

CLI Flags
```text
agentcy-api [OPTIONS]

  -c, --config <PATH>       Config file path [env: AGENTCY_CONFIG]
      --bind <ADDR>         Bind address override
      --log-level <FILTER>  Log level override
      --disable <FEATURE>   Disable feature (repeatable)
      --init-config         Write default config to stdout and exit
      --show-config         Print resolved config and exit
```

Examples:
```bash
# Print default config template
agentcy-api --init-config

# Show resolved config (after env/CLI overrides)
agentcy-api --show-config

# Disable specific features
agentcy-api --disable rag --disable whatsapp

# Use a specific config file
agentcy-api --config /etc/agentcy/config.toml
```

Feature Flags
Features can be toggled in three ways:
| Feature | Cargo Feature | TOML key | --disable value |
|---|---|---|---|
| Orchestrator (SubAgents) | (always compiled) | features.orchestrator | orchestrator |
| RAG pipeline | (always compiled) | features.rag | rag |
| Workers | (always compiled) | features.workers | workers |
| Zero Trust policies | (always compiled) | features.policies | policies |
| WhatsApp | (always compiled) | features.whatsapp | whatsapp |
| Cloud connectors (AWS, GCP, Vercel, Supabase, LocalStack) | connectors-cloud | features.connectors_cloud | connectors-cloud |
| Data connectors (SQL, MongoDB, PowerBI) | connectors-data | features.connectors_data | connectors-data |
| Dev connectors (GitHub, K8s, OpenAPI, MCP, Remote Exec) | connectors-dev | features.connectors_dev | connectors-dev |
Compile-time features (Cargo features) control which connector crates are linked. Runtime features control whether the routes are mounted and services initialized. A feature disabled at compile-time is automatically disabled at runtime.
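As an example of the runtime side, a config fragment like this (keys from the TOML column above) turns features off without rebuilding; the corresponding crates stay compiled in, but their routes are never mounted:

```toml
# Runtime-only toggles: no rebuild required.
[features]
rag = false
whatsapp = false
connectors_data = false
```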
Minimal Build
```bash
cargo build -p agentcy-api --no-default-features
```

This excludes all optional connector groups, producing a smaller binary.
Environment Variables Reference
LLM Provider
| Variable | Default | Description |
|---|---|---|
| LLM_PROVIDER | anthropic | LLM provider to use. Options: anthropic, openai, ollama, vllm, llama-cpp, lmstudio, vercel-ai-gateway |
| LLM_MODEL | claude-sonnet-4-20250514 | Model identifier for the selected provider |
| LLM_BASE_URL | (provider default) | Override the API endpoint URL. Required for vllm, llama-cpp, lmstudio; optional for others |
Using Local Models
For Ollama, set LLM_PROVIDER=ollama and LLM_MODEL to any model you have pulled (e.g., llama3, mistral, qwen2.5). No API key is needed. The default base URL is http://localhost:11434.
API Keys
| Variable | Default | Description |
|---|---|---|
| ANTHROPIC_API_KEY | — | Anthropic API key (required when LLM_PROVIDER=anthropic) |
| OPENAI_API_KEY | — | OpenAI API key (required when LLM_PROVIDER=openai) |
| AI_GATEWAY_API_KEY | — | API key for the vercel-ai-gateway provider |
At least one API key is required for AI chat functionality. If you are using a local model (Ollama, vLLM, llama.cpp, LM Studio), no API key is needed.
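A startup guard along these lines can enforce that requirement early. The provider and variable names come from the tables above; ollama stands in as the example selection:

```shell
# Fail fast when the selected provider needs an API key that is not set.
# Local providers (ollama, vllm, llama-cpp, lmstudio) need none.
LLM_PROVIDER=ollama
case "$LLM_PROVIDER" in
  anthropic)         : "${ANTHROPIC_API_KEY:?required when LLM_PROVIDER=anthropic}" ;;
  openai)            : "${OPENAI_API_KEY:?required when LLM_PROVIDER=openai}" ;;
  vercel-ai-gateway) : "${AI_GATEWAY_API_KEY:?required when LLM_PROVIDER=vercel-ai-gateway}" ;;
  *)                 echo "provider $LLM_PROVIDER: no API key required" ;;
esac
```

The `${VAR:?message}` expansion aborts the script with the given message when the variable is unset or empty, so a misconfigured environment fails at startup rather than on the first chat request.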
Database — PostgreSQL
| Variable | Default | Description |
|---|---|---|
| DATABASE_URL | postgres://postgres:password@localhost:5432/kgp | PostgreSQL connection string |
Port for Local Development
When running the backend outside Docker with make dev-backend, use port 15432 (the Docker-mapped host port):
```bash
DATABASE_URL=postgres://postgres:password@localhost:15432/kgp
```

When running inside Docker, use the internal port 5432 with the service hostname:

```bash
DATABASE_URL=postgres://postgres:password@postgres:5432/kgp
```

Graph Database — Neo4j
| Variable | Default | Description |
|---|---|---|
| NEO4J_URI | bolt://localhost:7687 | Neo4j Bolt connection URI |
| NEO4J_USERNAME | neo4j | Neo4j username |
| NEO4J_PASSWORD | password | Neo4j password |
For local development with Docker Compose, use bolt://localhost:17687.
Cache — Redis
| Variable | Default | Description |
|---|---|---|
| REDIS_URL | redis://localhost:6379 | Redis connection URL |
For local development with Docker Compose, use redis://localhost:16379.
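Before starting the backend, the three Compose-mapped ports above (15432, 17687, 16379) can be sanity-checked with a TCP probe. This sketch relies on bash's /dev/tcp redirection, so under shells without that feature every port is simply reported as unreachable:

```shell
# Report which Docker-mapped dev ports are reachable on this machine.
checked=0
for hostport in localhost:15432 localhost:17687 localhost:16379; do
  host=${hostport%:*}
  port=${hostport#*:}
  # Opening fd 3 on /dev/tcp/<host>/<port> succeeds only if something listens.
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    echo "$hostport reachable"
  else
    echo "$hostport NOT reachable"
  fi
  checked=$((checked + 1))
done
```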
Server
| Variable | Default | Description |
|---|---|---|
| BIND_ADDR | 0.0.0.0:18080 | Address and port the backend API listens on |
| DEV_MODE | false | When true, bypasses JWT authentication and uses a default tenant. Never enable in production |
| RUST_LOG | info | Log level filter. Recommended for development: agentcy_api=debug,agentcy_chat=debug,tower_http=info |
Frontend
| Variable | Default | Description |
|---|---|---|
| NEXT_PUBLIC_API_URL | http://localhost:18080 | Backend API URL used by the frontend. This is a build-time variable (prefixed with NEXT_PUBLIC_) |
Authentication
| Variable | Default | Description |
|---|---|---|
| AUTH_PROVIDER | local | Authentication provider. Options: local, oidc |
| JWT_SECRET | — | Secret key for signing JWTs (required when AUTH_PROVIDER=local). Use a long random string in production |
| JWT_EXPIRY_SECS | 86400 | JWT token lifetime in seconds (default: 24 hours) |
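For AUTH_PROVIDER=local, a sufficiently strong secret can be generated with openssl:

```shell
# Generate a 256-bit random secret, hex-encoded (64 characters).
JWT_SECRET="$(openssl rand -hex 32)"
echo "${#JWT_SECRET}"
```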
OIDC Authentication
These variables are required when AUTH_PROVIDER=oidc:
| Variable | Default | Description |
|---|---|---|
| JWKS_URL | — | URL to the OIDC provider's JWKS endpoint (e.g., https://your-tenant.auth0.com/.well-known/jwks.json) |
| JWT_AUDIENCE | — | Expected aud claim in the JWT (e.g., https://api.agentcy.dev) |
| JWT_ISSUER | — | Expected iss claim in the JWT (e.g., https://your-tenant.auth0.com/) |
See the Authentication guide for detailed setup instructions for Auth0, Supabase, and Keycloak.
Sub-Agents / OpenFang
| Variable | Default | Description |
|---|---|---|
| OPENFANG_URL | — | URL of the OpenFang sidecar (e.g., http://localhost:4200). Leave unset to disable orchestration features |
| OPENFANG_API_KEY | — | API key for authenticating with OpenFang (optional) |
GitHub OAuth
| Variable | Default | Description |
|---|---|---|
| GITHUB_OAUTH_CLIENT_ID | — | GitHub OAuth App client ID (for the GitHub OAuth connector type) |
| GITHUB_OAUTH_CLIENT_SECRET | — | GitHub OAuth App client secret |
| API_BASE_URL | http://localhost:18080 | Public URL of the Agentcy backend (used for OAuth callback URLs) |
Provider Configuration Examples
Anthropic (Default)
```bash
LLM_PROVIDER=anthropic
LLM_MODEL=claude-sonnet-4-20250514
ANTHROPIC_API_KEY=sk-ant-your-key-here
```

OpenAI
```bash
LLM_PROVIDER=openai
LLM_MODEL=gpt-4o
OPENAI_API_KEY=sk-your-key-here
```

Ollama (Local)
```bash
LLM_PROVIDER=ollama
LLM_MODEL=llama3
# No API key needed
# LLM_BASE_URL=http://localhost:11434 # default, override if needed
```

vLLM (Self-Hosted)
```bash
LLM_PROVIDER=vllm
LLM_MODEL=meta-llama/Meta-Llama-3-8B-Instruct
LLM_BASE_URL=http://your-vllm-server:8000
```

LM Studio
```bash
LLM_PROVIDER=lmstudio
LLM_MODEL=your-loaded-model
LLM_BASE_URL=http://localhost:1234
```

llama.cpp
```bash
LLM_PROVIDER=llama-cpp
LLM_MODEL=default
LLM_BASE_URL=http://localhost:8080
```

Vercel AI Gateway
```bash
LLM_PROVIDER=vercel-ai-gateway
LLM_MODEL=anthropic/claude-sonnet-4-20250514
AI_GATEWAY_API_KEY=your-gateway-key
```

Development vs Production
Development Configuration
A minimal .env for local development:
```bash
# LLM
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-your-key

# Infrastructure (Docker Compose ports)
DATABASE_URL=postgres://postgres:password@localhost:15432/kgp
NEO4J_URI=bolt://localhost:17687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=password
REDIS_URL=redis://localhost:16379

# Server
BIND_ADDR=0.0.0.0:18080
DEV_MODE=true
RUST_LOG=agentcy_api=debug,agentcy_chat=debug,tower_http=info

# Frontend
NEXT_PUBLIC_API_URL=http://localhost:18080

# Auth (local dev)
AUTH_PROVIDER=local
JWT_SECRET=dev-secret-not-for-production
```

Production Configuration
Key differences for production:
```bash
# Disable dev mode — enforce JWT auth
DEV_MODE=false

# Strong JWT secret (generate with: openssl rand -hex 32)
JWT_SECRET=a1b2c3d4e5f6...long-random-string

# Production database credentials
DATABASE_URL=postgres://agentcy:strong-password@db.example.com:5432/agentcy
NEO4J_URI=bolt://neo4j.example.com:7687
NEO4J_PASSWORD=strong-neo4j-password
REDIS_URL=redis://:strong-password@redis.example.com:6379

# OIDC auth recommended for production
AUTH_PROVIDER=oidc
JWKS_URL=https://your-tenant.auth0.com/.well-known/jwks.json
JWT_AUDIENCE=https://api.agentcy.dev
JWT_ISSUER=https://your-tenant.auth0.com/

# Production logging
RUST_LOG=agentcy_api=info,agentcy_chat=info,tower_http=warn

# Frontend pointing to production API
NEXT_PUBLIC_API_URL=https://api.agentcy.dev
```

Security Checklist
Before deploying to production, ensure:
- DEV_MODE is false
- JWT_SECRET is a long, random string (not the default)
- Database passwords are strong and unique
- Default admin credentials have been changed
- OIDC authentication is configured for team use
- RUST_LOG is set to info or warn (not debug)
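The environment-visible items on this checklist can be verified mechanically before deploying. In this sketch the three values are hard-coded stand-ins for a real deployment environment:

```shell
# Stand-in production values; in practice these come from the deploy env.
DEV_MODE=false
JWT_SECRET="0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"
RUST_LOG="agentcy_api=info,agentcy_chat=info,tower_http=warn"

fail=0
# Dev mode must be off so JWT auth is enforced.
[ "$DEV_MODE" = "false" ] || { echo "DEV_MODE must be false"; fail=1; }
# Require a reasonably long secret (32+ characters here; tune to taste).
[ "${#JWT_SECRET}" -ge 32 ] || { echo "JWT_SECRET is too short"; fail=1; }
# Debug logging should not reach production.
case "$RUST_LOG" in
  *debug*) echo "RUST_LOG should not include debug"; fail=1 ;;
esac
[ "$fail" -eq 0 ] && echo "environment checks passed"
```

Items such as rotated admin credentials and OIDC setup still need a manual review; only the environment variables are checkable this way.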
Application Settings
In addition to environment variables, some settings are configurable at runtime through the Agentcy UI under Settings:
| Setting | Section | Description |
|---|---|---|
| LLM Provider | General | Switch between configured LLM providers |
| LLM Model | General | Change the active model |
| Approval timeout | Security | How long to wait for user approval (30-3600 seconds, default: 300) |
| Zero-trust enabled | Security | Toggle policy enforcement on/off |
| Organization name | General | Display name for the organization |
These settings are stored in PostgreSQL and apply to the entire organization.
Next Steps
- Authentication — detailed auth provider setup
- Docker Quick Start — infrastructure setup
- Deployment Overview — production deployment guides