Configuration

Agentcy supports four layers of configuration, applied in order of priority:

  1. CLI flags (highest priority)
  2. Environment variables
  3. TOML config file
  4. Built-in defaults (lowest priority)
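The layering above can be sketched as a simple merge where later (higher-priority) sources override earlier ones. This is illustrative Python, not Agentcy's actual loader; the key names are hypothetical:

```python
def resolve_config(defaults, file_cfg, env_cfg, cli_cfg):
    """Merge configuration layers; later arguments take priority."""
    merged = dict(defaults)
    for layer in (file_cfg, env_cfg, cli_cfg):
        # Only set values override; unset (None) values fall through.
        merged.update({k: v for k, v in layer.items() if v is not None})
    return merged

cfg = resolve_config(
    defaults={"bind_addr": "0.0.0.0:8080", "log_level": "info"},
    file_cfg={"log_level": "agentcy_api=debug"},  # from config.toml
    env_cfg={},                                   # no env overrides set
    cli_cfg={"bind_addr": "127.0.0.1:9000"},      # --bind flag
)
# bind address comes from the CLI flag, log level from the config file
```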

Config File

On first run, Agentcy auto-generates ~/.agentcy/config.toml with commented defaults.

Search order: --config flag / AGENTCY_CONFIG env → ./agentcy.toml → ~/.agentcy/config.toml → auto-generate + use defaults.
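The search order amounts to "first existing file wins". A minimal sketch (illustrative Python, not Agentcy's implementation):

```python
import os

def find_config(cli_path=None):
    """Return the first existing config file in the documented search
    order, or None to signal 'auto-generate defaults'."""
    candidates = [
        cli_path,                                      # --config flag
        os.environ.get("AGENTCY_CONFIG"),              # env var
        "./agentcy.toml",                              # project-local file
        os.path.expanduser("~/.agentcy/config.toml"),  # user config
    ]
    for path in candidates:
        if path and os.path.isfile(path):
            return path
    return None
```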

toml
[server]
bind_addr = "0.0.0.0:8080"
log_level = "agentcy_api=debug,tower_http=debug"

[database]
postgres_url = "postgres://postgres:password@localhost:5432/kgp"
neo4j_uri = "bolt://localhost:7687"
neo4j_username = "neo4j"
neo4j_password = "password"
redis_url = "redis://localhost:6379"

[llm]
provider = "openai"
model = "gpt-4o"
base_url = "https://api.openai.com/v1"
# api_key = ""  # prefer env vars for secrets

[auth]
provider = "local"
jwt_secret = "agentcy-dev-secret-change-in-production"
jwt_expiry_secs = 86400

[orchestrator]
url = ""
# api_key = ""

[features]
orchestrator = true
rag = true
workers = true
policies = true
whatsapp = true
connectors_cloud = true
connectors_data = true
connectors_dev = true

CLI Flags

agentcy-api [OPTIONS]

  -c, --config <PATH>     Config file path [env: AGENTCY_CONFIG]
  --bind <ADDR>            Bind address override
  --log-level <FILTER>     Log level override
  --disable <FEATURE>      Disable feature (repeatable)
  --init-config            Write default config to stdout and exit
  --show-config            Print resolved config and exit

Examples:

bash
# Print default config template
agentcy-api --init-config

# Show resolved config (after env/CLI overrides)
agentcy-api --show-config

# Disable specific features
agentcy-api --disable rag --disable whatsapp

# Use a specific config file
agentcy-api --config /etc/agentcy/config.toml

Feature Flags

Features can be toggled in three ways:

| Feature | Cargo feature | TOML key | --disable value |
| --- | --- | --- | --- |
| Orchestrator (SubAgents) | (always compiled) | features.orchestrator | orchestrator |
| RAG pipeline | (always compiled) | features.rag | rag |
| Workers | (always compiled) | features.workers | workers |
| Zero Trust policies | (always compiled) | features.policies | policies |
| WhatsApp | (always compiled) | features.whatsapp | whatsapp |
| Cloud connectors (AWS, GCP, Vercel, Supabase, LocalStack) | connectors-cloud | features.connectors_cloud | connectors-cloud |
| Data connectors (SQL, MongoDB, PowerBI) | connectors-data | features.connectors_data | connectors-data |
| Dev connectors (GitHub, K8s, OpenAPI, MCP, Remote Exec) | connectors-dev | features.connectors_dev | connectors-dev |

Compile-time features (Cargo features) control which connector crates are linked. Runtime features control whether the routes are mounted and services initialized. A feature disabled at compile-time is automatically disabled at runtime.
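The "compile-time AND runtime" rule can be made concrete with a small sketch (illustrative Python with made-up flag values, not Agentcy code):

```python
# Build-time (Cargo) and runtime (TOML [features]) flags, illustrative values:
COMPILED = {"rag": True, "workers": True, "connectors_cloud": False}
RUNTIME = {"rag": True, "workers": False, "connectors_cloud": True}

def active_features(compiled, runtime):
    """Routes are mounted only for features that are both compiled in and
    enabled at runtime; a build-time exclusion cannot be re-enabled."""
    return {name for name, built in compiled.items()
            if built and runtime.get(name, True)}

# Here only "rag" ends up active: workers is disabled at runtime, and
# connectors_cloud was excluded at build time despite runtime = true.
```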

Minimal Build

bash
cargo build -p agentcy-api --no-default-features

This excludes all optional connector groups, producing a smaller binary.

Environment Variables Reference

LLM Provider

| Variable | Default | Description |
| --- | --- | --- |
| LLM_PROVIDER | anthropic | LLM provider to use. Options: anthropic, openai, ollama, vllm, llama-cpp, lmstudio, vercel-ai-gateway |
| LLM_MODEL | claude-sonnet-4-20250514 | Model identifier for the selected provider |
| LLM_BASE_URL | (provider default) | Override the API endpoint URL. Required for vllm, llama-cpp, lmstudio; optional for others |

Using Local Models

For Ollama, set LLM_PROVIDER=ollama and LLM_MODEL to any model you have pulled (e.g., llama3, mistral, qwen2.5). No API key is needed. The default base URL is http://localhost:11434.

API Keys

| Variable | Default | Description |
| --- | --- | --- |
| ANTHROPIC_API_KEY | (unset) | Anthropic API key (required when LLM_PROVIDER=anthropic) |
| OPENAI_API_KEY | (unset) | OpenAI API key (required when LLM_PROVIDER=openai) |
| AI_GATEWAY_API_KEY | (unset) | API key for the vercel-ai-gateway provider |

At least one API key is required for AI chat functionality. If you are using a local model (Ollama, vLLM, llama.cpp, LM Studio), no API key is needed.
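The provider-to-key mapping above can be expressed as a small startup check (illustrative Python, not Agentcy's validation code):

```python
# Which API-key env var (if any) each provider requires, per the tables above.
REQUIRED_KEY = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "vercel-ai-gateway": "AI_GATEWAY_API_KEY",
    # Local providers need no key:
    "ollama": None, "vllm": None, "llama-cpp": None, "lmstudio": None,
}

def missing_key(provider, env):
    """Return the name of a required-but-unset key, or None if the
    environment is sufficient for the selected provider."""
    key = REQUIRED_KEY.get(provider)
    if key and not env.get(key):
        return key
    return None
```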

Database — PostgreSQL

| Variable | Default | Description |
| --- | --- | --- |
| DATABASE_URL | postgres://postgres:password@localhost:5432/kgp | PostgreSQL connection string |

Port for Local Development

When running the backend outside Docker with make dev-backend, use port 15432 (the Docker-mapped host port):

DATABASE_URL=postgres://postgres:password@localhost:15432/kgp

When running inside Docker, use the internal port 5432 with the service hostname:

DATABASE_URL=postgres://postgres:password@postgres:5432/kgp
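The two connection strings differ only in host and port. A small helper (illustrative Python, not part of Agentcy) makes the mapping explicit:

```python
def postgres_url(inside_docker, user="postgres", password="password", db="kgp"):
    """Pick the host/port pair matching the setup described above:
    service hostname + internal port inside Docker, localhost + mapped
    host port outside."""
    host, port = ("postgres", 5432) if inside_docker else ("localhost", 15432)
    return f"postgres://{user}:{password}@{host}:{port}/{db}"
```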

Graph Database — Neo4j

| Variable | Default | Description |
| --- | --- | --- |
| NEO4J_URI | bolt://localhost:7687 | Neo4j Bolt connection URI |
| NEO4J_USERNAME | neo4j | Neo4j username |
| NEO4J_PASSWORD | password | Neo4j password |

For local development with Docker Compose, use bolt://localhost:17687.

Cache — Redis

| Variable | Default | Description |
| --- | --- | --- |
| REDIS_URL | redis://localhost:6379 | Redis connection URL |

For local development with Docker Compose, use redis://localhost:16379.

Server

| Variable | Default | Description |
| --- | --- | --- |
| BIND_ADDR | 0.0.0.0:18080 | Address and port the backend API listens on |
| DEV_MODE | false | When true, bypasses JWT authentication and uses a default tenant. Never enable in production |
| RUST_LOG | info | Log level filter. Recommended for development: agentcy_api=debug,agentcy_chat=debug,tower_http=info |

Frontend

| Variable | Default | Description |
| --- | --- | --- |
| NEXT_PUBLIC_API_URL | http://localhost:18080 | Backend API URL used by the frontend. This is a build-time variable (prefixed with NEXT_PUBLIC_) |

Authentication

| Variable | Default | Description |
| --- | --- | --- |
| AUTH_PROVIDER | local | Authentication provider. Options: local, oidc |
| JWT_SECRET | (unset) | Secret key for signing JWTs (required when AUTH_PROVIDER=local). Use a long random string in production |
| JWT_EXPIRY_SECS | 86400 | JWT token lifetime in seconds (default: 24 hours) |
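Two quick sanity checks on these values (illustrative Python; the secrets module call mirrors the openssl rand -hex 32 recommendation used elsewhere in this page):

```python
import secrets

# JWT_EXPIRY_SECS=86400 is exactly 24 hours:
assert 86400 == 24 * 60 * 60

# Equivalent of `openssl rand -hex 32` for generating a strong JWT_SECRET:
jwt_secret = secrets.token_hex(32)
assert len(jwt_secret) == 64  # 32 random bytes, hex-encoded
```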

OIDC Authentication

These variables are required when AUTH_PROVIDER=oidc:

| Variable | Default | Description |
| --- | --- | --- |
| JWKS_URL | (unset) | URL to the OIDC provider's JWKS endpoint (e.g., https://your-tenant.auth0.com/.well-known/jwks.json) |
| JWT_AUDIENCE | (unset) | Expected aud claim in the JWT (e.g., https://api.agentcy.dev) |
| JWT_ISSUER | (unset) | Expected iss claim in the JWT (e.g., https://your-tenant.auth0.com/) |
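The aud and iss checks these variables configure work roughly as follows (illustrative Python, not Agentcy's verifier; signature verification against JWKS_URL is a separate step, omitted here):

```python
def check_oidc_claims(payload, audience, issuer):
    """Return a list of claim mismatches an OIDC verifier would reject."""
    errors = []
    aud = payload.get("aud")
    # The aud claim may be a single string or a list of audiences.
    if audience not in (aud if isinstance(aud, list) else [aud]):
        errors.append("aud mismatch")
    if payload.get("iss") != issuer:
        errors.append("iss mismatch")
    return errors
```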

See the Authentication guide for detailed setup instructions for Auth0, Supabase, and Keycloak.

Sub-Agents / OpenFang

| Variable | Default | Description |
| --- | --- | --- |
| OPENFANG_URL | (unset) | URL of the OpenFang sidecar (e.g., http://localhost:4200). Leave unset to disable orchestration features |
| OPENFANG_API_KEY | (unset) | API key for authenticating with OpenFang (optional) |

GitHub OAuth

| Variable | Default | Description |
| --- | --- | --- |
| GITHUB_OAUTH_CLIENT_ID | (unset) | GitHub OAuth App client ID (for the GitHub OAuth connector type) |
| GITHUB_OAUTH_CLIENT_SECRET | (unset) | GitHub OAuth App client secret |
| API_BASE_URL | http://localhost:18080 | Public URL of the Agentcy backend (used for OAuth callback URLs) |

Provider Configuration Examples

Anthropic (Default)

bash
LLM_PROVIDER=anthropic
LLM_MODEL=claude-sonnet-4-20250514
ANTHROPIC_API_KEY=sk-ant-your-key-here

OpenAI

bash
LLM_PROVIDER=openai
LLM_MODEL=gpt-4o
OPENAI_API_KEY=sk-your-key-here

Ollama (Local)

bash
LLM_PROVIDER=ollama
LLM_MODEL=llama3
# No API key needed
# LLM_BASE_URL=http://localhost:11434  # default, override if needed

vLLM (Self-Hosted)

bash
LLM_PROVIDER=vllm
LLM_MODEL=meta-llama/Meta-Llama-3-8B-Instruct
LLM_BASE_URL=http://your-vllm-server:8000

LM Studio

bash
LLM_PROVIDER=lmstudio
LLM_MODEL=your-loaded-model
LLM_BASE_URL=http://localhost:1234

llama.cpp

bash
LLM_PROVIDER=llama-cpp
LLM_MODEL=default
LLM_BASE_URL=http://localhost:8080

Vercel AI Gateway

bash
LLM_PROVIDER=vercel-ai-gateway
LLM_MODEL=anthropic/claude-sonnet-4-20250514
AI_GATEWAY_API_KEY=your-gateway-key

Development vs Production

Development Configuration

A minimal .env for local development:

bash
# LLM
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-your-key

# Infrastructure (Docker Compose ports)
DATABASE_URL=postgres://postgres:password@localhost:15432/kgp
NEO4J_URI=bolt://localhost:17687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=password
REDIS_URL=redis://localhost:16379

# Server
BIND_ADDR=0.0.0.0:18080
DEV_MODE=true
RUST_LOG=agentcy_api=debug,agentcy_chat=debug,tower_http=info

# Frontend
NEXT_PUBLIC_API_URL=http://localhost:18080

# Auth (local dev)
AUTH_PROVIDER=local
JWT_SECRET=dev-secret-not-for-production

Production Configuration

Key differences for production:

bash
# Disable dev mode — enforce JWT auth
DEV_MODE=false

# Strong JWT secret (generate with: openssl rand -hex 32)
JWT_SECRET=a1b2c3d4e5f6...long-random-string

# Production database credentials
DATABASE_URL=postgres://agentcy:strong-password@db.example.com:5432/agentcy
NEO4J_URI=bolt://neo4j.example.com:7687
NEO4J_PASSWORD=strong-neo4j-password
REDIS_URL=redis://:strong-password@redis.example.com:6379

# OIDC auth recommended for production
AUTH_PROVIDER=oidc
JWKS_URL=https://your-tenant.auth0.com/.well-known/jwks.json
JWT_AUDIENCE=https://api.agentcy.dev
JWT_ISSUER=https://your-tenant.auth0.com/

# Production logging
RUST_LOG=agentcy_api=info,agentcy_chat=info,tower_http=warn

# Frontend pointing to production API
NEXT_PUBLIC_API_URL=https://api.agentcy.dev

Security Checklist

Before deploying to production, ensure:

  1. DEV_MODE is false
  2. JWT_SECRET is a long, random string (not the default)
  3. Database passwords are strong and unique
  4. Default admin credentials have been changed
  5. OIDC authentication is configured for team use
  6. RUST_LOG is set to info or warn (not debug)
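The environment-visible items on this checklist can be verified automatically before deploy. A sketch of such a preflight (illustrative Python, not an Agentcy tool; the thresholds are assumptions):

```python
def production_preflight(env):
    """Return checklist violations for the items above that are
    machine-checkable from environment variables alone."""
    problems = []
    if env.get("DEV_MODE", "false").lower() == "true":
        problems.append("DEV_MODE must be false")
    secret = env.get("JWT_SECRET", "")
    # 32 chars is an assumed minimum; "dev-secret" catches the shipped default.
    if len(secret) < 32 or "dev-secret" in secret:
        problems.append("JWT_SECRET is weak or default")
    if "debug" in env.get("RUST_LOG", "info"):
        problems.append("RUST_LOG should be info or warn")
    return problems
```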

Application Settings

In addition to environment variables, some settings are configurable at runtime through the Agentcy UI under Settings:

| Setting | Section | Description |
| --- | --- | --- |
| LLM Provider | General | Switch between configured LLM providers |
| LLM Model | General | Change the active model |
| Approval timeout | Security | How long to wait for user approval (30-3600 seconds, default: 300) |
| Zero-trust enabled | Security | Toggle policy enforcement on/off |
| Organization name | General | Display name for the organization |

These settings are stored in PostgreSQL and apply to the entire organization.
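The approval-timeout bounds above can be enforced with a one-liner (illustrative Python; whether Agentcy clamps or rejects out-of-range values is an assumption):

```python
def clamp_approval_timeout(secs):
    """Keep the approval timeout within the documented 30-3600 s range."""
    return max(30, min(3600, secs))
```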


Built by AgentcyLabs. For in-house deployment or Agentcy Cloud (PaaS) access, visit agentcylabs.com.