
# Configuration

Clawbolt is configured via environment variables. Copy `.env.example` to `.env` and fill in the values.

All available settings are listed in `.env.example` with defaults and comments. This page documents every setting by category.

## Required settings

| Variable | Description |
| --- | --- |
| `TELEGRAM_BOT_TOKEN` | Telegram bot token from @BotFather |
| `LLM_PROVIDER` | Provider name (any provider supported by any-llm) |
| `LLM_MODEL` | Model identifier for the agent loop (e.g. the model name your provider expects) |
| LLM API key | The API key environment variable for your chosen provider (e.g. `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`) |
| `TELEGRAM_ALLOWED_CHAT_IDS` or `TELEGRAM_ALLOWED_USERNAMES` | Who can message the bot. Set to `*` to allow everyone, or a comma-separated list. Empty = deny all |
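For example, a minimal `.env` using OpenAI might look like this (the token, key, model name, and usernames below are placeholders; substitute your own values):

```bash
TELEGRAM_BOT_TOKEN=123456:ABC-your-bot-token   # placeholder; get yours from @BotFather
LLM_PROVIDER=openai
LLM_MODEL=gpt-4o-mini                          # example model name; use whatever your provider expects
OPENAI_API_KEY=sk-your-key-here                # placeholder
TELEGRAM_ALLOWED_USERNAMES=alice,bob           # or * to allow everyone
```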
## Server

| Variable | Default | Description |
| --- | --- | --- |
| `LOG_LEVEL` | `INFO` | Python log level (`DEBUG`, `INFO`, `WARNING`, `ERROR`) |
| `CORS_ORIGINS` | `http://localhost:3000,http://localhost:8000` | Comma-separated list of allowed CORS origins |
| `JWT_SECRET` | `change-me-in-production` | Secret key for JWT signing. Change this in production |
| `JWT_EXPIRY_MINUTES` | `15` | JWT token expiry time in minutes |
| `PREMIUM_PLUGIN` | (empty) | Python import path for the premium auth plugin. Leave empty for OSS single-tenant mode |
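A hardened production setup might override the server defaults along these lines (the origin URL is illustrative; the `openssl` command is one common way to generate a strong secret):

```bash
LOG_LEVEL=WARNING
CORS_ORIGINS=https://app.example.com           # only your real frontend origin
JWT_SECRET=$(openssl rand -hex 32)             # never ship the change-me default
JWT_EXPIRY_MINUTES=15
```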
## LLM

| Variable | Default | Description |
| --- | --- | --- |
| `LLM_PROVIDER` | (required) | LLM provider name (any provider supported by any-llm) |
| `LLM_MODEL` | (required) | Model to use for the agent loop |
| `LLM_API_BASE` | (none) | Custom API base URL (e.g. `http://localhost:1234/v1` for LM Studio) |
| `VISION_MODEL` | (same as `LLM_MODEL`) | Model to use for image/document analysis. Falls back to `LLM_MODEL` if not set |
| `ANY_LLM_KEY` | (none) | any-llm.ai managed platform key (replaces individual provider keys) |

Set the API key environment variable for your chosen provider, or set `ANY_LLM_KEY` to use the any-llm.ai managed platform as a key vault for all providers.
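A sketch of the two approaches (all key values are placeholders):

```bash
# Option A: a provider-specific key
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-your-key-here   # placeholder

# Option B: one any-llm.ai managed platform key for all providers
ANY_LLM_KEY=your-any-llm-key-here        # placeholder
```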

## Messaging

| Variable | Default | Description |
| --- | --- | --- |
| `MESSAGING_PROVIDER` | `telegram` | Messaging backend (currently only `telegram` is supported) |
| `TELEGRAM_BOT_TOKEN` | (required) | Bot token from @BotFather |
| `TELEGRAM_WEBHOOK_SECRET` | (auto-derived) | Webhook validation secret. Auto-derived from the bot token if not set |
| `TELEGRAM_ALLOWED_CHAT_IDS` | (empty) | Comma-separated allowlist of Telegram chat IDs, or `*` to allow all. Empty = deny all |
| `TELEGRAM_ALLOWED_USERNAMES` | (empty) | Comma-separated allowlist of Telegram usernames, or `*` to allow all. Empty = deny all |
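For example, to restrict the bot to two specific chats (the IDs below are made up; Telegram group chat IDs are negative):

```bash
TELEGRAM_ALLOWED_CHAT_IDS=123456789,-100987654321   # a private chat and a group
# or, during local development only:
# TELEGRAM_ALLOWED_CHAT_IDS=*
```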
## Storage

| Variable | Default | Description |
| --- | --- | --- |
| `STORAGE_PROVIDER` | `local` | Storage backend: `local`, `dropbox`, or `google_drive` |
| `DROPBOX_ACCESS_TOKEN` | (none) | Dropbox access token (when using Dropbox) |
| `GOOGLE_DRIVE_CREDENTIALS_JSON` | (none) | Google Drive OAuth credentials JSON (when using Google Drive) |
| `PDF_STORAGE_DIR` | `data/estimates` | Directory for generated PDF estimates |
| `FILE_STORAGE_BASE_DIR` | `data/storage` | Base directory for local file storage |
| `DEFAULT_ESTIMATE_TERMS` | `Payment due within 30 days of project completion.` | Default terms printed on estimates |
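Switching to Dropbox, for example, needs only two settings (the token is a placeholder):

```bash
STORAGE_PROVIDER=dropbox
DROPBOX_ACCESS_TOKEN=sl.your-dropbox-token   # placeholder
```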

See Storage Providers for setup instructions.

## Database

| Variable | Default | Description |
| --- | --- | --- |
| `DATABASE_URL` | (none) | PostgreSQL connection string. Set automatically by Docker Compose |

When running with Docker Compose, `DATABASE_URL` is set automatically. You don't need to configure it.
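If you run outside Docker Compose, set it yourself. A typical PostgreSQL URL looks like this (the user, password, and database name are illustrative):

```bash
DATABASE_URL=postgresql://clawbolt:secret@localhost:5432/clawbolt
```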

## Token limits

| Variable | Default | Description |
| --- | --- | --- |
| `LLM_MAX_TOKENS_AGENT` | `500` | Max tokens per agent loop LLM response |
| `LLM_MAX_TOKENS_HEARTBEAT` | `300` | Max tokens per heartbeat LLM response |
| `LLM_MAX_TOKENS_VISION` | `1000` | Max tokens per vision/image analysis response |
## Speech-to-text

| Variable | Default | Description |
| --- | --- | --- |
| `WHISPER_MODEL_SIZE` | `base` | faster-whisper model size (`tiny`, `base`, `small`, `medium`, `large`) |
| `WHISPER_DEVICE` | `cpu` | Device for inference (`cpu` or `cuda`) |
| `WHISPER_COMPUTE_TYPE` | `int8` | Quantization type (`int8`, `float16`, `float32`) |
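On a machine with an NVIDIA GPU you might trade up in model size and switch to GPU-friendly quantization, e.g.:

```bash
WHISPER_MODEL_SIZE=small
WHISPER_DEVICE=cuda
WHISPER_COMPUTE_TYPE=float16   # int8 is the CPU-friendly default; float16 suits GPUs
```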
## Agent loop

| Variable | Default | Description |
| --- | --- | --- |
| `MAX_TOOL_ROUNDS` | `10` | Maximum tool-calling rounds per agent invocation |
| `MAX_INPUT_TOKENS` | `120000` | Max input token budget before context trimming |
| `CONTEXT_TRIM_TARGET_TOKENS` | `80000` | Target token count after trimming |
| `RATE_LIMIT_RETRY_DELAY` | `2.0` | Seconds to wait before retrying after a rate limit |
## Conversation and memory

| Variable | Default | Description |
| --- | --- | --- |
| `CONVERSATION_TIMEOUT_HOURS` | `4` | Hours of inactivity before starting a new conversation |
| `CONVERSATION_HISTORY_LIMIT` | `20` | Max messages included in LLM context |
| `MEMORY_RECALL_LIMIT` | `20` | Max memory facts recalled per query |
| `COMPACTION_ENABLED` | `true` | Enable automatic conversation compaction |
| `COMPACTION_MODEL` | (same as `LLM_MODEL`) | Model used for compaction |
| `COMPACTION_PROVIDER` | (same as `LLM_PROVIDER`) | Provider used for compaction |
| `COMPACTION_MAX_TOKENS` | `500` | Max tokens per compaction response |
| `HEARTBEAT_STALE_ESTIMATE_HOURS` | `24` | Hours before an unsent estimate is considered stale |
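One common pattern is to route compaction to a cheaper model than the main agent; a sketch (the model name is illustrative):

```bash
COMPACTION_ENABLED=true
COMPACTION_PROVIDER=openai
COMPACTION_MODEL=gpt-4o-mini   # example; any model your provider supports
COMPACTION_MAX_TOKENS=500
```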
## Webhook rate limiting

| Variable | Default | Description |
| --- | --- | --- |
| `WEBHOOK_RATE_LIMIT_MAX_REQUESTS` | `30` | Max webhook requests per window |
| `WEBHOOK_RATE_LIMIT_WINDOW_SECONDS` | `60` | Rate-limit window in seconds |
| `RATE_LIMIT_TRUST_PROXY` | `false` | Trust `X-Forwarded-For` for the client IP (set `true` behind a reverse proxy) |
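Behind a reverse proxy such as nginx, enable proxy trust so rate limiting keys on the real client IP rather than the proxy's address:

```bash
RATE_LIMIT_TRUST_PROXY=true          # only behind a proxy you control
WEBHOOK_RATE_LIMIT_MAX_REQUESTS=30
WEBHOOK_RATE_LIMIT_WINDOW_SECONDS=60
```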
## Heartbeat

| Variable | Default | Description |
| --- | --- | --- |
| `HEARTBEAT_ENABLED` | `true` | Enable proactive check-in messages |
| `HEARTBEAT_INTERVAL_MINUTES` | `30` | Minutes between heartbeat evaluation ticks |
| `HEARTBEAT_MAX_DAILY_MESSAGES` | `5` | Max proactive messages per contractor per day |
| `HEARTBEAT_QUIET_HOURS_START` | `20` | Hour (24h) to stop sending heartbeats |
| `HEARTBEAT_QUIET_HOURS_END` | `7` | Hour (24h) to resume sending heartbeats |
| `HEARTBEAT_IDLE_DAYS` | `3` | Days of inactivity before flagging a contractor |
| `HEARTBEAT_MODEL` | (same as `LLM_MODEL`) | Model used for heartbeat messages |
| `HEARTBEAT_PROVIDER` | (same as `LLM_PROVIDER`) | Provider used for heartbeat messages |
| `HEARTBEAT_CONCURRENCY` | `5` | Max concurrent contractor evaluations per tick |
| `HEARTBEAT_RECENT_MESSAGES_COUNT` | `5` | Number of recent messages included in heartbeat context |
| `CHECKLIST_DAILY_INTERVAL_HOURS` | `20` | Hours between daily checklist heartbeat evaluations |
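With the defaults, heartbeats pause from 20:00 until 07:00. A less chatty configuration might look like this:

```bash
HEARTBEAT_INTERVAL_MINUTES=60
HEARTBEAT_MAX_DAILY_MESSAGES=2
HEARTBEAT_QUIET_HOURS_START=18   # stop at 18:00
HEARTBEAT_QUIET_HOURS_END=9      # resume at 09:00
```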
## HTTP timeouts

| Variable | Default | Description |
| --- | --- | --- |
| `HTTP_TIMEOUT_SECONDS` | `30.0` | Default timeout for outbound HTTP requests |
| `CLOUDFLARED_METRICS_TIMEOUT_SECONDS` | `5.0` | Timeout for the cloudflared tunnel metrics check |
| `TELEGRAM_WEBHOOK_TIMEOUT_SECONDS` | `10.0` | Timeout for Telegram webhook registration |
## Media

| Variable | Default | Description |
| --- | --- | --- |
| `MAX_MEDIA_SIZE_BYTES` | `20971520` | Max upload size (20 MB default) |
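The limit is given in raw bytes; to raise it to 50 MiB, for example (50 × 1024 × 1024 = 52428800):

```bash
MAX_MEDIA_SIZE_BYTES=52428800   # 50 MiB
```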