The any-llm-gateway requires configuration to connect to your database, authenticate requests, and route to LLM providers. This guide covers the two main configuration approaches and how to set up model pricing for cost tracking.
You can configure the gateway using either a YAML configuration file or environment variables:
Config File (Recommended): Best for development and when managing multiple providers with complex settings. Easier to version control and share across teams.
Environment Variables: Best for production deployments, containerized environments, or when following 12-factor app principles.
Both methods can also be combined; when the same setting is defined in both places, the environment variable overrides the config file value.
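As a sketch of the combined approach, a config file can keep the provider structure under version control while deferring secrets to the environment via `${VAR}` interpolation. The fragment below is illustrative; the provider and key names mirror the `client_args` example later in this guide:

```yaml
# config.yaml: the file carries structure; the environment supplies secrets.
providers:
  openai:
    api_key: "${OPENAI_API_KEY}"  # resolved from the environment, never committed
```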
You can set additional arguments to provider clients via the client_args configuration. These arguments are passed directly to the provider’s client initialization, enabling custom headers, timeouts, and other provider-specific options.
```yaml
providers:
  openai:
    api_key: "${OPENAI_API_KEY}"
    client_args:
      custom_headers:
        X-Custom-Header: "custom-value"
      timeout: 60
```
Common use cases:
Custom headers: Pass additional headers to the provider (e.g., for proxy authentication or request tracing)
Timeouts: Configure connection and request timeouts
Provider-specific options: Pass any additional arguments supported by the provider’s client
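Conceptually, the entries under `client_args` become keyword arguments when the provider's client is constructed. The sketch below illustrates that flow only; `build_client` is a hypothetical stand-in, not the gateway's actual code or any provider SDK:

```python
import os

# Parsed equivalent of the YAML example above.
config = {
    "api_key": os.environ.get("OPENAI_API_KEY", "sk-placeholder"),
    "client_args": {
        "custom_headers": {"X-Custom-Header": "custom-value"},
        "timeout": 60,
    },
}

def build_client(api_key, **client_args):
    """Hypothetical stand-in for a provider client constructor.

    The gateway forwards each client_args entry as a keyword argument,
    so any option the provider's client accepts can be set here.
    """
    return {"api_key": api_key, **client_args}

client = build_client(config["api_key"], **config["client_args"])
```

The key point is that the gateway does not interpret these values itself; an unsupported option surfaces as an error from the provider's client, not from the gateway's config parser.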
The available client_args options depend on the provider. See the any-llm provider documentation for provider-specific options.