## Quick Start

### Run from Docker Image

```bash
docker run \
  -e GATEWAY_MASTER_KEY="your-secure-master-key" \
  -e OPENAI_API_KEY="your-api-key" \
  -p 8000:8000 \
  ghcr.io/mozilla-ai/any-llm/gateway:latest
```

### Local development

#### Option 1: Docker Compose

First, create a `config.yaml` file with your configuration, using `config.example.yaml` as a template.
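
If you just want something to start from, the sketch below writes a minimal `config.yaml`. The key names are hypothetical placeholders that simply mirror the environment variables used in Option 2 below; treat `config.example.yaml` as the source of truth for the real field names and any additional settings.

```bash
# Hypothetical sketch only: the key names below mirror the env vars from Option 2
# and may not match the real schema -- use config.example.yaml as the reference.
cat > config.yaml <<'EOF'
gateway_master_key: "your-secure-master-key"
openai_api_key: "your-api-key"
EOF
```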

Then run the Docker containers:

```bash
docker-compose -f docker/docker-compose.yml up -d --build

# Tail the logs
docker-compose -f docker/docker-compose.yml logs -f
```

This will run any-llm-gateway using the credentials and configuration specified in `config.yaml`.

#### Option 2: CLI

For this option, you will need a running Postgres database.
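
If you don't already have one, one way to get a local instance that matches the `DATABASE_URL` used below is to run the official `postgres` image (the user, password, and database name here are simply taken from that URL):

```bash
# Start a local Postgres instance whose credentials match the DATABASE_URL below.
docker run -d --name any-llm-gateway-db \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=any_llm_gateway \
  -p 5432:5432 \
  postgres:16
```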

```bash
export GATEWAY_MASTER_KEY="your-secure-master-key"
export DATABASE_URL="postgresql://postgres:postgres@localhost:5432/any_llm_gateway"
export OPENAI_API_KEY="your-api-key" # Or GEMINI_API_KEY, etc.

any-llm-gateway serve # Alternatively, put these values in a config.yaml file and run serve with --config path/to/yaml
```

## Basic Usage

The gateway supports two authentication patterns for making completion requests:

### Option 1: Direct Master Key Authentication

First, create a user:

```bash
curl -X POST http://localhost:8000/v1/users \
  -H "X-AnyLLM-Key: Bearer your-secure-master-key" \
  -H "Content-Type: application/json" \
  -d '{"user_id": "user-123", "alias": "Alice"}'
```

Then use the master key directly and specify which user is making the request:

```bash
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "X-AnyLLM-Key: Bearer your-secure-master-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai:gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}],
    "user": "user-123"
  }'
```

### Option 2: Virtual API Keys

Create a virtual API key (you can optionally pass a `user_id` as well if you want the key linked to a user):

```bash
curl -X POST http://localhost:8000/v1/keys \
  -H "X-AnyLLM-Key: Bearer your-secure-master-key" \
  -H "Content-Type: application/json" \
  -d '{"key_name": "mobile-app"}'
```

Now you can use that new API key without passing the `user` field:

```bash
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "X-AnyLLM-Key: Bearer gw-..." \
  -H "Content-Type: application/json" \
  -d '{"model": "openai:gpt-5-mini", "messages": [{"role": "user", "content": "Hello!"}]}'
```

Usage is automatically tracked under the virtual user associated with the virtual key.
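
For scripting, you can chain the two calls: capture the generated key from the key-creation response and reuse it. The `.api_key` field name below is an assumption; check the actual `/v1/keys` response to see where the generated key is returned.

```bash
# Sketch: create a virtual key and immediately use it for a completion request.
# NOTE: ".api_key" is an assumed field name -- inspect the real response to confirm it.
NEW_KEY=$(curl -s -X POST http://localhost:8000/v1/keys \
  -H "X-AnyLLM-Key: Bearer your-secure-master-key" \
  -H "Content-Type: application/json" \
  -d '{"key_name": "mobile-app"}' | jq -r '.api_key')

curl -X POST http://localhost:8000/v1/chat/completions \
  -H "X-AnyLLM-Key: Bearer $NEW_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "openai:gpt-5-mini", "messages": [{"role": "user", "content": "Hello!"}]}'
```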