
Supported Providers

any-llm supports the providers listed below. To see which models a provider offers and which features it supports for each model, refer to that provider's documentation.

Legend

  • Reasoning (Completions): Provider can return reasoning traces alongside the assistant message via the completions and/or streaming endpoints. This does not indicate whether the provider offers separate "reasoning models". See this discussion for more information.
  • Streaming (Completions): Provider can stream completion results back as an iterator (see the sketch after this legend).
  • Responses API: Provider supports the Responses API variant for text generation. See this to follow along with our implementation effort.
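
As a concrete illustration of the "Streaming (Completions)" entry above, here is a minimal sketch of iterating over a streamed completion. The `completion` entry point, the `stream=True` flag, the `"<provider>/<model>"` model-string format, and the OpenAI-style chunk shape are assumptions about the any-llm API rather than confirmed signatures, and `openai/gpt-4o-mini` is only a placeholder model.

```python
# Minimal streaming sketch (assumed API shape; see the provider table below
# for valid provider IDs and the environment variable each one reads).
from any_llm import completion  # assumed import path

# Assumes OPENAI_API_KEY is set in the environment, per the table below.
chunks = completion(
    model="openai/gpt-4o-mini",  # assumed "<provider>/<model>" format; placeholder model
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,  # assumed flag: return an iterator of chunks instead of a full response
)

# "Streaming (Completions)": results come back incrementally as an iterator.
for chunk in chunks:
    delta = chunk.choices[0].delta  # assumed OpenAI-style chunk shape
    if delta.content:
        print(delta.content, end="", flush=True)
print()
```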
| ID | Env Var | Source Code | Responses | Completion | Streaming (Completions) | Reasoning (Completions) | Embedding |
|----|---------|-------------|-----------|------------|-------------------------|-------------------------|-----------|
| aws | AWS_BEARER_TOKEN_BEDROCK | Source |  |  |  |  |  |
| anthropic | ANTHROPIC_API_KEY | Source |  |  |  |  |  |
| azure | AZURE_API_KEY | Source |  |  |  |  |  |
| cerebras | CEREBRAS_API_KEY | Source |  |  |  |  |  |
| cohere | CO_API_KEY | Source |  |  |  |  |  |
| deepseek | DEEPSEEK_API_KEY | Source |  |  |  |  |  |
| fireworks | FIREWORKS_API_KEY | Source |  |  |  |  |  |
| google | - | Source |  |  |  |  |  |
| groq | GROQ_API_KEY | Source |  |  |  |  |  |
| huggingface | HF_TOKEN | Source |  |  |  |  |  |
| inception | INCEPTION_API_KEY | Source |  |  |  |  |  |
| lmstudio | LM_STUDIO_API_KEY | Source |  |  |  |  |  |
| llama | LLAMA_API_KEY | Source |  |  |  |  |  |
| mistral | MISTRAL_API_KEY | Source |  |  |  |  |  |
| moonshot | MOONSHOT_API_KEY | Source |  |  |  |  |  |
| nebius | NEBIUS_API_KEY | Source |  |  |  |  |  |
| ollama | - | Source |  |  |  |  |  |
| openai | OPENAI_API_KEY | Source |  |  |  |  |  |
| openrouter | OPENROUTER_API_KEY | Source |  |  |  |  |  |
| portkey | PORTKEY_API_KEY | Source |  |  |  |  |  |
| sambanova | SAMBANOVA_API_KEY | Source |  |  |  |  |  |
| together | TOGETHER_API_KEY | Source |  |  |  |  |  |
| watsonx | WATSONX_API_KEY | Source |  |  |  |  |  |
| xai | XAI_API_KEY | Source |  |  |  |  |  |
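
To use a provider from the table, set the environment variable listed in its row and prefix the model identifier with the provider ID. The sketch below makes the same assumptions as the streaming example above (the `completion` entry point, the `"<provider>/<model>"` format, and an OpenAI-style response object); `mistral-small-latest` is only a placeholder model name.

```python
import os

from any_llm import completion  # assumed import path

# The table lists MISTRAL_API_KEY as the env var for the "mistral" provider.
if not os.environ.get("MISTRAL_API_KEY"):
    raise RuntimeError("Set MISTRAL_API_KEY before calling the mistral provider")

response = completion(
    model="mistral/mistral-small-latest",  # assumed "<provider>/<model>" format; placeholder model
    messages=[{"role": "user", "content": "In one sentence, what does any-llm do?"}],
)

print(response.choices[0].message.content)  # assumed OpenAI-style response shape
```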