Intro

Stop rewriting code for every LLM provider

any-llm is a Python library that provides a single interface to different LLM providers.

from any_llm import completion

# Using the messages format
response = completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is Python?"}],
    provider="openai"
)
print(response)

# Switch providers without changing your code
response = completion(
    model="claude-sonnet-4-5-20250929",
    messages=[{"role": "user", "content": "What is Python?"}],
    provider="anthropic"
)
print(response)
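
If you only want the generated text rather than the whole response object, you can index into the completion. This is a minimal sketch assuming the response is an OpenAI-compatible ChatCompletion object; the choices[0].message.content path follows that convention and is an assumption, not something stated on this page.

# Assumes an OpenAI-compatible response shape (choices -> message -> content)
print(response.choices[0].message.content)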

Why any-llm

  • Switch providers in one line
  • Consistent error handling across providers
  • Simple API, powerful features

View supported providers →

Getting Started

Get started in 5 minutes → Install the library and run your first API call.

Demo

Try any-llm in action with our interactive chat demo:

📂 Run the Demo

Features: real-time streaming responses, multiple provider support, and collapsible "thinking" content display.
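To reproduce the streaming behavior in your own code, the same completion call can be used. The sketch below assumes an OpenAI-style stream=True parameter that yields chunks exposing choices[0].delta.content; that parameter and chunk shape are assumptions, so check the API reference for the exact streaming interface.

from any_llm import completion

# Assumption: stream=True yields OpenAI-style chunks; verify against the API docs.
for chunk in completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is Python?"}],
    provider="openai",
    stream=True,
):
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)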

API Documentation

any-llm provides two main interfaces:

Direct API Functions (recommended for simple use cases):

  • completion - Chat completions with any provider
  • embedding - Text embeddings
  • responses - OpenAI-style Responses API
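
As an illustration of the embedding function, here is a minimal sketch. It assumes embedding mirrors completion's model/provider arguments and accepts an inputs argument; the parameter name and response shape are assumptions, so consult the embedding API reference for the real signature.

from any_llm import embedding

# Assumption: embedding takes model/provider like completion; the "inputs"
# parameter name and the response shape are illustrative, not confirmed here.
response = embedding(
    model="text-embedding-3-small",
    provider="openai",
    inputs="What is Python?",
)
print(response)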

AnyLLM Class (recommended for advanced use cases):

  • Provider API - Lower-level provider interface with metadata access and reusability
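
The class-based interface is meant to be created once and reused across calls. The sketch below is hypothetical: the AnyLLM.create factory and the completion method name are assumptions based on the description above, not confirmed signatures; see the Provider API reference for the actual interface.

from any_llm import AnyLLM

# Hypothetical sketch: AnyLLM.create(...) and llm.completion(...) are assumed names,
# shown only to illustrate a reusable provider object; consult the Provider API docs.
llm = AnyLLM.create("openai")
response = llm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is Python?"}],
)
print(response)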

Error Handling

any-llm raises custom exceptions for common errors, such as a missing API key or a parameter that a specific provider does not support.

For more details on exceptions, see the exceptions API documentation.
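
A typical pattern wraps the call in try/except. In this sketch, MissingApiKeyError and UnsupportedParameterError are illustrative placeholders for the custom exceptions described above; look up the actual class names and module path in the exceptions API documentation.

from any_llm import completion
# Placeholder names: the real exception classes may differ; this only
# illustrates the handling pattern described above.
from any_llm.exceptions import MissingApiKeyError, UnsupportedParameterError

try:
    response = completion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "What is Python?"}],
        provider="openai",
    )
    print(response)
except MissingApiKeyError:
    print("Set the provider's API key in your environment before calling.")
except UnsupportedParameterError as err:
    print(f"A parameter is not supported by this provider: {err}")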

For AI Systems

This documentation is available in two AI-friendly formats:

  • llms.txt - A structured overview with curated links to key documentation sections
  • llms-full.txt - Complete documentation content concatenated into a single file