Getting Started with Any-LLM¶
Any-LLM is a unified interface that lets you work with language models from any provider using a consistent API. Whether you're using OpenAI, Anthropic, Google, local models, or open-source alternatives, any-llm makes it easy to switch between them without changing your code.
Why Any-LLM?¶
- Provider Agnostic: One API for all LLM providers
- Easy Switching: Change models with a single line (see the sketch after this list)
- Cost Comparison: Compare costs across providers
- Streaming Support: Real-time responses from any model (a streaming sketch appears at the end of this guide)
- Type Safe: Full Python type annotations throughout
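Because the provider is encoded in the model string, switching providers really is a one-line change. Here is a minimal sketch, assuming the API keys set up below are available and that any-llm's synchronous completion mirrors the acompletion call used later in this guide:

from any_llm import completion

for model in ["openai:gpt-4o-mini", "anthropic:claude-haiku-4-5-20251001"]:
    # Same call for every provider; only the model string changes
    result = completion(model=model, messages=[{"role": "user", "content": "Say hello."}])
    print(f"{model}: {result.choices[0].message.content}")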
Installation¶
%pip install "any-llm-sdk[all]" nest-asyncio -q
# nest_asyncio allows us to use 'await' directly in Jupyter notebooks
# This is needed because any-llm uses async functions for API calls
import nest_asyncio
nest_asyncio.apply()
Setting Up API Keys¶
Different providers require different API keys. Let's set them up properly:
import os
from getpass import getpass
def setup_api_key(key_name: str, provider: str) -> None:
    """Set up the API key for the specified provider."""
    if key_name not in os.environ:
        print(f"🔑 {key_name} not found in environment")
        api_key = getpass(f"Enter your {provider} API key (or press Enter to skip): ")
        if api_key:
            os.environ[key_name] = api_key
            print(f"✅ {key_name} set for this session")
        else:
            print(f"⏭️ Skipping {provider}")
    else:
        print(f"✅ {key_name} found in environment")
# Set up keys for different providers
print("Setting up API keys...\n")
setup_api_key("OPENAI_API_KEY", "OpenAI")
setup_api_key("ANTHROPIC_API_KEY", "Anthropic")
# You could add more providers, for example:
# setup_api_key("GOOGLE_API_KEY", "Google")
# setup_api_key("MISTRAL_API_KEY", "Mistral")
List Models Across Providers¶
any-llm can list all models available from a given provider. Here we list the models supported by OpenAI and Anthropic.
from any_llm import AnyLLM, LLMProvider
for provider in [LLMProvider.OPENAI, LLMProvider.ANTHROPIC]:
    client = AnyLLM.create(provider=provider)
    models = client.list_models()
    print(f"Provider: {provider}")
    print(", ".join([model.id for model in models]))
    print()
Expected output¶
Provider: openai
gpt-4o-mini, gpt-4-0613, gpt-4, gpt-3.5-turbo, gpt-5-search-api-2025-10-14, gpt-realtime-mini, gpt-realtime-mini-2025-10-06, sora-2, sora-2-pro, davinci-002, babbage-002, gpt-3.5-turbo-instruct, gpt-3.5-turbo-instruct-0914...

Provider: anthropic
claude-haiku-4-5-20251001, claude-sonnet-4-5-20250929, claude-opus-4-1-20250805, claude-opus-4-20250514, claude-sonnet-4-20250514, claude-3-7-sonnet-20250219, claude-3-5-haiku-20241022, claude-3-haiku-20240307
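Some providers expose dozens of models, so it can help to narrow the list client-side. A small illustrative sketch (the startswith filter is just an example, not an any-llm feature):

client = AnyLLM.create(provider=LLMProvider.OPENAI)
# Keep only chat-style GPT models from the full listing
chat_models = [model.id for model in client.list_models() if model.id.startswith("gpt-")]
print(chat_models)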
Generate Text¶
Let's use one model from each provider to generate text for the same prompt.
from any_llm import acompletion
from any_llm.types.completion import ChatCompletion
prompt = "Write a Haiku on the solar system."
# OpenAI
model = "openai:gpt-4o-mini"
result = await acompletion(
    model=model,
    messages=[
        {"role": "user", "content": prompt},
    ],
)
assert isinstance(result, ChatCompletion)
print(f"Model: {result.model}")
print(f"Response:\n{result.choices[0].message.content}\n")
# Anthropic
model = "anthropic:claude-haiku-4-5-20251001"
result = await acompletion(
    model=model,
    messages=[
        {"role": "user", "content": prompt},
    ],
)
assert isinstance(result, ChatCompletion)
print(f"Model: {result.model}")
print(f"Response:\n{result.choices[0].message.content}")
Expected Output¶
Note: The haiku content will be different each time since it's generated by the LLM. This example shows the output format.
Model: gpt-4o-mini-2024-07-18
Response:
Planets spin and dance,
In the vast cosmic embrace,
Stars whisper their tales.
Model: claude-haiku-4-5-20251001
Response:
Eight worlds circle round,
Sun's gravity holds them close—
Dance through endless void.
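Streaming Responses¶
As promised above, any response can also be streamed token by token. A minimal sketch, assuming any-llm follows the OpenAI-style streaming interface in which stream=True makes acompletion return an async iterator of chunks (check the any-llm docs for the exact chunk type):

# Request a streamed response (OpenAI-style chunk interface assumed)
stream = await acompletion(
    model="openai:gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    stream=True,
)
async for chunk in stream:
    # Each chunk carries an incremental delta of the response text
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()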