
AnyLLM

The AnyLLM class is the provider interface at the core of any-llm. Use it when you need to make multiple requests against the same provider without re-instantiating on every call.

Factory method that returns a configured AnyLLM instance for the given provider.

```python
def create(
    provider: str | LLMProvider,
    api_key: str | None = None,
    api_base: str | None = None,
    **kwargs: Any,
) -> AnyLLM
```

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `provider` | `str \| LLMProvider` | required | The provider name (e.g., `"openai"`, `"anthropic"`) |
| `api_key` | `str \| None` | `None` | API key for the provider |
| `api_base` | `str \| None` | `None` | Base URL for the provider API |
| `**kwargs` | `Any` | `{}` | Additional provider-specific arguments |

Returns: An AnyLLM instance bound to the specified provider.

```python
from any_llm import AnyLLM

llm = AnyLLM.create("openai", api_key="sk-...")
response = llm.completion(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Parses a combined "provider:model" string into its components.

```python
def split_model_provider(
    model: str,
) -> tuple[LLMProvider, str]
```

| Parameter | Type | Description |
| --- | --- | --- |
| `model` | `str` | Combined identifier in `"provider:model"` format (e.g., `"openai:gpt-4.1-mini"`). The legacy `"provider/model"` format is also accepted but deprecated. |

Returns: A (LLMProvider, model_name) tuple.

Raises: `ValueError` if the string does not contain a `:` or `/` delimiter.

```python
provider, model_name = AnyLLM.split_model_provider("anthropic:claude-sonnet-4-20250514")
# provider = LLMProvider.ANTHROPIC
# model_name = "claude-sonnet-4-20250514"
```

Returns metadata for every supported provider, sorted alphabetically by name.

```python
def get_all_provider_metadata() -> list[ProviderMetadata]
```

Returns: A list of ProviderMetadata objects.

```python
for meta in AnyLLM.get_all_provider_metadata():
    print(f"{meta.name}: streaming={meta.streaming}, embedding={meta.embedding}")
```

Returns a list of all supported provider key strings.

```python
def get_supported_providers() -> list[str]
```

Returns: list[str] of provider keys (e.g., ["anthropic", "openai", ...]).

All instance methods below are called on an AnyLLM object returned by AnyLLM.create().

Create a chat completion. See the Completion reference for the full parameter list.

```python
def completion(self, model, messages, *, stream=None, response_format=None, **kwargs)
    -> ChatCompletion | Iterator[ChatCompletionChunk] | ParsedChatCompletion

async def acompletion(self, model, messages, *, stream=None, response_format=None, **kwargs)
    -> ChatCompletion | AsyncIterator[ChatCompletionChunk] | ParsedChatCompletion
```

Create a response using the OpenResponses API. See the Responses reference.

```python
def responses(self, **kwargs)
    -> ResponseResource | Response | Iterator[ResponseStreamEvent]

async def aresponses(self, **kwargs)
    -> ResponseResource | Response | AsyncIterator[ResponseStreamEvent]
```

Create a message using the Anthropic Messages API format. All providers support this through automatic conversion.

```python
def messages(self, model, messages, max_tokens, **kwargs)
    -> MessageResponse | Iterator[MessageStreamEvent]

async def amessages(self, model, messages, max_tokens, **kwargs)
    -> MessageResponse | AsyncIterator[MessageStreamEvent]
```

List available models for this provider. See the List Models reference.

```python
def list_models(self, **kwargs) -> Sequence[Model]

async def alist_models(self, **kwargs) -> Sequence[Model]
```

Create a batch job. See the Batch reference.

```python
def create_batch(self, input_file_path, endpoint, completion_window="24h", metadata=None, **kwargs) -> Batch

async def acreate_batch(self, input_file_path, endpoint, completion_window="24h", metadata=None, **kwargs) -> Batch
```

Returns metadata for this provider instance’s class.

```python
def get_provider_metadata() -> ProviderMetadata
```

Returns: A ProviderMetadata object describing the provider’s capabilities.

```python
llm = AnyLLM.create("mistral")
meta = llm.get_provider_metadata()
print(f"Supports streaming: {meta.streaming}")
print(f"Supports embedding: {meta.embedding}")
print(f"Supports responses: {meta.responses}")
```