# AnyLLM

The `AnyLLM` class is the provider interface at the core of any-llm. Use it when you need to make multiple requests against the same provider without re-instantiating on every call.
## Creating an Instance

### `AnyLLM.create()`

Factory method that returns a configured `AnyLLM` instance for the given provider.
```python
def create(
    provider: str | LLMProvider,
    api_key: str | None = None,
    api_base: str | None = None,
    **kwargs: Any,
) -> AnyLLM
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| `provider` | `str \| LLMProvider` | required | The provider name (e.g., `"openai"`, `"anthropic"`) |
| `api_key` | `str \| None` | `None` | API key for the provider |
| `api_base` | `str \| None` | `None` | Base URL for the provider API |
| `**kwargs` | `Any` | optional | Additional provider-specific arguments |
**Returns:** An `AnyLLM` instance bound to the specified provider.

```python
from any_llm import AnyLLM

llm = AnyLLM.create("openai", api_key="sk-...")

response = llm.completion(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

## Static Methods

### `AnyLLM.split_model_provider()`

Parses a combined `"provider:model"` string into its components.
```python
def split_model_provider(
    model: str,
) -> tuple[LLMProvider, str]
```

| Parameter | Type | Description |
|---|---|---|
| `model` | `str` | Combined identifier in `"provider:model"` format (e.g., `"openai:gpt-4.1-mini"`). The legacy `"provider/model"` format is also accepted but deprecated. |

**Returns:** A `(LLMProvider, model_name)` tuple.

**Raises:** `ValueError` if the string does not contain a `:` or `/` delimiter.
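The parsing rule can be sketched in plain Python (illustrative only — the real method resolves the first component to an `LLMProvider` enum member rather than returning a plain string):

```python
def split_model_provider(model: str) -> tuple[str, str]:
    """Sketch of the parsing rule: ':' is the current delimiter,
    '/' the deprecated legacy one; anything else is an error."""
    for sep in (":", "/"):
        if sep in model:
            provider, _, name = model.partition(sep)
            return provider, name
    raise ValueError(f"Expected 'provider:model', got {model!r}")

print(split_model_provider("openai:gpt-4.1-mini"))  # ('openai', 'gpt-4.1-mini')
```

Checking `:` before `/` means a legacy-style `/` appearing later in the model name (as some gateway providers use) does not interfere once a `:` delimiter is present.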
```python
provider, model_name = AnyLLM.split_model_provider("anthropic:claude-sonnet-4-20250514")
# provider = LLMProvider.ANTHROPIC
# model_name = "claude-sonnet-4-20250514"
```

### `AnyLLM.get_all_provider_metadata()`

Returns metadata for every supported provider, sorted alphabetically by name.

```python
def get_all_provider_metadata() -> list[ProviderMetadata]
```

**Returns:** A list of `ProviderMetadata` objects.
```python
for meta in AnyLLM.get_all_provider_metadata():
    print(f"{meta.name}: streaming={meta.streaming}, embedding={meta.embedding}")
```

### `AnyLLM.get_supported_providers()`

Returns a list of all supported provider key strings.

```python
def get_supported_providers() -> list[str]
```

**Returns:** `list[str]` of provider keys (e.g., `["anthropic", "openai", ...]`).
## Instance Methods

All instance methods below are called on an `AnyLLM` object returned by `AnyLLM.create()`.

### `completion()` / `acompletion()`

Create a chat completion. See the Completion reference for the full parameter list.
```python
def completion(self, model, messages, *, stream=None, response_format=None, **kwargs) -> ChatCompletion | Iterator[ChatCompletionChunk] | ParsedChatCompletion

async def acompletion(self, model, messages, *, stream=None, response_format=None, **kwargs) -> ChatCompletion | AsyncIterator[ChatCompletionChunk] | ParsedChatCompletion
```
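With `stream=True`, `completion()` returns an iterator of chunks rather than a single response. The assembly pattern looks like this sketch, which uses stand-in dicts shaped like OpenAI-style streaming chunks instead of a live provider call:

```python
def assemble_stream(chunks):
    """Concatenate the text deltas from a stream of chunk-like objects."""
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:  # role-only or final chunks may carry no text
            parts.append(delta)
    return "".join(parts)

# Stand-in chunks mimicking the OpenAI streaming format:
fake_stream = iter([
    {"choices": [{"delta": {"role": "assistant"}}]},
    {"choices": [{"delta": {"content": "Hel"}}]},
    {"choices": [{"delta": {"content": "lo!"}}]},
    {"choices": [{"delta": {}}]},
])
print(assemble_stream(fake_stream))  # Hello!
```

With a real call, the chunks come from `llm.completion(model=..., messages=..., stream=True)` and are `ChatCompletionChunk` objects with attribute access (`chunk.choices[0].delta.content`) rather than dicts.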
### `responses()` / `aresponses()`

Create a response using the OpenResponses API. See the Responses reference.
```python
def responses(self, **kwargs) -> ResponseResource | Response | Iterator[ResponseStreamEvent]

async def aresponses(self, **kwargs) -> ResponseResource | Response | AsyncIterator[ResponseStreamEvent]
```

### `messages()` / `amessages()`

Create a message using the Anthropic Messages API format. All providers support this through automatic conversion.
```python
def messages(self, **kwargs) -> MessageResponse | Iterator[MessageStreamEvent]

async def amessages(self, model, messages, max_tokens, **kwargs) -> MessageResponse | AsyncIterator[MessageStreamEvent]
```

### `list_models()` / `alist_models()`

List available models for this provider. See the List Models reference.
```python
def list_models(self, **kwargs) -> Sequence[Model]

async def alist_models(self, **kwargs) -> Sequence[Model]
```

### `create_batch()` / `acreate_batch()`

Create a batch job. See the Batch reference.
```python
def create_batch(self, **kwargs) -> Batch

async def acreate_batch(self, input_file_path, endpoint, completion_window="24h", metadata=None, **kwargs) -> Batch
```

### `get_provider_metadata()`

Returns metadata for this provider instance's class.

```python
def get_provider_metadata() -> ProviderMetadata
```

**Returns:** A `ProviderMetadata` object describing the provider's capabilities.
```python
llm = AnyLLM.create("mistral")
meta = llm.get_provider_metadata()
print(f"Supports streaming: {meta.streaming}")
print(f"Supports embedding: {meta.embedding}")
print(f"Supports responses: {meta.responses}")
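Combining these capability flags with `AnyLLM.get_all_provider_metadata()`, you could filter providers by what they support. The sketch below uses a stand-in dataclass with only the attributes shown on this page (`name`, `streaming`, `embedding`, `responses`) and made-up capability values, not the library's actual metadata:

```python
from dataclasses import dataclass

@dataclass
class Meta:
    """Stand-in for ProviderMetadata; fields mirror the attributes
    used in the examples above, values are illustrative."""
    name: str
    streaming: bool
    embedding: bool
    responses: bool

def providers_with(metas, capability):
    """Return the names of providers whose given capability flag is set."""
    return [m.name for m in metas if getattr(m, capability)]

metas = [
    Meta("anthropic", streaming=True, embedding=False, responses=False),
    Meta("mistral", streaming=True, embedding=True, responses=False),
    Meta("openai", streaming=True, embedding=True, responses=True),
]
print(providers_with(metas, "embedding"))  # ['mistral', 'openai']
```

With any-llm installed, the same filter would run over `AnyLLM.get_all_provider_metadata()` instead of the stand-in list.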