
Exceptions

any-llm provides a unified exception hierarchy so you can handle errors consistently regardless of which provider is being used. When unified exceptions are enabled, provider-specific SDK errors are automatically mapped to the appropriate any-llm exception type.

All exceptions inherit from AnyLLMError:

AnyLLMError
├── AuthenticationError
├── ContentFilterError
├── ContextLengthExceededError
├── InvalidRequestError
├── MissingApiKeyError
├── ModelNotFoundError
├── ProviderError
├── RateLimitError
├── UnsupportedParameterError
└── UnsupportedProviderError

Base exception for all any-llm errors. All custom exceptions in any-llm inherit from this class. It preserves the original exception for debugging while providing a unified interface.

class AnyLLMError(Exception):
    def __init__(
        self,
        message: str | None = None,
        original_exception: Exception | None = None,
        provider_name: str | None = None,
    ) -> None: ...
Attribute           Type              Description
message             str               Human-readable error message.
original_exception  Exception | None  The original SDK exception that triggered this error.
provider_name       str | None        Name of the provider that raised the error (if available).

The string representation includes the provider name when available: "[openai] Rate limit exceeded".

Raised when the API rate limit is exceeded.

class RateLimitError(AnyLLMError): ...

Default message: "Rate limit exceeded"

Raised when authentication with the provider fails (invalid or missing API key).

class AuthenticationError(AnyLLMError): ...

Default message: "Authentication failed"

Raised when the request to the provider is malformed or contains invalid parameters.

class InvalidRequestError(AnyLLMError): ...

Default message: "Invalid request"

Raised when the provider encounters an internal error (5xx-class errors).

class ProviderError(AnyLLMError): ...

Default message: "Provider error"

Raised when content is blocked by the provider’s safety filter.

class ContentFilterError(AnyLLMError): ...

Default message: "Content blocked by safety filter"

Raised when the requested model is not found or not available.

class ModelNotFoundError(AnyLLMError): ...

Default message: "Model not found"

Raised when the input exceeds the model’s maximum context length.

class ContextLengthExceededError(AnyLLMError): ...

Default message: "Context length exceeded"

Raised when a required API key is not provided via the parameter or environment variable.

class MissingApiKeyError(AnyLLMError):
def __init__(self, provider_name: str, env_var_name: str) -> None: ...
Attribute      Type  Description
provider_name  str   Name of the provider requiring the key.
env_var_name   str   Environment variable name that was checked.

Example message: "No openai API key provided. Please provide it in the config or set the OPENAI_API_KEY environment variable."
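The message format above can be sketched with a stand-in class that reproduces it from the two constructor arguments (illustrative only; real code would import `MissingApiKeyError` from `any_llm.exceptions`):

```python
class MissingApiKeyError(Exception):
    """Stand-in sketch reproducing the documented message format."""

    def __init__(self, provider_name: str, env_var_name: str) -> None:
        self.provider_name = provider_name
        self.env_var_name = env_var_name
        super().__init__(
            f"No {provider_name} API key provided. Please provide it in the config "
            f"or set the {env_var_name} environment variable."
        )


err = MissingApiKeyError("openai", "OPENAI_API_KEY")
print(err)
# No openai API key provided. Please provide it in the config
# or set the OPENAI_API_KEY environment variable.
```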

Raised when an unsupported provider is specified.

class UnsupportedProviderError(AnyLLMError):
def __init__(self, provider_key: str, supported_providers: list[str]) -> None: ...
Attribute            Type       Description
provider_key         str        The unsupported provider key that was specified.
supported_providers  list[str]  List of valid provider keys.

Raised when a parameter is not supported by the provider.

class UnsupportedParameterError(AnyLLMError):
def __init__(self, parameter_name: str, provider_name: str, additional_message: str | None = None) -> None: ...
Attribute           Type        Description
parameter_name      str         The unsupported parameter name.
provider_name       str         Name of the provider (also accessible via the inherited provider_name attribute).
additional_message  str | None  Optional additional context for the error.

Catch the specific exceptions you can act on first, then fall back to the base class:

from any_llm import completion
from any_llm.exceptions import (
    AnyLLMError,
    AuthenticationError,
    RateLimitError,
    ContextLengthExceededError,
)

try:
    response = completion(
        model="gpt-4.1-mini",
        provider="openai",
        messages=[{"role": "user", "content": "Hello!"}],
    )
except RateLimitError as e:
    print(f"Rate limited by {e.provider_name}: {e.message}")
    # Access the original provider exception for details
    print(f"Original: {e.original_exception}")
except AuthenticationError as e:
    print(f"Auth failed: {e.message}")
except ContextLengthExceededError as e:
    print(f"Input too long: {e.message}")
except AnyLLMError as e:
    # Catch-all for any other any-llm error
    print(f"Error: {e}")
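A common pattern on top of this hierarchy is retrying on RateLimitError with exponential backoff. The sketch below uses a stand-in exception class so it runs standalone; in real code you would catch `any_llm.exceptions.RateLimitError` around a `completion(...)` call instead:

```python
import time


class RateLimitError(Exception):
    """Stand-in for any_llm.exceptions.RateLimitError."""


def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying on RateLimitError with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except RateLimitError:
            if attempt == attempts - 1:
                raise  # out of attempts; let the caller handle it
            time.sleep(base_delay * 2 ** attempt)


# Simulated call that is rate-limited twice, then succeeds.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("Rate limit exceeded")
    return "ok"

print(with_retries(flaky))  # ok
```

Other exception types (AuthenticationError, InvalidRequestError) are usually not worth retrying, since the same request will fail the same way.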