# Batch

> **Experimental API:** The Batch API is experimental and subject to breaking changes in future versions. Use it with caution in production environments.
The Batch API allows you to process multiple requests asynchronously at a lower cost.
## File Path Interface
The any-llm batch API requires you to pass a path to a local JSONL file containing your batch requests. The provider implementation automatically handles uploading and file management as needed.
Different providers handle batch processing differently:
- OpenAI: Requires uploading a file first, then creating a batch with the file ID
- Anthropic (future): Expects file content passed directly in the request
- Other providers: May have their own unique requirements
By accepting a local file path, any-llm abstracts these provider differences and handles the implementation details automatically.
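As a concrete sketch, the snippet below builds a two-request input file in the OpenAI-style batch JSONL format; the `custom_id`/`method`/`url`/`body` field names follow OpenAI's convention, and the model name is illustrative:

```python
import json

# Two chat-completion requests, one JSON object per line.
# Field names follow OpenAI's batch input convention; any-llm reads this
# local file and handles provider-specific upload and translation.
requests = [
    {
        "custom_id": f"request-{i}",
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "gpt-4o-mini",  # illustrative model name
            "messages": [{"role": "user", "content": prompt}],
        },
    }
    for i, prompt in enumerate(["What is 2 + 2?", "Name a prime number."])
]

with open("batch_input.jsonl", "w") as f:
    for request in requests:
        f.write(json.dumps(request) + "\n")
```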
## any_llm.api.create_batch

`create_batch(provider, input_file_path, endpoint, *, completion_window='24h', metadata=None, api_key=None, api_base=None, client_args=None, **kwargs)`

Create a batch job.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `provider` | `str \| LLMProvider` | Provider name to use for the request (e.g., `'openai'`, `'mistral'`) | *required* |
| `input_file_path` | `str` | Path to a local file containing batch requests in JSONL format | *required* |
| `endpoint` | `str` | The endpoint to be used for all requests (e.g., `'/v1/chat/completions'`) | *required* |
| `completion_window` | `str` | The time frame within which the batch should be processed | `'24h'` |
| `metadata` | `dict[str, str] \| None` | Optional custom metadata for the batch | `None` |
| `api_key` | `str \| None` | API key for the provider | `None` |
| `api_base` | `str \| None` | Base URL for the provider API | `None` |
| `client_args` | `dict[str, Any] \| None` | Additional provider-specific arguments for client instantiation | `None` |
| `**kwargs` | `Any` | Additional provider-specific arguments | `{}` |
Returns:
| Type | Description | 
|---|---|
| Batch | The created batch object | 
Source code in `src/any_llm/api.py`
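A minimal usage sketch, assuming an OpenAI-style setup; the file path and metadata values are illustrative, and the call is guarded by an environment check so the snippet is safe to run without credentials:

```python
import os

provider = "openai"
input_file_path = "batch_input.jsonl"  # local JSONL file, one request per line
endpoint = "/v1/chat/completions"
metadata = {"job": "nightly-eval"}  # illustrative metadata

# Only contact the provider when credentials are actually configured.
if os.environ.get("OPENAI_API_KEY"):
    from any_llm.api import create_batch

    batch = create_batch(provider, input_file_path, endpoint, metadata=metadata)
    print(batch.id)
```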
## any_llm.api.acreate_batch

`async acreate_batch(provider, input_file_path, endpoint, *, completion_window='24h', metadata=None, api_key=None, api_base=None, client_args=None, **kwargs)`

Create a batch job asynchronously.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `provider` | `str \| LLMProvider` | Provider name to use for the request (e.g., `'openai'`, `'mistral'`) | *required* |
| `input_file_path` | `str` | Path to a local file containing batch requests in JSONL format | *required* |
| `endpoint` | `str` | The endpoint to be used for all requests (e.g., `'/v1/chat/completions'`) | *required* |
| `completion_window` | `str` | The time frame within which the batch should be processed | `'24h'` |
| `metadata` | `dict[str, str] \| None` | Optional custom metadata for the batch | `None` |
| `api_key` | `str \| None` | API key for the provider | `None` |
| `api_base` | `str \| None` | Base URL for the provider API | `None` |
| `client_args` | `dict[str, Any] \| None` | Additional provider-specific arguments for client instantiation | `None` |
| `**kwargs` | `Any` | Additional provider-specific arguments | `{}` |
Returns:
| Type | Description | 
|---|---|
| Batch | The created batch object | 
Source code in `src/any_llm/api.py`
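The async variant is awaited inside an event loop; the sketch below mirrors the synchronous example and is likewise guarded so it only contacts the provider when credentials are present:

```python
import asyncio
import os


async def submit_batch() -> None:
    # Guarded: skip the network call when no credentials are configured.
    if not os.environ.get("OPENAI_API_KEY"):
        return
    from any_llm.api import acreate_batch

    batch = await acreate_batch(
        "openai",
        "batch_input.jsonl",  # local JSONL input file
        "/v1/chat/completions",
        completion_window="24h",
    )
    print(batch.id)


asyncio.run(submit_batch())
```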
## any_llm.api.retrieve_batch

`retrieve_batch(provider, batch_id, *, api_key=None, api_base=None, client_args=None, **kwargs)`

Retrieve a batch job.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `provider` | `str \| LLMProvider` | Provider name to use for the request (e.g., `'openai'`, `'mistral'`) | *required* |
| `batch_id` | `str` | The ID of the batch to retrieve | *required* |
| `api_key` | `str \| None` | API key for the provider | `None` |
| `api_base` | `str \| None` | Base URL for the provider API | `None` |
| `client_args` | `dict[str, Any] \| None` | Additional provider-specific arguments for client instantiation | `None` |
| `**kwargs` | `Any` | Additional provider-specific arguments | `{}` |
Returns:
| Type | Description | 
|---|---|
| Batch | The batch object | 
Source code in `src/any_llm/api.py`
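Because batch jobs run asynchronously, `retrieve_batch` is typically called in a polling loop until the job reaches a terminal status. The helper below is a hypothetical sketch: it accepts any zero-argument fetch callable (e.g. `lambda: retrieve_batch('openai', batch_id)`), and the status names are an assumption based on the OpenAI batch lifecycle:

```python
import time

# Assumed terminal statuses, following the OpenAI batch lifecycle.
TERMINAL_STATUSES = {"completed", "failed", "expired", "cancelled"}


def wait_for_batch(fetch, poll_seconds=30.0, max_polls=1000):
    """Call fetch() until the returned batch reaches a terminal status.

    fetch is any zero-argument callable returning an object with a
    .status attribute, e.g. lambda: retrieve_batch("openai", batch_id).
    """
    for _ in range(max_polls):
        batch = fetch()
        if batch.status in TERMINAL_STATUSES:
            return batch
        time.sleep(poll_seconds)
    raise TimeoutError("batch did not reach a terminal status in time")
```

With any-llm this would be called as `wait_for_batch(lambda: retrieve_batch('openai', batch.id), poll_seconds=60)`.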
## any_llm.api.aretrieve_batch

`async aretrieve_batch(provider, batch_id, *, api_key=None, api_base=None, client_args=None, **kwargs)`

Retrieve a batch job asynchronously.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `provider` | `str \| LLMProvider` | Provider name to use for the request (e.g., `'openai'`, `'mistral'`) | *required* |
| `batch_id` | `str` | The ID of the batch to retrieve | *required* |
| `api_key` | `str \| None` | API key for the provider | `None` |
| `api_base` | `str \| None` | Base URL for the provider API | `None` |
| `client_args` | `dict[str, Any] \| None` | Additional provider-specific arguments for client instantiation | `None` |
| `**kwargs` | `Any` | Additional provider-specific arguments | `{}` |
Returns:
| Type | Description | 
|---|---|
| Batch | The batch object | 
Source code in `src/any_llm/api.py`
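The same polling pattern works asynchronously with `aretrieve_batch`, using `asyncio.sleep` so other tasks can run between polls. As above, the helper is a hypothetical sketch and the terminal status names are assumed from the OpenAI batch lifecycle:

```python
import asyncio

# Assumed terminal statuses, following the OpenAI batch lifecycle.
TERMINAL_STATUSES = {"completed", "failed", "expired", "cancelled"}


async def await_batch(fetch, poll_seconds=30.0, max_polls=1000):
    """Await fetch() until the returned batch reaches a terminal status.

    fetch is a zero-argument coroutine function, e.g.
    lambda: aretrieve_batch("openai", batch_id).
    """
    for _ in range(max_polls):
        batch = await fetch()
        if batch.status in TERMINAL_STATUSES:
            return batch
        await asyncio.sleep(poll_seconds)
    raise TimeoutError("batch did not reach a terminal status in time")
```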
## any_llm.api.cancel_batch

`cancel_batch(provider, batch_id, *, api_key=None, api_base=None, client_args=None, **kwargs)`

Cancel a batch job.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `provider` | `str \| LLMProvider` | Provider name to use for the request (e.g., `'openai'`, `'mistral'`) | *required* |
| `batch_id` | `str` | The ID of the batch to cancel | *required* |
| `api_key` | `str \| None` | API key for the provider | `None` |
| `api_base` | `str \| None` | Base URL for the provider API | `None` |
| `client_args` | `dict[str, Any] \| None` | Additional provider-specific arguments for client instantiation | `None` |
| `**kwargs` | `Any` | Additional provider-specific arguments | `{}` |
Returns:
| Type | Description | 
|---|---|
| Batch | The cancelled batch object | 
Source code in `src/any_llm/api.py`
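Cancellation only makes sense while a job is still running, so a small guard helps avoid cancelling jobs that have already finished. The helper and the active status names below are a sketch assuming the OpenAI batch lifecycle:

```python
# Assumed non-terminal statuses, following the OpenAI batch lifecycle.
ACTIVE_STATUSES = {"validating", "in_progress", "finalizing"}


def cancel_if_active(batch, cancel):
    """Cancel a batch only if it is still running.

    cancel is any callable taking a batch ID, e.g.
    lambda batch_id: cancel_batch("openai", batch_id).
    Returns the cancelled batch, or the original batch unchanged.
    """
    if batch.status in ACTIVE_STATUSES:
        return cancel(batch.id)
    return batch
```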
## any_llm.api.acancel_batch

`async acancel_batch(provider, batch_id, *, api_key=None, api_base=None, client_args=None, **kwargs)`

Cancel a batch job asynchronously.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `provider` | `str \| LLMProvider` | Provider name to use for the request (e.g., `'openai'`, `'mistral'`) | *required* |
| `batch_id` | `str` | The ID of the batch to cancel | *required* |
| `api_key` | `str \| None` | API key for the provider | `None` |
| `api_base` | `str \| None` | Base URL for the provider API | `None` |
| `client_args` | `dict[str, Any] \| None` | Additional provider-specific arguments for client instantiation | `None` |
| `**kwargs` | `Any` | Additional provider-specific arguments | `{}` |
Returns:
| Type | Description | 
|---|---|
| Batch | The cancelled batch object | 
Source code in `src/any_llm/api.py`
## any_llm.api.list_batches

`list_batches(provider, *, after=None, limit=None, api_key=None, api_base=None, client_args=None, **kwargs)`

List batch jobs.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `provider` | `str \| LLMProvider` | Provider name to use for the request (e.g., `'openai'`, `'mistral'`) | *required* |
| `after` | `str \| None` | A cursor for pagination; returns batches after this batch ID | `None` |
| `limit` | `int \| None` | Maximum number of batches to return (default: 20) | `None` |
| `api_key` | `str \| None` | API key for the provider | `None` |
| `api_base` | `str \| None` | Base URL for the provider API | `None` |
| `client_args` | `dict[str, Any] \| None` | Additional provider-specific arguments for client instantiation | `None` |
| `**kwargs` | `Any` | Additional provider-specific arguments | `{}` |
Returns:
| Type | Description | 
|---|---|
| Sequence[Batch] | A list of batch objects | 
Source code in `src/any_llm/api.py`
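Since `list_batches` returns one page at a time, walking the full history means following the `after` cursor page by page. The helper below is a hypothetical sketch that accepts any page-listing callable (e.g. `lambda **kw: list_batches('openai', **kw)`) and assumes the next cursor is the last batch ID of the previous page, as described for the `after` parameter:

```python
def iter_all_batches(list_page, page_size=20):
    """Yield every batch by following the after-cursor across pages.

    list_page is any callable accepting after= and limit= keyword
    arguments and returning a sequence of batch objects, e.g.
    lambda **kw: list_batches("openai", **kw).
    """
    after = None
    while True:
        page = list_page(after=after, limit=page_size)
        if not page:
            return
        yield from page
        if len(page) < page_size:
            return  # short page: no more results
        after = page[-1].id  # cursor for the next page
```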
## any_llm.api.alist_batches

`async alist_batches(provider, *, after=None, limit=None, api_key=None, api_base=None, client_args=None, **kwargs)`

List batch jobs asynchronously.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `provider` | `str \| LLMProvider` | Provider name to use for the request (e.g., `'openai'`, `'mistral'`) | *required* |
| `after` | `str \| None` | A cursor for pagination; returns batches after this batch ID | `None` |
| `limit` | `int \| None` | Maximum number of batches to return (default: 20) | `None` |
| `api_key` | `str \| None` | API key for the provider | `None` |
| `api_base` | `str \| None` | Base URL for the provider API | `None` |
| `client_args` | `dict[str, Any] \| None` | Additional provider-specific arguments for client instantiation | `None` |
| `**kwargs` | `Any` | Additional provider-specific arguments | `{}` |
Returns:
| Type | Description | 
|---|---|
| Sequence[Batch] | A list of batch objects |
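A guarded async sketch of listing recent batches; as with the other examples, the provider name and limit are illustrative and the network call only happens when credentials are configured:

```python
import asyncio
import os


async def show_recent_batches() -> None:
    # Guarded: skip the network call when no credentials are configured.
    if not os.environ.get("OPENAI_API_KEY"):
        return
    from any_llm.api import alist_batches

    batches = await alist_batches("openai", limit=10)
    for batch in batches:
        print(batch.id, batch.status)


asyncio.run(show_recent_batches())
```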