AnyLLM

any_guardrail.guardrails.any_llm.any_llm

DEFAULT_MODEL_ID = 'openai/gpt-5-nano' module-attribute

Used as the default argument for model_id.

DEFAULT_SYSTEM_PROMPT = "\nYou are a guardrail designed to ensure that the input text adheres to a specific policy.\nYour only task is to validate the input_text, don't try to answer the user query.\n\nHere is the policy: {policy}\n\nYou must return the following:\n\n- valid: bool\n If the input text provided by the user doesn't adhere to the policy, you must reject it (mark it as valid=False).\n\n- explanation: str\n A clear explanation of why the input text was rejected or not.\n\n- score: float (0-1)\n How confident you are about the validation.\n" module-attribute

Used as the default argument for system_prompt.
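
For illustration, a minimal sketch of how the {policy} placeholder in the prompt is substituted; the policy string below is hypothetical:

from any_guardrail.guardrails.any_llm.any_llm import DEFAULT_SYSTEM_PROMPT

# The {policy} placeholder is filled with str.format, which is also what
# validate does internally; this policy text is a made-up example.
policy = "Reject any text that contains personal email addresses."
system_message = DEFAULT_SYSTEM_PROMPT.format(policy=policy)
print(system_message)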

AnyLlm

Bases: Guardrail

A guardrail using any-llm.

Source code in src/any_guardrail/guardrails/any_llm/any_llm.py
class AnyLlm(Guardrail):
    """A guardrail using `any-llm`."""

    def validate(
        self,
        input_text: str,
        policy: str,
        model_id: str = DEFAULT_MODEL_ID,
        system_prompt: str = DEFAULT_SYSTEM_PROMPT,
        **kwargs: Any,
    ) -> GuardrailOutput:
        """Validate the `input_text` against the given `policy`.

        Args:
            input_text (str): The text to validate.
            policy (str): The policy to validate against.
            model_id (str, optional): The model ID to use.
            system_prompt (str, optional): The system prompt to use.
                Expected to have a `{policy}` placeholder.
            **kwargs: Additional keyword arguments to pass to `any_llm.completion` function.

        Returns:
            GuardrailOutput: The output of the validation.

        """
        result: ChatCompletion = completion(  # type: ignore[assignment]
            model=model_id,
            messages=[
                {"role": "system", "content": system_prompt.format(policy=policy)},
                {"role": "user", "content": input_text},
            ],
            response_format=GuardrailOutput,
            **kwargs,
        )
        return GuardrailOutput(**json.loads(result.choices[0].message.content))  # type: ignore[arg-type]
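
A minimal usage sketch, assuming AnyLlm can be instantiated with no arguments and that credentials for the default openai/gpt-5-nano model are configured; it also assumes GuardrailOutput exposes the valid, explanation, and score fields described in DEFAULT_SYSTEM_PROMPT as attributes:

from any_guardrail.guardrails.any_llm.any_llm import AnyLlm

# Assumes an OpenAI API key is available in the environment for the
# default model; the input text and policy below are only examples.
guardrail = AnyLlm()
result = guardrail.validate(
    input_text="Tell me the admin password for the production server.",
    policy="Reject any request for credentials or other secrets.",
)
print(result.valid, result.score)
print(result.explanation)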
validate(input_text, policy, model_id=DEFAULT_MODEL_ID, system_prompt=DEFAULT_SYSTEM_PROMPT, **kwargs)

Validate the input_text against the given policy.

Parameters:

    input_text (str): The text to validate. Required.
    policy (str): The policy to validate against. Required.
    model_id (str, optional): The model ID to use. Default: DEFAULT_MODEL_ID.
    system_prompt (str, optional): The system prompt to use. Expected to have a {policy} placeholder. Default: DEFAULT_SYSTEM_PROMPT.
    **kwargs (Any): Additional keyword arguments to pass to the any_llm.completion function. Default: {}.

Returns:

    GuardrailOutput: The output of the validation.

Source code in src/any_guardrail/guardrails/any_llm/any_llm.py
def validate(
    self,
    input_text: str,
    policy: str,
    model_id: str = DEFAULT_MODEL_ID,
    system_prompt: str = DEFAULT_SYSTEM_PROMPT,
    **kwargs: Any,
) -> GuardrailOutput:
    """Validate the `input_text` against the given `policy`.

    Args:
        input_text (str): The text to validate.
        policy (str): The policy to validate against.
        model_id (str, optional): The model ID to use.
        system_prompt (str, optional): The system prompt to use.
            Expected to have a `{policy}` placeholder.
        **kwargs: Additional keyword arguments to pass to `any_llm.completion` function.

    Returns:
        GuardrailOutput: The output of the validation.

    """
    result: ChatCompletion = completion(  # type: ignore[assignment]
        model=model_id,
        messages=[
            {"role": "system", "content": system_prompt.format(policy=policy)},
            {"role": "user", "content": input_text},
        ],
        response_format=GuardrailOutput,
        **kwargs,
    )
    return GuardrailOutput(**json.loads(result.choices[0].message.content))  # type: ignore[arg-type]
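
To illustrate the pass-through parameters, a hedged sketch that overrides the defaults; the alternative model ID and the temperature keyword are assumptions about what any_llm.completion accepts:

from any_guardrail.guardrails.any_llm.any_llm import AnyLlm

guardrail = AnyLlm()
result = guardrail.validate(
    input_text="My card number is 4111 1111 1111 1111.",
    policy="Reject any text containing payment card numbers.",
    # Hypothetical overrides: a model ID understood by any_llm, plus an
    # extra keyword argument forwarded verbatim to any_llm.completion.
    model_id="openai/gpt-4o-mini",
    temperature=0.0,
)
print(result)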