AnyLLM
any_guardrail.guardrails.any_llm.any_llm
DEFAULT_MODEL_ID = 'openai/gpt-5-nano'
module-attribute
Will be used as default argument for model_id
DEFAULT_SYSTEM_PROMPT = "\nYou are a guardrail designed to ensure that the input text adheres to a specific policy.\nYour only task is to validate the input_text, don't try to answer the user query.\n\nHere is the policy: {policy}\n\nYou must return the following:\n\n- valid: bool\n If the input text provided by the user doesn't adhere to the policy, you must reject it (mark it as valid=False).\n\n- explanation: str\n A clear explanation of why the input text was rejected or not.\n\n- score: float (0-1)\n How confident you are about the validation.\n"
module-attribute
Will be used as default argument for system_prompt
AnyLlm
Bases: Guardrail
A guardrail using any-llm
.
Source code in src/any_guardrail/guardrails/any_llm/any_llm.py
validate(input_text, policy, model_id=DEFAULT_MODEL_ID, system_prompt=DEFAULT_SYSTEM_PROMPT, **kwargs)
Validate the input_text
against the given policy
.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_text
|
str
|
The text to validate. |
required |
policy
|
str
|
The policy to validate against. |
required |
model_id
|
str
|
The model ID to use. |
DEFAULT_MODEL_ID
|
system_prompt
|
str
|
The system prompt to use.
Expected to have a |
DEFAULT_SYSTEM_PROMPT
|
**kwargs
|
Any
|
Additional keyword arguments to pass to |
{}
|
Returns:
Name | Type | Description |
---|---|---|
GuardrailOutput |
GuardrailOutput
|
The output of the validation. |