# Usage
## 1) Run the Agent-Factory Server
You can run the Agent-Factory server either directly from a terminal ("locally") or using Docker.
### a. Locally
To run the server locally, execute the following command from the project root:

```shell
cd src/agent_factory && uv run . --host 0.0.0.0 --port 8080 --nochat
```
The server you start makes Agent-Factory's functionality available via the A2A protocol. You can reach the Manufacturing Agent's Card at `http://localhost:8080/.well-known/agent.json`.
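To check that the server is up, you can fetch and parse the Agent Card programmatically. The sketch below uses only the Python standard library; the helper names are ours, not part of Agent-Factory:

```python
import json
import urllib.request

AGENT_CARD_PATH = "/.well-known/agent.json"  # A2A well-known Agent Card location

def agent_card_url(host: str, port: int) -> str:
    """Build the Agent Card URL for a running Agent-Factory server."""
    return f"http://{host}:{port}{AGENT_CARD_PATH}"

def fetch_agent_card(host: str = "localhost", port: int = 8080) -> dict:
    """Fetch and parse the Agent Card; requires the server to be running."""
    with urllib.request.urlopen(agent_card_url(host, port)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```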
In addition to `--host` and `--port`, you can also pass the following arguments:

- `--chat` vs `--nochat`: `chat` mode enables multi-turn conversations, while `nochat` mode is for one-shot tasks (default: `chat`).
- `--framework`: the agent framework to use (default: `openai`).
- `--model`: the model ID to use (default: `o3`).
- `--log_level`: the logging level (default: `info`).
> [!NOTE]
> Visit the any-agent documentation for more details on the supported frameworks.
### b. Docker
The Makefile enables you to run the server using Docker. Before starting, make sure that Docker Desktop is installed and running.
Run the server (this will also build the image if needed):

```shell
make run
```

As with the local run, the Agent Card is then available at `http://localhost:8080/.well-known/agent.json`.
> [!NOTE]
> You can modify the behavior of the server by passing environment variables to the `make run` command. For example, to run the server with the `tinyagent` framework and a specific model, in chat mode, you can use:
>
> ```shell
> make run FRAMEWORK=tinyagent MODEL=mistral/mistral-small-latest CHAT=1
> ```
> [!NOTE]
> After generating your agents using the `agent-factory` command (described below), don't forget to stop the server using:
>
> ```shell
> make stop
> ```
## (Optional) Setting Up Storage Backends
By default, agent outputs are saved to the local filesystem. You can configure the agent factory to use AWS S3 or a local MinIO instance for storage.
### S3 & MinIO Configuration
To customize the S3/MinIO configuration, you can create a `.env` file in the project root and override the following values from `.default.env`:
- `STORAGE_BACKEND`: set to `s3` for AWS S3 or `minio` for MinIO.
- `AWS_ACCESS_KEY_ID`: your AWS or MinIO access key.
- `AWS_SECRET_ACCESS_KEY`: your AWS or MinIO secret key.
- `AWS_REGION`: the AWS region for your S3 bucket.
- `S3_BUCKET_NAME`: the name of the S3 or MinIO bucket.
- `AWS_ENDPOINT_URL`: (for MinIO only) the URL of your MinIO instance (e.g., `http://localhost:9000`).
When using these settings, the application will automatically create the specified bucket if it does not already exist.
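Both backends speak the S3 API, so the only real difference in client configuration is the explicit endpoint for MinIO. The sketch below shows how the variables above might map onto boto3-style client keyword arguments; the helper function is illustrative, not part of the project:

```python
def s3_client_kwargs(env: dict) -> dict:
    """Map the storage env vars from .default.env onto boto3-style client kwargs.

    Illustrative only: variable names follow .default.env, but the actual
    client construction inside agent-factory may differ.
    """
    kwargs = {
        "aws_access_key_id": env["AWS_ACCESS_KEY_ID"],
        "aws_secret_access_key": env["AWS_SECRET_ACCESS_KEY"],
        "region_name": env.get("AWS_REGION"),
    }
    if env.get("STORAGE_BACKEND") == "minio":
        # MinIO needs an explicit endpoint URL; AWS S3 does not.
        kwargs["endpoint_url"] = env["AWS_ENDPOINT_URL"]
    return kwargs
```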
### Running a Local MinIO Instance
If you do not have access to AWS S3, you can run a local MinIO instance using Docker:
```shell
docker run -p 9000:9000 -p 9091:9091 \
  --name agent-factory-minio-dev \
  -e "MINIO_ROOT_USER=agent-factory" \
  -e "MINIO_ROOT_PASSWORD=agent-factory" \
  -v agent-factory-minio:/data \
  quay.io/minio/minio server /data --console-address ":9091"
```
The named `agent-factory-minio` volume persists the MinIO server's data, ensuring it is not lost when the container is stopped or removed.
Once the container is running, you can access the MinIO console at `http://localhost:9091`. The login credentials are `agent-factory` for both the username and password.
## 2) Generate an Agentic Workflow
> [!IMPORTANT]
> Always run the server in non-chat mode (`--nochat`) when generating agents using the `agent-factory` command. For multi-turn conversations, see the section on Multi-Turn Conversations.
Once the server is running, from the project root directory, run the `agent-factory` CLI tool with your desired workflow prompt:
```shell
uv run agent-factory "Summarize text content from a given webpage URL"
```
The client will send the message to the server, print the response, and save the generated agent's files (`agent.py`, `README.md`, `requirements.txt`, and `agent_parameters.json`) into a new directory inside the `generated_workflows` directory.
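A quick way to confirm that generation succeeded is to check the output directory for the four expected files. This small sketch is our own convenience helper, not part of the project:

```python
from pathlib import Path

# Files the factory is expected to write for each generated agent (per the docs above).
EXPECTED_FILES = {"agent.py", "README.md", "requirements.txt", "agent_parameters.json"}

def missing_outputs(agent_dir: str) -> set[str]:
    """Return the expected generated files that are absent from agent_dir."""
    directory = Path(agent_dir)
    if not directory.is_dir():
        return set(EXPECTED_FILES)
    present = {p.name for p in directory.iterdir()}
    return EXPECTED_FILES - present
```

An empty result means the generated agent directory is complete.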
## 3) Set Up Your Credentials
Depending on the generated agent, credentials may be required to connect to the MCP servers it uses.

Set the environment variables in the `.env` file that has been created for you inside the target agent's directory. Add other environment variables as needed, for example those for your LLM provider.
## 4) Start the `mcpd` Daemon
In the target agent's directory, run the `mcpd` daemon to initialize and connect to the required local MCP servers. The command below also exports the environment variables from the `.env` file so they are available to the `mcpd` daemon:
```shell
export $(cat .env | xargs) && mcpd daemon --log-level=DEBUG --log-path=$(pwd)/mcpd.log --dev --runtime-file secrets.prod.toml
```
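For reference, the `export $(cat .env | xargs)` part simply loads `KEY=VALUE` pairs from the file into the environment. A Python sketch of the same idea (illustrative only; note the shell one-liner, unlike this parser, breaks on values containing spaces):

```python
def parse_env_file(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and '#' comments.

    Mirrors what `export $(cat .env | xargs)` does, but also tolerates
    quoted values.
    """
    env = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if not sep:
            continue  # ignore malformed lines without '='
        env[key.strip()] = value.strip().strip("'\"")
    return env
```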
## 5) Run the Generated Workflow
To run the generated agent, navigate to the directory where the agent was saved and execute:
```shell
uv run --with-requirements requirements.txt --python 3.13 python agent.py --help
```
From the output, you can see whether any positional arguments are required. Then adjust the command below accordingly:
```shell
uv run --with-requirements requirements.txt --python 3.13 python agent.py --arg1 "value1"
```
> [!NOTE]
> The agent-factory has been instructed to set `max_turns` (the maximum number of steps the generated agent can take to complete the workflow) to 20. Inspect the generated agent code and override this value if needed, e.g., if the generated agent's run fails with an `AgentRunError` caused by `MaxTurnsExceeded`.
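If you invoke a generated agent from scripts, it can be convenient to assemble the `uv run` command shown above programmatically. This helper is a hypothetical convenience of ours (the agent's argument names vary per generated workflow):

```python
import shlex

def build_run_command(python_version: str = "3.13", args: dict = None) -> str:
    """Assemble the `uv run` invocation from the docs; arg names are per-agent."""
    cmd = [
        "uv", "run",
        "--with-requirements", "requirements.txt",
        "--python", python_version,
        "python", "agent.py",
    ]
    for name, value in (args or {}).items():
        cmd += [f"--{name}", value]
    return shlex.join(cmd)  # quotes any values containing spaces
```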
## 6) Use the Criteria Agent to Create an Evaluation Case
Run the Criteria Agent from the project root directory with your desired evaluation case prompt:
```shell
uv run -m eval.generate_evaluation_case path/to/the/generated/agent
```
This will generate a JSON file in the generated agent's directory with evaluation criteria.
## 7) Use the Agent Judge to Evaluate the Agent Against the Evaluation Case
Finally, evaluate the agent's execution trace against the generated evaluation case:
```shell
uv run -m eval.run_generated_agent_evaluation path/to/the/generated/agent
```
This command will display the evaluation criteria and show how the agent performed on each.