A single math task, solving a competition problem with a calculator tool, built four ways in one cookbook. Use this to compare LangGraph, OpenAI Agents SDK, smolagents, and Strands on the same dataset, or as a template for plugging your own framework into rLLM.

The point of this cookbook is to make the AgentFlow + model-gateway architecture concrete. Every framework integration collapses to roughly six lines of agent body that point the framework's LLM client at config.base_url and return None. The gateway captures every LLM call by URL-routed session, the framework auto-builds an Episode from those captured traces, and the evaluator parses the answer out of the resulting trajectory. No callback handler, no traced chat client, no manual Step / Trajectory construction.
Create agentflow/<framework>.py with one @rllm.rollout(name="<framework>-math") function that wires the framework's LLM client to config.base_url, runs the agent on task.instruction, and returns None.
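A runnable stand-in sketch of the agent-body shape this step describes. Here, rollout, Config, and Task are stubs defined locally so the example runs without rLLM installed; in the real cookbook the decorator is @rllm.rollout(name="<framework>-math") and config/task are supplied by rLLM. The framework name "myframework" is illustrative.

```python
# Stand-in sketch: the real integration is ~6 lines that point the
# framework's LLM client at config.base_url and return None.
from dataclasses import dataclass

REGISTRY: dict = {}


def rollout(name: str):
    """Stand-in for @rllm.rollout: registers the agent body under a name."""
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap


@dataclass
class Config:
    base_url: str      # the gateway URL every LLM call must route through


@dataclass
class Task:
    instruction: str   # the competition problem to solve


@rollout(name="myframework-math")
def myframework_math(task: Task, config: Config):
    # A real integration points the framework's LLM client at
    # config.base_url (e.g. an OpenAI-compatible client with
    # base_url=config.base_url), runs the agent on task.instruction,
    # and returns None: the gateway's captured traces, not the return
    # value, become the Episode.
    _ = (config.base_url, task.instruction)
    return None
```

Only the decorator name, config.base_url, task.instruction, and the None return come from the text above; everything else is scaffolding to make the shape self-contained.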
Add the module to pyproject.toml's [project.entry-points."rllm.agents"] table and its [tool.setuptools] py-modules list; declare the framework's package under [project.optional-dependencies] as the <framework> extra.
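A sketch of the pyproject.toml wiring for a hypothetical framework named myframework, assuming the sections named in the step above; the package pin and the entry-point value format are illustrative assumptions, not confirmed by the cookbook:

```toml
[project.optional-dependencies]
# illustrative: the framework's SDK, installed via the extra
myframework = ["myframework-sdk"]

[project.entry-points."rllm.agents"]
# assumed value format: the module containing the @rllm.rollout function
myframework-math = "agentflow.myframework"

[tool.setuptools]
py-modules = ["agentflow.myframework"]
```

The extra name must match what you pass in the install command, e.g. cookbooks/agent_frameworks[myframework].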
Reinstall the cookbook with uv pip install --no-deps -e "cookbooks/agent_frameworks[<framework>]" and your agent shows up under rllm agent list.