This guide will help you set up rLLM on your system.
Prerequisites
rLLM requires Python >= 3.10 (Python 3.11 is required if using the tinker backend).
Install uv (recommended)
Starting with v0.2.1, rLLM’s recommended dependency manager is uv. To install uv, run:
curl -LsSf https://astral.sh/uv/install.sh | sh
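To confirm the installer placed uv on your PATH:
uv --version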
Install Python 3.11
Ensure that your system has a suitable installation of Python:
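One convenient option, since uv is already installed, is uv’s managed Python (any other Python 3.11 installation works just as well):
uv python install 3.11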
Installation methods
Choose one of the following installation methods based on your needs.
Quick install with uv
Install rLLM directly from GitHub with a single command:
uv pip install "rllm[verl] @ git+https://github.com/rllm-org/rllm.git"
Replace verl with tinker to install with the tinker backend instead:
uv pip install "rllm[tinker] @ git+https://github.com/rllm-org/rllm.git"
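For a reproducible install, the same syntax accepts a Git ref; here v0.2.1 is an assumed tag name, matching the version mentioned in this guide:
uv pip install "rllm[verl] @ git+https://github.com/rllm-org/rllm.git@v0.2.1"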
Clone and install from source
For development or to access the latest features:
Clone the repository
git clone https://github.com/rllm-org/rllm.git
cd rllm
Create a virtual environment
uv venv --python 3.11
source .venv/bin/activate
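With the environment active, python should resolve to the interpreter you just created:
python --version
# Expected output: Python 3.11.x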
Install rLLM with a training backend
rLLM supports two training backends: verl and tinker. Choose one based on your needs.
verl (GPU)
Install with the verl backend for GPU-accelerated training:
uv pip install -e .[verl]
For CUDA 12.8 specifically:
uv pip install -e .[verl] --torch-backend=cu128
The verl extra installs vLLM by default. If you prefer SGLang for sampling rollouts:
uv pip install sglang --torch-backend=cu128
For AMD ROCm or Huawei Ascend accelerators, we strongly recommend installing rLLM on top of verl’s official Docker containers.
tinker (CPU/GPU)
Install with the tinker backend for flexible training:
uv pip install -e .[tinker]
For CPU-only machines:
uv pip install -e .[tinker] --torch-backend=cpu
The tinker backend requires Python >= 3.11.
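As a quick sanity check, you can import whichever backend you installed; this assumes the packages import as verl and tinker respectively:
python -c "import verl"    # verl backend
python -c "import tinker"  # tinker backend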
Activate your environment
Be sure to activate the virtual environment before running any scripts:
source .venv/bin/activate
python your_script.py
Containerized installation
For a containerized setup with GPU support:
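The commands below reference a local image tagged rllm. If you have not built it yet, a minimal sketch, assuming the repository root contains a Dockerfile:
# Build the image from the repo root (assumes a Dockerfile is present)
docker build -t rllm .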
Create and start the container
docker create --runtime=nvidia --gpus all --net=host \
--shm-size="10g" --cap-add=SYS_ADMIN \
-v .:/workspace/rllm -v /tmp:/tmp \
--name rllm-container rllm sleep infinity
docker start rllm-container
Enter the container
docker exec -it rllm-container bash
The Docker setup includes GPU support via NVIDIA runtime and mounts the current directory to /workspace/rllm for easy development.
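To confirm the container can see your GPUs:
docker exec -it rllm-container nvidia-smi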
Install with pip and conda
While rLLM can be installed without uv, this is not recommended.
conda create -n rllm python=3.11
conda activate rllm
pip install -e .[verl]
This installation method may fail if you don’t have compatible CUDA and PyTorch versions already installed. Use uv for reliable dependency resolution.
Optional dependencies
rLLM provides additional optional dependencies for specific agent domains and framework integrations:
sdk: LiteLLM proxy integration
smolagents: Hugging Face SmolAgents integration
strands: Strands agents framework
web: Web agents (BrowserGym, Selenium, Firecrawl)
code-tools: Sandboxed code execution (E2B, Together)
swe: Software engineering tools (Docker, Kubernetes, SWEBench)
verifiers: Verifiers integration for validation
Development and utilities
dev: Development tools (pytest, ruff, mypy, mkdocs)
ui: UI components (httpx, python-multipart)
opentelemetry: OpenTelemetry SDK for observability
Install optional dependencies by adding them to the installation command:
# Install with web and code-tools extras
uv pip install -e .[verl,web,code-tools]
Advanced: Editable verl installation
If you wish to make changes to the verl backend itself:
git clone https://github.com/volcengine/verl.git
cd verl
git checkout v0.6.1
uv pip install -e .
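To verify that Python now resolves verl to your local clone rather than a site-packages copy (this assumes the package imports as verl):
python -c "import verl; print(verl.__file__)"
# The printed path should point into your verl checkout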
Verify installation
Verify that rLLM is installed correctly:
python -c "import rllm; print(rllm.__version__)"
You should see the version number printed (e.g., 0.2.1).
Next steps
Quick start: Build your first math reasoning agent in 10 minutes.
Core concepts: Learn about the key components of rLLM.
Troubleshooting
If you encounter issues during installation:
Check Python version: Ensure you’re using Python >= 3.10 (3.11 for tinker).
GPU compatibility: Verify that your CUDA version matches your PyTorch build (see the snippet below).
Dependency conflicts: Use uv instead of pip for better dependency resolution.
GitHub issues: Check the GitHub issues page for known issues.
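To check your local CUDA/PyTorch pairing, the following one-liner prints the PyTorch version, the CUDA version it was built against, and whether a GPU is visible:
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"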
For additional help, join our Slack community.