Repository: rllm-org/rllm-ui
Web interface for monitoring and analyzing rLLM training runs in real time. Think of it as a wandb dedicated to rLLM, with powerful features such as episode/trajectory search, an observability AI agent, and more.
[Screenshot: rLLM UI training overview showing real-time metrics and episode inspection]

Getting started

There are two ways to access rLLM UI:
  1. Cloud — Use our hosted service at ui.rllm-project.com (see Cloud setup below).
  2. Self-hosted — Run locally from the repository (see Self-hosted setup below).

Cloud setup

  1. Run rllm login — Run rllm login in your terminal.
  2. Sign up.
  3. Copy your API key — Copy your API key (shown once at registration) and paste it in the terminal (or save it as RLLM_API_KEY in .env).
That’s it. No database setup or other configuration is needed.
| Variable | Required | Scope | Default | Description |
|---|---|---|---|---|
| RLLM_API_KEY | Yes | Training script env | (none) | API key for authenticating training data ingestion (shown once at registration) |
| RLLM_UI_URL | No | Training script env | https://ui.rllm-project.com | Defaults to the cloud URL when RLLM_API_KEY is set |
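For illustration, here is a minimal sketch of how a training-side client might attach the API key when pushing data to the cloud UI. The header name and error message are assumptions for this sketch, not rLLM's actual implementation:

```python
import os

def auth_headers(env=os.environ):
    """Build request headers from RLLM_API_KEY (hypothetical sketch)."""
    key = env.get("RLLM_API_KEY")
    if key is None:
        raise RuntimeError("RLLM_API_KEY is not set; run `rllm login` first")
    # Bearer auth is an assumption here; the real scheme may differ.
    return {"Authorization": f"Bearer {key}"}
```

The point is simply that the key travels with every ingestion request, which is why it must be present in the training script's environment.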
The observability AI agent can be enabled by adding your ANTHROPIC_API_KEY in the Settings page in the UI — no extra configuration needed.

Self-hosted setup

git clone https://github.com/rllm-org/rllm-ui.git
cd rllm-ui

# Install dependencies
cd api && pip install -r requirements.txt
cd ../frontend && npm install

# Run (two terminals)
cd api && uvicorn main:app --reload --port 3000
cd frontend && npm run dev
Open http://localhost:5173 (or the port shown in the Vite output).
If you run the API on a port other than 3000, update both sides so they know where to find it:
  • rLLM training side — export RLLM_UI_URL="http://localhost:<port>"
  • rllm-ui frontend — set VITE_API_URL=http://localhost:<port> in frontend/.env.development
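The port wiring above boils down to an environment-variable fallback. A hedged sketch of how the training side might resolve the UI URL (the function name is hypothetical; the real logic lives inside rLLM):

```python
import os

# Default taken from the configuration table: the self-hosted API
# listens on port 3000 unless RLLM_UI_URL overrides it.
DEFAULT_UI_URL = "http://localhost:3000"

def resolve_ui_url(env=None):
    """Return the UI base URL, preferring the RLLM_UI_URL override."""
    env = os.environ if env is None else env
    return env.get("RLLM_UI_URL", DEFAULT_UI_URL)
```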

Database

rLLM UI stores sessions, metrics, episodes, trajectories, and logs in a database so they persist across restarts and are searchable.
  • SQLite (default) — No setup required. A local file (api/rllm_ui.db) is created on first run.
  • PostgreSQL — Adds full-text search with stemming and relevance ranking. Set DATABASE_URL in api/.env:
DATABASE_URL="postgresql://user:pass@localhost:5432/rllm"
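If your database password contains characters that are special in URLs (such as @, :, or /), percent-encode it so the connection string still parses. A small helper sketch (not part of rllm-ui) using Python's standard library:

```python
from urllib.parse import quote_plus

def pg_url(user, password, host, port, db):
    """Build a PostgreSQL connection URL with a safely encoded password."""
    return f"postgresql://{user}:{quote_plus(password)}@{host}:{port}/{db}"

# '@' and ':' in the password are escaped so the URL still parses.
url = pg_url("user", "p@ss:word", "localhost", 5432, "rllm")
```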

Observability AI agent

To enable the agent, set your Anthropic API key in api/.env:
ANTHROPIC_API_KEY="sk-ant-..."

Configuration

| Variable | Required | Scope | Default | Description |
|---|---|---|---|---|
| RLLM_UI_URL | No | Training script env | http://localhost:3000 | URL of your local rllm-ui server |
| DATABASE_URL | No | api/.env | SQLite | PostgreSQL connection string. Defaults to SQLite if unset. |
| ANTHROPIC_API_KEY | No | api/.env | (none) | Enables the built-in AI agent |
| VITE_API_URL | No | frontend/.env.development | http://localhost:3000 | Only needed if the API runs on a non-default port |

Connecting rLLM to UI

Training runs with script

Whichever service you use (cloud or self-hosted), add ui to your trainer’s logger list in your rLLM training script:
trainer.logger="['console','wandb','ui']"
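Conceptually, each name in that list selects a logging backend. A toy sketch of the dispatch idea (the class and registry names here are hypothetical, not rLLM's actual API):

```python
class ConsoleLogger:
    """Toy backend: records (step, metrics) pairs in memory."""
    def __init__(self):
        self.records = []

    def log(self, metrics, step):
        self.records.append((step, dict(metrics)))

# Hypothetical registry; rLLM's real one maps "ui" to its UILogger.
BACKENDS = {"console": ConsoleLogger}

def build_loggers(names):
    # Unknown backends are skipped in this sketch.
    return [BACKENDS[n]() for n in names if n in BACKENDS]

loggers = build_loggers(["console", "wandb", "ui"])
for backend in loggers:
    backend.log({"loss": 0.5}, step=1)
```

Every metric logged by the trainer fans out to all configured backends, which is why adding "ui" is the only change needed in the script.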

Training / Evaluation runs with rLLM CLI

If using our cloud service and the rLLM CLI, you can run training and eval runs as follows:
rllm train [dataset name]
rllm eval [dataset name]
If logged in, traces will automatically stream to the UI.

How it works

rLLM connects to the UI via the UILogger backend, registered as "ui" in the Tracking class (rllm/utils/tracking.py). On init, the logger:
  1. Creates a training session via POST /api/sessions
  2. Starts a background heartbeat thread (for crash detection)
  3. Wraps stdout/stderr with TeeStream to capture training logs
During training, the logger sends data over HTTP, so the overall flow looks like this:
[Diagram: rLLM UI architecture showing the data flow from the training script through UILogger to the UI frontend]
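Two of the init steps above (the heartbeat thread and the stdout tee) can be sketched in a few lines. This is an illustrative toy, not the real UILogger: where the real logger would POST to the API (e.g. session creation via /api/sessions), this sketch writes into in-memory buffers instead:

```python
import io
import sys
import threading
import time

class TeeStream:
    """Duplicate writes to the original stream and a capture buffer."""
    def __init__(self, original, capture):
        self.original = original
        self.capture = capture

    def write(self, text):
        self.original.write(text)
        return self.capture.write(text)

    def flush(self):
        self.original.flush()
        self.capture.flush()

# Step 3: wrap stdout so training logs are captured as well as printed.
capture = io.StringIO()
sys.stdout = TeeStream(sys.stdout, capture)
print("step 1: loss=0.42")
sys.stdout = sys.stdout.original  # restore

# Step 2: background heartbeat. The real logger would POST periodically
# so the server can flag a run as crashed when beats stop arriving.
beats = []
stop = threading.Event()

def heartbeat(interval=0.01):
    while not stop.is_set():
        beats.append(time.time())
        stop.wait(interval)

t = threading.Thread(target=heartbeat, daemon=True)
t.start()
time.sleep(0.05)
stop.set()
t.join()
```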