## LangGraph Integration

LangGraph is a library for building stateful, multi-actor applications with LLMs. The rLLM SDK tracks all LLM calls made through LangGraph agents.

### Basic Setup
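The wiring can be sketched as follows. The model name, tool list, and the `rllm.sdk` import path are assumptions based on the identifiers mentioned on this page, so check them against your SDK version:

```python
# Hedged sketch of a tracked LangGraph agent; exact signatures may differ.
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

from rllm.sdk import get_chat_client, get_chat_client_async, session  # assumed path

# Pass both the sync and async tracked clients so every call path is traced.
llm = ChatOpenAI(
    model="gpt-4o-mini",  # illustrative model name
    client=get_chat_client(),
    async_client=get_chat_client_async(),
)

agent = create_react_agent(llm, tools=[])

# All LLM calls made inside the session are collected as traces.
with session():
    result = agent.invoke({"messages": [("user", "What is LangGraph?")]})
```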
### Complete LangGraph Agent Example
- Pass both `client` and `async_client` to `ChatOpenAI`
- Use `get_chat_client()` and `get_chat_client_async()` from the rLLM SDK
- Wrap execution in a `session()` context to collect traces
- All LLM calls are automatically tracked
See `examples/sdk/langgraph/search_agent_langgraph.py` for the full example.
## SmolAgent Integration

SmolAgent is HuggingFace’s lightweight agent framework. The rLLM SDK can track SmolAgent executions.

### Basic Setup
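One way to sketch this: point the agent's OpenAI-compatible model at an rLLM-tracked endpoint and run it inside a session. The class names follow the `smolagents` package, but the rLLM wiring shown here (a local proxy URL and the `rllm.sdk` import path) is an assumption, not the SDK's documented API:

```python
from smolagents import CodeAgent, OpenAIServerModel

from rllm.sdk import session  # assumed import path

model = OpenAIServerModel(
    model_id="gpt-4o-mini",            # illustrative model name
    api_base="http://localhost:4000",  # e.g. a local tracked LiteLLM proxy (assumption)
)
agent = CodeAgent(tools=[], model=model)

# Traces for the whole agent run are collected under one session.
with session():
    agent.run("Summarize the latest commit message.")
```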
### Advanced SmolAgent Usage
## Strands Integration

Strands is a framework for building production-grade AI agents. The rLLM SDK integrates with it via tracked chat clients.

### Basic Setup
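A hedged sketch of a Strands agent routed through a tracked endpoint. The Strands class names follow the `strands-agents` package; the base URL and the `rllm.sdk` import path are assumptions:

```python
from strands import Agent
from strands.models.openai import OpenAIModel

from rllm.sdk import session  # assumed import path

model = OpenAIModel(
    client_args={"base_url": "http://localhost:4000"},  # tracked endpoint (assumption)
    model_id="gpt-4o-mini",                             # illustrative model name
)
agent = Agent(model=model)

# Calls made by the agent inside the session are traced.
with session():
    agent("Draft a release note for version 1.2.")
```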
### Multi-Agent Strands Workflow
## LiteLLM Proxy Integration

The rLLM SDK includes deep integration with the LiteLLM proxy for metadata routing and trace collection.

### Proxy Architecture
The SDK uses metadata slug encoding to route session context through the proxy.

### Metadata Slug Encoding
The SDK encodes metadata into the URL path for proxy routing:
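The idea can be illustrated with a self-contained sketch; this is not the SDK's actual wire format, just one plausible encoding (JSON serialized, base64url-encoded, and spliced into the request path):

```python
import base64
import json

def encode_metadata_slug(metadata: dict) -> str:
    """Encode metadata as a URL-safe path segment (illustrative format)."""
    raw = json.dumps(metadata, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

def decode_metadata_slug(slug: str) -> dict:
    """Invert encode_metadata_slug, restoring the stripped base64 padding."""
    padded = slug + "=" * (-len(slug) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

meta = {"session_id": "abc123", "experiment": "run-7"}
slug = encode_metadata_slug(meta)
path = f"/rllm/{slug}/chat/completions"  # slug embedded in the proxied path (assumption)
assert decode_metadata_slug(slug) == meta
```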
### Automatic Metadata Routing

The SDK automatically routes metadata when `use_proxy=True`:
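A minimal sketch of enabling proxy routing. Whether `use_proxy` is a `get_chat_client()` parameter or a setting elsewhere depends on your SDK version, and the OpenAI-style client surface is also an assumption:

```python
from rllm.sdk import get_chat_client, session  # assumed import path

client = get_chat_client(use_proxy=True)  # requests go via the LiteLLM proxy (assumption)

with session():
    # Session metadata is encoded into the request path automatically,
    # so the proxy can attribute each call to this session.
    client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": "ping"}],
    )
```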
### Proxy Middleware
Add the `MetadataRoutingMiddleware` to your ASGI app to decode metadata slugs. The middleware:
- Extracts metadata slug from request path
- Decodes metadata and injects into request scope
- Cleans the path before forwarding to LiteLLM
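The three steps above can be illustrated with a bare-ASGI sketch. This is our own illustration, not the SDK's `MetadataRoutingMiddleware`; the `/rllm/<slug>` path convention and the base64url JSON slug format are assumptions:

```python
import base64
import json

class SlugDecodingMiddleware:
    """Illustrative ASGI middleware mirroring the steps described above."""

    def __init__(self, app, prefix="/rllm/"):
        self.app = app
        self.prefix = prefix

    async def __call__(self, scope, receive, send):
        path = scope.get("path", "")
        if scope["type"] == "http" and path.startswith(self.prefix):
            # 1) Extract the metadata slug from the request path.
            slug, _, rest = path[len(self.prefix):].partition("/")
            padded = slug + "=" * (-len(slug) % 4)
            # 2) Decode the metadata and inject it into the request scope.
            scope["rllm_metadata"] = json.loads(base64.urlsafe_b64decode(padded))
            # 3) Clean the path before forwarding downstream (e.g. to LiteLLM).
            scope["path"] = "/" + rest
        await self.app(scope, receive, send)
```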
### LiteLLM Callbacks

The SDK provides callbacks for LiteLLM proxy integration.

### Starting the Proxy
Start the LiteLLM proxy with the rLLM configuration file (`litellm_proxy_config.yaml`):
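A hypothetical configuration sketch; the model entries follow standard LiteLLM config shape, but the rLLM callback path is a placeholder, not a value from this repository:

```yaml
model_list:
  - model_name: gpt-4o-mini
    litellm_params:
      model: openai/gpt-4o-mini
      api_key: os.environ/OPENAI_API_KEY

litellm_settings:
  callbacks:
    - rllm.sdk.litellm_callbacks.trace_callback   # placeholder module path
```

With the config in place, the proxy can be started with `litellm --config litellm_proxy_config.yaml`.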
### Proxy Manager (Subprocess Mode)
The SDK can automatically manage the proxy lifecycle. See `examples/sdk/README.md` for details.
## OpenTelemetry Integration

For distributed tracing across multiple services, use the OpenTelemetry backend.

### Configuration
Edit `rllm/sdk/config.yaml`:
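The actual keys in this file are defined by the SDK; a hypothetical shape for selecting the OpenTelemetry backend might look like:

```yaml
tracing:
  backend: opentelemetry               # placeholder key names
  exporter:
    endpoint: http://localhost:4317    # OTLP gRPC collector (placeholder)
```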
### Basic Usage
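A sketch of `otel_session` usage, based on the identifier referenced on this page; the import path and argument list are assumptions:

```python
from rllm.sdk import otel_session  # assumed import path

with otel_session():
    # Calls made here are traced through the OpenTelemetry backend, and the
    # session context rides along in W3C baggage.
    ...
```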
### Cross-Service Context Propagation
OpenTelemetry sessions use W3C baggage for automatic context propagation:

- Baggage is the single source of truth
- Automatic HTTP header propagation
- Compatible with OpenTelemetry observability tools
- Works across process boundaries
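On the wire, W3C baggage is a `baggage` HTTP header containing comma-separated, percent-encoded key=value pairs. A self-contained sketch (the key names are illustrative, not the SDK's actual baggage keys):

```python
from urllib.parse import quote, unquote

def build_baggage_header(entries: dict) -> str:
    """Serialize entries as a W3C-style baggage header value."""
    return ",".join(f"{quote(k)}={quote(v)}" for k, v in entries.items())

def parse_baggage_header(header: str) -> dict:
    """Parse a baggage header value back into a dict."""
    pairs = (item.split("=", 1) for item in header.split(",") if item)
    return {unquote(k): unquote(v) for k, v in pairs}

header = build_baggage_header({"rllm.session_id": "abc123", "rllm.step": "2"})
assert parse_baggage_header(header) == {"rllm.session_id": "abc123", "rllm.step": "2"}
```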
## Framework Comparison
| Framework | Integration Method | Session Support | Trajectory Support | Use Case |
|---|---|---|---|---|
| LangGraph | Tracked chat client | ✅ | ✅ | Multi-agent workflows |
| SmolAgent | Tracked chat client | ✅ | ✅ | Code generation agents |
| Strands | Tracked chat client | ✅ | ✅ | Production AI apps |
| LiteLLM Proxy | Metadata routing | ✅ | ✅ | Infrastructure layer |
| OpenTelemetry | W3C baggage | ✅ | ✅ | Distributed tracing |
## Best Practices
- **Use tracked clients**: Always initialize LLM clients with `get_chat_client()` or `get_chat_client_async()`
- **Enable proxy for training**: Set `use_proxy=True` when routing through the LiteLLM proxy
- **Wrap execution in sessions**: Use the `session()` context manager to collect traces
- **Leverage trajectories for RL**: Use the `@trajectory` decorator for reinforcement learning workflows
- **Configure OpenTelemetry for distributed systems**: Use `otel_session` for multi-service architectures
- **Set metadata strategically**: Add experiment-tracking metadata at the session level
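Putting these practices together, a hedged sketch; the decorator and keyword names follow the identifiers mentioned on this page, but their exact signatures are assumptions:

```python
from rllm.sdk import get_chat_client, session, trajectory  # assumed import path

client = get_chat_client(use_proxy=True)  # tracked client routed via the proxy

@trajectory  # marks this function's calls as one RL trajectory (assumption)
def solve(task: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": task}],
    )
    return resp.choices[0].message.content

# Session-level metadata tags every trace from this run for the experiment.
with session(metadata={"experiment": "ablation-3"}):
    solve("2 + 2 = ?")
```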

