# RLM-Runtime Integration
RLM-Runtime is a sandboxed Python execution environment that integrates with Snipara for context-aware code generation, autonomous agents, and multi-step task execution.
## When to Use RLM-Runtime
Use RLM-Runtime for complex multi-step tasks that require code execution, iteration, or multi-file changes. For simple documentation Q&A, use Snipara MCP tools directly (faster and cheaper).
## Installation

### Fastest Way: NPX Setup
Install RLM-Runtime + Snipara MCP with one command:
```bash
npx create-snipara
```

This configures `.mcp.json`, prompts for an execution environment (sandbox/docker/local), and sets up LLM provider API keys.
Or install manually:
```bash
pip install rlm-runtime[all]
```

Or install specific features:
| Package | Features |
|---|---|
| `rlm-runtime` | Core runtime with local REPL |
| `rlm-runtime[docker]` | Docker isolation (recommended for production) |
| `rlm-runtime[mcp]` | MCP server for Claude Desktop/Code |
| `rlm-runtime[snipara]` | Snipara context optimization |
| `rlm-runtime[visualizer]` | Trajectory visualization dashboard |
| `rlm-runtime[all]` | All features |
## Quick Start
```bash
# Create rlm.toml configuration
rlm init

# Run a completion
rlm run "Summarize the authentication flow"

# Run with Docker isolation
rlm run --env docker "Parse and analyze logs"

# Run an autonomous agent
rlm agent "Analyze all CSV files and generate a report"
```

## Claude Code / Claude Desktop Setup
Add RLM-Runtime as an MCP server to get sandboxed Python execution in Claude. No API keys required for the runtime itself.
### Step 1: Install with MCP support
```bash
pip install rlm-runtime[mcp]
```

### Step 2: Add to your MCP configuration
Add to `~/.mcp.json` (Claude Code) or `~/.claude/claude_desktop_config.json` (Claude Desktop):
```json
{
  "mcpServers": {
    "rlm": {
      "command": "rlm",
      "args": ["mcp-serve"]
    }
  }
}
```

### Step 3: Add Snipara for context (optional but recommended)
Combine RLM-Runtime with Snipara for context-aware code execution:
```json
{
  "mcpServers": {
    "rlm": {
      "command": "rlm",
      "args": ["mcp-serve"]
    },
    "snipara": {
      "type": "http",
      "url": "https://api.snipara.com/mcp/YOUR_PROJECT",
      "headers": {
        "X-API-Key": "rlm_YOUR_API_KEY"
      }
    }
  }
}
```

## Available MCP Tools
| Tool | Description |
|---|---|
| `execute_python` | Run Python code in a sandboxed environment |
| `get_repl_context` | Get current REPL context variables |
| `set_repl_context` | Set a variable in the REPL context |
| `clear_repl_context` | Clear all REPL context |
| `list_sessions` | List all active sessions with metadata |
| `destroy_session` | Destroy a session and free its resources |
| `rlm_agent_run` | Start an autonomous agent that iteratively solves a task |
| `rlm_agent_status` | Check the status of an autonomous agent run |
| `rlm_agent_cancel` | Cancel a running autonomous agent |
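The three `*_repl_context` tools share per-session state between `execute_python` calls. A minimal Python sketch of that context model (a toy in-memory mock for illustration, not the actual server implementation; the class and method names are invented here):

```python
class ReplContext:
    """Toy stand-in for the state that set_repl_context,
    get_repl_context, and clear_repl_context operate on."""

    def __init__(self):
        self._vars = {}

    def set(self, name, value):
        # Mirrors set_repl_context: store one named variable
        self._vars[name] = value

    def get(self):
        # Mirrors get_repl_context: return a snapshot of all variables
        # (a copy, so callers cannot mutate internal state)
        return dict(self._vars)

    def clear(self):
        # Mirrors clear_repl_context: drop everything
        self._vars.clear()


ctx = ReplContext()
ctx.set("csv_row_count", 1024)
print(ctx.get())   # {'csv_row_count': 1024}
ctx.clear()
print(ctx.get())   # {}
```

Because context persists across `execute_python` calls within a session, a variable set in one step can be read in the next; `destroy_session` discards the whole store.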
## Execution Environments
| Mode | Security | Startup Time | Best For |
|---|---|---|---|
| `local` | Medium (RestrictedPython) | ~0ms | Development, trusted code |
| `docker` | High (container isolation) | ~100-500ms | Production, untrusted code |
| `wasm` | Medium-High (WebAssembly) | ~1-2s | Browser, portable |
### Security Recommendation

Use `docker` mode for production and any untrusted code, including AI-generated code. `local` mode is recommended only for development and code you trust.
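The table above reduces to a simple decision rule. A hypothetical helper sketching that choice (not part of the RLM-Runtime API):

```python
def choose_environment(trusted: bool, needs_portability: bool = False) -> str:
    """Pick an execution mode per the environments table.

    Illustrative helper only; RLM-Runtime does not ship this function.
    """
    if needs_portability:
        return "wasm"    # WebAssembly: browser / portable targets
    if trusted:
        return "local"   # ~0ms startup, but only RestrictedPython guards
    return "docker"      # container isolation for untrusted code


print(choose_environment(trusted=False))  # docker
```

The returned string matches the `environment` values accepted by `rlm run --env` and the `RLM(...)` constructor shown below.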
## When to Use RLM-Runtime vs Direct MCP

### Use Direct Snipara MCP Tools For:
- Documentation Q&A - "What's the tech stack?"
- Code lookup - "Where is auth handled?"
- Simple retrieval - "List all API endpoints"
### Use RLM-Runtime For:
- Multi-step code tasks - "Implement OAuth integration"
- Complex reasoning - "Refactor auth to use JWT"
- Iterative refinement - "Optimize this function"
- Multi-file changes - "Add validation to all endpoints"
## Python API
```python
import asyncio

from rlm import RLM


async def main():
    rlm = RLM(
        model="gpt-4o-mini",
        environment="docker",
        max_depth=4,
    )
    result = await rlm.completion("Analyze and fix the auth bug")
    print(result.response)


asyncio.run(main())
```

## Configuration
Create `rlm.toml` in your project:
```toml
[rlm]
backend = "litellm"
model = "gpt-4o-mini"
environment = "docker"
max_depth = 4

# Snipara integration
snipara_api_key = "rlm_..."
snipara_project_slug = "your-project"

# Docker settings
docker_image = "python:3.11-slim"
docker_memory = "512m"
```

Or use environment variables:
```bash
export RLM_MODEL=gpt-4o-mini
export RLM_ENVIRONMENT=docker
export SNIPARA_API_KEY=rlm_...
export SNIPARA_PROJECT_SLUG=my-project
```
## CLI Commands

| Command | Description |
|---|---|
| `rlm init` | Create `rlm.toml` configuration |
| `rlm run "prompt"` | Run a completion |
| `rlm run --env docker` | Run with Docker isolation |
| `rlm agent "task"` | Run an autonomous agent |
| `rlm logs` | View execution trajectories |
| `rlm visualize` | Launch the visualization dashboard |
| `rlm mcp-serve` | Start the MCP server |
| `rlm doctor` | Check setup and dependencies |
## Safety Limits
| Limit | Default | Max |
|---|---|---|
| Recursion depth | 4 | 5 |
| Agent iterations | 10 | 50 |
| Cost limit | $2.00 | $10.00 |
| Timeout | 30s | 600s |
| Memory (Docker) | 512MB | Configurable |
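The defaults and hard caps above amount to a clamp on each requested value. A hypothetical helper expressing that (the limit names and function are illustrative only, not the RLM-Runtime API):

```python
# (default, hard maximum) pairs from the Safety Limits table
SAFETY_LIMITS = {
    "recursion_depth": (4, 5),
    "agent_iterations": (10, 50),
    "cost_limit_usd": (2.00, 10.00),
    "timeout_s": (30, 600),
}


def effective_limit(name, requested=None):
    """Return the default when nothing is requested, otherwise the
    requested value clamped to the hard maximum (illustrative only)."""
    default, maximum = SAFETY_LIMITS[name]
    if requested is None:
        return default
    return min(requested, maximum)


print(effective_limit("agent_iterations"))       # 10
print(effective_limit("agent_iterations", 200))  # 50
```

So asking an agent for 200 iterations would still stop at the 50-iteration cap, while requests under the cap (say a 120s timeout) pass through unchanged.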