# RLM-Runtime Integration

RLM-Runtime is a sandboxed Python execution environment that integrates with Snipara for context-aware code generation, autonomous agents, and multi-step task execution.

## When to Use RLM-Runtime

Use RLM-Runtime for complex multi-step tasks that require code execution, iteration, or multi-file changes. For simple documentation Q&A, use Snipara MCP tools directly (faster and cheaper).

## Installation

### Fastest Way: NPX Setup

Install RLM-Runtime + Snipara MCP with one command:

```shell
npx create-snipara
```

This configures `.mcp.json`, prompts for an execution environment (sandbox/docker/local), and sets up LLM provider API keys.

Or install manually:

```shell
pip install rlm-runtime[all]
```

Or install specific features:

| Package | Features |
|---------|----------|
| `rlm-runtime` | Core runtime with local REPL |
| `rlm-runtime[docker]` | Docker isolation (recommended for production) |
| `rlm-runtime[mcp]` | MCP server for Claude Desktop/Code |
| `rlm-runtime[snipara]` | Snipara context optimization |
| `rlm-runtime[visualizer]` | Trajectory visualization dashboard |
| `rlm-runtime[all]` | All features |

## Quick Start

```shell
rlm init
rlm run "Summarize the authentication flow"
rlm run --env docker "Parse and analyze logs"
rlm agent "Analyze all CSV files and generate a report"
```

## Claude Code / Claude Desktop Setup

Add RLM-Runtime as an MCP server to get sandboxed Python execution in Claude. No API keys required for the runtime itself.

### Step 1: Install with MCP support

```shell
pip install rlm-runtime[mcp]
```

### Step 2: Add to your MCP configuration

Add to `~/.mcp.json` (Claude Code) or `~/.claude/claude_desktop_config.json` (Claude Desktop):

```json
{
  "mcpServers": {
    "rlm": {
      "command": "rlm",
      "args": ["mcp-serve"]
    }
  }
}
```
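If you already have other MCP servers configured, the `rlm` entry can be merged in programmatically instead of edited by hand. A stdlib-only sketch (the helper name and merge behavior are illustrative, not part of RLM-Runtime):

```python
import json
from pathlib import Path


def add_rlm_server(config_path: str) -> dict:
    """Merge the rlm server entry into an MCP config file.

    Creates the file if it does not exist and leaves any other
    configured servers intact. Illustrative helper, not an
    RLM-Runtime API.
    """
    path = Path(config_path)
    config = json.loads(path.read_text()) if path.exists() else {}
    servers = config.setdefault("mcpServers", {})
    servers["rlm"] = {"command": "rlm", "args": ["mcp-serve"]}
    path.write_text(json.dumps(config, indent=2))
    return config


# Example: add_rlm_server(str(Path.home() / ".mcp.json"))
```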

### Step 3: Add Snipara for context (optional but recommended)

Combine RLM-Runtime with Snipara for context-aware code execution:

```json
{
  "mcpServers": {
    "rlm": {
      "command": "rlm",
      "args": ["mcp-serve"]
    },
    "snipara": {
      "type": "http",
      "url": "https://api.snipara.com/mcp/YOUR_PROJECT",
      "headers": {
        "X-API-Key": "rlm_YOUR_API_KEY"
      }
    }
  }
}
```

## Available MCP Tools

| Tool | Description |
|------|-------------|
| `execute_python` | Run Python code in a sandboxed environment |
| `get_repl_context` | Get current REPL context variables |
| `set_repl_context` | Set a variable in REPL context |
| `clear_repl_context` | Clear all REPL context |
| `list_sessions` | List all active sessions with metadata |
| `destroy_session` | Destroy a session and free resources |
| `rlm_agent_run` | Start an autonomous agent that iteratively solves a task |
| `rlm_agent_status` | Check the status of an autonomous agent run |
| `rlm_agent_cancel` | Cancel a running autonomous agent |
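Under the hood, MCP clients invoke these tools with standard JSON-RPC `tools/call` requests. A sketch of what a call to `execute_python` might look like on the wire (the `code` argument name is an assumption; check the server's published tool schema for the actual parameters):

```python
import json


def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })


# Hypothetical call to the rlm server's execute_python tool:
request = make_tool_call(1, "execute_python", {"code": "print(2 + 2)"})
```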

## Execution Environments

| Mode | Security | Startup Time | Best For |
|------|----------|--------------|----------|
| `local` | Medium (RestrictedPython) | ~0ms | Development, trusted code |
| `docker` | High (container isolation) | ~100-500ms | Production, untrusted code |
| `wasm` | Medium-High (WebAssembly) | ~1-2s | Browser, portable |

### Security Recommendation

Use docker mode in production and for any untrusted or AI-generated code. Local mode is recommended only for development with code you trust.
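The trade-offs above can be captured in a small helper. This is illustrative policy code, not an RLM-Runtime API; it just maps the table's rows to the `environment` values the runtime accepts:

```python
def pick_environment(trusted: bool, production: bool,
                     needs_browser: bool = False) -> str:
    """Choose an execution mode from trust and deployment context.

    Illustrative policy, not an RLM-Runtime API: docker for anything
    untrusted or in production, wasm for browser/portable targets,
    local only for trusted development code.
    """
    if needs_browser:
        return "wasm"
    if production or not trusted:
        return "docker"
    return "local"
```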

## When to Use RLM-Runtime vs Direct MCP

### Use Direct Snipara MCP Tools For:

- Documentation Q&A - "What's the tech stack?"
- Code lookup - "Where is auth handled?"
- Simple retrieval - "List all API endpoints"

### Use RLM-Runtime For:

- Multi-step code tasks - "Implement OAuth integration"
- Complex reasoning - "Refactor auth to use JWT"
- Iterative refinement - "Optimize this function"
- Multi-file changes - "Add validation to all endpoints"

## Python API

```python
import asyncio

from rlm import RLM


async def main():
    rlm = RLM(
        model="gpt-4o-mini",   # any LiteLLM-compatible model name
        environment="docker",  # local | docker | wasm
        max_depth=4,           # recursion limit (see Safety Limits)
    )
    result = await rlm.completion("Analyze and fix the auth bug")
    print(result.response)


asyncio.run(main())
```

## Configuration

Create `rlm.toml` in your project:

```toml
[rlm]
backend = "litellm"
model = "gpt-4o-mini"
environment = "docker"
max_depth = 4

# Snipara integration
snipara_api_key = "rlm_..."
snipara_project_slug = "your-project"

# Docker settings
docker_image = "python:3.11-slim"
docker_memory = "512m"
```

Or use environment variables:

```shell
export RLM_MODEL=gpt-4o-mini
export RLM_ENVIRONMENT=docker
export SNIPARA_API_KEY=rlm_...
export SNIPARA_PROJECT_SLUG=my-project
```

## CLI Commands

| Command | Description |
|---------|-------------|
| `rlm init` | Create rlm.toml configuration |
| `rlm run "prompt"` | Run a completion |
| `rlm run --env docker` | Run with Docker isolation |
| `rlm agent "task"` | Run an autonomous agent |
| `rlm logs` | View execution trajectories |
| `rlm visualize` | Launch visualization dashboard |
| `rlm mcp-serve` | Start MCP server |
| `rlm doctor` | Check setup and dependencies |

## Safety Limits

| Limit | Default | Max |
|-------|---------|-----|
| Recursion depth | 4 | 5 |
| Agent iterations | 10 | 50 |
| Cost limit | $2.00 | $10.00 |
| Timeout | 30s | 600s |
| Memory (Docker) | 512MB | Configurable |
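These limits act as independent stop conditions: whichever trips first ends the run. A minimal sketch of how an agent loop might enforce the iteration and cost caps (illustrative logic, not RLM-Runtime internals):

```python
def run_with_limits(step_costs, max_iterations=10, cost_limit=2.00):
    """Stop when either the iteration cap or the cost budget trips.

    Illustrative stop-condition logic, not RLM-Runtime internals.
    step_costs is an iterable of per-step dollar costs; returns a
    (status, total_spent) pair.
    """
    total_cost = 0.0
    for iteration, cost in enumerate(step_costs, start=1):
        if iteration > max_iterations:
            return "stopped: iteration limit", total_cost
        if total_cost + cost > cost_limit:
            return "stopped: cost limit", total_cost
        total_cost += cost
    return "completed", total_cost
```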
