create-snipara (NPX)

One-command setup for Snipara MCP + RLM-Runtime — context optimization, semantic memory, and safe code execution for AI agents.

```bash
npx create-snipara
```

What It Does

  • Installs snipara-mcp — MCP server for context-optimized documentation queries
  • Installs rlm-runtime — Safe code execution with Docker isolation
  • Configures .mcp.json — Ready for Claude Code, Cursor, Claude Desktop
  • Sets up hooks — Session memory automation (Claude Code)
  • Updates environment files — Adds API key configuration

Interactive Setup

Run npx create-snipara in your project directory. You'll be prompted for:

| Prompt | Description |
| --- | --- |
| Project slug | Auto-detected from git remote or directory name |
| Project ID | Optional, for advanced use cases |
| API key type | Project key, Team key, Sign up, or Skip |
| API key | Your Snipara API key |
| AI client | Claude Code, Cursor, Claude Desktop, or other |
| Packages | Choose what to install (snipara-mcp, rlm-runtime) |
| LLM provider | OpenAI, Anthropic, or None (for `rlm run`/`rlm agent` CLI) |
| Run `rlm init` | Optional; configure execution environment (sandbox/docker/local) |

Execution Environments

When you select RLM-Runtime during setup, you'll be asked if you want to run rlm init to configure the execution environment:

| Environment | Description | Use Case |
| --- | --- | --- |
| `sandbox` | RestrictedPython, safe stdlib only | Default, most secure |
| `docker` | Full Python in isolated container | Recommended for full features |
| `local` | Full access, no isolation | Development only |

Security Recommendation

Use `docker` mode for production and for untrusted code, including AI-generated code. `local` mode is recommended only for development with code you trust.

You can also configure the environment later by running rlm init manually.
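To picture what "safe stdlib only" means in practice, here is a conceptual sketch of an import-whitelist sandbox. It is an illustration written for this guide, not RLM-Runtime's actual mechanism, and the `SAFE_MODULES` list is hypothetical:

```python
# Conceptual sketch of an import-whitelist sandbox; NOT RLM-Runtime's
# actual implementation. SAFE_MODULES is a hypothetical whitelist.
import builtins

SAFE_MODULES = {"math", "json", "re", "datetime"}
_real_import = builtins.__import__

def safe_import(name, *args, **kwargs):
    # Block any import whose top-level package is not whitelisted.
    if name.split(".")[0] not in SAFE_MODULES:
        raise ImportError(f"module {name!r} is blocked in sandbox mode")
    return _real_import(name, *args, **kwargs)

def run_sandboxed(code: str) -> dict:
    """Execute code under the import hook; return the resulting namespace."""
    env = {"__builtins__": {**vars(builtins), "__import__": safe_import}}
    exec(code, env)
    return env

ns = run_sandboxed("import math\nresult = math.sqrt(16)")
print(ns["result"])  # 4.0

try:
    run_sandboxed("import os")
except ImportError as exc:
    print(exc)  # module 'os' is blocked in sandbox mode
```

A real sandbox restricts far more than imports (builtins, filesystem, network), which is why `docker` remains the recommendation for untrusted code.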

API Key Requirements

| Tool | Snipara API Key | LLM API Key (OpenAI/Anthropic) |
| --- | --- | --- |
| `execute_python` MCP | Not needed | Not needed (your AI client is the LLM) |
| `rlm_context_query` MCP | Required | Not needed |
| `rlm_remember`/`rlm_recall` | Required | Not needed |
| `rlm run` / `rlm agent` CLI | Optional (for context) | Required |

Key Types

| Type | Description |
| --- | --- |
| Project API key | Access to a single project |
| Team API key | Access to all projects in your team |

Command Line Options

```bash
# Basic usage
npx create-snipara

# With project API key
npx create-snipara --api-key rlm_your_project_key

# With team API key (access all team projects)
npx create-snipara --team-key rlm_your_team_key

# Specify project slug
npx create-snipara --slug my-project

# Runtime only - no Snipara API key needed
npx create-snipara --runtime-only

# Skip specific installations
npx create-snipara --skip-mcp      # Skip snipara-mcp
npx create-snipara --skip-runtime  # Skip rlm-runtime
npx create-snipara --skip-hooks    # Skip Claude Code hooks
npx create-snipara --skip-test     # Skip connection test

# Accept all defaults (non-interactive)
npx create-snipara -y --api-key rlm_xxx --slug my-project
```

What Gets Created

.mcp.json

```json
{
  "mcpServers": {
    "snipara": {
      "type": "http",
      "url": "https://api.snipara.com/mcp/your-project",
      "headers": {
        "X-API-Key": "rlm_your_key"
      }
    },
    "rlm-runtime": {
      "type": "http",
      "url": "http://localhost:8765/mcp",
      "headers": {}
    }
  }
}
```
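If the file looks off after setup, a few lines of Python can sanity-check it. This is a generic JSON check written for this guide, not part of create-snipara:

```python
# Sanity-check an .mcp.json: every server entry needs an HTTP type and URL.
# Generic illustration for this guide; create-snipara does its own validation.
import json

sample = """
{
  "mcpServers": {
    "snipara": {
      "type": "http",
      "url": "https://api.snipara.com/mcp/your-project",
      "headers": {"X-API-Key": "rlm_your_key"}
    },
    "rlm-runtime": {
      "type": "http",
      "url": "http://localhost:8765/mcp",
      "headers": {}
    }
  }
}
"""

config = json.loads(sample)
for name, server in config["mcpServers"].items():
    assert server["type"] == "http", f"{name}: unexpected transport"
    assert server["url"].startswith("http"), f"{name}: bad URL"
    print(f"{name}: OK ({server['url']})")
```

In a real project you would `json.load()` the `.mcp.json` file itself rather than an inline sample.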

Claude Code Hooks (if selected)

  • snipara-startup.sh — Restores session context
  • snipara-session.sh — Auto-remembers commits
  • snipara-compact.sh — Saves context before compaction

Environment Files

Updates .env.example and .env.local with:

```bash
# Snipara Configuration
SNIPARA_API_KEY=your_api_key
SNIPARA_PROJECT_SLUG=your-project

# RLM-Runtime LLM Provider (if configured)
OPENAI_API_KEY=sk-...
# or
ANTHROPIC_API_KEY=sk-ant-...
```
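These files are plain `KEY=value` lines. For reference, a minimal parser sketch (illustration only; real projects typically use python-dotenv or their framework's loader):

```python
# Minimal KEY=value parser sketch for .env-style files (illustration only;
# in practice, use python-dotenv or your framework's built-in env loading).
def parse_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """
# Snipara Configuration
SNIPARA_API_KEY=your_api_key
SNIPARA_PROJECT_SLUG=your-project
"""
print(parse_env(sample))
# {'SNIPARA_API_KEY': 'your_api_key', 'SNIPARA_PROJECT_SLUG': 'your-project'}
```

Remember to keep `.env.local` out of version control; only `.env.example` (with placeholder values) should be committed.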

After Installation

For Claude Code / Cursor

  1. Restart your AI client
  2. MCP tools are automatically available

For Claude Desktop

  1. Restart Claude Desktop
  2. Config is at `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS path)

RLM-Runtime Usage

MCP Tools (no LLM API key needed):

Your AI client (Claude, GPT, etc.) provides the LLM — no additional API key required.

| Tool | Description |
| --- | --- |
| `execute_python` | Run Python in sandbox |
| `get_repl_context` | Get session variables |
| `set_repl_context` | Set session variables |
| `clear_repl_context` | Clear session |
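Under the hood, your AI client invokes these tools with MCP `tools/call` JSON-RPC requests. The sketch below builds such a payload for `execute_python`; the `code` argument name is an assumption about rlm-runtime's tool schema, so list the server's tools to confirm the real parameter names:

```python
# Shape of an MCP "tools/call" JSON-RPC request for execute_python.
# The "code" argument name is an assumption about rlm-runtime's tool
# schema; check the server's tool listing for the actual parameters.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "execute_python",
        "arguments": {"code": "print(2 + 2)"},  # assumed parameter name
    },
}
print(json.dumps(request, indent=2))
```

Your AI client sends requests like this to `http://localhost:8765/mcp` on your behalf; you normally never construct them by hand.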

CLI Commands (requires LLM API key):

For rlm run and rlm agent, you need an LLM provider API key:

```bash
# Set your LLM provider
export OPENAI_API_KEY=sk-...
# or
export ANTHROPIC_API_KEY=sk-ant-...

# Run commands
rlm init              # Initialize configuration
rlm run --env docker  # Run with Docker isolation
rlm agent "task"      # Autonomous agent mode
rlm visualize         # Launch trajectory dashboard
```

Available MCP Tools

After setup, you have access to 43+ MCP tools:

| Category | Tools |
| --- | --- |
| Context | `rlm_context_query`, `rlm_ask`, `rlm_search`, `rlm_sections` |
| Planning | `rlm_plan`, `rlm_decompose`, `rlm_multi_query` |
| Memory | `rlm_remember`, `rlm_recall`, `rlm_memories`, `rlm_forget` |
| Execution | `execute_python`, `get_repl_context` (via RLM-Runtime) |
| Swarms | `rlm_swarm_create`, `rlm_claim`, `rlm_task_create` |

Requirements

  • Node.js 18+
  • Python 3.10+ (for snipara-mcp and rlm-runtime)
  • Docker (optional, for RLM-Runtime isolation)

Links

Next Steps