create-snipara (NPX)

One-command setup and maintenance for Snipara MCP + snipara-companion, with optional RLM-Runtime for local execution.

npx create-snipara

What It Does

  • Installs snipara-companion — Local companion for query, plan, upload, chunk, session bootstrap, event inspection, and task workflows
  • Installs snipara-mcp — MCP server for context-optimized documentation queries
  • Installs rlm-runtime — Safe code execution with Docker isolation
  • Configures .mcp.json — Ready for Claude Code, Cursor, Claude Desktop
  • Sets up hooks — Thin edge runtime bootstrap for session memory automation
  • Updates environment files — Adds API key configuration
  • Adds maintenance commands — doctor, repair, upgrade, and print-config

Install Profiles

| Profile | What it installs | Best for |
| --- | --- | --- |
| hosted-only | Hosted MCP config only | SaaS-first setups without local workflows |
| hosted-companion | Hosted MCP + snipara-companion | Recommended default |
| full-stack | Hosted MCP + snipara-companion + rlm-runtime | Hosted context plus local execution |
| runtime-only | rlm-runtime only | Pure local execution without hosted API |

Hosted Core + Thin Edge Runtime

create-snipara installs a hosted-first setup. Snipara stays the source of truth for memory, review, policy, and orchestration. The local install only adds a thin edge layer for hook capture, context restore, and optional safe execution.

  • Hosted core: 102-tool MCP surface, reviewed memory, orchestration, automation policies
  • Thin edge runtime: local hooks, compatibility CLI flows, and optional rlm-runtime
  • Companion workflows: rlm-hook query, plan, multi-query, orchestrate, load-document, upload, chunk, events recent, session-bootstrap, and task-commit
  • Generated companion pack: .snipara/companion with client-aware command presets, local usage guidance, and a doctor report
  • Design rule: local adapters capture and forward signals; they do not own durable memory policy
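The design rule can be pictured as a thin adapter that serializes a lifecycle event and hands it off to the hosted core. Everything below (event fields, the stubbed transport) is a hypothetical sketch for illustration, not Snipara's actual automation API:

```python
import json
import time

# Hypothetical edge adapter: capture a local lifecycle event and forward
# it upstream. The transport is stubbed so the sketch is self-contained;
# a real adapter would POST the body to the hosted automation endpoint.
def forward(event: dict, transport=lambda body: body) -> str:
    body = json.dumps({
        "kind": event["kind"],                     # e.g. "session-start", "commit"
        "payload": event.get("payload", {}),
        "ts": event.get("ts", int(time.time())),
    })
    # The adapter only captures and forwards; it does not store the event
    # or decide durable memory policy — that stays in the hosted core.
    return transport(body)

sent = forward({"kind": "commit", "payload": {"summary": "fix retries"}, "ts": 0})
print(json.loads(sent)["kind"])  # commit
```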

Interactive Setup

Run npx create-snipara in your project directory. You'll be prompted for:

| Prompt | Description |
| --- | --- |
| Project slug | Auto-detected from git remote or directory name |
| Project ID | Optional, for advanced use cases |
| API key type | Project key, Team key, Sign up, or Skip |
| API key | Your Snipara API key |
| AI client | Claude Code, Cursor, Claude Desktop, or other |
| Install profile | Hosted only, hosted + companion, full stack, or runtime only |
| Hooks | Whether local hooks should be generated when supported |
| LLM provider | OpenAI, Anthropic, or None (for rlm run/rlm agent CLI) |
| Run rlm init | Optional — configure execution environment (sandbox/docker/local) |

Maintenance Commands

doctor

Validates local wiring and writes .snipara/companion/doctor.json.
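The report is the first thing to inspect when local wiring misbehaves. The shape below is purely illustrative; the fields doctor actually writes may differ:

```json
{
  "profile": "hosted-companion",
  "checks": {
    "mcpConfig": "ok",
    "apiKey": "ok",
    "companionCli": "ok",
    "hooks": "skipped"
  }
}
```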

repair

Rebuilds local configuration, companion pack, and hooks.

upgrade

Upgrades installed local pieces and refreshes generated assets.

print-config

Shows the inferred local setup and install profile.

```bash
npx create-snipara doctor
npx create-snipara repair
```

Execution Environments

When you select RLM-Runtime during setup, you'll be asked if you want to run rlm init to configure the execution environment:

| Environment | Description | Use Case |
| --- | --- | --- |
| sandbox | RestrictedPython, safe stdlib only | Default, most secure |
| docker | Full Python in isolated container | Recommended for full features |
| local | Full access, no isolation | Development only |

Security Recommendation

Use docker mode for production and for untrusted or AI-generated code. local mode is only recommended for development with code you trust.

You can also configure the environment later by running rlm init manually.
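The sandbox environment's idea — run code with a restricted set of names so risky operations are simply unavailable — can be sketched in a few lines. This is a conceptual illustration only, not rlm-runtime's actual mechanism (which builds on RestrictedPython):

```python
# Conceptual sketch: execute code with a restricted builtin set so that
# dangerous names like open() are absent and fail with NameError.
SAFE_BUILTINS = {"len": len, "range": range, "sum": sum, "print": print}

def run_sandboxed(source: str) -> dict:
    scope = {"__builtins__": SAFE_BUILTINS}
    exec(source, scope)
    # Return the variables the snippet defined, minus the builtins entry.
    return {k: v for k, v in scope.items() if k != "__builtins__"}

print(run_sandboxed("total = sum(range(10))")["total"])  # 45
try:
    run_sandboxed("data = open('/etc/passwd').read()")
except NameError:
    print("open() blocked")  # file access is not reachable in this scope
```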

API Key Requirements

| Tool | Snipara API Key | LLM API Key (OpenAI/Anthropic) |
| --- | --- | --- |
| execute_python MCP | Not needed | Not needed (your AI client is the LLM) |
| rlm_context_query MCP | Required | Not needed |
| rlm_remember/rlm_recall | Required | Not needed |
| rlm run / rlm agent CLI | Optional (for context) | Required |

Key Types

| Type | Description |
| --- | --- |
| Project API key | Access to a single project |
| Team API key | Access to all projects in your team |

Command Line Options

```bash
# Basic usage
npx create-snipara

# With project API key
npx create-snipara --api-key rlm_your_project_key

# With team API key (access all team projects)
npx create-snipara --team-key rlm_your_team_key

# Specify project slug
npx create-snipara --slug my-project

# Runtime only - no Snipara API key needed
npx create-snipara --runtime-only

# Skip local companion CLI
npx create-snipara --skip-companion

# Skip specific installations
npx create-snipara --skip-mcp      # Skip snipara-mcp
npx create-snipara --skip-runtime  # Skip rlm-runtime
npx create-snipara --skip-hooks    # Skip Claude Code hooks
npx create-snipara --skip-test     # Skip connection test

# Accept all defaults (non-interactive)
npx create-snipara -y --api-key rlm_xxx --slug my-project
```

What Gets Created

.mcp.json

```json
{
  "mcpServers": {
    "snipara": {
      "type": "http",
      "url": "https://api.snipara.com/mcp/your-project",
      "headers": {
        "X-API-Key": "rlm_your_key"
      }
    },
    "rlm-runtime": {
      "type": "http",
      "url": "http://localhost:8765/mcp",
      "headers": {}
    }
  }
}
```
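A quick way to catch a malformed config before restarting your client is a shape check like the one below. This validator is just an illustration, not part of create-snipara:

```python
import json

# Illustrative sanity check for the shape of a generated .mcp.json.
# The config is inlined here so the sketch is self-contained; a real
# check would read the file from disk instead.
config = json.loads("""
{
  "mcpServers": {
    "snipara": {
      "type": "http",
      "url": "https://api.snipara.com/mcp/your-project",
      "headers": {"X-API-Key": "rlm_your_key"}
    }
  }
}
""")

for name, server in config["mcpServers"].items():
    assert server.get("type") == "http", f"{name}: unexpected type"
    assert server.get("url", "").startswith(("http://", "https://")), name
print("config looks sane")
```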

Claude Code Hooks (if selected)

  • snipara-startup.sh — Restores session context
  • snipara-session.sh — Auto-remembers commits
  • snipara-compact.sh — Saves context before compaction
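Conceptually, a session hook captures a local signal (here, the latest commit subject) and forwards it upstream. The sketch below only echoes the captured value; the real snipara-session.sh would forward it to Snipara via the companion CLI, and its actual contents may differ:

```shell
#!/bin/sh
# Conceptual sketch of what a session hook captures: the latest commit
# subject. A real hook forwards this instead of printing it.
summary=$(git log -1 --pretty=%s 2>/dev/null || echo "no commits yet")
echo "remember: ${summary:-no commits yet}"
```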

Local Companion Pack

If you install the local companion, create-snipara also generates a small project-local starter pack under .snipara/companion.

  • README.md - client-aware usage guidance and starter commands
  • commands.json - machine-readable command presets for local workflows

Companion Workflows

The companion CLI is a thin local facade over hosted Snipara workflows. It is useful when you want repeatable local commands in addition to MCP access.

```bash
rlm-hook query --query "recent auth decisions"
rlm-hook plan --query "implement webhook retry hardening"
rlm-hook multi-query --queries "recent incidents" "open decisions"
rlm-hook orchestrate --query "map auth architecture"
rlm-hook load-document --path docs/architecture/auth.md
rlm-hook events recent --limit 20
rlm-hook task-commit --summary "Shipped retry hardening"
```

These commands print human-readable output by default. Add --json when you need the raw hosted response.
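When scripting around --json output, a typical pattern is to parse the response in your own tooling. The response shape below is purely an assumption for illustration; the documented contract may differ:

```python
import json

# Hypothetical shape of a `rlm-hook query --json` response, inlined so
# the sketch is self-contained. Field names are assumptions.
raw = '{"results": [{"title": "Auth decision: rotate JWT secrets", "score": 0.91}]}'
payload = json.loads(raw)

# Print a compact score/title listing of the returned results.
for item in payload["results"]:
    print(f"{item['score']:.2f}  {item['title']}")
```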

Environment Files

Updates .env.example and .env.local with:

```bash
# Snipara Configuration
SNIPARA_API_KEY=your_api_key
SNIPARA_PROJECT_SLUG=your-project

# RLM-Runtime LLM Provider (if configured)
OPENAI_API_KEY=sk-...
# or
ANTHROPIC_API_KEY=sk-ant-...
```

After Installation

For Claude Code / Cursor

  1. Restart your AI client
  2. MCP tools are automatically available

For Claude Desktop

  1. Restart Claude Desktop
  2. Config is at ~/Library/Application Support/Claude/claude_desktop_config.json

RLM-Runtime Usage

MCP Tools (no LLM API key needed):

Your AI client (Claude, GPT, etc.) provides the LLM — no additional API key required.

| Tool | Description |
| --- | --- |
| execute_python | Run Python in sandbox |
| get_repl_context | Get session variables |
| set_repl_context | Set session variables |
| clear_repl_context | Clear session |
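These session tools follow a simple model: a persistent namespace that survives across executions, so later snippets can see variables defined by earlier ones. A conceptual sketch (not rlm-runtime's actual implementation):

```python
# Conceptual model of the REPL context behind execute_python,
# get_repl_context, set_repl_context, and clear_repl_context:
# one namespace shared across executions.
class ReplSession:
    def __init__(self):
        self.context: dict = {}

    def execute(self, source: str) -> None:
        # Assignments land in self.context and stay visible later.
        exec(source, {}, self.context)

    def get_context(self) -> dict:
        return dict(self.context)

    def clear(self) -> None:
        self.context.clear()

session = ReplSession()
session.execute("x = 2 + 3")
session.execute("y = x * 10")        # x from the earlier call is visible
print(session.get_context()["y"])    # 50
```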

CLI Commands (requires LLM API key):

For rlm run and rlm agent, you need an LLM provider API key:

```bash
# Set your LLM provider
export OPENAI_API_KEY=sk-...
# or
export ANTHROPIC_API_KEY=sk-ant-...

# Run commands
rlm init              # Initialize configuration
rlm run --env docker  # Run with Docker isolation
rlm agent "task"      # Autonomous agent mode
rlm visualize         # Launch trajectory dashboard
```

Available MCP Tools

If you enable hook-compatible local tooling, the install can also forward canonical lifecycle events into Snipara's automation API. That lets local adapters feel closer to a local-memory workflow while keeping review and persistence centralized.

After setup, you have access to the current 102-tool MCP contract across context, memory, automation, analytics, and orchestration:

| Category | Tools |
| --- | --- |
| Context | rlm_context_query, rlm_ask, rlm_search, rlm_sections |
| Planning | rlm_plan, rlm_decompose, rlm_multi_query |
| Memory | rlm_remember, rlm_recall, rlm_memories, rlm_forget, rlm_memory_attach_source, rlm_memory_verify, rlm_memory_invalidate, rlm_memory_supersede |
| Execution | execute_python, get_repl_context (via RLM-Runtime) |
| Swarms | rlm_swarm_create, rlm_claim, rlm_task_create |
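Over the hosted HTTP endpoint, these tools are invoked with standard MCP JSON-RPC calls. The argument names below are illustrative, not the documented schema for rlm_context_query:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "rlm_context_query",
    "arguments": { "query": "recent auth decisions" }
  }
}
```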

Requirements

  • Node.js 18+
  • Python 3.10+ (for snipara-mcp and rlm-runtime)
  • Docker (optional, for RLM-Runtime isolation)
