# create-snipara (NPX)
One-command setup for Snipara MCP + RLM-Runtime — context optimization, semantic memory, and safe code execution for AI agents.
```bash
npx create-snipara
```

## What It Does
- Installs snipara-mcp — MCP server for context-optimized documentation queries
- Installs rlm-runtime — Safe code execution with Docker isolation
- Configures .mcp.json — Ready for Claude Code, Cursor, Claude Desktop
- Sets up hooks — Session memory automation (Claude Code)
- Updates environment files — Adds API key configuration
## Interactive Setup

Run `npx create-snipara` in your project directory. You'll be prompted for:
| Prompt | Description |
|---|---|
| Project slug | Auto-detected from git remote or directory name |
| Project ID | Optional, for advanced use cases |
| API key type | Project key, Team key, Sign up, or Skip |
| API key | Your Snipara API key |
| AI client | Claude Code, Cursor, Claude Desktop, or other |
| Packages | Choose what to install (snipara-mcp, rlm-runtime) |
| LLM provider | OpenAI, Anthropic, or None (for rlm run/rlm agent CLI) |
| Run rlm init | Optional — configure execution environment (sandbox/docker/local) |
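The "auto-detected from git remote or directory name" behavior for the project slug can be approximated in plain shell (a sketch of the idea, not the tool's actual implementation):

```shell
# Approximate slug detection: prefer the git remote's repository name,
# fall back to the current directory name.
slug=$(git remote get-url origin 2>/dev/null | sed -e 's#.*/##' -e 's#\.git$##')
[ -n "$slug" ] || slug=$(basename "$PWD")
echo "$slug"
```

If no `origin` remote exists (or you are outside a git repository), the directory name wins.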
## Execution Environments

When you select RLM-Runtime during setup, you'll be asked whether to run `rlm init` to configure the execution environment:
| Environment | Description | Use Case |
|---|---|---|
| `sandbox` | RestrictedPython, safe stdlib only | Default, most secure |
| `docker` | Full Python in an isolated container | Recommended for full features |
| `local` | Full access, no isolation | Development only |
> **Security recommendation:** Use `docker` mode for production and untrusted code, including AI-generated code. `local` mode is recommended for development only.
You can also configure the environment later by running `rlm init` manually.
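To make the trade-offs concrete, here is a small helper (hypothetical, not part of the `rlm` CLI) that maps a code-trust level to the `--env` value passed to `rlm run`:

```shell
# Hypothetical helper: choose an rlm execution environment by trust level.
pick_env() {
  case "$1" in
    production|untrusted) echo docker  ;;  # isolated container (recommended)
    dev)                  echo local   ;;  # no isolation, development only
    *)                    echo sandbox ;;  # RestrictedPython, secure default
  esac
}

echo "rlm run --env $(pick_env untrusted)"
```

The point is the defaulting: anything you cannot classify should fall through to `sandbox`, never to `local`.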
## API Key Requirements
| Tool | Snipara API Key | LLM API Key (OpenAI/Anthropic) |
|---|---|---|
| `execute_python` MCP | Not needed | Not needed (your AI client is the LLM) |
| `rlm_context_query` MCP | Required | Not needed |
| `rlm_remember` / `rlm_recall` | Required | Not needed |
| `rlm run` / `rlm agent` CLI | Optional (for context) | Required |
### Key Types
| Type | Description |
|---|---|
| Project API key | Access to a single project |
| Team API key | Access to all projects in your team |
## Command Line Options
```bash
# Basic usage
npx create-snipara

# With project API key
npx create-snipara --api-key rlm_your_project_key

# With team API key (access all team projects)
npx create-snipara --team-key rlm_your_team_key

# Specify project slug
npx create-snipara --slug my-project

# Runtime only - no Snipara API key needed
npx create-snipara --runtime-only

# Skip specific installations
npx create-snipara --skip-mcp      # Skip snipara-mcp
npx create-snipara --skip-runtime  # Skip rlm-runtime
npx create-snipara --skip-hooks    # Skip Claude Code hooks
npx create-snipara --skip-test     # Skip connection test

# Accept all defaults (non-interactive)
npx create-snipara -y --api-key rlm_xxx --slug my-project
```

## What Gets Created
### `.mcp.json`
```json
{
  "mcpServers": {
    "snipara": {
      "type": "http",
      "url": "https://api.snipara.com/mcp/your-project",
      "headers": { "X-API-Key": "rlm_your_key" }
    },
    "rlm-runtime": {
      "type": "http",
      "url": "http://localhost:8765/mcp",
      "headers": {}
    }
  }
}
```

### Claude Code Hooks (if selected)
- `snipara-startup.sh`: Restores session context
- `snipara-session.sh`: Auto-remembers commits
- `snipara-compact.sh`: Saves context before compaction
### Environment Files

Updates `.env.example` and `.env.local` with:
```bash
# Snipara Configuration
SNIPARA_API_KEY=your_api_key
SNIPARA_PROJECT_SLUG=your-project

# RLM-Runtime LLM Provider (if configured)
OPENAI_API_KEY=sk-...
# or
ANTHROPIC_API_KEY=sk-ant-...
```

## After Installation
### For Claude Code / Cursor
- Restart your AI client
- MCP tools are automatically available
### For Claude Desktop
- Restart Claude Desktop
- Config is at `~/Library/Application Support/Claude/claude_desktop_config.json`
## RLM-Runtime Usage
### MCP Tools (no LLM API key needed)
Your AI client (Claude, GPT, etc.) provides the LLM — no additional API key required.
| Tool | Description |
|---|---|
| `execute_python` | Run Python in sandbox |
| `get_repl_context` | Get session variables |
| `set_repl_context` | Set session variables |
| `clear_repl_context` | Clear session |
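Under the hood, these tools are invoked as JSON-RPC `tools/call` requests per the MCP specification; your AI client builds and sends them for you. An illustrative payload (the tool name is from the table above; the request shape follows the MCP spec, and you would not normally send it by hand):

```shell
# Illustrative MCP "tools/call" payload for execute_python.
# Your AI client sends this over the transport configured in .mcp.json.
payload='{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"execute_python","arguments":{"code":"print(2 + 2)"}}}'
echo "$payload"

# To try it manually against the local server from .mcp.json
# (only if rlm-runtime is running):
#   curl -s http://localhost:8765/mcp -H 'Content-Type: application/json' -d "$payload"
```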
### CLI Commands (requires an LLM API key)

For `rlm run` and `rlm agent`, you need an LLM provider API key:
```bash
# Set your LLM provider
export OPENAI_API_KEY=sk-...
# or
export ANTHROPIC_API_KEY=sk-ant-...

# Run commands
rlm init              # Initialize configuration
rlm run --env docker  # Run with Docker isolation
rlm agent "task"      # Autonomous agent mode
rlm visualize         # Launch trajectory dashboard
```

## Available MCP Tools
After setup, you have access to 43+ MCP tools:
| Category | Tools |
|---|---|
| Context | rlm_context_query, rlm_ask, rlm_search, rlm_sections |
| Planning | rlm_plan, rlm_decompose, rlm_multi_query |
| Memory | rlm_remember, rlm_recall, rlm_memories, rlm_forget |
| Execution | execute_python, get_repl_context (via RLM-Runtime) |
| Swarms | rlm_swarm_create, rlm_claim, rlm_task_create |
## Requirements
- Node.js 18+
- Python 3.10+ (for snipara-mcp and rlm-runtime)
- Docker (optional, for RLM-Runtime isolation)