MCP Tools Reference
Snipara exposes a set of MCP (Model Context Protocol) tools that your LLM client can use to query your documentation efficiently. These tools are the core of our context optimization service.
Key Takeaways
- 46 MCP tools — From basic queries to multi-agent swarms and RLM orchestration
- Token budgeting — Control exactly how much context you receive
- Hybrid search — Keyword + semantic for best results
- Plan-based access — Free tier gets core tools, Pro+ unlocks advanced features
How It Works
MCP tools return optimized context, not LLM responses. Your client LLM uses this context to generate intelligent answers.
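Under the hood, MCP tool invocations are JSON-RPC `tools/call` requests. As a rough sketch (the exact transport framing depends on your MCP client; `buildToolCall` is an illustrative helper, not part of Snipara):

```javascript
// Illustrative shape of an MCP "tools/call" request for rlm_context_query.
// The client sends this; Snipara returns optimized context, not an answer.
function buildToolCall(id, name, args) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

const request = buildToolCall(1, "rlm_context_query", {
  query: "How does user authentication work?",
  max_tokens: 4000,
  search_mode: "hybrid",
});
```

In practice your MCP client builds and sends this for you; the point is that the response is documentation context for your LLM to reason over.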
Quick Reference
All 46 MCP tools at a glance. Click any tool name to jump to its detailed documentation.
| Tool | Category | Plan | Description |
|---|---|---|---|
| rlm_context_query | Primary | Free | Main context optimization tool with token budgeting |
| rlm_help | Primary | Free | Tool discovery and recommendations based on your query |
| rlm_decompose | Recursive | Pro+ | Break complex queries into sub-queries |
| rlm_multi_query | Recursive | Pro+ | Execute multiple queries in one call |
| rlm_plan | Recursive | Team+ | Generate full execution plans |
| rlm_multi_project_query | Recursive | Team+ | Query across all team projects |
| rlm_ask | Supporting | Free | Legacy keyword search (use rlm_context_query) |
| rlm_search | Supporting | Free | Regex pattern search |
| rlm_inject | Session | Free | Inject persistent session context |
| rlm_context | Session | Free | View current session context |
| rlm_clear_context | Session | Free | Clear session context |
| rlm_stats | Info | Free | Documentation statistics |
| rlm_sections | Info | Free | List all documentation sections |
| rlm_read | Info | Free | Read specific line ranges |
| rlm_settings | Info | Free | Get project settings |
| rlm_store_summary | Summary | Pro+ | Store LLM-generated summaries |
| rlm_get_summaries | Summary | Pro+ | Retrieve stored summaries |
| rlm_delete_summary | Summary | Pro+ | Delete stored summaries |
| rlm_shared_context | Shared | Pro+ | Get team best practices |
| rlm_list_templates | Shared | Pro+ | List prompt templates |
| rlm_get_template | Shared | Pro+ | Get and render templates |
| rlm_list_collections | Shared | Free | List accessible shared collections |
| rlm_upload_shared_document | Shared | Team+ | Upload document to shared collection |
| rlm_upload_document | Sync | Free | Upload or update a document |
| rlm_sync_documents | Sync | Free | Bulk sync multiple documents |
| RLM Orchestration Tools — Bridge between Snipara and RLM Runtime | | | |
| rlm_load_document | Orchestration | Pro+ | Load document content for RLM runtime |
| rlm_load_project | Orchestration | Pro+ | Load entire project context for RLM runtime |
| rlm_orchestrate | Orchestration | Team+ | Orchestrate multi-step document operations |
| rlm_repl_context | Orchestration | Pro+ | Bridge between Snipara MCP and RLM REPL |
| Snipara Agents Tools — Requires a separate Agents plan (Starter+, Pro+, Team+) | | | |
| rlm_remember | Agent Memory | Starter+ | Store a memory for semantic recall |
| rlm_remember_bulk | Agent Memory | Starter+ | Store multiple memories in a single call (batch) |
| rlm_recall | Agent Memory | Starter+ | Semantically recall relevant memories |
| rlm_memories | Agent Memory | Starter+ | List memories with filters |
| rlm_forget | Agent Memory | Starter+ | Delete memories by ID or filter |
| rlm_swarm_create | Swarm | Pro+ | Create a new agent swarm |
| rlm_swarm_join | Swarm | Pro+ | Join a swarm as an agent |
| rlm_claim | Swarm | Pro+ | Claim exclusive resource access |
| rlm_release | Swarm | Pro+ | Release claimed resource |
| rlm_state_get | Swarm | Pro+ | Read shared swarm state |
| rlm_state_set | Swarm | Pro+ | Write shared state (optimistic locking) |
| rlm_broadcast | Swarm | Team+ | Send event to all swarm agents |
| rlm_task_create | Swarm | Pro+ | Create task in distributed queue |
| rlm_task_claim | Swarm | Pro+ | Claim task from queue |
| rlm_task_complete | Swarm | Pro+ | Complete or fail a task |
Primary Tool
rlm_context_query
Plan: Free
The main context optimization tool. Returns the most relevant documentation sections for a query, respecting your token budget.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| query | string | Yes | The question or topic to search for |
| max_tokens | number | No | Maximum tokens to return (default: 4000) |
| search_mode | string | No | Search mode: 'keyword', 'semantic', or 'hybrid' (default: 'hybrid') |
| include_metadata | boolean | No | Include file paths and line numbers (default: true) |
Response Format
{
"sections": [
{
"title": "Authentication Flow",
"content": "...",
"file": "docs/auth.md",
"lines": [45, 120],
"relevance_score": 0.94,
"token_count": 1200
}
],
"total_tokens": 3800,
"suggestions": ["Also check: docs/security.md"]
}
Example
// Query example
{
"query": "How does user authentication work?",
"max_tokens": 4000,
"search_mode": "hybrid"
}
rlm_help
Plan: Free
Discover the right MCP tool for your task. Get recommendations based on your query, detailed tool info, or browse tools by tier. Perfect for new users or when you're unsure which tool to use.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| query | string | No | Describe what you want to do (e.g., 'search across all team projects') |
| tool | string | No | Get detailed info about a specific tool (e.g., 'rlm_context_query') |
| tier | string | No | List tools by tier: PRIMARY, POWER_USER, TEAM, UTILITY, or ADVANCED |
| limit | number | No | Maximum recommendations to return (default: 5) |
Response Format
{
"recommendations": [
{
"tool": "rlm_multi_project_query",
"score": 92,
"tier": "TEAM",
"description": "Query across all team projects",
"use_cases": ["Cross-project search", "Find implementations"],
"example": "rlm_multi_project_query(query='authentication')"
}
],
"total_tools": 46,
"tip": "Use tier='PRIMARY' to see essential tools"
}
Example
// Get tool recommendations
rlm_help({ query: "I want to search across all my team projects" })
// Get info about a specific tool
rlm_help({ tool: "rlm_context_query" })
// List all primary tools
rlm_help({ tier: "PRIMARY" })
Recursive Context Tools
These tools enable near-infinite context by allowing your LLM to orchestrate multiple queries. Your client LLM decomposes complex questions, makes multiple calls, and synthesizes the results.
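The loop looks roughly like this. The `rlmDecompose` and `rlmMultiQuery` functions below are local stubs standing in for the real MCP tool calls; in practice your client LLM drives each step:

```javascript
// Sketch of the recursive pattern: decompose a question, fetch context for
// each sub-query, then let the client LLM synthesize an answer.
function rlmDecompose(query, maxSubqueries = 5) {
  // Stub: the real tool returns sub-queries generated for your docs.
  return {
    original_query: query,
    subqueries: [
      "How does login flow work?",
      "How are JWT tokens generated?",
    ].slice(0, maxSubqueries),
  };
}

function rlmMultiQuery(queries, tokensPerQuery = 2000) {
  // Stub: the real tool returns matched sections per query.
  return {
    results: queries.map((q) => ({ query: q, sections: [], token_count: 0 })),
  };
}

function gatherContext(question) {
  const { subqueries } = rlmDecompose(question);
  const { results } = rlmMultiQuery(subqueries, 2000);
  return results; // the client LLM synthesizes an answer from these
}

const results = gatherContext("Explain the full authentication system");
```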
rlm_decompose
Plan: Pro+
Break a complex query into sub-queries with an execution plan. Your LLM can then call rlm_multi_query to get context for each sub-query.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| query | string | Yes | The complex question to decompose |
| max_subqueries | number | No | Maximum number of sub-queries to generate (default: 5) |
Response Format
{
"original_query": "Explain the full authentication system",
"subqueries": [
"How does login flow work?",
"How are JWT tokens generated?",
"How are sessions managed?",
"How does logout work?"
],
"execution_plan": "sequential",
"estimated_tokens": 12000
}
rlm_multi_query
Plan: Pro+
Execute multiple queries in a single call with per-query token budgets. Efficient for parallel context retrieval.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| queries | string[] | Yes | Array of queries to execute |
| tokens_per_query | number | No | Token budget per query (default: 2000) |
Response Format
{
"results": [
{
"query": "How does login flow work?",
"sections": [...],
"token_count": 1800
},
{
"query": "How are JWT tokens generated?",
"sections": [...],
"token_count": 1500
}
],
"total_tokens": 6500
}
rlm_plan
Plan: Team+
Generate a full execution plan for complex questions, including query decomposition, dependencies, and optimal execution order.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| query | string | Yes | The complex question to plan for |
| depth | number | No | Maximum recursion depth (default: 2) |
Response Format
{
"plan": {
"steps": [
{ "id": 1, "query": "...", "depends_on": [] },
{ "id": 2, "query": "...", "depends_on": [1] }
],
"parallel_groups": [[1, 2], [3]],
"estimated_tokens": 15000
}
}
rlm_multi_project_query
Plan: Team+
Query across all projects in a team with a single call. Returns context from multiple projects ranked by relevance. Requires a team API key.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| query | string | Yes | The question to search for |
| max_tokens | number | No | Total token budget across all projects (default: 4000) |
| per_project_limit | number | No | Maximum sections per project (default: 3) |
| project_ids | string[] | No | Include only these project IDs/slugs |
| exclude_project_ids | string[] | No | Exclude these project IDs/slugs |
Response Format
{
"query": "How does authentication work?",
"projects_queried": 5,
"projects_skipped": 0,
"results": [
{
"project_slug": "api-service",
"sections": [...],
"tokens": 800
},
{
"project_slug": "web-app",
"sections": [...],
"tokens": 600
}
],
"total_tokens": 3200
}
Supporting Tools
rlm_ask
Plan: Free
Legacy query tool using keyword search. Use rlm_context_query instead for better results.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| question | string | Yes | The question to search for |
Response Format
{
"content": "Relevant documentation content...",
"sources": ["docs/guide.md:45-120"]
}
rlm_search
Plan: Free
Search documentation using regex patterns. Useful for finding specific function names, configuration keys, or code patterns.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| pattern | string | Yes | Regex pattern to search for |
| max_results | number | No | Maximum results to return (default: 20) |
Response Format
{
"matches": [
{
"file": "docs/config.md",
"line": 45,
"content": "API_KEY=your-key-here",
"context": "..."
}
],
"total_matches": 15
}
rlm_inject
Plan: Free
Inject session context that persists across queries. Useful for setting task-specific focus areas.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| context | string | Yes | Context to inject (task description, focus areas) |
| append | boolean | No | Append to existing context instead of replacing (default: false) |
Response Format
{
"success": true,
"context": "Task: Fix authentication bug. Focus: auth.ts, middleware.ts"
}
rlm_context
Plan: Free
Show the current session context. Useful for checking what context is active.
Parameters
This tool takes no parameters.
Response Format
{
"context": "Task: Fix authentication bug. Focus: auth.ts, middleware.ts",
"injected_at": "2024-01-15T10:30:00Z"
}
rlm_clear_context
Plan: Free
Clear the session context. Use when switching tasks.
Parameters
This tool takes no parameters.
Response Format
{
"success": true,
"message": "Session context cleared"
}
rlm_stats
Plan: Free
Get statistics about the indexed documentation.
Parameters
This tool takes no parameters.
Response Format
{
"total_documents": 42,
"total_sections": 156,
"total_tokens": 125000,
"last_updated": "2024-01-15T10:30:00Z"
}
rlm_sections
Plan: Free
List all documentation sections. Useful for understanding what topics are covered.
Parameters
This tool takes no parameters.
Response Format
{
"sections": [
{ "id": "auth-overview", "title": "Authentication Overview", "file": "docs/auth.md" },
{ "id": "api-keys", "title": "API Keys", "file": "docs/auth.md" }
]
}
rlm_read
Plan: Free
Read specific line ranges from documentation. Useful after searching to get full context.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| start_line | number | Yes | Starting line number |
| end_line | number | Yes | Ending line number |
Response Format
{
"content": "Full content of lines 100-150...",
"file": "docs/auth.md",
"lines": [100, 150]
}
rlm_settings
Plan: Free
Get project settings from the Snipara dashboard. Returns configuration like auto-inject preferences, default search mode, etc.
Parameters
This tool takes no parameters.
Response Format
{
"projectId": "abc123",
"name": "My Project",
"defaultSearchMode": "hybrid",
"autoInjectInstructions": true,
"maxTokensDefault": 4000
}
Summary Storage Tools
Store and retrieve LLM-generated summaries for your documents. Your LLM generates summaries, we store them for faster future queries.
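As a sketch, a client-side helper can validate arguments before calling rlm_store_summary. `buildStoreSummaryArgs` is hypothetical; the allowed values mirror the `summary_type` parameter documented below:

```javascript
// Hypothetical client-side validation before an rlm_store_summary call.
const SUMMARY_TYPES = ["concise", "detailed", "technical", "keywords", "custom"];

function buildStoreSummaryArgs(documentPath, summary, summaryType = "concise") {
  if (!SUMMARY_TYPES.includes(summaryType)) {
    throw new Error(`unknown summary_type: ${summaryType}`);
  }
  return { document_path: documentPath, summary, summary_type: summaryType };
}

const args = buildStoreSummaryArgs(
  "docs/auth.md",
  "Authentication uses JWT tokens..."
);
```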
rlm_store_summary
Plan: Pro+
Store an LLM-generated summary for a document. Later queries can use stored summaries for faster, more token-efficient responses.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| document_path | string | Yes | Path to the document (relative to project root) |
| summary | string | Yes | The summary text to store |
| summary_type | string | No | Type: 'concise', 'detailed', 'technical', 'keywords', or 'custom' |
| section_id | string | No | Optional section identifier for partial summaries |
| generated_by | string | No | Model that generated the summary (e.g., 'claude-3.5-sonnet') |
Response Format
{
"summary_id": "sum_abc123",
"document_path": "docs/auth.md",
"summary_type": "concise",
"token_count": 150,
"created": true,
"message": "Summary stored successfully"
}
rlm_get_summaries
Plan: Pro+
Retrieve stored summaries with optional filters. Use to check what summaries exist or to use them in context.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| document_path | string | No | Filter by document path |
| summary_type | string | No | Filter by summary type |
| include_content | boolean | No | Include summary content in response (default: true) |
Response Format
{
"summaries": [
{
"summary_id": "sum_abc123",
"document_path": "docs/auth.md",
"summary_type": "concise",
"token_count": 150,
"content": "Authentication uses JWT tokens...",
"created_at": "2024-01-15T10:30:00Z"
}
],
"total_count": 1,
"total_tokens": 150
}
rlm_delete_summary
Plan: Pro+
Delete stored summaries by ID, document path, or type.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| summary_id | string | No | Specific summary ID to delete |
| document_path | string | No | Delete all summaries for this document |
| summary_type | string | No | Delete summaries of this type |
Response Format
{
"deleted_count": 3,
"message": "Deleted 3 summaries"
}
Shared Context Tools
Access team-wide coding standards, best practices, and prompt templates that are shared across projects. Perfect for maintaining consistency across your organization.
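To illustrate the idea of allocating a token budget by category priority (the actual server-side algorithm may differ), a greedy scheme could look like this:

```javascript
// Illustrative greedy allocation: fill the token budget category by category,
// in priority order, skipping documents that no longer fit. A sketch of the
// concept, not Snipara's actual implementation.
const PRIORITY = ["MANDATORY", "BEST_PRACTICES", "GUIDELINES", "REFERENCE"];

function allocate(docs, maxTokens) {
  const sorted = [...docs].sort(
    (a, b) => PRIORITY.indexOf(a.category) - PRIORITY.indexOf(b.category)
  );
  const picked = [];
  let used = 0;
  for (const doc of sorted) {
    if (used + doc.token_count <= maxTokens) {
      picked.push(doc);
      used += doc.token_count;
    }
  }
  return { picked, used };
}

const { picked, used } = allocate(
  [
    { title: "TypeScript Standards", category: "MANDATORY", token_count: 800 },
    { title: "Style Guide", category: "GUIDELINES", token_count: 1200 },
    { title: "Error Handling", category: "BEST_PRACTICES", token_count: 900 },
  ],
  2000
);
```

The effect: MANDATORY content is never crowded out by lower-priority guidelines when the budget is tight.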
rlm_shared_context
Plan: Pro+
Get merged context from linked shared collections. Returns team coding standards and best practices, with the token budget allocated by category priority.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| max_tokens | number | No | Maximum tokens to return (default: 4000) |
| categories | string[] | No | Filter by categories: 'MANDATORY', 'BEST_PRACTICES', 'GUIDELINES', 'REFERENCE' |
| include_content | boolean | No | Include merged document content (default: true) |
Response Format
{
"documents": [
{
"id": "doc_123",
"title": "TypeScript Standards",
"category": "MANDATORY",
"token_count": 800,
"collection_name": "Team Coding Standards"
}
],
"merged_content": "# TypeScript Standards\n...",
"total_tokens": 2400,
"collections_loaded": 2
}
rlm_list_templates
Plan: Pro+
List available prompt templates from linked shared collections. Templates are reusable prompts for common tasks.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| category | string | No | Filter by template category |
Response Format
{
"templates": [
{
"id": "tpl_123",
"name": "Security Review",
"slug": "security-review",
"description": "Review code for security issues",
"category": "review",
"collection_name": "Team Templates"
}
],
"total_count": 5,
"categories": ["review", "refactoring", "testing"]
}
rlm_get_template
Plan: Pro+
Get a specific prompt template and optionally render it with variable substitution.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| template_id | string | No | Template ID |
| slug | string | No | Template slug (alternative to ID) |
| variables | object | No | Variable values to substitute in the template |
Response Format
{
"template": {
"id": "tpl_123",
"name": "Security Review",
"slug": "security-review",
"prompt": "Review the following code for {{focus_area}}:\n{{code}}",
"variables": ["focus_area", "code"]
},
"rendered_prompt": "Review the following code for SQL injection:\n...",
"missing_variables": []
}
rlm_list_collections
Plan: Free
List all shared context collections accessible to you. Returns collections you own, team collections you're a member of, and public collections. Use this to discover collection IDs for uploading documents.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| include_public | boolean | No | Include public collections in results (default: true) |
Response Format
{
"collections": [
{
"id": "col_abc123",
"name": "Team Coding Standards",
"slug": "team-coding-standards",
"description": "Shared coding guidelines for all projects",
"scope": "team",
"access_type": "team_member",
"_count": {
"documents": 12,
"templates": 5
}
},
{
"id": "col_def456",
"name": "Public TypeScript Guide",
"slug": "public-ts-guide",
"scope": "public",
"access_type": "public",
"_count": {
"documents": 8,
"templates": 0
}
}
],
"count": 2
}
Example
// List all accessible collections
rlm_list_collections()
// Exclude public collections
rlm_list_collections({ include_public: false })
rlm_upload_shared_document
Plan: Team+
Upload or update a document in a shared context collection. Use for team best practices, coding standards, and guidelines that should be available across projects.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| collection_id | string | Yes | The shared collection ID (get from rlm_list_collections) |
| title | string | Yes | Document title |
| content | string | Yes | Document content (markdown) |
| category | string | No | Category: 'MANDATORY', 'BEST_PRACTICES', 'GUIDELINES', or 'REFERENCE' (default: BEST_PRACTICES) |
| priority | number | No | Priority within category, 0-100 (default: 0, higher = more important) |
| tags | string[] | No | Tags for filtering and organization |
Response Format
{
"success": true,
"document_id": "doc_xyz789",
"collection_id": "col_abc123",
"title": "Error Handling Standards",
"category": "BEST_PRACTICES",
"action": "created"
}
Example
// Upload a new coding standard
rlm_upload_shared_document({
collection_id: "col_abc123",
title: "Error Handling Standards",
content: "# Error Handling\n\nAlways use custom error classes...",
category: "BEST_PRACTICES",
priority: 50,
tags: ["errors", "typescript"]
})
Document Sync Tools
Upload and synchronize documents directly from your LLM client. Keep your documentation up-to-date without leaving your editor.
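A sketch of preparing an rlm_sync_documents payload from in-memory files. The first-heading title extraction is an assumption about how a title could be derived when none is given; the helper names are illustrative:

```javascript
// Build an rlm_sync_documents payload from a map of path -> markdown content.
function extractTitle(content) {
  // Assumption for illustration: take the first "# " heading as the title.
  const match = content.match(/^#\s+(.+)$/m);
  return match ? match[1].trim() : undefined;
}

function buildSyncArgs(files, deleteMissing = false) {
  const documents = Object.entries(files).map(([path, content]) => ({
    path,
    content,
    title: extractTitle(content),
  }));
  return { documents, delete_missing: deleteMissing };
}

const args = buildSyncArgs({
  "docs/getting-started.md": "# Getting Started\n\nInstall the CLI...",
  "docs/auth.md": "# Authentication\n\nWe use JWT...",
});
```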
rlm_upload_document
Plan: Free
Upload or update a single document. Creates the document if it doesn't exist, updates it if it does.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| path | string | Yes | Document path (e.g., 'docs/getting-started.md') |
| content | string | Yes | Document content |
| title | string | No | Document title (extracted from content if not provided) |
Response Format
{
"success": true,
"document_id": "doc_abc123",
"path": "docs/getting-started.md",
"action": "created",
"token_count": 1500
}
rlm_sync_documents
Plan: Free
Bulk sync multiple documents in a single call. Efficient for initial project setup or major updates.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| documents | array | Yes | Array of {path, content, title?} objects |
| delete_missing | boolean | No | Delete documents not in the list (default: false) |
Response Format
{
"success": true,
"created": 5,
"updated": 2,
"deleted": 0,
"total_documents": 7,
"total_tokens": 15000
}
RLM Orchestration Tools
Orchestration tools bridge Snipara's context optimization with RLM Runtime execution environments. Load documents and projects into the RLM REPL, orchestrate multi-step operations, and share context between MCP and runtime sessions.
rlm_load_document
Plan: Pro+
Load a specific document's content from your Snipara project into the RLM runtime environment. Returns optimized content ready for REPL injection.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| document_path | string | Yes | Path to the document in your Snipara project |
| max_tokens | integer | No | Maximum tokens to return (default: 4000) |
Response Format
{
"success": true,
"document_path": "docs/api.md",
"content": "# API Reference\n...",
"tokens": 3200,
"sections": 12
}
rlm_load_project
Plan: Pro+
Load the entire project context into the RLM runtime. Returns a structured overview of all indexed documents with section summaries for broad context injection.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| max_tokens | integer | No | Maximum tokens for the project overview (default: 8000) |
| include_summaries | boolean | No | Include stored summaries if available (default: true) |
Response Format
{
"success": true,
"project_slug": "my-project",
"documents": 15,
"total_sections": 142,
"content": "# Project: my-project\n...",
"tokens": 7500
}
rlm_orchestrate
Plan: Team+
Orchestrate multi-step document operations, combining queries, decomposition, and context loading into a single coordinated execution plan.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| task | string | Yes | The task to orchestrate (e.g., 'Analyze authentication flow') |
| strategy | string | No | Execution strategy: relevance_first, breadth_first, depth_first (default: relevance_first) |
| max_tokens | integer | No | Total token budget for the orchestration (default: 16000) |
Response Format
{
"success": true,
"steps_executed": 4,
"context_gathered": "# Orchestration Result\n...",
"tokens_used": 12400,
"sub_queries": ["auth middleware", "JWT validation", "session management"]
}
rlm_repl_context
Plan: Pro+
Bridge between Snipara MCP and the RLM Runtime REPL. Inject Snipara context into REPL variables or extract REPL state back to Snipara for persistence.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| action | string | Yes | Action: 'inject' (Snipara → REPL) or 'extract' (REPL → Snipara) |
| key | string | Yes | Variable name in REPL context |
| query | string | No | Snipara query to resolve and inject (for 'inject' action) |
| value | string | No | Value to extract from REPL (for 'extract' action) |
Response Format
{
"success": true,
"action": "inject",
"key": "auth_docs",
"tokens": 2800,
"repl_session": "default"
}
Snipara Agents Tools
Snipara Agents provides AI agent infrastructure with persistent memory, multi-agent swarms, and distributed task coordination. Agent tools require a separate Snipara Agents subscription.
Separate Subscription Required
Agent tools require a Snipara Agents subscription (starting at $15/month). See the Agents Documentation for full details.
Agent Memory (5 tools)
rlm_remember, rlm_remember_bulk, rlm_recall, rlm_memories, rlm_forget
Persistent semantic memory for individual agents. Store facts, decisions, and preferences with confidence decay and automatic relevance-based recall.
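Confidence decay can be pictured as an exponential half-life: older memories count for less unless refreshed. The helper and the 30-day half-life below are illustrative assumptions, not Snipara's actual schedule:

```javascript
// Illustrative exponential decay: confidence halves every `halfLifeDays`.
function decayedConfidence(initial, ageDays, halfLifeDays = 30) {
  return initial * Math.pow(0.5, ageDays / halfLifeDays);
}

const fresh = decayedConfidence(1.0, 0);  // brand-new memory
const older = decayedConfidence(1.0, 30); // one half-life old
```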
Requires: Agents Starter+
Multi-Agent Swarms (10 tools)
rlm_swarm_create, rlm_claim, rlm_state_set, rlm_task_create, and more
Build agent swarms with resource locking, shared state, and distributed task queues. Coordinate multiple agents working on complex tasks.
Requires: Agents Pro+
Practical Recipes
Here are common workflows combining multiple MCP tools for real-world tasks:
Recipe 1: Debug an Auth Flow
Find and understand authentication issues in your codebase.
// Step 1: Find auth-related code
rlm_search({ pattern: "authenticate|login|session|jwt" })
// Step 2: Get detailed context on the auth system
rlm_context_query({
query: "authentication flow and session management",
max_tokens: 6000,
search_mode: "hybrid"
})
// Step 3: Check team security standards
rlm_shared_context({ categories: ["MANDATORY", "BEST_PRACTICES"] })
Recipe 2: Generate Feature with Repo Context
Break down a complex feature and get relevant context for each part.
// Step 1: Break down the task into subtasks
rlm_decompose({
query: "implement password reset with email verification",
max_subqueries: 5
})
// Step 2: Get context for each subtask in parallel
rlm_multi_query({
queries: [
"email service configuration and templates",
"password hashing and validation",
"reset token generation and expiry"
],
tokens_per_query: 2000
})
// Step 3: Check existing patterns
rlm_shared_context({ categories: ["BEST_PRACTICES"] })
Recipe 3: Onboard a New Developer
Get a quick overview of a codebase for new team members.
// Step 1: Get project statistics
rlm_stats()
// Step 2: List all documentation sections
rlm_sections()
// Step 3: Get high-level summaries of key areas
rlm_get_summaries({
summary_type: "overview",
include_content: true
})
// Step 4: Check team coding standards
rlm_shared_context({
categories: ["MANDATORY", "BEST_PRACTICES", "GUIDELINES"]
})
Expected Usage Per RLM Completion
When using RLM with your LLM for agentic coding tasks, the number of Snipara queries varies based on task complexity. Here's what to expect:
| Task Complexity | Queries per Completion | Examples |
|---|---|---|
| Simple | ~3-5 queries | Fix a typo, add a simple function, update configuration |
| Medium | ~8-12 queries | Add a new API endpoint, implement a feature with tests, refactor a module |
| Complex | ~15-25 queries | Implement full auth system, build multi-file feature, major refactoring |
Planning Your Query Budget
With the Free plan (100 queries/month), you can complete roughly 8-12 medium-complexity tasks, or 20-30 simple ones. The Pro plan (5,000 queries) supports roughly 400-600 medium tasks per month.
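The capacity estimates follow directly from the queries-per-completion table: divide the monthly quota by the per-task query count.

```javascript
// Rough task-capacity estimate from the queries-per-completion table above.
function tasksPerMonth(monthlyQueries, queriesPerTask) {
  return Math.floor(monthlyQueries / queriesPerTask);
}

// Free plan, medium tasks (~8-12 queries each):
const freeLow = tasksPerMonth(100, 12);  // 8
const freeHigh = tasksPerMonth(100, 8);  // 12
// Pro plan, medium tasks:
const proLow = tasksPerMonth(5000, 12);  // 416
const proHigh = tasksPerMonth(5000, 8);  // 625
```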
Feature Availability by Plan
| Feature | Free | Pro ($19/mo) | Team ($49/mo) | Enterprise |
|---|---|---|---|---|
| Keyword Search | Yes | Yes | Yes | Yes |
| Semantic Search | - | Yes | Yes | Yes |
| Hybrid Search | - | Yes | Yes | Yes |
| Token Budgeting | Yes | Yes | Yes | Yes |
| Query Decomposition | - | Yes | Yes | Yes |
| Multi-Query Batching | - | Yes | Yes | Yes |
| Context Caching | - | - | Yes | Yes |
| Summary Storage | - | Yes | Yes | Yes |
| Shared Context | - | Yes | Yes | Yes |
| Prompt Templates | - | Yes | Yes | Yes |
| Document Sync | Yes | Yes | Yes | Yes |
| Queries/Month | 100 | 5,000 | 20,000 | Unlimited |
| Snipara Agents (separate subscription) | | | | |
| Agent Memory | Requires Agents Starter+ ($15/mo) | | | |
| Multi-Agent Swarms | Requires Agents Pro+ ($39/mo) | | | |
Using with RLM Runtime
For complex multi-step tasks, use rlm-runtime — a Python CLI that orchestrates LLM completions with sandboxed code execution and automatic Snipara context retrieval.
pip install rlm-runtime[snipara]
rlm run --model claude-sonnet-4-20250514 "Refactor auth to use JWT"
RLM Runtime automatically calls Snipara MCP tools (rlm_context_query, rlm_shared_context, etc.) to retrieve relevant documentation for each sub-task, then executes generated code in a sandboxed environment.
| Feature | Direct MCP | RLM Runtime |
|---|---|---|
| Context Retrieval | Manual tool calls | Automatic per sub-task |
| Task Decomposition | LLM-driven | Built-in orchestration |
| Code Execution | Not available | Sandboxed REPL |
| Best For | Simple queries, chat | Complex refactoring, multi-file changes |
Error Handling
All MCP tools return consistent error responses. Handle these in your LLM workflows:
| Error Code | HTTP Status | Cause | Solution |
|---|---|---|---|
| INVALID_API_KEY | 401 | Missing or invalid API key | Check X-API-Key header matches your project key |
| PROJECT_NOT_FOUND | 404 | Project slug doesn't exist | Verify project slug in MCP endpoint URL |
| QUOTA_EXCEEDED | 429 | Monthly query limit reached | Upgrade plan or wait for reset |
| FEATURE_UNAVAILABLE | 403 | Tool requires higher plan | Upgrade to Pro+ for recursive tools |
| RATE_LIMITED | 429 | Too many requests per minute | Add backoff/retry logic |
| VALIDATION_ERROR | 400 | Invalid parameters | Check parameter types and required fields |
Error Response Format
{
"error": {
"code": "QUOTA_EXCEEDED",
"message": "Monthly query limit reached (100/100)",
"details": {
"limit": 100,
"used": 100,
"reset_at": "2025-02-01T00:00:00Z"
}
}
}
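For RATE_LIMITED (429) responses, a simple exponential backoff is usually enough. This sketch assumes a `callTool` wrapper around your MCP client; both the wrapper and the delay schedule are illustrative, not a Snipara API:

```javascript
// Exponential backoff schedule: 500ms, 1s, 2s, 4s, ... capped at 30s.
function backoffDelays(maxRetries, baseMs = 500) {
  return Array.from({ length: maxRetries }, (_, i) =>
    Math.min(baseMs * 2 ** i, 30000)
  );
}

// Retry a tool call while it keeps returning RATE_LIMITED.
async function withRetry(callTool, maxRetries = 4) {
  const delays = backoffDelays(maxRetries);
  let res;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    res = await callTool();
    if (!res.error || res.error.code !== "RATE_LIMITED") return res;
    if (attempt === maxRetries) break; // give up, surface the error
    await new Promise((resolve) => setTimeout(resolve, delays[attempt]));
  }
  return res;
}
```

QUOTA_EXCEEDED is also a 429 but should not be retried; wait for the `reset_at` timestamp in the error details instead.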