MCP Tools Reference

Snipara exposes a set of MCP (Model Context Protocol) tools that your LLM client can use to query your documentation efficiently. These tools are the core of our context optimization service.

Key Takeaways

  • 46 MCP tools — From basic queries to multi-agent swarms and RLM orchestration
  • Token budgeting — Control exactly how much context you receive
  • Hybrid search — Keyword + semantic for best results
  • Plan-based access — Free tier gets core tools, Pro+ unlocks advanced features

How It Works

MCP tools return optimized context, not LLM responses. Your client LLM uses this context to generate intelligent answers.
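The division of labor can be sketched in a few lines. This is an illustrative stand-in, not a real SDK — `contextQuery` here fakes the shape of an rlm_context_query response so the pattern is visible:

```javascript
// Hypothetical stand-in for an rlm_context_query MCP call.
// A real call goes over MCP and returns ranked documentation sections.
function contextQuery(query) {
  return {
    sections: [
      { title: "Authentication Flow", content: "Login issues a JWT...", relevance_score: 0.94 },
    ],
    total_tokens: 1200,
  };
}

// The client assembles the returned sections into the prompt it sends to
// its own LLM -- Snipara returns context, never the final answer.
function buildPrompt(question) {
  const ctx = contextQuery(question);
  const docs = ctx.sections
    .map((s) => `## ${s.title}\n${s.content}`)
    .join("\n\n");
  return `Use the documentation below to answer.\n\n${docs}\n\nQuestion: ${question}`;
}

console.log(buildPrompt("How does user authentication work?"));
```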

Quick Reference

All 46 MCP tools at a glance. Click any tool name to jump to its detailed documentation.

| Tool | Category | Plan | Description |
| --- | --- | --- | --- |
| rlm_context_query | Primary | Free | Main context optimization tool with token budgeting |
| rlm_help | Primary | Free | Tool discovery and recommendations based on your query |
| rlm_decompose | Recursive | Pro+ | Break complex queries into sub-queries |
| rlm_multi_query | Recursive | Pro+ | Execute multiple queries in one call |
| rlm_plan | Recursive | Team+ | Generate full execution plans |
| rlm_multi_project_query | Recursive | Team+ | Query across all team projects |
| rlm_ask | Supporting | Free | Legacy keyword search (use rlm_context_query) |
| rlm_search | Supporting | Free | Regex pattern search |
| rlm_inject | Session | Free | Inject persistent session context |
| rlm_context | Session | Free | View current session context |
| rlm_clear_context | Session | Free | Clear session context |
| rlm_stats | Info | Free | Documentation statistics |
| rlm_sections | Info | Free | List all documentation sections |
| rlm_read | Info | Free | Read specific line ranges |
| rlm_settings | Info | Free | Get project settings |
| rlm_store_summary | Summary | Pro+ | Store LLM-generated summaries |
| rlm_get_summaries | Summary | Pro+ | Retrieve stored summaries |
| rlm_delete_summary | Summary | Pro+ | Delete stored summaries |
| rlm_shared_context | Shared | Pro+ | Get team best practices |
| rlm_list_templates | Shared | Pro+ | List prompt templates |
| rlm_get_template | Shared | Pro+ | Get and render templates |
| rlm_list_collections | Shared | Free | List accessible shared collections |
| rlm_upload_shared_document | Shared | Team+ | Upload document to shared collection |
| rlm_upload_document | Sync | Free | Upload or update a document |
| rlm_sync_documents | Sync | Free | Bulk sync multiple documents |

RLM Orchestration Tools — Bridge between Snipara and RLM Runtime

| Tool | Category | Plan | Description |
| --- | --- | --- | --- |
| rlm_load_document | Orchestration | Pro+ | Load document content for RLM runtime |
| rlm_load_project | Orchestration | Pro+ | Load entire project context for RLM runtime |
| rlm_orchestrate | Orchestration | Team+ | Orchestrate multi-step document operations |
| rlm_repl_context | Orchestration | Pro+ | Bridge between Snipara MCP and RLM REPL |

Snipara Agents Tools — Requires an Agents plan (Starter+, Pro+, or Team+)

| Tool | Category | Plan | Description |
| --- | --- | --- | --- |
| rlm_remember | Agent Memory | Starter+ | Store a memory for semantic recall |
| rlm_remember_bulk | Agent Memory | Starter+ | Store multiple memories in a single call (batch) |
| rlm_recall | Agent Memory | Starter+ | Semantically recall relevant memories |
| rlm_memories | Agent Memory | Starter+ | List memories with filters |
| rlm_forget | Agent Memory | Starter+ | Delete memories by ID or filter |
| rlm_swarm_create | Swarm | Pro+ | Create a new agent swarm |
| rlm_swarm_join | Swarm | Pro+ | Join a swarm as an agent |
| rlm_claim | Swarm | Pro+ | Claim exclusive resource access |
| rlm_release | Swarm | Pro+ | Release a claimed resource |
| rlm_state_get | Swarm | Pro+ | Read shared swarm state |
| rlm_state_set | Swarm | Pro+ | Write shared state (optimistic locking) |
| rlm_broadcast | Swarm | Team+ | Send event to all swarm agents |
| rlm_task_create | Swarm | Pro+ | Create task in distributed queue |
| rlm_task_claim | Swarm | Pro+ | Claim task from queue |
| rlm_task_complete | Swarm | Pro+ | Complete or fail a task |

Primary Tool

rlm_context_query

Free

The main context optimization tool. Returns the most relevant documentation sections for a query, respecting your token budget.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| query | string | Yes | The question or topic to search for |
| max_tokens | number | No | Maximum tokens to return (default: 4000) |
| search_mode | string | No | Search mode: 'keyword', 'semantic', or 'hybrid' (default: 'hybrid') |
| include_metadata | boolean | No | Include file paths and line numbers (default: true) |

Response Format

{
  "sections": [
    {
      "title": "Authentication Flow",
      "content": "...",
      "file": "docs/auth.md",
      "lines": [45, 120],
      "relevance_score": 0.94,
      "token_count": 1200
    }
  ],
  "total_tokens": 3800,
  "suggestions": ["Also check: docs/security.md"]
}

Example

// Query example
{
  "query": "How does user authentication work?",
  "max_tokens": 4000,
  "search_mode": "hybrid"
}

rlm_help

Free

Discover the right MCP tool for your task. Get recommendations based on your query, detailed tool info, or browse tools by tier. Perfect for new users or when you're unsure which tool to use.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| query | string | No | Describe what you want to do (e.g., 'search across all team projects') |
| tool | string | No | Get detailed info about a specific tool (e.g., 'rlm_context_query') |
| tier | string | No | List tools by tier: PRIMARY, POWER_USER, TEAM, UTILITY, or ADVANCED |
| limit | number | No | Maximum recommendations to return (default: 5) |

Response Format

{
  "recommendations": [
    {
      "tool": "rlm_multi_project_query",
      "score": 92,
      "tier": "TEAM",
      "description": "Query across all team projects",
      "use_cases": ["Cross-project search", "Find implementations"],
      "example": "rlm_multi_project_query(query='authentication')"
    }
  ],
  "total_tools": 46,
  "tip": "Use tier='PRIMARY' to see essential tools"
}

Example

// Get tool recommendations
rlm_help({ query: "I want to search across all my team projects" })

// Get info about a specific tool
rlm_help({ tool: "rlm_context_query" })

// List all primary tools
rlm_help({ tier: "PRIMARY" })

Recursive Context Tools

These tools enable near-infinite context by allowing your LLM to orchestrate multiple queries. Your client LLM decomposes complex questions, makes multiple calls, and synthesizes the results.
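The orchestration loop described above can be sketched as follows. The `decompose` and `multiQuery` functions are hypothetical local stand-ins for the rlm_decompose and rlm_multi_query MCP tools (they only mimic the documented response shapes):

```javascript
// Stand-in for rlm_decompose: a real call returns server-generated
// sub-queries; this stub only illustrates the response shape.
function decompose(query, maxSubqueries = 5) {
  return {
    original_query: query,
    subqueries: ["How does login flow work?", "How are sessions managed?"],
  };
}

// Stand-in for rlm_multi_query: one batched call, one budget per query.
function multiQuery(queries, tokensPerQuery = 2000) {
  return {
    results: queries.map((q) => ({ query: q, sections: [], token_count: 0 })),
  };
}

// The client LLM's loop: decompose the question, fetch context for all
// sub-queries in one batched call, then synthesize from the results.
function gatherContext(complexQuery) {
  const plan = decompose(complexQuery);
  const batch = multiQuery(plan.subqueries, 2000);
  return batch.results; // handed back to the LLM for synthesis
}

console.log(gatherContext("Explain the full authentication system").length);
```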

rlm_decompose

Pro+

Break a complex query into sub-queries with an execution plan. Your LLM can then call rlm_multi_query to get context for each sub-query.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| query | string | Yes | The complex question to decompose |
| max_subqueries | number | No | Maximum number of sub-queries to generate (default: 5) |

Response Format

{
  "original_query": "Explain the full authentication system",
  "subqueries": [
    "How does login flow work?",
    "How are JWT tokens generated?",
    "How are sessions managed?",
    "How does logout work?"
  ],
  "execution_plan": "sequential",
  "estimated_tokens": 12000
}
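Example

A representative call using the parameters above (the values are illustrative):

```
// Break a complex question into sub-queries
rlm_decompose({
  query: "Explain the full authentication system",
  max_subqueries: 4
})
```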

rlm_multi_query

Pro+

Execute multiple queries in a single call with per-query token budgets. Efficient for parallel context retrieval.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| queries | string[] | Yes | Array of queries to execute |
| tokens_per_query | number | No | Token budget per query (default: 2000) |

Response Format

{
  "results": [
    {
      "query": "How does login flow work?",
      "sections": [...],
      "token_count": 1800
    },
    {
      "query": "How are JWT tokens generated?",
      "sections": [...],
      "token_count": 1500
    }
  ],
  "total_tokens": 6500
}
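Example

A representative call using the parameters above (the queries are illustrative):

```
// Fetch context for several sub-queries in one batched call
rlm_multi_query({
  queries: [
    "How does login flow work?",
    "How are JWT tokens generated?"
  ],
  tokens_per_query: 2000
})
```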

rlm_plan

Team+

Generate a full execution plan for complex questions, including query decomposition, dependencies, and optimal execution order.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| query | string | Yes | The complex question to plan for |
| depth | number | No | Maximum recursion depth (default: 2) |

Response Format

{
  "plan": {
    "steps": [
      { "id": 1, "query": "...", "depends_on": [] },
      { "id": 2, "query": "...", "depends_on": [1] }
    ],
    "parallel_groups": [[1, 2], [3]],
    "estimated_tokens": 15000
  }
}

rlm_multi_project_query

Team+

Query across all projects in a team with a single call. Returns context from multiple projects ranked by relevance. Requires a team API key.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| query | string | Yes | The question to search for |
| max_tokens | number | No | Total token budget across all projects (default: 4000) |
| per_project_limit | number | No | Maximum sections per project (default: 3) |
| project_ids | string[] | No | Include only these project IDs/slugs |
| exclude_project_ids | string[] | No | Exclude these project IDs/slugs |

Response Format

{
  "query": "How does authentication work?",
  "projects_queried": 5,
  "projects_skipped": 0,
  "results": [
    {
      "project_slug": "api-service",
      "sections": [...],
      "tokens": 800
    },
    {
      "project_slug": "web-app",
      "sections": [...],
      "tokens": 600
    }
  ],
  "total_tokens": 3200
}
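Example

A representative call using the parameters above (the project slug is a hypothetical example):

```
// Search every team project except one
rlm_multi_project_query({
  query: "How does authentication work?",
  max_tokens: 4000,
  exclude_project_ids: ["legacy-app"]
})
```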

Supporting Tools

rlm_ask

Free

Legacy query tool using keyword search. Use rlm_context_query instead for better results.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| question | string | Yes | The question to search for |

Response Format

{
  "content": "Relevant documentation content...",
  "sources": ["docs/guide.md:45-120"]
}

rlm_search

Free

Search documentation using regex patterns. Useful for finding specific function names, configuration keys, or code patterns.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| pattern | string | Yes | Regex pattern to search for |
| max_results | number | No | Maximum results to return (default: 20) |

Response Format

{
  "matches": [
    {
      "file": "docs/config.md",
      "line": 45,
      "content": "API_KEY=your-key-here",
      "context": "..."
    }
  ],
  "total_matches": 15
}
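Example

A representative call using the parameters above (the pattern is illustrative):

```
// Find every place a config key appears, in either naming style
rlm_search({ pattern: "API_KEY|apiKey", max_results: 20 })
```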

rlm_inject

Free

Inject session context that persists across queries. Useful for setting task-specific focus areas.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| context | string | Yes | Context to inject (task description, focus areas) |
| append | boolean | No | Append to existing context instead of replacing (default: false) |

Response Format

{
  "success": true,
  "context": "Task: Fix authentication bug. Focus: auth.ts, middleware.ts"
}
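Example

Representative calls using the parameters above (the task text is illustrative):

```
// Set the session focus for subsequent queries
rlm_inject({
  context: "Task: Fix authentication bug. Focus: auth.ts, middleware.ts"
})

// Add to the existing context instead of replacing it
rlm_inject({ context: "Also check token refresh.", append: true })
```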

rlm_context

Free

Show the current session context. Useful for checking what context is active.

Parameters

This tool takes no parameters.

Response Format

{
  "context": "Task: Fix authentication bug. Focus: auth.ts, middleware.ts",
  "injected_at": "2024-01-15T10:30:00Z"
}

rlm_clear_context

Free

Clear the session context. Use when switching tasks.

Parameters

This tool takes no parameters.

Response Format

{
  "success": true,
  "message": "Session context cleared"
}

rlm_stats

Free

Get statistics about the indexed documentation.

Parameters

This tool takes no parameters.

Response Format

{
  "total_documents": 42,
  "total_sections": 156,
  "total_tokens": 125000,
  "last_updated": "2024-01-15T10:30:00Z"
}

rlm_sections

Free

List all documentation sections. Useful for understanding what topics are covered.

Parameters

This tool takes no parameters.

Response Format

{
  "sections": [
    { "id": "auth-overview", "title": "Authentication Overview", "file": "docs/auth.md" },
    { "id": "api-keys", "title": "API Keys", "file": "docs/auth.md" }
  ]
}

rlm_read

Free

Read specific line ranges from documentation. Useful after searching to get full context.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| start_line | number | Yes | Starting line number |
| end_line | number | Yes | Ending line number |

Response Format

{
  "content": "Full content of lines 100-150...",
  "file": "docs/auth.md",
  "lines": [100, 150]
}

rlm_settings

Free

Get project settings from the Snipara dashboard. Returns configuration like auto-inject preferences, default search mode, etc.

Parameters

This tool takes no parameters.

Response Format

{
  "projectId": "abc123",
  "name": "My Project",
  "defaultSearchMode": "hybrid",
  "autoInjectInstructions": true,
  "maxTokensDefault": 4000
}

Summary Storage Tools

Store and retrieve LLM-generated summaries for your documents. Your LLM generates the summaries; Snipara stores them so future queries are faster and more token-efficient.

rlm_store_summary

Pro+

Store an LLM-generated summary for a document. Later queries can use stored summaries for faster, more token-efficient responses.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| document_path | string | Yes | Path to the document (relative to project root) |
| summary | string | Yes | The summary text to store |
| summary_type | string | No | Type: 'concise', 'detailed', 'technical', 'keywords', or 'custom' |
| section_id | string | No | Optional section identifier for partial summaries |
| generated_by | string | No | Model that generated the summary (e.g., 'claude-3.5-sonnet') |

Response Format

{
  "summary_id": "sum_abc123",
  "document_path": "docs/auth.md",
  "summary_type": "concise",
  "token_count": 150,
  "created": true,
  "message": "Summary stored successfully"
}
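Example

A representative call using the parameters above (the summary text is illustrative):

```
// Store a concise summary generated by the client LLM
rlm_store_summary({
  document_path: "docs/auth.md",
  summary: "Authentication uses JWT tokens issued at login...",
  summary_type: "concise",
  generated_by: "claude-3.5-sonnet"
})
```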

rlm_get_summaries

Pro+

Retrieve stored summaries with optional filters. Use to check what summaries exist or to use them in context.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| document_path | string | No | Filter by document path |
| summary_type | string | No | Filter by summary type |
| include_content | boolean | No | Include summary content in response (default: true) |

Response Format

{
  "summaries": [
    {
      "summary_id": "sum_abc123",
      "document_path": "docs/auth.md",
      "summary_type": "concise",
      "token_count": 150,
      "content": "Authentication uses JWT tokens...",
      "created_at": "2024-01-15T10:30:00Z"
    }
  ],
  "total_count": 1,
  "total_tokens": 150
}

rlm_delete_summary

Pro+

Delete stored summaries by ID, document path, or type.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| summary_id | string | No | Specific summary ID to delete |
| document_path | string | No | Delete all summaries for this document |
| summary_type | string | No | Delete summaries of this type |

Response Format

{
  "deleted_count": 3,
  "message": "Deleted 3 summaries"
}

Shared Context Tools

Access team-wide coding standards, best practices, and prompt templates that are shared across projects. Perfect for maintaining consistency across your organization.

rlm_shared_context

Pro+

Get merged context from linked shared collections. Returns team coding standards and best practices with token budget allocation by category priority.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| max_tokens | number | No | Maximum tokens to return (default: 4000) |
| categories | string[] | No | Filter by categories: 'MANDATORY', 'BEST_PRACTICES', 'GUIDELINES', 'REFERENCE' |
| include_content | boolean | No | Include merged document content (default: true) |

Response Format

{
  "documents": [
    {
      "id": "doc_123",
      "title": "TypeScript Standards",
      "category": "MANDATORY",
      "token_count": 800,
      "collection_name": "Team Coding Standards"
    }
  ],
  "merged_content": "# TypeScript Standards\n...",
  "total_tokens": 2400,
  "collections_loaded": 2
}
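Example

A representative call using the parameters above (the budget is illustrative):

```
// Load only mandatory team standards, within a tight budget
rlm_shared_context({
  max_tokens: 2000,
  categories: ["MANDATORY"]
})
```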

rlm_list_templates

Pro+

List available prompt templates from linked shared collections. Templates are reusable prompts for common tasks.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| category | string | No | Filter by template category |

Response Format

{
  "templates": [
    {
      "id": "tpl_123",
      "name": "Security Review",
      "slug": "security-review",
      "description": "Review code for security issues",
      "category": "review",
      "collection_name": "Team Templates"
    }
  ],
  "total_count": 5,
  "categories": ["review", "refactoring", "testing"]
}

rlm_get_template

Pro+

Get a specific prompt template and optionally render it with variable substitution.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| template_id | string | No | Template ID |
| slug | string | No | Template slug (alternative to ID) |
| variables | object | No | Variable values to substitute in the template |

Response Format

{
  "template": {
    "id": "tpl_123",
    "name": "Security Review",
    "slug": "security-review",
    "prompt": "Review the following code for {{focus_area}}:\n{{code}}",
    "variables": ["focus_area", "code"]
  },
  "rendered_prompt": "Review the following code for SQL injection:\n...",
  "missing_variables": []
}
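Example

A representative call using the parameters above, with the template from the response format:

```
// Fetch a template by slug and render it with variables
rlm_get_template({
  slug: "security-review",
  variables: { focus_area: "SQL injection", code: "..." }
})
```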

rlm_list_collections

Free

List all shared context collections accessible to you. Returns collections you own, team collections you're a member of, and public collections. Use this to discover collection IDs for uploading documents.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| include_public | boolean | No | Include public collections in results (default: true) |

Response Format

{
  "collections": [
    {
      "id": "col_abc123",
      "name": "Team Coding Standards",
      "slug": "team-coding-standards",
      "description": "Shared coding guidelines for all projects",
      "scope": "team",
      "access_type": "team_member",
      "_count": {
        "documents": 12,
        "templates": 5
      }
    },
    {
      "id": "col_def456",
      "name": "Public TypeScript Guide",
      "slug": "public-ts-guide",
      "scope": "public",
      "access_type": "public",
      "_count": {
        "documents": 8,
        "templates": 0
      }
    }
  ],
  "count": 2
}

Example

// List all accessible collections
rlm_list_collections()

// Exclude public collections
rlm_list_collections({ include_public: false })

rlm_upload_shared_document

Team+

Upload or update a document in a shared context collection. Use for team best practices, coding standards, and guidelines that should be available across projects.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| collection_id | string | Yes | The shared collection ID (get from rlm_list_collections) |
| title | string | Yes | Document title |
| content | string | Yes | Document content (markdown) |
| category | string | No | Category: 'MANDATORY', 'BEST_PRACTICES', 'GUIDELINES', or 'REFERENCE' (default: BEST_PRACTICES) |
| priority | number | No | Priority within category, 0-100 (default: 0; higher = more important) |
| tags | string[] | No | Tags for filtering and organization |

Response Format

{
  "success": true,
  "document_id": "doc_xyz789",
  "collection_id": "col_abc123",
  "title": "Error Handling Standards",
  "category": "BEST_PRACTICES",
  "action": "created"
}

Example

// Upload a new coding standard
rlm_upload_shared_document({
  collection_id: "col_abc123",
  title: "Error Handling Standards",
  content: "# Error Handling\n\nAlways use custom error classes...",
  category: "BEST_PRACTICES",
  priority: 50,
  tags: ["errors", "typescript"]
})

Document Sync Tools

Upload and synchronize documents directly from your LLM client. Keep your documentation up-to-date without leaving your editor.

rlm_upload_document

Free

Upload or update a single document. Creates the document if it doesn't exist, updates if it does.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| path | string | Yes | Document path (e.g., 'docs/getting-started.md') |
| content | string | Yes | Document content |
| title | string | No | Document title (extracted from content if not provided) |

Response Format

{
  "success": true,
  "document_id": "doc_abc123",
  "path": "docs/getting-started.md",
  "action": "created",
  "token_count": 1500
}

rlm_sync_documents

Free

Bulk sync multiple documents in a single call. Efficient for initial project setup or major updates.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| documents | array | Yes | Array of {path, content, title?} objects |
| delete_missing | boolean | No | Delete documents not in the list (default: false) |

Response Format

{
  "success": true,
  "created": 5,
  "updated": 2,
  "deleted": 0,
  "total_documents": 7,
  "total_tokens": 15000
}
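Example

A representative call using the parameters above (the paths and content are illustrative):

```
// Sync two documents without deleting anything else
rlm_sync_documents({
  documents: [
    { path: "docs/getting-started.md", content: "# Getting Started\n..." },
    { path: "docs/api.md", content: "# API Reference\n..." }
  ],
  delete_missing: false
})
```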

RLM Orchestration Tools

Orchestration tools bridge Snipara's context optimization with RLM Runtime execution environments. Load documents and projects into the RLM REPL, orchestrate multi-step operations, and share context between MCP and runtime sessions.

rlm_load_document

Pro+

Load a specific document's content from your Snipara project into the RLM runtime environment. Returns optimized content ready for REPL injection.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| document_path | string | Yes | Path to the document in your Snipara project |
| max_tokens | integer | No | Maximum tokens to return (default: 4000) |

Response Format

{
  "success": true,
  "document_path": "docs/api.md",
  "content": "# API Reference\n...",
  "tokens": 3200,
  "sections": 12
}

rlm_load_project

Pro+

Load the entire project context into the RLM runtime. Returns a structured overview of all indexed documents with section summaries for broad context injection.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| max_tokens | integer | No | Maximum tokens for the project overview (default: 8000) |
| include_summaries | boolean | No | Include stored summaries if available (default: true) |

Response Format

{
  "success": true,
  "project_slug": "my-project",
  "documents": 15,
  "total_sections": 142,
  "content": "# Project: my-project\n...",
  "tokens": 7500
}

rlm_orchestrate

Team+

Orchestrate multi-step document operations combining queries, decomposition, and context loading into a single coordinated execution plan.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| task | string | Yes | The task to orchestrate (e.g., 'Analyze authentication flow') |
| strategy | string | No | Execution strategy: relevance_first, breadth_first, depth_first (default: relevance_first) |
| max_tokens | integer | No | Total token budget for the orchestration (default: 16000) |

Response Format

{
  "success": true,
  "steps_executed": 4,
  "context_gathered": "# Orchestration Result\n...",
  "tokens_used": 12400,
  "sub_queries": ["auth middleware", "JWT validation", "session management"]
}

rlm_repl_context

Pro+

Bridge between Snipara MCP and the RLM Runtime REPL. Inject Snipara context into REPL variables or extract REPL state back to Snipara for persistence.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| action | string | Yes | Action: 'inject' (Snipara → REPL) or 'extract' (REPL → Snipara) |
| key | string | Yes | Variable name in REPL context |
| query | string | No | Snipara query to resolve and inject (for 'inject' action) |
| value | string | No | Value to extract from REPL (for 'extract' action) |

Response Format

{
  "success": true,
  "action": "inject",
  "key": "auth_docs",
  "tokens": 2800,
  "repl_session": "default"
}
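Example

Representative calls using the parameters above (the key names are illustrative):

```
// Inject query results into a REPL variable
rlm_repl_context({
  action: "inject",
  key: "auth_docs",
  query: "authentication flow"
})

// Persist a REPL variable back to Snipara
rlm_repl_context({ action: "extract", key: "analysis_result" })
```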

Snipara Agents Tools

Snipara Agents provides AI agent infrastructure with persistent memory, multi-agent swarms, and distributed task coordination. Agent tools require a separate Snipara Agents subscription.

Separate Subscription Required

Agent tools require a Snipara Agents subscription (starting at $15/month). See the Agents Documentation for full details.

Practical Recipes

Here are common workflows combining multiple MCP tools for real-world tasks:

Recipe 1: Debug an Auth Flow

Find and understand authentication issues in your codebase.

// Step 1: Find auth-related code
rlm_search({ pattern: "authenticate|login|session|jwt" })

// Step 2: Get detailed context on the auth system
rlm_context_query({
  query: "authentication flow and session management",
  max_tokens: 6000,
  search_mode: "hybrid"
})

// Step 3: Check team security standards
rlm_shared_context({ categories: ["MANDATORY", "BEST_PRACTICES"] })

Recipe 2: Generate Feature with Repo Context

Break down a complex feature and get relevant context for each part.

// Step 1: Break down the task into subtasks
rlm_decompose({
  query: "implement password reset with email verification",
  max_subqueries: 5
})

// Step 2: Get context for each subtask in parallel
rlm_multi_query({
  queries: [
    "email service configuration and templates",
    "password hashing and validation",
    "reset token generation and expiry"
  ],
  tokens_per_query: 2000
})

// Step 3: Check existing patterns
rlm_shared_context({ categories: ["BEST_PRACTICES"] })

Recipe 3: Onboard a New Developer

Get a quick overview of a codebase for new team members.

// Step 1: Get project statistics
rlm_stats()

// Step 2: List all documentation sections
rlm_sections()

// Step 3: Get high-level summaries of key areas
rlm_get_summaries({
  summary_type: "overview",
  include_content: true
})

// Step 4: Check team coding standards
rlm_shared_context({
  categories: ["MANDATORY", "BEST_PRACTICES", "GUIDELINES"]
})

Expected Usage Per RLM Completion

When using RLM with your LLM for agentic coding tasks, the number of Snipara queries varies based on task complexity. Here's what to expect:

| Task Complexity | Queries per Completion | Examples |
| --- | --- | --- |
| Simple | ~3-5 queries | Fix a typo, add a simple function, update configuration |
| Medium | ~8-12 queries | Add a new API endpoint, implement a feature with tests, refactor a module |
| Complex | ~15-25 queries | Implement full auth system, build multi-file feature, major refactoring |

Planning Your Query Budget

At ~8-12 queries per medium task, the Free plan (100 queries/month) covers roughly 8-12 medium-complexity tasks, while the Pro plan (5,000 queries) supports roughly 400-600 medium tasks per month.
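The budgeting math is simple division against the per-task ranges in the table above:

```javascript
// How many tasks a monthly query quota supports at a given cost per task.
function tasksPerMonth(monthlyQueries, queriesPerTask) {
  return Math.floor(monthlyQueries / queriesPerTask);
}

// Free plan: 100 queries/month, medium tasks cost ~8-12 queries each
console.log(tasksPerMonth(100, 12), "-", tasksPerMonth(100, 8));   // lower and upper bound
// Pro plan: 5,000 queries/month
console.log(tasksPerMonth(5000, 12), "-", tasksPerMonth(5000, 8)); // lower and upper bound
```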

Feature Availability by Plan

| Feature | Free | Pro ($19/mo) | Team ($49/mo) | Enterprise |
| --- | --- | --- | --- | --- |
| Keyword Search | Yes | Yes | Yes | Yes |
| Semantic Search | - | Yes | Yes | Yes |
| Hybrid Search | - | Yes | Yes | Yes |
| Token Budgeting | Yes | Yes | Yes | Yes |
| Query Decomposition | - | Yes | Yes | Yes |
| Multi-Query Batching | - | Yes | Yes | Yes |
| Context Caching | - | - | Yes | Yes |
| Summary Storage | - | Yes | Yes | Yes |
| Shared Context | - | Yes | Yes | Yes |
| Prompt Templates | - | Yes | Yes | Yes |
| Document Sync | Yes | Yes | Yes | Yes |
| Queries/Month | 100 | 5,000 | 20,000 | Unlimited |

Snipara Agents (separate subscription):

| Feature | Availability |
| --- | --- |
| Agent Memory | Requires Agents Starter+ ($15/mo) |
| Multi-Agent Swarms | Requires Agents Pro+ ($39/mo) |

Using with RLM Runtime

For complex multi-step tasks, use rlm-runtime — a Python CLI that orchestrates LLM completions with sandboxed code execution and automatic Snipara context retrieval.

pip install rlm-runtime[snipara]
rlm run --model claude-sonnet-4-20250514 "Refactor auth to use JWT"

RLM Runtime automatically calls Snipara MCP tools (rlm_context_query, rlm_shared_context, etc.) to retrieve relevant documentation for each sub-task, then executes generated code in a sandboxed environment.

| Feature | Direct MCP | RLM Runtime |
| --- | --- | --- |
| Context Retrieval | Manual tool calls | Automatic per sub-task |
| Task Decomposition | LLM-driven | Built-in orchestration |
| Code Execution | Not available | Sandboxed REPL |
| Best For | Simple queries, chat | Complex refactoring, multi-file changes |

Error Handling

All MCP tools return consistent error responses. Handle these in your LLM workflows:

| Error Code | HTTP Status | Cause | Solution |
| --- | --- | --- | --- |
| INVALID_API_KEY | 401 | Missing or invalid API key | Check that the X-API-Key header matches your project key |
| PROJECT_NOT_FOUND | 404 | Project slug doesn't exist | Verify the project slug in the MCP endpoint URL |
| QUOTA_EXCEEDED | 429 | Monthly query limit reached | Upgrade your plan or wait for the reset |
| FEATURE_UNAVAILABLE | 403 | Tool requires a higher plan | Upgrade to Pro+ for recursive tools |
| RATE_LIMITED | 429 | Too many requests per minute | Add backoff/retry logic |
| VALIDATION_ERROR | 400 | Invalid parameters | Check parameter types and required fields |

Error Response Format

{
  "error": {
    "code": "QUOTA_EXCEEDED",
    "message": "Monthly query limit reached (100/100)",
    "details": {
      "limit": 100,
      "used": 100,
      "reset_at": "2025-02-01T00:00:00Z"
    }
  }
}
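The backoff/retry logic suggested for RATE_LIMITED can be sketched as follows. This assumes a hypothetical `callTool()` function that either resolves or throws an error object shaped like the response above (`{ code, message, details }`):

```javascript
// Retry transient 429s with exponential backoff; surface everything else.
async function callWithBackoff(callTool, maxRetries = 3, baseDelayMs = 1000) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await callTool();
    } catch (err) {
      // Waiting won't help a monthly quota; fail fast until reset_at.
      if (err.code === "QUOTA_EXCEEDED") throw err;
      if (err.code !== "RATE_LIMITED" || attempt === maxRetries) throw err;
      // Exponential backoff: baseDelayMs, 2x, 4x, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```

Note the distinction: both QUOTA_EXCEEDED and RATE_LIMITED return HTTP 429, but only the per-minute rate limit is worth retrying.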

Next Steps