Use Snipara with AI agents

This is the canonical setup page for making Codex, Claude Code, Cursor, ChatGPT, and other MCP-compatible agents understand Snipara and use the hosted context layer correctly.

Primary path
For LLM agents: use Hosted MCP
Use Hosted MCP first. Local stdio packages and companion workflows are compatibility or workflow helpers, not the main agent integration path.
Public answer

Snipara is a hosted MCP, context, and reviewed memory layer for AI coding agents. It gives agents the right code, docs, business context, client context, diagrams, and durable decisions across sessions without locking customers to one LLM provider.

Endpoint

Connect agents to https://api.snipara.com/mcp/YOUR_PROJECT_SLUG and authenticate with a Snipara project API key. Prefer environment variables over inline secrets.
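As a sketch of what "environment variables over inline secrets" means in practice, the helper below builds a generic HTTP MCP entry from a project slug. The function name and returned shape are illustrative, not a specific client's schema; the `SNIPARA_API_KEY` / `RLM_API_KEY` env var names and the `X-API-Key` header match the templates later on this page.

```python
import os


def snipara_mcp_config(project_slug: str) -> dict:
    """Build an illustrative HTTP MCP entry for a Snipara project.

    Reads the API key from the environment (SNIPARA_API_KEY, falling back
    to RLM_API_KEY) so the key never appears inline in config files.
    """
    api_key = os.environ.get("SNIPARA_API_KEY") or os.environ.get("RLM_API_KEY")
    if not api_key:
        raise RuntimeError("Set SNIPARA_API_KEY (or RLM_API_KEY) in the environment")
    return {
        "name": "snipara",
        "url": f"https://api.snipara.com/mcp/{project_slug}",
        "headers": {"X-API-Key": api_key},
    }
```

The same pattern applies to Codex's `bearer_token_env_var`: the config file names the variable, and the key itself stays out of the repository.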

Snipara workflow

Treat Snipara as the agent task lifecycle, not only as a search box. Agents should retrieve project truth, execute against the current working tree, verify locally, then persist only durable outcomes.

Start

Recall durable memory with rlm_recall, then retrieve source truth with a targeted rlm_context_query.

Ground

Use rlm_get_chunk for cited sections and rlm_code_* tools for structural code questions.

Plan

Use rlm_plan or rlm_decompose only for complex work and only when the hosted MCP server exposes them.

Execute

Use local files, rg, git, tests, lint, and type-checks for exact working-tree truth.

Runtime

Use RLM Runtime only when sandboxed execution, repeatable validation, or isolated transformations materially help.

Persist

Prefer rlm_end_of_task_commit for a substantial end-of-task summary. Use rlm_remember_if_novel for one reusable memory when duplicate avoidance matters, and rlm_remember only for explicit direct memory writes.
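The Start-to-Persist lifecycle above can be sketched as one driver function. Here `call_tool` is a hypothetical stand-in for whatever MCP invocation mechanism the client exposes, and the chunk reference is a placeholder; the tool names are the real Snipara tools described in this section.

```python
def run_task(call_tool, question: str, summary: str) -> list:
    """Sketch of the Snipara task lifecycle.

    `call_tool(name, **args)` is an assumed wrapper around the client's
    MCP tool invocation; results are collected only for illustration.
    """
    trace = []

    # Start: recall durable memory, then query current project truth.
    trace.append(call_tool("rlm_recall", query=question))
    trace.append(call_tool("rlm_context_query", query=question))

    # Ground: load the exact cited section before relying on its wording.
    trace.append(call_tool("rlm_get_chunk", ref="<chunk-ref-from-query>"))

    # Execute and verify happen locally (files, rg, git, tests), not via MCP.

    # Persist: commit only a durable end-of-task summary.
    trace.append(call_tool("rlm_end_of_task_commit", summary=summary))
    return trace
```

The point of the sketch is ordering: memory and source truth come before any edits, and only the durable outcome is written back at the end.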

Copy-ready templates

These snippets keep agent behavior consistent across the main AI development clients. Public copies are also available under /templates/ai-agents/ for direct linking.

Codex: AGENTS.md

Codex reads AGENTS.md before work starts. Put this at the repository root or merge it into an existing project instruction file.

# AGENTS.md

## Snipara Context Workflow

This project uses Snipara Hosted MCP for project context and reviewed memory.

- Endpoint: https://api.snipara.com/mcp/YOUR_PROJECT_SLUG
- Auth: use `SNIPARA_API_KEY` or `RLM_API_KEY` from the environment. Never commit keys.

Agent task lifecycle:

1. Start substantial work with `rlm_recall` and a targeted `rlm_context_query`.
2. Use `rlm_context_query` for docs, business context, client/project truth, architecture notes, runbooks, and narrative source material.
3. Use `rlm_get_chunk` to load cited source sections returned by reference-based queries before relying on precise wording.
4. Use `rlm_code_callers`, `rlm_code_imports`, `rlm_code_neighbors`, or `rlm_code_shortest_path` for structural code questions.
5. Use `rlm_plan` or `rlm_decompose` only for complex work and only when the hosted server exposes them.
6. Use local file reads, `rg`, git commands, and test commands for exact edits, current working-tree state, and verification.
7. Use RLM Runtime only when sandboxed execution, repeatable validation, or isolated transformations materially help.
8. If a broad query is slow, retry once with a narrow keyword query before falling back to local search.
9. End substantial work with `rlm_end_of_task_commit` when available. Use `rlm_remember_if_novel` for one reusable memory while avoiding duplicates, and use `rlm_remember` for explicit direct memory writes.
10. Do not store secrets, tokens, passwords, private keys, raw logs, transient command output, or unreviewed guesses in memory.

Claude Code: CLAUDE.md

Claude Code can use CLAUDE.md as a compatibility layer while keeping AGENTS.md as the canonical cross-agent instruction file.

# CLAUDE.md

Canonical project context is in AGENTS.md. This file exists for Claude Code compatibility.

## Snipara

Use Snipara Hosted MCP before answering project-specific questions.

- Endpoint: https://api.snipara.com/mcp/YOUR_PROJECT_SLUG
- Start substantial work with rlm_recall and a targeted rlm_context_query.
- Use rlm_context_query for source truth and rlm_get_chunk for exact cited sections.
- Use rlm_code_callers, rlm_code_imports, rlm_code_neighbors, or rlm_code_shortest_path for structural code questions.
- Use RLM Runtime only when sandboxed execution or repeatable validation materially helps.
- End substantial work with rlm_end_of_task_commit when available; use rlm_remember_if_novel or rlm_remember only for narrow durable memories.
- Do not store secrets, one-off command output, raw logs, or unreviewed guesses in memory.
- Use the Snipara Claude Code plugin only when slash commands, skills, or hooks are helpful; Hosted MCP remains the normal agent path.

Cursor: project rule

Cursor project rules live in .cursor/rules. Use an always-applied rule when every agent chat should know that Snipara is the source for context and reviewed memory.

---
description: Use Snipara Hosted MCP for project context and reviewed memory.
alwaysApply: true
---

# Snipara Agent Workflow

This workspace uses Snipara Hosted MCP for current project context, business/client context, and reviewed memory.

- Check the configured Snipara MCP server before answering project-specific questions.
- Use `rlm_recall` for durable decisions, preferences, and validated learnings.
- Use `rlm_context_query` for docs, business context, current client/project truth, and narrative source material.
- Use `rlm_get_chunk` before relying on exact cited wording from reference-based results.
- Use `rlm_code_callers`, `rlm_code_imports`, `rlm_code_neighbors`, or `rlm_code_shortest_path` for structural code questions.
- Use `rlm_plan` or `rlm_decompose` only for complex work and only when available.
- Use local file reads and tests for exact code edits and current working-tree state.
- Use RLM Runtime only when sandboxed execution or repeatable validation materially helps.
- End substantial work with `rlm_end_of_task_commit` when available; use `rlm_remember_if_novel` or `rlm_remember` only for narrow durable memories. Never store secrets.

ChatGPT and OpenAI MCP

Use the Hosted MCP endpoint directly when the client supports HTTP MCP. For Codex config, use bearer auth from an environment variable. For clients that accept custom headers, use X-API-Key.

Codex config.toml
[mcp_servers.snipara]
type = "streamable_http"
url = "https://api.snipara.com/mcp/YOUR_PROJECT_SLUG"
bearer_token_env_var = "SNIPARA_API_KEY"
Generic HTTP MCP
{
  "name": "snipara",
  "url": "https://api.snipara.com/mcp/YOUR_PROJECT_SLUG",
  "headers": {
    "X-API-Key": "YOUR_SNIPARA_PROJECT_API_KEY"
  }
}

Generate automatically

Run npx create-snipara in a project to generate Hosted MCP config, local Snipara helper files, agent instruction templates, a Cursor rule, and a Codex config snippet. The CLI avoids overwriting existing instruction files and stores mergeable copies under .snipara/templates when a file already exists.

npx create-snipara --profile hosted-only
npx create-snipara doctor

MCP and app metadata

Snipara tool metadata should be precise enough for model discovery. Tool descriptions should say when to use rlm_context_query, rlm_recall, and code graph tools, and read-only tools should expose MCP annotations so clients can rank and approve calls correctly.
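As an example of that contract, a read-only retrieval tool's listing might look like the sketch below. The description wording is illustrative and assumed; `readOnlyHint`, `idempotentHint`, and `openWorldHint` are standard MCP tool-annotation fields that clients can use to rank and approve calls.

```python
# Illustrative tool listing for a read-only Snipara retrieval tool.
rlm_context_query_tool = {
    "name": "rlm_context_query",
    "description": (
        "Retrieve project docs, business context, and narrative source "
        "material. Use for current project truth; use rlm_recall for "
        "durable memories and rlm_code_* tools for structural code questions."
    ),
    "annotations": {
        "readOnlyHint": True,    # never mutates project state
        "idempotentHint": True,  # safe to retry
        "openWorldHint": True,   # reaches a hosted service
    },
}
```

A description that names the neighboring tools, as above, is what lets a model pick rlm_context_query over rlm_recall on the first try instead of after a wrong-tool call.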

Description contract

Describe Snipara as context and reviewed memory infrastructure for AI agents, not as an LLM provider. State that customers keep Claude, Cursor, ChatGPT, Codex, or any MCP-compatible client.

Monitoring

Public AI discovery only improves if you measure it. Track crawler and referral behavior separately from product usage.

- Confirm /llms.txt, /llms-full.txt, /robots.txt, and /sitemap.xml return 200 in production.
- Watch server logs for OAI-SearchBot, ChatGPT-User, GPTBot, ClaudeBot, Claude-User, and Claude-SearchBot.
- Review referrals and direct traffic to /docs/integration/ai-agents, /docs/integration/openai, /docs/integration/claude-code, and /docs/integration/cursor.
- Replay prompts monthly: "What is Snipara?", "How do I use Snipara with Cursor?", and "Does Snipara replace Claude or ChatGPT?"
- Update tool descriptions and public docs when recurring wrong-tool calls or stale answers appear.