# Your AI Editor Forgets Everything Between Sessions
You open Cursor on Monday morning. Your AI assistant has no idea what you built on Friday — the architectural decisions, the constraints you explained, the patterns you established. Every new session starts from zero.
This is the core limitation of AI-assisted development: your AI has no persistent memory. An MCP server changes that by giving AI editors a standardized way to access external data, history, and tools. RL4 is an MCP server purpose-built for one thing: making sure your AI never forgets your project context.
## What Is an MCP Server?
Model Context Protocol (MCP) is an open standard created by Anthropic that defines how AI applications communicate with external data sources. Think of an MCP server as a bridge between your AI editor and real data — files, databases, project history, and tools.
MCP defines three primitives:
- **Tools** — functions the AI can call (run a query, search files, trigger a snapshot)
- **Resources** — data the AI can read (file contents, database schemas, project history)
- **Prompts** — pre-built templates for common workflows
The protocol uses JSON-RPC 2.0 over standard I/O. Any language can implement an MCP server — TypeScript, Python, Go, Rust — as long as it speaks the protocol. Cursor, VS Code, and dozens of other tools now support MCP natively.
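For illustration, here is roughly what a `tools/call` request looks like on the wire. The `tools/call` method and the shape of `params` follow the MCP specification; the tool name and arguments below are placeholders, not any real server's API:

```python
import json

# A JSON-RPC 2.0 request as an MCP client would send it to a server.
# Messages are serialized as JSON and exchanged over standard I/O.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_files",                      # hypothetical tool name
        "arguments": {"query": "auth architecture"},  # illustrative arguments
    },
}

wire = json.dumps(request)
print(wire)
```

The server replies with a JSON-RPC response carrying the same `id`, which is how the client matches answers to requests over a single I/O stream.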
## How RL4 Works as an MCP Server
RL4 is an IDE extension for Cursor and VS Code that runs as an MCP server. When installed, it exposes 14 specialized tools that your AI assistant can call directly during conversations.
Here is the architecture:
```
Cursor / VS Code (Host)
└── MCP Client (built into the IDE)
    └── RL4 MCP Server (the extension)
        ├── Content Store (SHA-256 checksums)
        ├── Timeline Enricher
        ├── Pattern Detector
        ├── KPI Aggregator
        └── Supabase Auto-Sync
```

When your AI needs context, it calls a tool on RL4's MCP server. RL4 retrieves the relevant data — your chat history, file changes, decisions, or a compressed snapshot — and returns it to the AI. No copy-pasting. No manual context loading.
The key insight: the AI decides when to call which tool. You ask "what did we decide about the auth architecture?" and Cursor automatically calls RL4's `search_context` tool to find the answer in your development history.
## The 14 MCP Tools RL4 Exposes
RL4's MCP server provides tools across four categories:
### Snapshot & Lifecycle
- **`run_snapshot`** — scans all chat sources, builds an evidence pack, and returns a "Continue Development" prompt with full project context
- **`finalize_snapshot`** — cleans up temporary files after you have used a snapshot
### Search & Retrieval
- **`search_context`** — RAG-powered search across evidence, timeline, decisions, chat history, and CLI commands
- **`search_chats`** — search specifically through your conversation history with citations
- **`search_cli`** — search your command history (git, npm, docker) with exit codes and output previews
- **`rl4_ask`** — natural language question answering with cited sources, like Perplexity for your codebase
### Structured Data
- **`get_evidence`** — retrieve the full activity journal with sessions, threads, and file activity
- **`get_timeline`** — read the enriched project timeline (append-only, forensic history)
- **`get_decisions`** — list all logged decisions with intent, chosen option, and confidence gates
### Workspace Management
- **`list_workspaces`** — see all your workspaces synced to Supabase
- **`set_workspace`** — switch between local and remote workspace contexts
- **`get_content_store_index`** — browse the file-to-checksum mapping for content reconstruction
- **`read_rl4_blob`** — read specific content blobs by SHA-256 checksum
- **`rl4_guardrail`** — validate queries and responses against proof-backed citation requirements
For a detailed reference on each tool, see the [complete RL4 MCP tools guide](/cursor/blog/rl4-mcp-tools-cursor-complete-guide).
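Under the hood, an MCP server maps each tool name to a handler. The minimal sketch below models that dispatch step generically; it is not RL4's actual implementation, and the handler body and result shape are placeholders:

```python
import json

def search_context(arguments):
    # Placeholder handler: a real implementation would run a RAG query
    # over stored evidence and return cited matches.
    return {"matches": [], "query": arguments.get("query", "")}

# Illustrative registry: tool name -> handler. Real MCP servers also
# advertise each tool's input schema via the protocol's tools/list response.
TOOLS = {"search_context": search_context}

def handle_request(raw: str) -> str:
    req = json.loads(raw)
    handler = TOOLS[req["params"]["name"]]
    result = handler(req["params"].get("arguments", {}))
    # Echo the request id so the client can match the response.
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

response = handle_request(json.dumps({
    "jsonrpc": "2.0", "id": 7,
    "method": "tools/call",
    "params": {"name": "search_context", "arguments": {"query": "auth"}},
}))
```

The same pattern scales to all 14 tools: one registry, one dispatcher, and the AI never needs to know which handler serves which answer.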
## RCEP Compression: How RL4 Keeps Context Small
Raw development history is massive — thousands of chat messages, hundreds of file changes, dozens of decisions. Feeding all of that into an AI context window would be wasteful and expensive.
RL4 uses RCEP (Reversible Context Extraction Protocol) to compress your development context without losing meaning:
- **Semantic compression** — extracts key decisions, constraints, and patterns rather than storing raw transcripts
- **SHA-256 checksums** — every piece of evidence gets a cryptographic checksum for integrity verification
- **Integrity seals** — ECDSA P-256 device-only signatures ensure context has not been tampered with
The result: RL4 compresses hundreds of messages into a compact, verifiable snapshot that fits within any AI's context window. Your AI gets the full picture without the token bloat.
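The checksum step can be illustrated with Python's standard library. The snapshot entry below is a simplified stand-in for RCEP's actual format, and the decision text is invented for the example:

```python
import hashlib

def checksum(content: str) -> str:
    # SHA-256 over the UTF-8 bytes gives a stable content identifier.
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

# Hypothetical compressed snapshot entry: a key decision plus its checksum,
# so the claim can later be verified against the stored evidence.
decision = "Chose JWT over server-side sessions for stateless auth."
entry = {"kind": "decision", "text": decision, "sha256": checksum(decision)}

# Integrity check: recompute and compare.
assert entry["sha256"] == checksum(decision)
```

Because the checksum is deterministic, any later tampering with the stored text produces a mismatch that verification catches immediately.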
## Content Store and Supabase Sync
Every piece of context RL4 captures goes into a content store — a local database indexed by SHA-256 checksums. This gives you:
- **Deduplication** — identical content is stored once, regardless of how many times it appears
- **Reconstruction** — any file or conversation can be rebuilt from its checksum
- **Portability** — content can be synced to Supabase for cross-device persistence
RL4 auto-syncs to Supabase in the background. Your development context is not trapped on one machine. Open the same project on your laptop, your desktop, or a CI runner — RL4's MCP server connects to the same Supabase workspace and serves the same context.
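Deduplication falls out of content addressing naturally: identical bytes hash to the same key, so a second write is a no-op. A minimal in-memory sketch (RL4's actual store is a local database, not a Python dict):

```python
import hashlib

class ContentStore:
    """Content-addressable store keyed by SHA-256 (simplified sketch)."""

    def __init__(self):
        self.blobs = {}

    def put(self, content: bytes) -> str:
        digest = hashlib.sha256(content).hexdigest()
        # Identical content maps to the same digest, so it is stored once.
        self.blobs.setdefault(digest, content)
        return digest

    def get(self, digest: str) -> bytes:
        return self.blobs[digest]

store = ContentStore()
a = store.put(b"const auth = require('./auth');")
b = store.put(b"const auth = require('./auth');")  # duplicate write
```

The digest doubles as a portable address: sync the blobs to Supabase and any device holding the checksum can fetch and verify the same content.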
## Timeline Enrichment and Pattern Detection
Beyond raw storage, RL4 actively enriches your development history:
- **Timeline Enricher** — transforms raw file events and chat messages into a structured, append-only timeline with sessions, bursts, and causal links
- **Pattern Detector** — identifies recurring patterns in your development (common error types, frequently modified files, architectural patterns)
- **KPI Aggregator** — tracks development metrics (session duration, files touched, conversation depth) to surface productivity insights
When your AI calls `get_timeline` or `search_context`, it gets enriched data — not just raw logs. This means better answers, more relevant context, and fewer follow-up questions.
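One piece of that enrichment, grouping raw events into work sessions, can be sketched as splitting a timestamp stream on idle gaps. The 30-minute threshold below is an assumption for illustration, not RL4's actual heuristic:

```python
IDLE_GAP = 30  # minutes of inactivity that end a session (illustrative)

def group_sessions(timestamps):
    """Split sorted event timestamps (minutes) into sessions at idle gaps."""
    sessions, current = [], [timestamps[0]]
    for t in timestamps[1:]:
        if t - current[-1] > IDLE_GAP:
            sessions.append(current)  # gap exceeded: close the session
            current = [t]
        else:
            current.append(t)
    sessions.append(current)
    return sessions

events = [0, 5, 12, 90, 95, 200]
sessions = group_sessions(events)
# Three sessions: [0, 5, 12], [90, 95], [200]
```

Layering summaries, bursts, and causal links on top of groupings like this is what turns a flat event log into a timeline an AI can actually reason over.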
## Setting Up RL4 as Your MCP Server
Setup takes under 2 minutes:
1. **Install RL4** from the VS Code Marketplace (works in Cursor and VS Code)
2. **Open any project** — RL4 creates the `.rl4/` directory automatically
3. **Verify MCP connection** — in Cursor, check Settings → MCP for a green "rl4" status indicator
After your first snapshot, RL4 starts capturing evidence, building your timeline, and making everything searchable via MCP.
```
.rl4/
├── evidence.md              # Activity journal
├── timeline.md              # Enriched project timeline
├── evidence/
│   ├── chat_history.jsonl   # All captured messages
│   ├── chat_threads.jsonl   # Thread summaries
│   ├── activity.jsonl       # File events
│   └── sessions.jsonl       # Work sessions
└── snapshots/
    └── file_index.json      # Content store index
```

Your AI assistant can now call any of the 14 tools to access this data. No configuration beyond installing the extension.
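Those `.jsonl` files hold one JSON object per line, which makes them easy to stream and append to. A small reader sketch; the field names here are illustrative, not RL4's exact schema:

```python
import io
import json

# Simulated chat_history.jsonl content; real records and field names may differ.
jsonl = io.StringIO(
    '{"role": "user", "text": "Why JWT over sessions?"}\n'
    '{"role": "assistant", "text": "Stateless auth scales across services."}\n'
)

# Each non-empty line parses independently, so files can be read lazily.
messages = [json.loads(line) for line in jsonl if line.strip()]
```

The append-only, line-oriented format is also what makes the timeline forensic: new events are appended, and existing lines are never rewritten.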
## Why an MCP Server Matters for Your Workflow
Without an MCP server like RL4, your AI editor operates in isolation. Every session starts from zero. Every context switch means re-explanation. Every team handoff means lost knowledge.
With RL4's MCP server:
- **Monday morning** — your AI knows what you built on Friday
- **After a model switch** — your context [transfers seamlessly](/cursor/blog/switch-llm-without-losing-context)
- **During code review** — your AI can cite the decisions that led to the implementation
- **When onboarding** — new developers get [full project context](/cursor/blog/onboard-developer-ai-context-guide) from day one
The MCP server is the infrastructure that makes AI-assisted development actually productive over time, not just in single sessions.
## Get Started with RL4
RL4 turns your IDE into an AI workspace with persistent memory. Install the extension, run your first snapshot, and never re-explain your codebase again.
**[Install RL4 free](/cursor/form)** — set up in 2 minutes, get persistent AI memory across every session.