## The Cross-LLM Problem
You use Cursor IDE with GPT-4 for daily coding. Then you switch to Claude Code for a complex refactor. Context? Gone. You spend 20 minutes re-explaining your architecture, decisions, and constraints.
RL4 solves this. One context, multiple LLMs, zero re-explanation.
## How It Works
RL4 captures your development context in a standard format (`.rl4/` directory) and exposes it via MCP (Model Context Protocol). Any LLM that supports MCP can read the same context.
```
Cursor IDE ──→ RL4 MCP Server ←── Claude Code
     ↓               ↓                ↓
  Same evidence, timeline, decisions, chat history
```

Your context lives in one place. Both tools read from it.
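That "same context" guarantee is just standard MCP plumbing. As a rough illustration, the TypeScript sketch below connects to the RL4 MCP server over stdio using the official `@modelcontextprotocol/sdk` client and lists the tools it exposes; the paths are the same placeholders used in the Step 2 config below, and the exact tool list depends on your RL4 version.

```typescript
// Minimal MCP client sketch: any MCP-capable tool does the equivalent of this.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Same command/args/env as the Claude Code config in Step 2 (placeholder paths).
  const transport = new StdioClientTransport({
    command: "node",
    args: ["/path/to/rl4-mcp-server/dist/index.js"],
    env: { ...process.env, RL4_WORKSPACE: "/path/to/your/project" } as Record<string, string>,
  });

  const client = new Client({ name: "rl4-demo-client", version: "0.1.0" }, { capabilities: {} });
  await client.connect(transport);

  // Every MCP client sees the same tool surface, backed by the same .rl4/ data.
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  await client.close();
}

main().catch(console.error);
```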
## Step 1: Install RL4 in Cursor IDE
If you haven't already:
- Open VS Code Marketplace in Cursor
- Search "RL4"
- Install the extension
- Open any project → RL4 creates the `.rl4/` directory automatically
After your first snapshot, you'll have evidence, timeline, and decisions captured.
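What exactly lands in `.rl4/` depends on your RL4 version; a rough sketch of the layout after a first snapshot is shown below. Only `evidence.md` is referenced later in this guide, so treat the rest as illustrative.

```
.rl4/
├── evidence.md    # evidence pack (read via get_evidence)
└── ...            # timeline, decisions, chat/CLI indexes, content store
                   # (exact file names vary by RL4 version)
```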
## Step 2: Add RL4 MCP Server to Claude Code
Claude Code supports MCP servers natively. Add RL4 to your configuration:
```jsonc
// In your Claude Code MCP settings (.mcp.json at project root)
{
  "mcpServers": {
    "rl4": {
      "command": "node",
      "args": ["/path/to/rl4-mcp-server/dist/index.js"],
      "env": {
        "RL4_WORKSPACE": "/path/to/your/project"
      }
    }
  }
}
```

Once configured, Claude Code has access to the same 14 MCP tools as Cursor.
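If you prefer the command line, recent Claude Code builds can register the same server with `claude mcp add rl4 -e RL4_WORKSPACE=/path/to/your/project -- node /path/to/rl4-mcp-server/dist/index.js`, and `claude mcp list` shows what is registered; the `.mcp.json` form above is the version you can commit alongside the project.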
## Step 3: Verify the Connection
In Claude Code, try:
```
Search my RL4 context for recent architectural decisions
```

If configured correctly, Claude Code calls `search_context` and returns results from your `.rl4/` data — the same data Cursor IDE reads.
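If you want to verify the plumbing outside the chat UI, the sketch below makes the same `search_context` call directly with the TypeScript MCP client; the `query` argument name is an assumption, so check the tool's input schema from `listTools()` if it differs.

```typescript
// Standalone sketch: call search_context the way any MCP client would.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  const transport = new StdioClientTransport({
    command: "node",
    args: ["/path/to/rl4-mcp-server/dist/index.js"],
    env: { ...process.env, RL4_WORKSPACE: "/path/to/your/project" } as Record<string, string>,
  });
  const client = new Client({ name: "rl4-verify", version: "0.1.0" }, { capabilities: {} });
  await client.connect(transport);

  const result = await client.callTool({
    name: "search_context",
    arguments: { query: "recent architectural decisions" }, // "query" is an assumed field name
  });
  console.log(JSON.stringify(result, null, 2));

  await client.close();
}

main().catch(console.error);
```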
## What Each Tool Does in Claude Code
All [14 MCP tools](/cursor/blog/rl4-mcp-tools-cursor-complete-guide) are exposed to Claude Code, and every tool except `run_snapshot` works identically:
| Tool | Works in Claude Code? | Notes |
|------|----------------------|-------|
| `run_snapshot` | ⚠️ Cursor only | Requires VSIX extension UI |
| `search_context` | ✅ Full support | RAG search across all data |
| `rl4_ask` | ✅ Full support | Natural language Q&A |
| `search_chats` | ✅ Full support | Search Cursor chat history |
| `search_cli` | ✅ Full support | Search CLI commands |
| `get_evidence` | ✅ Full support | Read evidence pack |
| `get_timeline` | ✅ Full support | Read project timeline |
| `get_decisions` | ✅ Full support | Read structured decisions |
| `list_workspaces` | ✅ Full support | List cloud workspaces |
| `set_workspace` | ✅ Full support | Switch workspace |
| `get_content_store_index` | ✅ Full support | File index |
| `read_rl4_blob` | ✅ Full support | Read stored files |
| `rl4_guardrail` | ✅ Full support | Validate Q&A quality |
| `finalize_snapshot` | ✅ Full support | Cleanup after snapshot |
Key note: `run_snapshot` requires the Cursor VSIX extension to scan chat sources. All other tools work anywhere MCP is supported.
## Real Workflow: Cursor → Claude Code
Here's a typical cross-LLM workflow:
### 1. Morning: Work in Cursor IDE
You spend the morning implementing a new feature in Cursor. RL4 automatically captures your chat history, file changes, and decisions.
### 2. Afternoon: Switch to Claude Code for a Complex Task
You need Claude's deep reasoning for an architectural decision. In Claude Code:
```
Ask rl4: What have I been working on today? What decisions were made?
```

Claude Code calls `rl4_ask` and instantly knows:
- Files you modified
- Conversations you had
- Decisions you made
- Current project state
No re-explanation needed.
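The same lookup can be reproduced outside the chat UI. Continuing with the connected `client` from the Step 3 sketch (the `question` field name is an assumption; check the tool's input schema):

```typescript
// Continuation of the Step 3 sketch: ask RL4 what happened today.
const answer = await client.callTool({
  name: "rl4_ask",
  arguments: { question: "What have I been working on today? What decisions were made?" },
});
console.log(JSON.stringify(answer, null, 2));
```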
### 3. Back to Cursor
Next session in Cursor, run a snapshot. It captures everything — including what Claude Code did while you were working there.
## Cloud Sync: Context Across Machines
With RL4's Supabase sync, your context isn't limited to one machine:
```
Laptop (Cursor) ──→ Supabase Cloud ←── Desktop (Claude Code)
```

Use `list_workspaces` and `set_workspace` to access any project from any machine.
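In MCP terms, switching machines or projects is just two more tool calls. A continuation of the Step 3 sketch, again with an assumed argument shape (`workspace_id` is a guess; check the tool's input schema):

```typescript
// List cloud workspaces, then point subsequent tool calls at the one you want.
const workspaces = await client.callTool({ name: "list_workspaces", arguments: {} });
console.log(JSON.stringify(workspaces, null, 2));

await client.callTool({
  name: "set_workspace",
  arguments: { workspace_id: "<id from list_workspaces>" }, // assumed field name
});
```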
## Cross-LLM Comparison: What Changes, What Doesn't
| Aspect | Same Across LLMs | Different Per LLM |
|--------|-------------------|-------------------|
| Evidence | ✅ Identical | — |
| Timeline | ✅ Identical | — |
| Decisions | ✅ Identical | — |
| Chat history | ✅ Searchable | Stored per-source |
| Skills/Rules | ✅ In `.cursor/rules/` | LLM reads differently |
| Snapshot trigger | — | Cursor VSIX only |
| MCP support | ✅ Native in both | — |
The context is identical. Only the way each LLM processes it differs — and that's the strength. Use Cursor's speed for rapid iteration, Claude Code's depth for complex reasoning, same context throughout.
## Troubleshooting

### "MCP server not found" in Claude Code
- Verify the path to the RL4 MCP server binary
- Check that `RL4_WORKSPACE` points to a directory with `.rl4/`
- Restart Claude Code after config changes
"No evidence found"
- Run at least one snapshot in Cursor first
- Verify `.rl4/evidence.md` exists in the project
- Check workspace ID matches with `list_workspaces`
### Stale context in Claude Code
- RL4 reads from `.rl4/` on each tool call — it's always fresh
- If using cloud sync, check `last_active_at` on the workspace
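For the first two issues, a quick local check often settles things faster than another round trip through the LLM. A minimal Node sketch, assuming `RL4_WORKSPACE` is set the same way as in the MCP config:

```typescript
// Sanity check: does RL4_WORKSPACE point at a project that already has RL4 data?
import { existsSync } from "node:fs";
import { join } from "node:path";

const workspace = process.env.RL4_WORKSPACE ?? "/path/to/your/project";
const rl4Dir = join(workspace, ".rl4");
const evidence = join(rl4Dir, "evidence.md");

if (!existsSync(rl4Dir)) {
  console.error(`No .rl4/ directory at ${rl4Dir}; check RL4_WORKSPACE and the project path.`);
} else if (!existsSync(evidence)) {
  console.error(`Found ${rl4Dir} but no evidence.md; run at least one snapshot in Cursor first.`);
} else {
  console.log(`RL4 context looks ready: ${evidence}`);
}
```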
## Beyond Two LLMs
RL4's MCP server works with any MCP-compatible client. As the ecosystem grows:
- **Codex** — OpenAI's CLI with MCP support
- **Gemini CLI** — Google's CLI with MCP
- **Windsurf** — MCP-compatible IDE
- **Any future MCP client** — your context is ready
You're not locked into one AI. Your context is portable, structured, and always accessible.
## Get Started
- [Install RL4 in Cursor](/cursor/form) and create your first snapshot
- Configure the MCP server in Claude Code
- Try `rl4_ask` in Claude Code — see your Cursor context instantly
Learn more about the [14 MCP tools available](/cursor/blog/rl4-mcp-tools-cursor-complete-guide) or how to [switch LLMs without losing context](/cursor/blog/switch-llm-without-losing-context).
[**Try RL4**](/cursor/form) — one context, every LLM. Install in Cursor, use everywhere.