How-To · 6 min read

Your First AI Snapshot: A 5-Minute Tutorial (2 Methods)

Create portable AI context in 5 minutes — via the Wizard UI or MCP chat command. Step-by-step guide for Cursor IDE, Claude Code, and any LLM.


What You'll Build

Welcome to this Cursor snapshot guide. By the end of this tutorial, you'll have:

  • A portable AI context package of your entire Cursor conversation history
  • Two methods mastered: the **Wizard UI** and the **MCP chat command**
  • A snapshot ready to use in Cursor, Claude Code, or any LLM

Time required: 5 minutes

Prerequisites: Cursor IDE with some chat history

If you're wondering [why context loss matters](/cursor/blog/cursor-context-loss-killing-productivity), read that first. Otherwise, let's dive in.

Step 0: Install RL4 Snapshot

From the VS Code Marketplace (recommended):

  1. Open Cursor → Extensions panel (`Cmd/Ctrl + Shift + X`)
  2. Search **"RL4 Snapshot"**
  3. Click **Install**
  4. Reload Cursor when prompted

You'll see the RL4 icon in your sidebar, and a notification confirming activation.

What happens on first launch: RL4 automatically scans your workspace and begins capturing file activity, chat history, and git commits. This is the InitialCapture — it records a full baseline of your project so that context is available from day one. Everything stays local in a `.rl4/` folder at your workspace root.

Choose Your Method

RL4 v2.0 gives you two ways to create a snapshot:

| Method | Best for | How |
|--------|----------|-----|
| Method A: MCP (chat command) | Daily workflow, hands-free | Type in Cursor chat |
| Method B: Wizard UI | First time, visual control | Click through sidebar |

Both produce the same output. Method A is faster once you're familiar.

Method A: MCP Snapshot (Recommended)

This is the fastest path. RL4 exposes an MCP server that your Cursor AI can call directly from the chat.

Step 1: Verify MCP is connected

Check that `.cursor/mcp.json` exists in your workspace (RL4 creates it automatically). If not, run `Cmd/Ctrl + Shift + P` → "RL4: Connect".
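If you'd rather verify the connection programmatically, here's a minimal Python sketch that checks for the config file. The `mcpServers` key follows Cursor's conventional MCP config layout; the exact server name RL4 registers is an assumption here.

```python
import json
from pathlib import Path

def mcp_servers(config_dir: str = ".cursor") -> list[str]:
    """Return the server names found in an MCP config, or [] if none exists.

    Assumes the conventional Cursor layout: a mcp.json file with a
    top-level "mcpServers" object mapping server names to settings.
    """
    path = Path(config_dir) / "mcp.json"
    if not path.exists():
        return []
    config = json.loads(path.read_text())
    return list(config.get("mcpServers", {}))

if __name__ == "__main__":
    names = mcp_servers()
    if names:
        print(f"MCP connected, servers: {names}")
    else:
        print("No .cursor/mcp.json found - run 'RL4: Connect'")
```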

Step 2: Ask your AI to snapshot

In any Cursor chat, type:

Use the run_snapshot tool to capture my development context.

The AI calls `run_snapshot` via MCP. RL4 scans your entire workspace:

  • All Cursor chat history (retroactive — from your very first prompt)
  • File activity (saves, creates, deletes, renames with SHA-256 checksums)
  • Git commits and decision records
  • Work sessions and burst patterns (feature/fix/refactor detection)
  • Causal links between conversations and code changes
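The per-file SHA-256 checksums mentioned above are easy to reproduce yourself. A minimal sketch of how such a checksum is computed (streamed in chunks, so large files don't need to fit in memory):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in 8 KiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

Two files with the same digest have identical content, which is what makes checksum-based file tracking verifiable.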

Step 3: Follow the Phase Protocol

The snapshot returns a structured "Continue Development" prompt. Your AI then follows 4 phases automatically:

  1. **Phase 1** — Uses the activity summary as a resume point
  2. **Phase 2** — Appends to `.rl4/timeline.md` (your development journal)
  3. **Phase 2b** — Updates `.cursor/rules/Rl4-Skills.mdc` with learned DO/DON'T/CONSTRAINTS
  4. **Phase 3** — Calls `finalize_snapshot` to clean up temporary files

Step 4: You're done

Your context is now captured. The AI knows your full project history and can answer questions like:

You: "What was I working on yesterday?"
AI: "Based on your timeline, you had a 6.5-hour session focused on
authentication refactoring. You modified 14 files, made 3 key decisions
(JWT rotation, session TTL, RBAC middleware), and the hot file was
auth.service.ts with 12 saves."
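Sessions like the 6.5-hour one above can be detected mechanically. A minimal sketch, assuming gap-based splitting (RL4's `sessions.jsonl` uses 6-hour gap detection):

```python
from datetime import datetime, timedelta

def split_sessions(timestamps, gap_hours=6):
    """Group event timestamps into work sessions.

    A new session starts whenever the gap since the previous event
    exceeds gap_hours; otherwise the event joins the current session.
    This is a sketch of the idea, not RL4's actual implementation.
    """
    sessions = []
    for ts in sorted(timestamps):
        if sessions and ts - sessions[-1][-1] <= timedelta(hours=gap_hours):
            sessions[-1].append(ts)
        else:
            sessions.append([ts])
    return sessions
```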

Method B: Wizard UI (Visual)

Prefer clicking? Use the built-in wizard.

Step 1: Open the Wizard

Click the RL4 icon in your sidebar, or run `Cmd/Ctrl + Shift + P` → "RL4: Open Wizard".

Step 2: Configure your snapshot

Choose your settings:

  • **Snapshot Goal:** Continue (resume work), Review (code audit), Handoff (share with teammate), Time Machine (replay history), or Document (generate docs)
  • **Time range:** Since first prompt / Last 7 days / Last 30 days / Custom
  • **Target provider:** Cursor, Claude Code, Perplexity, or Custom

Step 3: Generate

Click Generate. RL4 processes your workspace and produces an evidence pack:

SNAPSHOT GENERATED
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Sessions: 12 work sessions detected
Threads: 23 conversations captured
Messages: 847 total (retroactive scan)
Files tracked: 68 with SHA-256 checksums
Causal links: 15 chat → code correlations
Decisions: 6 extracted
Hot files: api.ts (18 saves), auth.ts (12 saves)

Step 4: Copy & Use

Click Copy to get your portable context prompt. Paste it into any LLM to continue working with full context.

What RL4 Captures

Here's the evidence structure RL4 creates — mechanically, with no AI inference:

.rl4/
├── evidence.md                 # Structured activity journal
├── timeline.md                 # Append-only development narrative
├── evidence/
│   ├── activity.jsonl          # Every file save/create/delete
│   ├── chat_history.jsonl      # All Cursor conversations
│   ├── chat_threads.jsonl      # Thread summaries with topics
│   ├── sessions.jsonl          # Work sessions (6h gap detection)
│   ├── burst_stats.jsonl       # Work burst pattern recognition
│   ├── causal_links.jsonl      # Chat → code correlations
│   ├── commits.jsonl           # Git commit history
│   ├── decisions.jsonl         # Extracted decision records
│   └── cli_history.jsonl       # Terminal commands
└── snapshots/
    ├── file_index.json         # File → SHA-256 checksum mapping
    └── {checksum}.content      # File content blobs

Every piece of evidence is timestamped, sourced, and verifiable. This is proof-grade capture.
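Because the evidence is plain JSONL, you can consume it with a few lines of code. Here's a minimal sketch that derives "hot files" from `activity.jsonl`; the `event` and `path` field names are illustrative assumptions, not RL4's documented schema:

```python
import json
from collections import Counter
from pathlib import Path

def hot_files(activity_path=".rl4/evidence/activity.jsonl", top=5):
    """Count save events per file in the activity journal.

    Assumes one JSON object per line with "event" and "path" fields
    (hypothetical field names for illustration).
    """
    counts = Counter()
    for line in Path(activity_path).read_text().splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        if record.get("event") == "save":
            counts[record.get("path")] += 1
    return counts.most_common(top)
```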

After Your First Snapshot: Search Your Context

Once captured, you unlock powerful MCP search tools:

Search your chat history:

Use search_chats to find conversations about "authentication"

Ask questions with cited answers:

Use rl4_ask: "Why did we choose JWT over sessions?"

Search terminal commands:

Use search_cli to find recent docker commands

These tools use a RAG pipeline (BM25 + Reciprocal Rank Fusion) to find relevant context and always return citation-first formatting.
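Reciprocal Rank Fusion itself is simple to sketch: each document's score is the sum of 1/(k + rank) over every ranked list it appears in, where rank is 1-based and k = 60 is the constant from the original RRF formulation. A minimal version:

```python
def rrf_fuse(rankings, k=60):
    """Merge several ranked lists into one via Reciprocal Rank Fusion.

    rankings: iterable of ranked lists (best first), e.g. one from BM25
    and one from another retriever. Returns documents sorted by fused score.
    """
    scores = {}
    for ranked in rankings:
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

A document ranked near the top of multiple lists outscores one ranked first in only a single list, which is why RRF is a robust default for combining retrievers.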

Tips for Better Snapshots

Snapshot regularly:

  • End of each work session
  • Before switching models or tools
  • Before major architectural decisions

Use MCP for speed:

Once you're comfortable, the MCP method (`run_snapshot`) takes seconds, versus minutes for the wizard. Make it a habit.

Check your learned skills:

After each snapshot, RL4 updates `.cursor/rules/Rl4-Skills.mdc` with patterns like:

  • **DO:** Use Zod for runtime validation (learned from 3 conversations)
  • **DON'T:** Mutate state in server actions (caused regression on Jan 15)
  • **CONSTRAINT:** All API responses must be < 100ms

Cursor reads this file automatically, making your AI smarter over time.

Monitor the Live Feed:

Run `Cmd/Ctrl + Shift + P` → "RL4: Show Live Feed" to watch real-time activity capture.

Next Steps

Now that you have your first snapshot:

  1. **Try `rl4_ask`** — Ask questions about your development history with citations
  2. **Make it a habit** — Snapshot at end of each session via MCP
  3. **[Explore all 5 goals](/cursor/blog/using-ai-context-goals-guide)** — Continue, Review, Handoff, Time Machine, Document
  4. **[Switch LLMs freely](/cursor/blog/switch-llm-without-losing-context)** — Use Claude Code or ChatGPT with full context
  5. **Check your evidence** — Open `.rl4/evidence.md` to see what was captured

Ready to Go Deeper?

You've created your first portable AI context. Want to learn more?

  • [Understand context loss](/cursor/blog/cursor-context-loss-killing-productivity)
  • [Master multi-LLM workflows](/cursor/blog/switch-llm-without-losing-context)
  • [Learn all 5 snapshot goals](/cursor/blog/using-ai-context-goals-guide)
  • [Join the beta](/cursor/form) for early access to new features

Your AI context is now portable, searchable, and proof-backed. Use it well.

Ready to preserve your AI context?

Join the RL4 beta and never lose context again. Free during beta.

