
RL4 Cursor — FAQ

ProofLayer/Evidence, integrity, privacy, pricing gates, and troubleshooting—so you can hand off long IDE sessions without drift.

Pricing gates local features (modes, retention, evidence depth). No cloud sync. No account required.


#1

What is RL4 (Reasoning Layer 4)?

RL4, short for Reasoning Layer 4, is a local-first developer memory protocol. It captures every prompt, decision, and file change behind AI-driven dev work into a `.rl4/` folder inside your repo. Any LLM (Cursor, Claude Code, Codex, VS Code) can read this memory via MCP. Published by ATAWAI.

#2

How do I save Cursor chat history permanently?

Install RL4. On first launch, RL4 ghost-scans Cursor's local SQLite database (`workspaceStorage/`) and copies every conversation into `.rl4/evidence/chat_history.jsonl` — append-only, survives Cursor updates, folder renames, and reinstalls.
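Append-only here means each conversation lands as one JSON line that is never rewritten. A minimal sketch of that pattern, with hypothetical field names (RL4's real schema may differ):

```python
import io
import json

def append_event(log, event):
    """Append one chat event as a single JSON line (old lines are never rewritten)."""
    log.write(json.dumps(event, sort_keys=True) + "\n")

# In-memory stand-in for .rl4/evidence/chat_history.jsonl
log = io.StringIO()
append_event(log, {"role": "user", "text": "add JWT auth", "ts": "2025-01-01T10:00:00Z"})
append_event(log, {"role": "assistant", "text": "done", "ts": "2025-01-01T10:01:00Z"})

# Each line parses independently, so a partial write never corrupts earlier events
events = [json.loads(line) for line in log.getvalue().splitlines()]
```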

#3

How do I share AI agent context across Cursor and Claude Code?

Both IDEs read the same `.rl4/` folder in your repo. RL4 writes a shared skills file at `.cursor/rules/Rl4-Skills.mdc` that Cursor loads as a system rule and Claude Code loads via a CLAUDE.md import. Switching IDEs no longer drops context.

#4

How much does RL4 cost?

RL4 is free during the Founders Beta (through end of September 2026). When pricing launches, Founders Beta members keep 50% off for life. No payment method is required to install or use it today.

#5

How do I install RL4?

RL4 works with **any LLM in any supported IDE or CLI** — Cursor, VS Code, Claude Code CLI, Codex CLI.

1. Open `https://rl4.ai/start` and follow the install card for your environment.
2. Sign in once with your email — RL4 connects your tool to the dashboard.
3. Open any project. RL4 starts capturing automatically.

No MCP setup, no config files, no buttons to click. The Project Board fills itself as you work.

#6

Do I need to do anything to start capturing?

No. Once installed and signed in, RL4 captures continuously in the background:

- Every chat with your AI assistant (Cursor, Claude Code, Codex)
- Every file save, create, delete, rename
- Every commit and command

You never run a snapshot, never click a sync button. Just code — RL4 watches.

#7

What is the Project Board?

The Project Board is the main RL4 surface inside your IDE. It's where the captured activity becomes structured:

- **Projects** — auto-derived chunks of work (a feature, a bug fix, a refactor) clustered from your real activity
- **Timeline** — day-by-day reconstruction of what happened
- **Hot files** — the files that were most touched
- **Chats** — every AI conversation linked to its files

Open it from the RL4 sidebar icon. It's read-only — you don't edit it, you read it.

#8

What gets created in my project?

RL4 creates a single folder at your workspace root: `{workspace_root}/.rl4/`. It also writes one file at `.cursor/rules/Rl4-Skills.mdc` so your AI sees your project's learned rules. You never edit any of it manually. See the *Storage & Files* section below for a full file-by-file breakdown of `.rl4/`.

#9

I already have chat history — does RL4 see it?

Yes. On first launch RL4 ghost-scans Cursor's local SQLite database and Claude Code transcripts for that workspace, plus your git log. Every existing conversation is copied to `.rl4/evidence/chat_history.jsonl`. This survives Cursor updates, folder renames, and reinstalls. The Project Board populates immediately. If the ghost-scan finds nothing (fresh clone, no .git, no past chats), the board starts empty and fills as you work.

#10

I'm starting from scratch — what should I do?

Just code. Open Cursor / Claude Code, talk to your AI, save files, commit. After your first session, open the Project Board sidebar and you'll already see:

- Captured chats grouped into projects
- Files touched
- Timeline of the session

No button to press. The whole point is zero manual input.

#11

Which IDEs and CLIs does RL4 support?

**Any LLM in any supported IDE or CLI.** Today RL4 captures from four surfaces in parallel:

- **Cursor** — chat threads, file events, commands
- **VS Code** — file events + Claude Code if installed
- **Claude Code CLI** — full chat history via the official transcript files
- **Codex CLI** — config + session transcripts

All four feed the same `.rl4/` folder. Switch IDE or CLI mid-project — your context follows you. Use any LLM (Claude, GPT, Gemini, Llama…) on top — the memory belongs to the project, not the model or the tool.

#12

What's a Snapshot in this new flow?

A snapshot is the **result** of RL4's capture — a structured, compact memory layer of your project at a moment in time. It's produced automatically every time RL4 rebuilds (after activity bursts, on important events). You can also force-refresh manually if you want, via the MCP command `RL4: Snapshot` or by asking your AI "run snapshot". Useful before switching models or handing off to a teammate. But for normal use you never need to.

#13

What is `timeline.md`?

A structured development journal at `{workspace_root}/.rl4/timeline.md`. RL4 enriches it after each capture rebuild with:

- A narrative of what changed and why
- Mermaid architecture diagrams (append-only)
- Sessions and bursts
- Decisions and their reasoning

Readable by humans, parseable by your AI. It's not a commit log — it's a reasoning-aware timeline.

#14

What is `Rl4-Skills.mdc`?

A skills file at `{workspace_root}/.cursor/rules/Rl4-Skills.mdc`. RL4 maintains it automatically and your AI reads it on every chat:

- **DO** rules — patterns that worked, validated approaches
- **AVOID** rules — extracted from real regressions
- **CONSTRAINTS** — architectural rules learned from the codebase
- **INSIGHTS** — non-obvious things about the project

This is how RL4 stops your AI from repeating the same mistake twice. You never write rules manually — RL4 builds them from evidence.

#15

Where do I see what RL4 captured?

Two places:

1. **Project Board** in your IDE — main view, real-time, your daily surface
2. **Web dashboard** at `https://rl4.ai/dashboard` — overview across all your workspaces, share controls, team activity

They both read from the same `.rl4/` folder synced to your account.

#16

What is a Project on the Project Board?

A Project is an auto-derived chunk of work — a feature, a bug fix, a refactor. RL4 clusters your real activity (chats + files + commits) into projects so the board reads like a Linear-style task list, but built from what actually happened. You never create a project by hand. The clustering is automatic and updates as you work.

#17

Hot files — why do they matter?

Hot files are the files RL4 saw modified the most during a session. They represent your **implementation spine** — the ground truth of what you actually built, not just what you discussed. If you talked about a topic but no corresponding hot file appears, that's a GAP — an intention that didn't get implemented. Useful when you resume a project and forgot what's left.

#18

When should I open the Project Board?

- **Resuming work** after a break — see what was done, what's still open
- **Before a stand-up** — the Timeline gives you yesterday's narrative
- **Switching LLMs** — copy the latest snapshot context into your new chat
- **Onboarding a teammate** — share the workspace, they see everything

For pure coding flow, you don't need to open it. RL4 keeps capturing in the background regardless.

#19

Does my AI see RL4 data automatically?

Yes — through two channels:

1. **`.cursor/rules/Rl4-Skills.mdc`** is loaded by Cursor / Claude Code into every chat as a system rule.
2. **MCP tools** (`run_snapshot`, `search_context`, `rl4_ask`, etc.) let your AI query the full evidence base on demand.

When RL4 is installed, your AI gains memory of the project without you doing anything else.

#20

What does RL4 actually capture?

Continuously, in the background:

- **Chats** — every message exchanged with your AI assistant, with thread metadata
- **File events** — save, create, delete, rename (with SHA-256 hashes)
- **Commits and branch changes** — full git activity
- **Commands** — terminal commands run during sessions
- **IDE signals** — which files were open, focused, edited together

No selection step, no time-range picker, no buttons. Capture is on by default and event-driven.

#21

What sources does RL4 read from?

- **Cursor local storage** (workspace/global SQLite DB)
- **Claude Code transcripts** (official `.claude/` files)
- **Codex sessions** (`.codex/` configs and transcripts)
- **Git history** (commits, branches, diffs)
- **File system events** (via the IDE extension)

Read-only. RL4 never modifies any of these sources.

#22

Can I switch models mid-project without re-briefing?

Yes — that's the core promise. Open a fresh chat in Claude / ChatGPT / Gemini / any LLM, paste the latest snapshot context (from the Project Board's Copy button or via MCP `run_snapshot`), and the new model gets everything: what you built, what you decided, what constraints apply, what to avoid. No cloud lock-in. Your memory moves with your repo, not with the model.

#23

What is headless (MCP) vs the Project Board?

- **Project Board (visual)** — the IDE sidebar, for humans. You read what was captured.
- **MCP (headless)** — tool calls like `run_snapshot()`, `search_context()`, `rl4_ask()`. For AI agents that query RL4 programmatically.

Both read the same `.rl4/` folder. Use the Board to look at things, MCP when you want your AI to act on them.

#24

What makes RL4 different from Git history?

Git tracks code diffs. RL4 tracks the reasoning behind those diffs: prompts, decisions, regressions, constraints, chat history, IDE activity, and the causal links between them. Git tells you *what* changed; RL4 tells you *what happened, and why*. They're complementary, not competing.

#25

Why do you need ProofLayer / Evidence at all?

IDE sessions are long, technical, and messy (refactors, regressions, silent changes). ProofLayer reduces drift by grounding handoffs in concrete evidence: what files actually changed, what the IDE did, what the workspace looked like. This helps the next LLM understand not just what you discussed, but what you actually implemented. Without it, AI summaries hallucinate.

#26

What's in the Evidence Pack?

- **File events** (append-only): save / create / delete / rename, timestamps, SHA-256 hashes
- **IDE activity**: which files were open, focused, edited together
- **Causal links**: correlations between chat messages and file changes
- **Burst patterns**: detected work types (feature, fix, refactor, test)
- **Daily ledgers**: day-by-day reconstruction of the project's evolution

#27

Is the Evidence Pack append-only?

Yes, by design. Events are only added, never modified or deleted. This improves traceability and makes drift detectable. It's not cryptographically tamper-proof (they're local files), but it dramatically reduces ambiguity — and lets RL4 *replay* your project state at any point in the past.

#28

Does Evidence tracking start retroactively?

Partially. The **ghost-scan** at first install mines existing Cursor and Claude Code chat history + the full git log, so you get a starting point even on a project that already exists. File-level evidence (save / create / delete events) starts at install — it cannot reconstruct what happened before. For full forensic coverage, install RL4 from day one.

#29

What's inside `.rl4/` — file by file?

The whole memory layer lives at `{workspace_root}/.rl4/`. Top-level:

- **`evidence.md`** — auto-overwritten markdown summary of mechanical facts (chapters, sessions, hot files, stats). The first file your AI reads when it queries the project.
- **`timeline.md`** — append-only narrative + Mermaid architecture diagrams. The human-readable journal of how the project evolved.
- **`intent_graph.json`** — aggregated coupling, hot scores, file reversal patterns (used by RL4's MIL retrieval).
- **`evidence/`** — raw event streams (see next entry).
- **`snapshots/`** — content-addressed blob store (chunked, gzipped). Lets RL4 replay any past state.
- **`.internal/`** — append-only ledgers, gate decisions, skills file, archive manifest. Internal RL4 state.

#30

What's inside `.rl4/evidence/`?

Raw event streams that feed everything else. Each line is one event (JSONL):

- **`activity.jsonl`** — file save / create / delete / rename events with SHA-256 hashes
- **`chat_history.jsonl`** — every captured AI chat message (Cursor + Claude Code + Codex)
- **`chat_threads.jsonl`** — thread-level summaries with titles and topics
- **`sessions.jsonl`** — detected work sessions and activity bursts
- **`intent_chains.jsonl`** — real-time causal chains linking prompts → file edits
- **`agent_actions.jsonl`** — gatekeeper observations of what the LLM did
- **`.enrichment_state.json`** — pointer to what's been processed into evidence.md

Append-only by design. Old data never gets overwritten.
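Because each line is a standalone JSON object, these streams are trivial to post-process yourself. A sketch of tallying events by type, using made-up sample lines (field names here are illustrative, not RL4's actual schema):

```python
import json
from collections import Counter

# Hypothetical activity.jsonl content; real RL4 events likely carry more fields.
sample = """\
{"type": "save", "path": "src/auth.ts", "sha256": "ab12"}
{"type": "save", "path": "src/auth.ts", "sha256": "cd34"}
{"type": "create", "path": "src/jwt.ts", "sha256": "ef56"}
"""

def count_events(jsonl_text):
    """Tally events per type, skipping blank lines."""
    return Counter(
        json.loads(line)["type"]
        for line in jsonl_text.splitlines()
        if line.strip()
    )

counts = count_events(sample)
```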

#31

What's inside `.rl4/.internal/`?

RL4's own bookkeeping. You normally never look here, but for transparency:

- **`skills.mdc`** — canonical project skills (DO / AVOID / CONSTRAINTS / INSIGHTS), portable across IDEs. Mirrored to `.cursor/rules/Rl4-Skills.mdc` for Cursor.
- **`agent_compliance.jsonl`** — log of when the AI followed (or violated) the skills file
- **`agent_gate.jsonl`** — gatekeeper decisions (which prompts triggered context injection)
- **`block_why_index.json`** — explanations of any blocked or warned LLM operations
- **`archive_manifest.json`** + **`archives/`** — historical snapshots rotated out of the main store
- **`bootstrap_state.json`** — first-run state machine
- **`append_only_stats.json`** — health/integrity counters

#32

What's inside `.rl4/snapshots/`?

A content-addressed blob store. Each file is named by the SHA-256 of its contents and gzipped. RL4 chunks captured chats, file states, and event windows into these blobs. Why: lets RL4 deduplicate (the same chat fragment is stored once even if it appears in 10 sessions), survive partial crashes, and replay any past state by walking the snapshot index. For a busy project this folder grows to thousands of small files (~200KB each median). Safe to ignore — RL4 manages compaction and archives automatically.

#33

Can I reconstruct my project from `.rl4/` alone?

For **memory and reasoning** — yes. The `.rl4/` folder contains every chat, every decision, every file event, every snapshot. A teammate (or your future self) cloning the repo with `.rl4/` intact gets the full project history without you saying a word. For **source code** — no, that lives in your tracked files. RL4 does not duplicate source; it tracks what *happened to* the source. This is why `.rl4/` should be committed to git for team workspaces (or kept locally for solo work).

#34

Should I commit `.rl4/` to git?

- **Solo work**: optional. `.rl4/` rebuilds itself from your activity, so losing it just resets the timeline.
- **Team work**: yes — committing `.rl4/` shares the memory with everyone who clones. Teammates get instant context without you explaining anything.
- **With Team Sharing on the dashboard**: you don't need to commit `.rl4/` — sync is handled by RL4's metadata layer.

Either approach works.

#35

Does RL4 respect `.gitignore`?

Yes. RL4 reads your `.gitignore` and skips:

- Files Git ignores (build outputs, `node_modules`, `.env`, etc.)
- Dotfiles in ignored paths
- Large binary assets in ignored locations

Nothing sensitive (like `.env` files) is ever logged, hashed, or sent to the dashboard. If you have a custom convention, add the path to `.gitignore` — RL4 picks it up automatically.
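The skip logic amounts to matching each path against ignore patterns before capture. A simplified sketch using glob matching — real `.gitignore` semantics (negation, anchoring, directory-only rules) are richer, and the patterns below are examples, not RL4's:

```python
from fnmatch import fnmatch

# Example patterns; in practice these would come from parsing .gitignore.
IGNORED = ["node_modules/*", "*.env", "dist/*"]

def should_capture(path: str) -> bool:
    """Capture a file only if no ignore pattern matches its path."""
    return not any(fnmatch(path, pattern) for pattern in IGNORED)
```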

#36

How precise is RL4's history search?

Line-level. Every line of every file change gets a unique hash, indexed in the snapshot store. When you ask *"who introduced this regression?"*, RL4 can pinpoint:

- The exact line that changed
- The exact chat message that proposed the change
- The exact session it happened in

Even after huge refactors, the original line is findable by hash. Git tells you *the file changed*; RL4 tells you *which line, by which prompt, in which chat*.
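Hashing each line independently is what makes a line traceable across refactors: the hash follows the content, not its position in the file. A toy sketch of the principle (not RL4's indexing code):

```python
import hashlib

def line_hashes(source: str) -> dict:
    """Map each non-empty (whitespace-normalized) line to a stable content hash."""
    return {
        hashlib.sha256(line.strip().encode()).hexdigest(): line
        for line in source.splitlines()
        if line.strip()
    }

v1 = line_hashes("let x = 1\nreturn x")
# After a refactor the lines moved and a comment was added...
v2 = line_hashes("// moved during refactor\nreturn x\nlet x = 1")

# ...but both original lines are still findable by hash.
surviving = v1.keys() & v2.keys()
```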

#37

Can RL4 auto-generate code style rules from my corrections?

Yes. RL4 watches what you accept vs reject in AI-suggested edits. When the same correction appears repeatedly (e.g., you keep renaming AI-generated `hasUser` → `isUser` for booleans), RL4 detects the pattern and writes a rule into `Rl4-Skills.mdc`:

> *AVOID: prefix booleans with `has` / `can` — use `is` instead. (Inferred from 7 corrections across 3 sessions.)*

Next time the AI suggests a `has*` boolean, it sees the rule and adapts. Your style emerges from your real corrections, not from a config file you maintain.

#38

How fast is RL4's memory lookup?

Under 200ms for most queries. The Memory Index Layer (MIL) is a local RAG built on **BM25 + TF-IDF** running entirely on your machine — no remote calls, no API costs. When your AI asks RL4 for context (via MCP), the lookup hits the local index, ranks chunks, and returns the top results in milliseconds. Even on a 6-month project with 50k+ events, p95 stays under 200ms.
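BM25 and TF-IDF both rank documents by weighing a query term's frequency in a document against its rarity across the corpus, which is why the lookup is cheap enough to run locally. A toy TF-IDF scorer illustrating the principle (not RL4's MIL code; the corpus is made up):

```python
import math
from collections import Counter

# Hypothetical indexed chunks: chat fragments and commit messages.
docs = {
    "chat_12": "jwt auth token refresh decision",
    "chat_40": "css layout grid refactor",
    "commit_7": "fix jwt expiry bug in auth middleware",
}

def tfidf_score(query: str, doc_id: str) -> float:
    """Toy TF-IDF: term frequency in the doc, weighted by rarity across docs."""
    words = docs[doc_id].split()
    tf = Counter(words)
    n = len(docs)
    score = 0.0
    for term in query.split():
        df = sum(term in d.split() for d in docs.values())  # document frequency
        if df:
            score += (tf[term] / len(words)) * math.log(1 + n / df)
    return score

ranked = sorted(docs, key=lambda d: tfidf_score("jwt auth", d), reverse=True)
```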

#39

How does RL4 reduce my LLM token costs?

Pre-formatted context vs raw chat dumps = **~10× token reduction** in real-world tests. When your AI needs project context, RL4 returns a structured snapshot (decisions, constraints, hot files, recent intent chains) instead of raw chat history. The model gets exactly what it needs to act, nothing more. On a typical Cursor session: ~3-5k tokens of injected RL4 context vs 30-50k tokens if you pasted the raw chat backlog. Faster responses, lower API bills, fewer context-limit errors.

#40

What is the gatekeeper / HTTPS pre-hook?

A small local HTTPS server RL4 spins up on your machine. It sits between your AI and your codebase as a **pre-hook**:

- Before any code-editing operation, the LLM is required to ingest the relevant RL4 context for the file(s) it wants to touch.
- This forces consistency: your AI can't write code that contradicts a constraint it just learned, because the gatekeeper makes it re-read the rules first.
- Every gate decision is logged in `.rl4/.internal/agent_gate.jsonl` and is auditable.

The server runs only on `localhost` and never accepts external connections.
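The pre-hook reduces to a simple invariant: an edit to a file is allowed only after the context for that file has been ingested, and every decision is logged. An in-memory sketch of that invariant (the real gatekeeper is an HTTPS server; this is illustrative only):

```python
class Gate:
    """Toy gatekeeper: block edits to files whose RL4 context was never read."""

    def __init__(self):
        self.context_read = set()   # files whose RL4 context the agent ingested
        self.log = []               # audit trail, in the spirit of agent_gate.jsonl

    def read_context(self, path: str):
        self.context_read.add(path)
        self.log.append(("context", path))

    def allow_edit(self, path: str) -> bool:
        ok = path in self.context_read
        self.log.append(("edit", path, "allow" if ok else "block"))
        return ok

gate = Gate()
blocked = gate.allow_edit("src/auth.ts")   # no context ingested yet -> blocked
gate.read_context("src/auth.ts")
allowed = gate.allow_edit("src/auth.ts")   # context ingested -> allowed
```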

#41

What are the RL4 modes? (slash commands)

RL4 ships 8 modes you can trigger from any chat — either as `/rl4-<mode>` (Claude Code) or by starting your prompt with `<MODE> MODE` (any LLM):

| Mode | Trigger | Use it for |
|---|---|---|
| **Ask** | `/rl4-ask` or `ASK MODE` | Why / how / what — explanatory, causal, evidence-grounded answers with Sources & Proofs format |
| **Debug** | `/rl4-debug` or `DEBUG MODE` | Diagnose a bug — repro + evidence + code forensics, hypothesis/verify output |
| **Discovery** | `/rl4-discovery` or `DISCOVERY MODE` | Explore / map the codebase — architecture, data flow, coupling, entrypoints (read-only) |
| **Plan** | `/rl4-plan` or `PLAN MODE` | Plan changes, structure commits, design a refactor — Context/Commits/Tests/Rollback output |
| **Audit** | `/rl4-audit` or `AUDIT MODE` | Security/quality audit of recently modified files — P0/P1/P2 severity findings |
| **Team** | `/rl4-team` or `TEAM MODE` | Cross-workspace team activity — contributors, collision zones, owners |
| **Next** | `/rl4-next` or `NEXT MODE` | Resume your last session — where you left off, hot files, suggested next steps |
| **Undone** | `/rl4-undone` or `UNDONE MODE` | Surface likely unfinished work fronts (heuristic, read-only) |

Each mode runs an RL4 discovery baseline first (loads evidence, timeline, intent graph), then branches into its specialized output format.

#42

When should I use `/rl4-ask` vs `/rl4-debug` vs `/rl4-discovery`?

Easy to confuse — here's the decision matrix:

- **`/rl4-ask`** → you have a *question* about the past. *"Why did we choose JWT over sessions last sprint?"* Returns Answer + cited evidence.
- **`/rl4-debug`** → something *broke right now* and you need forensics. *"The auth flow returns 500 — when did it start failing?"* Returns hypothesis + verify steps.
- **`/rl4-discovery`** → you're *new to a codebase* and want to map it. *"How does the snapshot pipeline work?"* Returns architecture + data flow + entrypoints.

Rule of thumb: Ask = past, Debug = present bug, Discovery = current architecture.

#43

Can I read RL4 data directly via MCP?

Yes. The most useful read commands your AI (or you) can call:

- **`get_evidence(workspace_id)`** — returns the structured `evidence.md` (chapters, sessions, hot files, stats)
- **`get_timeline(workspace_id, date_from?, date_to?)`** — returns the dev timeline with optional date range
- **`get_intent_graph(workspace_id)`** — returns coupling, hot scores, file reversal patterns
- **`read_source_file(path, line_start?, line_end?)`** — reads any file from any workspace by path
- **`read_rl4_blob(workspace_id, sha)`** — reads a content-addressed blob from `.rl4/snapshots/`
- **`restore_version(workspace_id, path, version)`** — restores a previous version of a file from the content store
- **`search_context(workspace_id, query, source?)`** — semantic search across evidence/timeline/decisions/chat/cli
- **`search_chats(workspace_id, query)`** — search past AI conversations
- **`search_cli(workspace_id, query)`** — search past terminal commands

They feed your AI structured project memory without you copy-pasting anything.

#44

What does `RL4: Connect` do?

It's the IDE command palette entry that wires RL4's MCP server to your IDE. Run it once after install:

1. Open the command palette (`Cmd/Ctrl + Shift + P`)
2. Type **RL4: Connect** and hit Enter
3. RL4 writes `.cursor/mcp.json` (or equivalent) so your AI sees the MCP tools
4. The Project Board's MCP health dot turns green

Use it again if the MCP dot goes red (rare, usually after an IDE update). For day-to-day work you never re-run it.

#45

What does the checksum / integrity prove?

The SHA-256 checksum proves the snapshot wasn't accidentally modified after generation. It's tamper-evidence. It does NOT prove:

- That the content is true
- Who authored it
- That the LLM will follow it correctly
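Tamper-evidence via checksum is just recomputing the hash and comparing it to the recorded one — any change to the bytes changes the digest. A sketch:

```python
import hashlib

def checksum(text: str) -> str:
    """SHA-256 digest of the snapshot text."""
    return hashlib.sha256(text.encode()).hexdigest()

snapshot = "decision: use JWT for auth"
recorded = checksum(snapshot)          # stored alongside the snapshot

# Later: verify nothing changed since generation
intact = checksum(snapshot) == recorded
tampered = checksum(snapshot + " (edited)") == recorded
```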

#46

Does this make the snapshot truthful?

No. Integrity + evidence provide tamper-evidence and traceability — not factual correctness. Models can still be wrong. RL4 helps them stay consistent with your project state and constraints, but it doesn't guarantee accuracy. Garbage in, structured garbage out.

#47

What if my LLM tries to rewrite the RL4 data?

Some LLMs try to helpfully reformat or summarize the captured data. Tell it explicitly:

> *"Do not modify the RL4 snapshot. Read it and use it as context only."*

The snapshot is structured for the model to parse, not for you to read. Let it consume the structure directly.

#48

Does RL4 send my code or chats to RL4 servers?

**No code, no chat content** ever leaves your machine. The `.rl4/` folder stays local. What we do sync (signed-in users only):

- **Workspace metadata** — name, last-active timestamp, KPIs (file counts, hot file names, session counts)
- **Aggregated activity** — number of chats / commits / hours captured per week (no content)
- **Account events** — device-connected, workspace-shared, weekly recap triggers

This powers the cross-workspace dashboard, team sharing, and the weekly recap email. You can disable sync per workspace in the dashboard.

#49

Does RL4 read my code?

It hashes some files (SHA-256) for evidence signals and tracks file events. Designed to skip noisy directories (`node_modules`, build outputs, `.git`) and large/binary assets. The hashes are for verification, not for reading content. Source code stays in your filesystem, never sent anywhere.

#50

Where is my data stored?

Everything stays on your machine:

- **Extension storage** — snapshots, state
- **`.rl4/`** in your workspace — evidence logs, timeline, content store
- **`.cursor/rules/`** — `Rl4-Skills.mdc` (your project's learned rules)

The only data on RL4 servers is the metadata listed above (workspace names, KPI counts, account events).

#51

How do I delete everything?

Locally:

1. Uninstall the RL4 extension
2. Delete the `.rl4/` folder in each workspace
3. Delete `.cursor/rules/Rl4-Skills.mdc` (and any `rl4-*.mdc`)
4. Clear extension storage in your IDE settings if needed

On RL4 servers:

5. Open `https://rl4.ai/dashboard/account` and click *Delete account* — this wipes all metadata and revokes all device keys.

#52

How do I revoke a device that's connected to RL4?

Two ways:

- **From the welcome email** — every device-connect email contains a one-click *Revoke access* link.
- **From the dashboard** — `https://rl4.ai/dashboard/devices` lists every connected device with a Revoke button.

Revocation is immediate. The device loses access to the dashboard but local `.rl4/` data is untouched.

#53

Do you collect telemetry / analytics?

We collect the metadata listed in the *send my code* answer — workspace KPIs, activity counts, account events. **No content, no code, no chat text.** Product analytics (button clicks, feature usage) are not currently shipped. If we add them, they'll be opt-in and clearly documented.

#54

Can I use RL4 with sensitive or proprietary code?

Technically: yes — code never leaves your machine. Only metadata syncs. Policy-wise: check with your security team. Some orgs ban any extension that talks to a third-party server, even for metadata. The on-prem / self-hosted plan is on the roadmap.

#55

Does RL4 work offline?

**Capture works offline** — chats, file events, commits all land in `.rl4/` regardless of connectivity. **Sync, dashboard, team sharing, and welcome emails require a connection.** When you reconnect, RL4 syncs the queued metadata. No data is lost.

#56

Can I share my workspace memory with a teammate?

Yes. From `https://rl4.ai/dashboard/workspaces`, click *Share* on any workspace and enter your teammate's email. They receive a Team Invite email with a one-click link. They get **read-only access** to your captured chats, files, timeline, and Project Board.

#57

What does a teammate see when invited to my workspace?

Everything you've captured for that workspace:

- Chat history (all AI conversations)
- Files touched and hot files
- Timeline narrative
- Auto-derived projects
- Skills file (`Rl4-Skills.mdc`)

They can't modify anything. They can't see your other workspaces unless you share each one explicitly.

#58

How do I revoke a teammate's access?

Same dashboard page — `Workspaces → Share`, click *Remove* next to the teammate's email. Access is revoked immediately. Their local copy of any captured data is theirs to keep, but they lose live access to your future captures.

#59

How long does Cursor keep chat history?

Cursor doesn't officially document retention. In practice: recent chats (< 1 month) are usually available; older chats may be purged without warning. Privacy Mode affects what's stored. The safest approach: keep RL4 capturing in background — it persists chats to `.rl4/` so you don't depend on Cursor's retention.

#60

What if I rename or move my project folder?

⚠️ Renaming creates a new workspace ID in Cursor. Your old conversations become orphaned — they exist on disk but Cursor can't find them. Good news: **RL4 auto-detects orphans during ghost-scan** and recovers them. Open the Project Board after the rename — captured chats reappear. If you renamed before installing RL4, install it now and trigger a ghost-scan from the dashboard.

#61

Can Cursor updates break my chat history?

Sometimes. Updates can change how Cursor indexes conversations or migrates data. Some users report losing access to history after updates. With RL4 installed, you're insured: chats are persisted to `.rl4/` continuously, so a Cursor update can't take them away.

#62

What actions can cause me to lose chat history?

Without RL4: renaming/moving the project folder, Cursor updates, clearing app data, reinstalling Cursor, cleanup tools (CleanMyMac, CCleaner), switching machines (no cloud sync from Cursor itself). With RL4 running: none of the above wipes your captured chats. They live in `.rl4/` inside your repo.

#63

Why do I see fewer items than I expected?

Common causes:

- Cursor already purged older conversations *before* RL4 was installed
- You're in a different workspace than expected (check the workspace name in the Project Board)
- The project was renamed/moved (run a ghost-scan to recover orphans)
- You're signed in to a different RL4 account

The Project Board shows an *Earliest captured item* line so you know what's actually available.

#64

The Project Board is empty — what now?

Walk through this:

1. Confirm you're **signed in** — top-right corner of the Project Board.
2. Confirm the workspace path matches the project you opened (sometimes Cursor opens a parent folder).
3. Generate some activity — open a chat, edit a file, save it. The board updates within a few seconds.
4. Force a refresh — run `RL4: Snapshot` from the IDE command palette.
5. Check that `.rl4/evidence/activity.jsonl` exists in your workspace — that's where capture lands.

Still empty? Reply to your welcome email and we'll investigate.

#65

I don't see new captures showing up — what should I check?

1. **MCP health** — the Project Board has a health bar at the top (MCP / HTTPS dots). Both should be green.
2. **Authentication** — sign out and back in if the dot is red.
3. **`.rl4/` folder permissions** — RL4 needs write access to your workspace root.
4. **Disk space** — RL4 stops capturing if the disk is full.
5. **Force refresh** — run `RL4: Snapshot` from the command palette.

If MCP is red, restart your IDE.

#66

I clicked install but nothing happens — what's wrong?

1. **Cursor / VS Code restart** — required after installing any extension.
2. **Sign-in flow** — RL4 opens a browser tab on first launch. If your default browser is misconfigured, copy the auth URL from the IDE notification and paste it manually.
3. **`.vsix` install on Cursor** — use `Cmd+Shift+P → Extensions: Install from VSIX...`
4. **Firewall / proxy** — RL4 needs to reach `rl4.ai` over HTTPS to sync metadata. Capture works offline, but the dashboard won't update.

Logs are in your IDE's Output panel under *RL4*.

#67

Can I recover orphan conversations from old folders?

Yes. RL4's ghost-scan auto-detects orphan workspaces in Cursor's `workspaceStorage/` and recovers their chat history. It's triggered automatically on first install. To re-trigger:

1. Open the Project Board
2. Run `RL4: Snapshot` from the command palette — this rescans, including orphans

If orphans aren't detected, paste the path to the old workspace into the dashboard's *Custom workspace path* field.

#68

I get a `sqlite3` or database access error

RL4 reads Cursor's chat history from a SQLite database. If you see a sqlite3 error:

- **Cursor is holding the DB locked** — close Cursor, then re-trigger a snapshot. Or just keep coding — RL4 retries automatically.
- **macOS / Linux** — `sqlite3` is usually pre-installed. Confirm with `which sqlite3`.
- **Windows** — install Python (RL4 falls back to Python's built-in sqlite module) or grab the sqlite3 CLI from sqlite.org.
- **Database corruption** — rare. Run `RL4: Repair Database` from the command palette — it's a safe SQLite VACUUM operation.

#69

My AI doesn't seem to use the RL4 context

Check three things:

1. **`.cursor/rules/Rl4-Skills.mdc` exists** in your workspace. If not, RL4 hasn't run a snapshot yet — start a chat and edit a file to trigger one.
2. **MCP server is connected** — Cursor settings → MCP servers → `rl4` should be green. If not, run `RL4: Reconnect MCP`.
3. **Tell your AI to use RL4** — try `"use RL4"` or `"check the project board"` to nudge it the first time. Once it sees the data, it relies on it automatically.

#70

Will this slow down my IDE?

No. RL4 is event-driven and idle when nothing's happening:

- Capture is incremental — it only reacts to file/chat events as they occur
- No periodic polling, no background indexing loops
- Heavy work (snapshot rebuild, evidence aggregation) is debounced and budgeted
- Skips noisy directories (`node_modules`, build outputs)

Real-world: RL4 adds <1% CPU and ~30MB RAM during normal coding sessions.

#71

What are the known limitations?

- **Cursor retention** — RL4 can only capture what Cursor still has locally at install time. Earlier chats may already be purged.
- **File evidence isn't retroactive** — file-level tracking starts at install (chats and git history are recovered via ghost-scan).
- **Integrity ≠ truth** — checksums detect tampering; they don't verify the factual accuracy of LLM output.
- **No on-prem yet** — metadata syncs to `rl4.ai`. A self-hosted plan is on the roadmap.

#72

Can RL4 recover deleted conversations?

If they were captured by RL4 before deletion: yes — they live in `.rl4/evidence/chat_history.jsonl` and `.rl4/snapshots/`. If they were deleted before RL4 was installed: no. RL4 reads from Cursor's storage; if Cursor purged the data, it's gone. The point of RL4 is to be your insurance going forward.
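You can inspect the captured file directly. A sketch, assuming standard JSONL (one JSON object per line); the exact field names RL4 writes aren't documented here, so this only parses and counts entries:

```python
import json
from pathlib import Path

def load_evidence(path: str) -> list:
    """Parse an append-only JSONL file, skipping blank lines."""
    entries = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if line.strip():
            entries.append(json.loads(line))
    return entries

# e.g. len(load_evidence(".rl4/evidence/chat_history.jsonl"))
# tells you how many conversations were captured before deletion.
```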

#73

Why is capture sometimes partial?

Two reasons:

1. **Cursor retention** — if Cursor already purged old data, no tool can recover it.
2. **Evidence budgets** — for performance, file-event collection has time and size limits per session. Big projects with thousands of files in a single session may see a partial pass.

The Project Board flags partial captures explicitly so you know what was missed.
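The budget mechanism can be pictured like this — a sketch of the idea, not RL4's internals:

```python
def collect_with_budget(events, max_bytes):
    """Collect events until a size budget is exhausted.

    Returns (collected, partial): `partial` is True when the budget
    cut the pass short -- the analogue of the Project Board's
    "partial capture" flag.
    """
    collected, used = [], 0
    for event in events:
        size = len(str(event))
        if used + size > max_bytes:
            return collected, True   # budget exhausted: partial pass
        collected.append(event)
        used += size
    return collected, False          # everything fit: complete pass
```

A session with more events than the budget allows gets flagged partial rather than silently truncated.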

#74

What is the Founders Beta?

RL4 is currently in Founders Beta — a 3-month closed program to harden the product with real users before the public launch. Founders Beta members get:

- **Direct line to the founders** via a private Slack channel — bug reports, feature requests, design feedback go straight to us, no PR layer
- **50% off for life** on every paid tier when pricing launches (locked in for the lifetime of your account)
- **Co-builder status** — your input shapes the product roadmap

In return, we ask you to actively use it and tell us what's broken.

#75

Is RL4 free during Beta?

Yes. The full product (capture, Project Board, dashboard, team sharing, MCP, weekly recap) is free during the Founders Beta — through end of September 2026. When pricing launches, Founders Beta members keep 50% off for life. You won't be billed without explicit confirmation.

#76

Why join the Beta Slack?

Three reasons:

1. **Faster help** — the founders read every message. Bugs get fixed in days, not weeks.
2. **Direct influence on the roadmap** — feature requests in Slack often ship that week.
3. **Community** — early users share workflows, find edge cases together, and shape the conventions.

The Slack invite arrives with your welcome email. Joining is the *sine qua non* of Founders Beta — co-builder status requires participation.

#77

How is RL4 different from Cursor's built-in memory or Continue?

Cursor memory and Continue store context inside their own tools. RL4 stores context in your repo at `.rl4/`. Any IDE or LLM with MCP can read it. Switching from Cursor to Claude Code keeps the same memory. It survives:

- Switching IDE (Cursor → VS Code → Claude Code)
- Switching model (Cursor's Claude → ChatGPT → Gemini)
- Switching machine (the repo travels)
- Sharing with teammates (the memory comes with the repo)

It's anti-lock-in by design. The memory belongs to the project, not the vendor.

#78

Why use RL4 instead of asking the LLM to summarize?

You can ask for summaries manually. RL4 adds:

- **Consistency** — same structured format every time, parseable by any LLM
- **Evidence** — links to actual file changes, not just chat text
- **Lessons** — extracted DO/AVOID rules from your real regressions
- **Portability** — works across any LLM without re-formatting
- **Zero effort** — auto-captured, no prompt engineering, no copy-paste

#79

Who is RL4 for?

Developers who:

- Use AI assistants daily in Cursor / Claude Code / Codex
- Work on complex, multi-day or multi-week projects
- Hit context limits and lose continuity
- Switch between LLMs (Claude → ChatGPT → Gemini)
- Need to hand off context to teammates or to their future selves
- Want an audit trail of AI-assisted work

If your projects are short or you rarely hit context issues, you might not need it yet. Come back when you feel the pain.

#80

What if I try it and don't see the value?

Then it's not for you right now — and that's okay. RL4 solves a specific pain: context loss during AI-assisted development. If your current workflow feels fine, you don't need to change it. Uninstall in 30 seconds, delete `.rl4/`, you're back to zero. No hard feelings.