The open reasoning layer for Cursor, Claude Code, and Codex

Git versions your code.
RL4 versions your reasoning.

AI models forget. Context windows reset. RL4 silently captures your AI's decisions in the background, creating a replayable, local-first memory of your entire project.

RL4 (Reasoning Layer 4) is a local-first developer memory protocol by ATAWAI. It captures prompts, decisions, and file changes into a .rl4/ folder readable by any LLM via MCP.
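A single captured event might look like the sketch below. This is a minimal illustration only — the field names (`ts`, `kind`, `summary`, `source`) and the `events.jsonl` filename are assumptions for demonstration, not the actual RL4 schema:

```python
import json
import pathlib
import time

# Illustrative sketch: field names and file layout are hypothetical,
# not the documented RL4 on-disk format.
def append_event(repo_root, kind, summary, source):
    """Append one event record to the repo's .rl4/events.jsonl (append-only)."""
    rl4_dir = pathlib.Path(repo_root) / ".rl4"
    rl4_dir.mkdir(exist_ok=True)
    event = {
        "ts": int(time.time()),
        "kind": kind,        # e.g. "prompt", "decision", "file_change"
        "summary": summary,
        "source": source,    # e.g. "cursor", "claude-code", "codex"
    }
    with open(rl4_dir / "events.jsonl", "a") as f:
        f.write(json.dumps(event) + "\n")
    return event
```

Because each record is one JSON line appended to a flat file inside the repo, any LLM (or plain `grep`) can read the history without a database or a server.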

~/projects/api-platform · capturing
live
$ rl4 status
Watching Cursor · Claude Code · Codex
Captured 847 events across 29 days
Active threads: 3 · Last decision: 2m ago
$ rl4 ask "why is the cache layer gone?"
Decision 2026-04-19: LRU thrash on cold start, 92% miss rate
Sources: timeline.md:142 · chat:thr_a8f2 · commit:b41a09
$
/01 The Pain

Never explain your architecture twice.

Every new chat shouldn't start from zero. RL4 hooks natively into Cursor, Claude Code, and Codex. It captures the constraints, the rejected paths, and the exact prompt that shipped the feature.

You code. RL4 remembers.

Zero-config capture
Native MCP
Background daemon
Cursor
Native VSIX · auto-captures chats, edits, decisions
Supported
VSCode
Same VSIX · same protocol
Supported
Claude Code
Plugin marketplace · MCP-native read + capture
Supported
Codex CLI
Auto-config via ~/.codex/config.toml
Supported
Aider · Continue · Gemini CLI
MCP read access via paired editor
Coming
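For Codex CLI, MCP servers are declared in `~/.codex/config.toml`. A hypothetical RL4 entry might look like the fragment below — the server name, command, and arguments are illustrative, and the exact keys depend on your Codex version:

```toml
# ~/.codex/config.toml — illustrative entry, not the verified RL4 setup
[mcp_servers.rl4]
command = "rl4"
args = ["mcp", "serve"]
```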
.rl4 / timeline.md
live
2026-04-29
Migration to event-driven auth
7 commits · 4 chats · 1 reversal
14:02
Refactor auth middleware
Cursor · claude-sonnet-4 · +12 / -147
14:08
Decision: middleware-based JWT refresh
confidence: high · 3 alternatives evaluated
15:41
Reverted: cache TTL change
cold-start regression · linked to thr_b3c1
17:22
Handoff to Codex CLI
22 chat msgs + decisions exported · verified
2026-05-03
replay_state + recurrence_check shipped
6 files · 3 intent chains · citations enforced in MCP replies
/02 The Solution

Replay the "Why".

A Git commit tells you what changed. RL4 tells you why. Trace any bug back to the exact AI prompt that caused it.

Hand off your workspace to a teammate without losing a single drop of context.

In a team? Switch to TEAM MODE and query across your teammates' workspaces: what's everyone working on, which files are at collision risk, who owns the auth layer? You get a structured Now / Next / Risks / Owners snapshot with citations — observational, never modifies remote workspaces.

Causal links
Cited answers
Zero re-explaining
Team-ready
/03 The Moat

Your memory stays in your repo.

Local-first: your project's reasoning lives in a secure .rl4/ folder in the repo — no cloud required for solo use. Team mode can opt in to Supabase sync for shared workspace views only.

Cryptographically signed, append-only evidence — your machine stays the source of truth.

SHA-256 chained
Local-first
Team sync · opt-in
.rl4 / proof / verify · CLI
verified
$ rl4 verify --since 2026-04-01
Scanning 847 events across .rl4/
Daily Merkle roots: 29 verified · 0 mismatches
Hash chain integrity: OK
No silent rewrites detected
SHA-256 Verified
9f3a · b41c · 8d22 · e0f7 · daily root
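The append-only guarantee behind the `rl4 verify` output above can be sketched as a SHA-256 hash chain. This is a minimal illustration of the idea, not RL4's actual on-disk format: each event's hash covers the previous hash, so rewriting any earlier event invalidates every hash after it.

```python
import hashlib
import json

GENESIS = "0" * 64

def chain_events(events):
    """Link event dicts into a SHA-256 hash chain."""
    prev, chained = GENESIS, []
    for ev in events:
        payload = json.dumps(ev, sort_keys=True)
        h = hashlib.sha256((prev + payload).encode()).hexdigest()
        chained.append({"event": ev, "prev": prev, "hash": h})
        prev = h
    return chained

def verify_chain(chained):
    """Recompute every hash; any silent rewrite breaks the chain."""
    prev = GENESIS
    for link in chained:
        payload = json.dumps(link["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if link["prev"] != prev or link["hash"] != expected:
            return False
        prev = link["hash"]
    return True
```

Tampering with any event changes its payload, so its recomputed hash no longer matches — which is what "no silent rewrites detected" means.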
SOC 2-aligned
GDPR · local-only
No telemetry

Your project memory. Across chats. Across models. Across time.

Stop re-explaining your architecture in every new chat.

AI coding tools feel powerful. Until they forget everything.

1. Persistent project memory

Never restart your project from scratch again.

RL4 records what actually happened in your project — files, diffs, chats, decisions — so your next AI session knows exactly where you left off.

  • Works per workspace, across sessions and chats.
  • Any MCP-compatible agent can query your real project history — without flooding the context window.
  • Captures reality — not summaries, not docs, not assumptions.
Cursor
/rl4
RL4 Extension
memory
▾ .rl4/
timeline.md
evidence.md
RL4-Skills.mdc
Context initialized
Cursor
Context-aware
Snapshot
capture
analyze
7 convos · yesterday
evidence.md
## Snapshot 2026-02-10
FIX · Auth logic in middleware.ts
REF · Tailwind for mobile
ADD · New Toast component
3 files · 2 threads · 14:32
Cursor
Restore any moment
2. Time Machine

Replay exactly what happened. Understand why.

RL4 snapshots your project into a compact content store. From there, it can restore a working workspace and reconstruct what changed, when, and why — without digging through Git, scattered notes, or half-remembered threads.

  • Rebuild a working project state from a snapshot.
  • See the causal chain: prompt → change → reversal → final decision.
  • Ask "what broke auth last week?" and get an answer with proof.
3. Project-Specific Skills

Your AI stops repeating the same mistakes.

RL4 converts your real bugs and regressions into enforced project constraints — automatically injected into every AI call.

  • Auto-generated skills file per project, based on real history.
  • Injected via MCP so every agent call respects your constraints.
  • No fragile, hand-maintained .mdc rules file: RL4 updates constraints from actual project history.
Cursor
Bugs & fixes
RL4 Skills
rules update
RL4-Skills.mdc
+NO alert() for errors
+USE <Toast /> component
Do · Don't
Cursor
Rules applied
Cursor
New teammate
RL4 Extension
summary
ACTIVITY JOURNAL
Feb 10 — UI overhaul
Migrated to Shadcn/UI
Feb 9 — Auth fix
Fixed by Valentin
Human-readable narrative
Cursor
Ask anything
4. 30-Second Onboarding

Join a project and understand what's going on.

RL4 captures your real project history — so every new AI chat starts with context, not from zero.

  • Share RL4 context alongside the repo.
  • Ask "why / how / what broke" and get answers grounded in history.
  • Future-you benefits as much as new teammates.

Built on MCP

RL4 exposes your project history as a reasoning layer via MCP. Any MCP agent can query evidence, decisions, and timeline — with citations — instead of guessing from the current chat.

  • get_evidence, get_timeline, get_intent_graph, search_context — full content with citation source first.
  • search_context(query) — retrieve only what matters, without exploding your token budget.
  • Cross-workspace: in this chat, point at another RL4 workspace, load its evidence + timeline, and draft landing copy, docs, or handoffs grounded in what that project actually built.
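At the wire level, an MCP tool call is a JSON-RPC 2.0 request using the `tools/call` method. The sketch below builds such a request for the `search_context` tool named above — the argument shape (`query`) is an assumption for illustration, not RL4's documented parameter list:

```python
import json

def mcp_tool_call(request_id, tool, arguments):
    """Build an MCP `tools/call` request (JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# Hypothetical query against an RL4 MCP server:
req = mcp_tool_call(1, "search_context", {"query": "why is the cache layer gone?"})
print(json.dumps(req, indent=2))
```

Because the surface is plain MCP, any compliant client can issue the same request; no RL4-specific SDK is required.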
Cursor
You ask
RL4 MCP
workspaces
3 workspaces via MCP:
Workspace 1
Workspace 2
Workspace 3 ←
Project's story
Cursor
Context in any chat

RL4 turns your project into a system with memory — not just a stream of prompts.

Founders Beta · open now

The protocol for AI memory
is being written right now.

Every model forgets what you taught it. Every agent rebuilds from zero. We're shipping the fix — local-first, model-agnostic, yours forever.

Protocols don't get built by founders alone. They get built by the developers who live the pain.

HOW TO PARTICIPATE

Install RL4. Use it for real work. Tell us where it breaks. Push the use cases that matter to you.

50% off, for life

Every plan, every tier. Locked the day you join.

Vote on what we build

Your priorities ship first.

We build the future. Together or not at all.

/FAQ The honest questions.

No surprises. No fine print.

Does this solve LLM context limits?

No — context windows stay finite. RL4 makes resumption painless. Every new chat starts with a snapshot of what came before, so you don't re-explain.

Where does my data live?

In a .rl4/ directory inside your repo. Append-only JSONL files + a markdown timeline. Nothing leaves your machine without explicit opt-in.

Which agents work?

RL4 ships as a VSIX for Cursor and VS Code: every agent inside those IDEs can use RL4 on your workspace. Claude Code and Codex are supported as MCP-enabled CLIs too. Any MCP-compliant tool can read RL4 evidence with citations.

Is the Founders Beta capped?

No artificial cap. The 50% lifetime offer is honored as long as the Beta is open. The cap is on commitment — co-builder participation in Slack is the real gate.

How is this different from chat history dumps?

RL4 doesn't just store text — it stores the link between a prompt, a decision, and a code change. The reasoning is replayable. Logs are a graveyard. RL4 is a record.