The open reasoning layer for Cursor, Claude Code, and Codex
Git versions your code.
RL4 versions your reasoning.
AI models forget. Context windows reset. RL4 silently captures your AI's decisions in the background, creating a replayable, local-first memory of your entire project.
RL4 (Reasoning Layer 4) is a local-first developer memory protocol by ATAWAI. It captures prompts, decisions, and file changes into a .rl4/ folder readable by any LLM via MCP.
Never explain your architecture twice.
No new chat should start from zero. RL4 hooks natively into Cursor, Claude Code, and Codex. It captures the constraints, the rejected paths, and the exact prompt that shipped the feature.
You code. RL4 remembers.
Replay the "Why".
A Git commit tells you what changed. RL4 tells you why. Trace any bug back to the exact AI prompt that caused it.
Hand off your workspace to a teammate without losing a single drop of context.
In a team? Switch to TEAM MODE and query across your teammates' workspaces: what's everyone working on, which files are at collision risk, who owns the auth layer? You get a structured Now / Next / Risks / Owners snapshot with citations. It's purely observational: RL4 never modifies remote workspaces.
Your memory stays in your repo.
Local-first: your project's reasoning lives in a secure .rl4/ folder in the repo — no cloud required for solo use. Team mode can opt in to Supabase sync for shared workspace views only.
Cryptographically signed, append-only evidence — your machine stays the source of truth.
Your project memory. Across chats. Across models. Across time.
Stop re-explaining your architecture in every new chat.
AI coding tools feel powerful. Until they forget everything.
Never restart your project from scratch again.
RL4 records what actually happened in your project — files, diffs, chats, decisions — so your next AI session knows exactly where you left off.
- Works per workspace, across sessions and chats.
- Any MCP-compatible agent can query your real project history — without flooding the context window.
- Captures reality — not summaries, not docs, not assumptions.
Replay exactly what happened. Understand why.
RL4 snapshots your project into a compact content store. From there, it can restore a working workspace and reconstruct what changed, when, and why — without digging through Git, scattered notes, or half-remembered threads.
- Rebuild a working project state from a snapshot.
- See the causal chain: prompt → change → reversal → final decision.
- Ask "what broke auth last week?" and get an answer with proof.
Your AI stops repeating the same mistakes.
RL4 converts your real bugs and regressions into enforced project constraints — automatically injected into every AI call.
- Auto-generated skills file per project, based on real history.
- Injected via MCP so every agent call respects your constraints.
- No fragile .mdc rules file: RL4 updates constraints from actual project history.
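As an illustration of the idea (the record shape and function names here are hypothetical, not RL4's actual API), turning recorded regressions into a constraints preamble prepended to every agent call might look like this:

```python
# Hypothetical sketch: distill recorded regressions into a constraints
# preamble that is prepended to each agent prompt. The record shape and
# function names are illustrative, not RL4's documented interface.
REGRESSIONS = [
    {"file": "src/auth.ts", "lesson": "Never remove the token-expiry check"},
    {"file": "db/migrate.sql", "lesson": "Migrations must be reversible"},
]

def build_constraints(regressions: list[dict]) -> str:
    """Render learned lessons as a short, citation-backed preamble."""
    lines = ["Project constraints (learned from real history):"]
    for r in regressions:
        lines.append(f"- {r['lesson']} (source: {r['file']})")
    return "\n".join(lines)

def inject(prompt: str) -> str:
    """Prepend the constraints preamble to an outgoing agent prompt."""
    return build_constraints(REGRESSIONS) + "\n\n" + prompt

print(inject("Refactor the auth middleware."))
```

The point of the sketch: constraints come from recorded history, not from a hand-maintained rules file, so they stay in sync with what actually broke.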
Join a project and understand what's going on.
RL4 captures your real project history — so every new AI chat starts with context, not from zero.
- Share RL4 context alongside the repo.
- Ask "why / how / what broke" and get answers grounded in history.
- Future-you benefits as much as new teammates.
Built on MCP
RL4 exposes your project history as a reasoning layer via MCP. Any MCP agent can query evidence, decisions, and timeline — with citations — instead of guessing from the current chat.
- get_evidence, get_timeline, get_intent_graph, search_context — full content with citation source first.
- search_context(query) — retrieve only what matters, without exploding your token budget.
- Cross-workspace: in this chat, point at another RL4 workspace, load its evidence + timeline, and draft landing copy, docs, or handoffs grounded in what that project actually built.
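To make the retrieval idea concrete, here is a deliberately simplified, in-memory stand-in for a search_context-style lookup (the record fields and scoring are illustrative assumptions, not RL4's implementation): rank evidence by keyword overlap and return only the top hits, each with a citation, instead of dumping full history into the context window.

```python
# Hypothetical stand-in for a search_context-style MCP tool.
# Evidence records and the naive keyword scoring are illustrative only.
EVIDENCE = [
    {"id": "ev-101", "ts": "2025-01-08",
     "text": "Rejected session cookies; chose JWT refresh tokens"},
    {"id": "ev-102", "ts": "2025-01-09",
     "text": "Hotfix: auth middleware broke on expired tokens"},
    {"id": "ev-103", "ts": "2025-01-10",
     "text": "Renamed landing page hero copy"},
]

def search_context(query: str, limit: int = 2) -> list[dict]:
    """Return at most `limit` evidence hits, best keyword overlap first."""
    terms = set(query.lower().split())
    scored = []
    for rec in EVIDENCE:
        score = len(terms & set(rec["text"].lower().split()))
        if score:
            scored.append((score, rec))
    scored.sort(key=lambda s: -s[0])
    return [{"citation": f'{r["id"]} ({r["ts"]})', "text": r["text"]}
            for _, r in scored[:limit]]

for hit in search_context("what broke auth"):
    print(hit["citation"], "->", hit["text"])
```

A query like "what broke auth" surfaces only the matching hotfix record with its citation, which is the token-budget point the bullets above make.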
RL4 turns your project into a system with memory — not just a stream of prompts.
The protocol for AI memory
is being written right now.
Every model forgets what you taught it. Every agent rebuilds from zero. We're shipping the fix — local-first, model-agnostic, yours forever.
Protocols don't get built by founders alone. They get built by the developers who live the pain.
Install RL4. Use it for real work. Tell us where it breaks. Push the use cases that matter to you.
50% off, for life
Every plan, every tier. Locked the day you join.
Vote on what we build
Your priorities ship first.
We build the future. Together or not at all.
No surprises. No fine print.
Does this solve LLM context limits?
No — context windows stay finite. RL4 makes resumption painless. Every new chat starts with a snapshot of what came before, so you don't re-explain.
Where does my data live?
In a .rl4/ directory inside your repo. Append-only JSONL files + a markdown timeline. Nothing leaves your machine without explicit opt-in.
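As a sketch of what append-only JSONL storage means in practice (the exact schema below is hypothetical, not RL4's documented format), each event is one JSON object per line, and new events are only ever appended:

```python
import json
from pathlib import Path

# Hypothetical event shape; RL4's actual .rl4/ schema may differ.
def append_event(log: Path, event: dict) -> None:
    """Append one JSON object per line; existing lines are never rewritten."""
    with log.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

def read_events(log: Path) -> list[dict]:
    """Replay the log in order by parsing one JSON object per line."""
    with log.open(encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

log = Path(".rl4") / "events.jsonl"
log.parent.mkdir(exist_ok=True)
log.unlink(missing_ok=True)  # start fresh for this demo only

append_event(log, {"ts": "2025-01-10T12:00:00Z", "kind": "prompt",
                   "text": "add JWT refresh to auth"})
append_event(log, {"ts": "2025-01-10T12:03:21Z", "kind": "file_change",
                   "path": "src/auth.ts"})
print(len(read_events(log)))  # prints 2
```

Because the file only grows, the history is trivially diffable and survives in the repo alongside the code it describes.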
Which agents work?
RL4 ships as a VSIX for Cursor and VS Code: every agent inside those IDEs can use RL4 on your workspace. Claude Code and Codex are supported as CLIs via MCP too. Because RL4 is MCP-based, any compliant tool can read RL4 evidence with citations.
Is the Founders Beta capped?
No artificial cap. The 50% lifetime offer is honored as long as the Beta is open. The cap is on commitment — co-builder participation in Slack is the real gate.
How is this different from chat history dumps?
RL4 doesn't just store text: it stores the link between a prompt, a decision, and a code change. The reasoning is replayable. Logs are a graveyard; RL4 is a record.