A Working Protocol for Claude and Codex
The problem was not that Claude and Codex were both useful. The problem was that they could not learn from each other. The fix was a simple protocol for shared memory, canonical files, and promotion of lessons into reusable rules.
The Problem
Two brains, zero overlap
Useful work existed in both systems, but the judgment stayed trapped in whichever model discovered it first.
A fake learning loop
Telemetry was recording activity instead of meaning, which made the system feel alive while nothing was actually compounding.
Folder archaeology
When canonical files drift, every new session begins with reconstruction instead of execution.
The System
One loop, not two isolated sessions
Work updates memory. Memory updates the next round of work.
Why It Scales
Agent-agnostic
Any model can join if it can read markdown and follow file conventions.
Local before global
Project facts stay local. Only proven judgment gets promoted into shared skills.
Visible decay
Repeated fixes, stale skills, and missing logs can be measured instead of guessed.
How Someone Else Learns It
Explanation, then example, then repetition
Read the build entry
Understand the problem and the shape of the solution.
Read the guide
See the longer reasoning and tradeoffs behind the protocol.
Open the skills index
See how reusable judgment is organized across projects.
Inspect one real folder
Watch `_BUILD_LOG.md` and `_WORKSPACE.md` make the pattern concrete.
Notes
Failure surface
The problem showed up in three forms at once. First, two agents were working on the same codebase with separate skill libraries and no shared memory. Second, the “learning loop” that was supposed to capture improvements was effectively fake: 89 signal records, blank event fields, no real transfer of judgment. Third, project folders kept turning into archaeology sites full of version drift, duplicate build environments, and no obvious canonical output.
Why it breaks
That combination does not scale. It works only as long as one person can remember who knows what, which file is current, what bug was fixed last week, and which lessons matter. The moment the work gets busy, or another agent joins, or another person touches the project, the whole thing collapses back into rediscovery.
The fix
The solution was deliberately low-tech. One shared skills library in `~/Desktop/~Working/skills/`. One append-only `_BUILD_LOG.md` in each project so every session starts by reading what happened and ends by writing what changed. One `_WORKSPACE.md` that defines canonical files, folder rules, and cleanup expectations. One promotion path from “Skill candidate: Yes” to a real reusable skill. That is enough to turn isolated sessions into a system.
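As a concrete sketch of the append-only convention, here is one way an agent could write its session entry. The entry shape (timestamp heading, change bullets, a `Skill candidate:` flag) is an assumption for illustration, not the author's exact format:

```python
from datetime import datetime, timezone
from pathlib import Path

def append_build_log(project_dir: Path, agent: str, changes: list[str],
                     skill_candidate: bool = False) -> Path:
    """Append one session entry to _BUILD_LOG.md; by convention, never rewrite history."""
    log = project_dir / "_BUILD_LOG.md"
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    lines = [f"## {stamp} ({agent})"]
    lines += [f"- {change}" for change in changes]
    lines.append(f"- Skill candidate: {'Yes' if skill_candidate else 'No'}")
    with log.open("a", encoding="utf-8") as f:  # append mode: the log only grows
        f.write("\n".join(lines) + "\n\n")
    return log
```

Because every session appends rather than overwrites, the next session can start by reading the whole history top to bottom.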
Inline memory
The important architectural choice is that memory is inline with the work. The agent that does the work writes the record. No background hook pretending to infer what mattered. No side-channel telemetry that can fail silently. If the work happened, the evidence lives next to the work. That makes the system legible to both humans and models.
How it scales
This scales better than it looks because the protocol is agent-agnostic. It does not depend on Claude-specific features or Codex-specific memory. Any model that can read markdown and follow file conventions can participate. Add a third agent and the rules do not change: read the skills, read the local build log, respect the workspace contract, append what you changed, promote durable lessons.
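The session loop those rules describe can be sketched in a few lines. This is a hypothetical harness, not a real API; the function names and the `do_work` callback are assumptions, while the file names mirror the conventions in the text:

```python
from pathlib import Path

def run_session(project: Path, skills_dir: Path, do_work) -> str:
    """One protocol-compliant session for any agent that can read markdown."""
    skills = [p.read_text() for p in sorted(skills_dir.glob("*.md"))]  # 1. read shared skills
    log_path = project / "_BUILD_LOG.md"
    history = log_path.read_text() if log_path.exists() else ""        # 2. read local build log
    contract = (project / "_WORKSPACE.md").read_text()                 # 3. respect workspace contract
    summary = do_work(skills, history, contract)                       # 4. do the work
    with log_path.open("a", encoding="utf-8") as f:                    # 5. append what changed
        f.write(summary + "\n")
    return summary
```

Nothing here is model-specific, which is the point: a third agent runs the same five steps.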
Why local wins
It also scales because the memory is local before it becomes global. Project-specific facts stay in the project. Cross-project judgment gets promoted into skills only when it proves reusable. That prevents the shared system from filling up with noise. The build log is the event stream. The skills library is the compressed knowledge layer.
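The promotion path can be made mechanical rather than aspirational. A minimal sketch, assuming entries carry the literal line "Skill candidate: Yes" under a `## ` heading, would scan every project's build log and surface the queue:

```python
from pathlib import Path

def find_skill_candidates(workspace: Path) -> list[tuple[str, str]]:
    """Return (project, entry heading) pairs flagged for promotion into skills."""
    queue = []
    for log in workspace.glob("*/_BUILD_LOG.md"):
        entry_title = ""
        for line in log.read_text().splitlines():
            if line.startswith("## "):
                entry_title = line[3:].strip()  # remember which session flagged it
            elif "Skill candidate: Yes" in line:
                queue.append((log.parent.name, entry_title))
    return queue
```

Running this across the workspace turns "promote durable lessons" from a good intention into a list someone can actually work through.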
Next layer
If I were formalizing the next version, I would add four things. An agent registry so capabilities are explicit rather than informal. A project manifest so safe write zones, validation commands, and canonical outputs are machine-readable. A promotion queue so skill extraction has visible latency instead of relying on good intentions. Protocol health metrics so the system can detect repeated fixes, stale skills, unresolved open items, and missing log coverage before decay gets normalized.
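Two of those health metrics are easy to prototype from the logs alone. This is a speculative sketch: the `- Fix:` bullet convention and the "same fix twice equals decay" threshold are assumptions, not the author's spec:

```python
from collections import Counter
from pathlib import Path

def protocol_health(workspace: Path) -> dict:
    """Detect repeated fixes and projects missing a build log entirely."""
    fixes = Counter()
    missing_logs = []
    for project in sorted(p for p in workspace.iterdir() if p.is_dir()):
        log = project / "_BUILD_LOG.md"
        if not log.exists():
            missing_logs.append(project.name)  # no log means no memory coverage
            continue
        for line in log.read_text().splitlines():
            if line.lower().startswith("- fix:"):
                fixes[line.split(":", 1)[1].strip().lower()] += 1
    # the same fix appearing twice is a decay signal: a lesson was not promoted
    repeated = sorted(k for k, n in fixes.items() if n > 1)
    return {"repeated_fixes": repeated, "missing_logs": missing_logs}
```

Stale-skill and open-item checks would follow the same pattern: parse the markdown the protocol already produces, so health measurement needs no new telemetry channel.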
How others learn
The most important question is how someone else learns it. Not by reading one huge manifesto. They learn it by moving through layers. First, the public build entry explains the problem and the shape of the solution. Then the longer guide and musing show the reasoning. Then the shared skills index shows the categories of reusable judgment. Then a real project folder shows `_BUILD_LOG.md` and `_WORKSPACE.md` in practice. The pattern has to be visible at every level: explanation, rule, example, repetition.
What this becomes
That is why I think of it as an operating layer rather than a prompt trick. A prompt can make one session better. A protocol can make the next session better. An ecosystem can make the next person better. The point is not “I have two AI tools.” The point is that the work compounds instead of resetting every time a new model or a new collaborator enters the room.
Protocol At A Glance
The minimum viable two-agent protocol
Same operating logic, but with clearer visual lanes for infrastructure, enforcement, and teaching.
Shared skill library
One portable library both agents can read before they start making decisions.
Scoped memory hierarchy
Continuity lives at the layer that changed: Jenn OS, the builds workspace, or one real project.
Promotion path
Project lessons only graduate when they prove reusable outside the local build.
Quality gates
The loop stays honest because outputs still have to survive review, checks, and render proof.
Key Rule
The agent that does the work writes the memory.
No hidden telemetry. No fake learning loop. If the work happened, the evidence should live beside the work.
How Someone Else Learns It
Read the build entry
Read the guide / musing
Open the skills index
Inspect the scoped build log and workspace note for one real layer
Follow the protocol on the next project