PATTERN · Mar 2026

A Working Protocol for Claude and Codex

The problem was not that Claude and Codex were both useful. The problem was that they could not learn from each other. The fix was a simple protocol for shared memory, canonical files, and promotion of lessons into reusable rules.

claude-code · codex · multi-agent · protocol · working-memory

30+ Claude skills: shared workspace library
12 Codex skills: previously private and separate
89 dead signal records: all with blank event fields
1 shared protocol: the thing that made it compound

The Problem

Two brains, zero overlap

Useful work existed in both systems, but the judgment stayed trapped in whichever model discovered it first.

A fake learning loop

Telemetry was recording activity instead of meaning, which made the system feel alive while nothing was actually compounding.

Folder archaeology

When canonical files drift, every new session begins with reconstruction instead of execution.

The System

One loop, not two isolated sessions

Work updates memory. Memory updates the next round of work.

Shared Skills (portable judgment) → Jenn OS Log (operating-layer changes) → ~BUILDS Log (workspace coordination) → Project Log (local implementation memory) → New Session (agent reads before acting) → Better Output (less rediscovery, more compounding)

Why It Scales

Agent-agnostic

Any model can join if it can read markdown and follow file conventions.

Local before global

Project facts stay local. Only proven judgment gets promoted into shared skills.

Visible decay

Repeated fixes, stale skills, and missing logs can be measured instead of guessed.

How Someone Else Learns It

Explanation, then example, then repetition

1

Read the build entry

Understand the problem and the shape of the solution.

2

Read the guide

See the longer reasoning and tradeoffs behind the protocol.

3

Open the skills index

See how reusable judgment is organized across projects.

4

Inspect one real folder

Watch `_BUILD_LOG.md` and `_WORKSPACE.md` make the pattern concrete.

Notes: Read the full write-up
1

Failure surface

The problem showed up in three forms at once. First, two agents were working on the same codebase with separate skill libraries and no shared memory. Second, the “learning loop” that was supposed to capture improvements was effectively fake: 89 signal records, blank event fields, no real transfer of judgment. Third, project folders kept turning into archaeology sites full of version drift, duplicate build environments, and no obvious canonical output.

2

Why it breaks

That combination does not scale. It works only as long as one person can remember who knows what, which file is current, what bug was fixed last week, and which lessons matter. The moment the work gets busy, or another agent joins, or another person touches the project, the whole thing collapses back into rediscovery.

3

The fix

The solution was deliberately low-tech. One shared skills library in `~/Desktop/~Working/skills/`. One append-only `_BUILD_LOG.md` in each project so every session starts by reading what happened and ends by writing what changed. One `_WORKSPACE.md` that defines canonical files, folder rules, and cleanup expectations. One promotion path from “Skill candidate: Yes” to a real reusable skill. That is enough to turn isolated sessions into a system.
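That promotion path can be sketched as a plain scan over build logs. This is a minimal sketch, assuming each log entry begins with a `## ` header and uses the literal `Skill candidate: Yes` marker from the protocol; the exact entry layout is an assumption, not something the write-up prescribes:

```python
import re
from pathlib import Path

def skill_candidates(workspace: Path) -> list[tuple[str, str]]:
    """List (project, entry header) pairs flagged for promotion to shared skills."""
    queue = []
    # Each project keeps an append-only _BUILD_LOG.md at its root.
    for log in workspace.glob("*/_BUILD_LOG.md"):
        text = log.read_text(encoding="utf-8")
        # Split the log into entries on "## " headers (entry shape is assumed).
        for entry in re.split(r"^## ", text, flags=re.M)[1:]:
            if "Skill candidate: Yes" in entry:
                header = entry.splitlines()[0]
                queue.append((log.parent.name, header))
    return queue
```

A queue like this makes skill extraction a visible backlog rather than a good intention.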

4

Inline memory

The important architectural choice is that memory is inline with the work. The agent that does the work writes the record. No background hook pretending to infer what mattered. No side-channel telemetry that can fail silently. If the work happened, the evidence lives next to the work. That makes the system legible to both humans and models.
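The inline-memory rule is small enough to show directly. A minimal sketch: the agent that does the work appends its own record before the session ends. The `_BUILD_LOG.md` name comes from the protocol; the entry fields here are illustrative assumptions:

```python
from datetime import date
from pathlib import Path

def append_build_log(project: Path, agent: str, changed: str, lesson: str,
                     skill_candidate: bool = False) -> None:
    """The agent that does the work writes the memory, next to the work."""
    entry = (
        f"\n## {date.today().isoformat()} {agent}\n"
        f"- Changed: {changed}\n"
        f"- Lesson: {lesson}\n"
        f"- Skill candidate: {'Yes' if skill_candidate else 'No'}\n"
    )
    # Append-only: the log is an event stream, never rewritten in place.
    with (project / "_BUILD_LOG.md").open("a", encoding="utf-8") as f:
        f.write(entry)
```

No background hook, no side-channel telemetry: if this call did not run, there is visibly no record.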

5

How it scales

This scales better than it looks because the protocol is agent-agnostic. It does not depend on Claude-specific features or Codex-specific memory. Any model that can read markdown and follow file conventions can participate. Add a third agent and the rules do not change: read the skills, read the local build log, respect the workspace contract, append what you changed, promote durable lessons.

6

Why local wins

It also scales because the memory is local before it becomes global. Project-specific facts stay in the project. Cross-project judgment gets promoted into skills only when it proves reusable. That prevents the shared system from filling up with noise. The build log is the event stream. The skills library is the compressed knowledge layer.

7

Next layer

If I were formalizing the next version, I would add four things. An agent registry so capabilities are explicit rather than informal. A project manifest so safe write zones, validation commands, and canonical outputs are machine-readable. A promotion queue so skill extraction has visible latency instead of relying on good intentions. Protocol health metrics so the system can detect repeated fixes, stale skills, unresolved open items, and missing log coverage before decay gets normalized.
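Those health metrics need nothing fancier than file stats. A hedged sketch of what "visible decay" could mean in practice, where the 90-day staleness threshold and the flat directory layout are assumptions of mine, not part of the protocol:

```python
import time
from pathlib import Path

STALE_DAYS = 90  # assumed threshold; the write-up does not fix a number

def protocol_health(workspace: Path, skills: Path) -> dict:
    """Measure log coverage and skill staleness instead of guessing at decay."""
    projects = [p for p in workspace.iterdir() if p.is_dir()]
    missing_logs = sorted(p.name for p in projects
                          if not (p / "_BUILD_LOG.md").exists())
    cutoff = time.time() - STALE_DAYS * 86400
    # A skill file untouched past the cutoff is a candidate for review.
    stale_skills = sorted(s.name for s in skills.glob("*.md")
                          if s.stat().st_mtime < cutoff)
    return {
        "log_coverage": 1 - len(missing_logs) / max(len(projects), 1),
        "missing_logs": missing_logs,
        "stale_skills": stale_skills,
    }
```

Run on a schedule, a report like this surfaces repeated fixes and missing coverage before decay gets normalized.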

8

How others learn

The most important question is how someone else learns it. Not by reading one huge manifesto. They learn it by moving through layers. First, the public build entry explains the problem and the shape of the solution. Then the longer guide and musing show the reasoning. Then the shared skills index shows the categories of reusable judgment. Then a real project folder shows `_BUILD_LOG.md` and `_WORKSPACE.md` in practice. The pattern has to be visible at every level: explanation, rule, example, repetition.

9

What this becomes

That is why I think of it as an operating layer rather than a prompt trick. A prompt can make one session better. A protocol can make the next session better. An ecosystem can make the next person better. The point is not “I have two AI tools.” The point is that the work compounds instead of resetting every time a new model or a new collaborator enters the room.

Protocol At A Glance

The minimum viable two-agent protocol

Same operating logic, but with clearer visual lanes for infrastructure, enforcement, and teaching.

01
Shared skill library

One portable library both agents can read before they start making decisions.

~/Desktop/~Working/skills/
02
Scoped memory hierarchy

Continuity lives at the layer that changed: Jenn OS, the builds workspace, or one real project.

~Working -> ~BUILDS -> project _BUILD_LOG.md + _WORKSPACE.md
03
Promotion path

Project lessons only graduate when they prove reusable outside the local build.

Build log -> Skill candidate -> shared skill
04
Quality gates

The loop stays honest because outputs still have to survive review, checks, and render proof.

Review + criteria + provenance + export

Key Rule

The agent that does the work writes the memory.

No hidden telemetry. No fake learning loop. If the work happened, the evidence should live beside the work.

cross-model review · acceptance criteria · provenance checks · render / export checks

How Someone Else Learns It

1

Read the build entry

2

Read the guide / musing

3

Open the skills index

4

Inspect the scoped build log and workspace note for one real layer

5

Follow the protocol on the next project