When the Warning Is Wrong but the Save Is Real
Task Command showed a cloud-sync warning even though a deleted item stayed gone after refresh. The failure was not the task board alone. It was the proof model around the task board.
- warning shown, task still gone after refresh
- Oura and Strava back on Supabase
- after state write, before confirmation path recovered
- strongest proof wins
The interface and the data were saying different things
What looked broken
- Warning stayed red (cloud sync warning): the board said it failed to confirm a cloud save, and system health still reported Task Command as stale.
- Task still disappeared (observed persistence): a deleted item was still gone after refresh, which meant the board state had landed somewhere durable enough to reload.
- Health model trusted the wrong layer (ledger over state): the health route treated an old cloud-save history timestamp as stronger evidence than the newer Supabase-backed board state.
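The backwards trust ordering can be reduced to a few lines. This is an illustrative sketch, not the actual Jenn OS code: the `BoardEvidence` shape, the field names, and the staleness window are all assumptions.

```typescript
// Illustrative sketch of the flawed health check. The BoardEvidence shape,
// field names, and staleness window are assumptions, not the real code.
type BoardEvidence = {
  stateLastSaved: number; // ms timestamp of the Supabase-backed board state
  cloudLastSaved: number; // ms timestamp of the last confirmed cloud-save history entry
};

const STALE_AFTER_MS = 5 * 60 * 1000; // hypothetical staleness window

// The bug in miniature: health consults only the ledger timestamp, so a board
// whose state is fresh still reads as stale whenever the history trail lags.
function buggyHealth(e: BoardEvidence, now: number): "ok" | "stale" {
  return now - e.cloudLastSaved > STALE_AFTER_MS ? "stale" : "ok";
}
```

With a `stateLastSaved` one second old and a `cloudLastSaved` nine minutes old, this check reports stale, even though the stronger artifact says the board is live.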
The bug showed up because the system was checked from more than one angle
How it was caught
- Live user report (behavior beat copy): the trigger was simple. The warning said one thing; the board behavior said another.
- Health endpoint (/api/task-command/health): the endpoint showed the board state was newer than the last confirmed cloud-save history entry.
- State endpoint (/api/task-command/state): a fresh lastSaved timestamp proved the board state itself was updating even while cloudLastSaved lagged.
- Direct POST repro (500 after save path): a live POST to the state route reproduced the exact failure. The state save could land, then the confirmation/history step could still throw.
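The repro shape can be written down in miniature. Everything here is a hypothetical stand-in (`writeState`, `appendSaveHistory`, `buggySaveRoute`); it only reproduces the failure pattern: a durable state write followed by a throwing history step, collapsed into a single catch.

```typescript
// Hypothetical reduction of the failure shape; these are illustrative
// stand-ins, not the real route handlers.
type SaveResult = { status: number; body: string };

function writeState(_board: unknown): void {
  // succeeds: the Supabase-backed state lands and survives refresh
}

function appendSaveHistory(_entry: unknown): void {
  // the follow-on bookkeeping step fails
  throw new Error("history mirror unavailable");
}

// One try/catch around both steps means a failure in the non-essential
// history append converts a successful state write into a 500.
function buggySaveRoute(board: unknown): SaveResult {
  try {
    writeState(board); // durable write: this is why the deletion persisted
    appendSaveHistory({ at: Date.now() }); // throws
    return { status: 200, body: "saved" };
  } catch {
    return { status: 500, body: "cloud save failed" }; // false alarm
  }
}
```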
The save path and the proof path now fail more honestly
What the fix changed
- State outranks ledger (evidence hierarchy): a fresh Supabase-backed task state now counts as the effective confirmed cloud save when the history trail is behind.
- History is non-fatal (no false 500): the route no longer turns a successful task-state write into a hard failure just because the save-history mirror could not update cleanly.
- Health copy got sharper (no crying wolf): the health layer now distinguishes a truly stale cloud board from a bookkeeping trail that is catching up.
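Taken together, the three changes might look something like this sketch. The names (`effectiveCloudLastSaved`, `health`, `saveRoute`) and the staleness window are assumptions about shape, not the actual implementation.

```typescript
// Hedged sketch of the repaired ordering; names and window are illustrative.
type Evidence = { stateLastSaved: number; cloudLastSaved: number };

const STALE_AFTER_MS = 5 * 60 * 1000; // hypothetical staleness window

// State outranks ledger: the effective confirmed save is the strongest proof.
function effectiveCloudLastSaved(e: Evidence): number {
  return Math.max(e.stateLastSaved, e.cloudLastSaved);
}

// Health now distinguishes a truly stale board from a ledger catching up.
function health(e: Evidence, now: number): "fresh" | "ledger-catching-up" | "stale" {
  if (now - effectiveCloudLastSaved(e) > STALE_AFTER_MS) return "stale";
  return e.cloudLastSaved < e.stateLastSaved ? "ledger-catching-up" : "fresh";
}

// History is non-fatal: a mirror failure is contained, not escalated to a 500.
function saveRoute(writeState: () => void, appendHistory: () => void): number {
  writeState(); // if this throws, the save genuinely failed
  try {
    appendHistory();
  } catch {
    // degrade honestly: state landed, only the bookkeeping trail lagged
  }
  return 200;
}
```

The design choice is that `saveRoute` only fails when the state write itself fails; a history failure is contained and can be surfaced as a soft flag instead of a hard error.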
This incident mattered because it exposed a different kind of trust bug. The board was not simply failing to save. It was saving state strongly enough for the deletion to persist across refresh, while the warning system still claimed cloud confirmation had failed.
The contradiction is what made it useful. A user action on the live board removed an item. The UI then showed a cloud-sync warning. After refresh, the item was still gone. That is the kind of mismatch Jenn OS should treat as a priority signal, because it means the explanatory layer and the operating layer are no longer aligned.
The way the bug was caught is exactly the workflow Jenn OS needs more of. First came the live observation: "the warning says save failed, but the task is still gone." Then came the endpoint checks. The health route showed Task Command as stale because cloudLastSaved was older than the board state. The state route showed a fresh lastSaved timestamp. A direct POST repro made the shape of the failure clear: the task-state write could succeed, then the follow-on history or confirmation path could still fail and produce a 500.
That is not just a Task Command bug. It is a proof-model bug. The system was trusting a weaker artifact, the save-history ledger, over a stronger one, the actual Supabase-backed board state. In a system that is supposed to tell the truth about itself, that ordering is backwards.
The fix was not to hide the warning. The fix was to make the evidence hierarchy explicit. A fresh Supabase-backed board state now counts as the effective confirmed cloud save if the history trail is behind. The save-history mirror is still useful, but it is no longer allowed to turn a successful state write into a false hard failure. The route now degrades more honestly and the health summary prefers the strongest live proof available.
This is the continuing Jenn OS lesson: not all evidence is equal, and the system has to know that. Logs are useful. Ledgers are useful. Summaries are useful. But when a stronger source contradicts a weaker one, the stronger source wins. Otherwise the operating layer becomes theater: a dashboard that sounds rigorous while misreporting what actually happened.
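The rule itself is mechanical enough to write down. A minimal sketch, with hypothetical layer names, ranks, and a hypothetical `Claim` shape:

```typescript
// Sketch of "strongest proof wins: state > ledger > summary".
// Layer names, ranks, and the Claim shape are all assumptions.
type Layer = "state" | "ledger" | "summary";
const RANK: Record<Layer, number> = { state: 3, ledger: 2, summary: 1 };

type Claim = { layer: Layer; saved: boolean };

// Take the verdict of the strongest layer, and surface any disagreement
// instead of silently letting a weaker layer override a stronger one.
function resolve(claims: Claim[]): { saved: boolean; disagreement: boolean } {
  const strongest = claims.reduce((a, b) => (RANK[a.layer] >= RANK[b.layer] ? a : b));
  const disagreement = claims.some(c => c.saved !== strongest.saved);
  return { saved: strongest.saved, disagreement };
}
```

In this incident's terms: the ledger claimed the save failed, the state claimed it succeeded, so the report should be "saved, with a disagreement to repair" rather than a red warning.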
It also says something about blog structure. A useful build post should not just tell the story in order. It should separate the operating facts from the interpretation. Snapshot first. Contradiction second. Root cause third. Fix fourth. Rule fifth. The details still matter, but they need a hierarchy so the reader can act before they read everything.
INCIDENT SHAPE
user deletes task -> board shows "cloud sync warning" -> refresh keeps deletion -> state exists, explanation is wrong

WHAT FAILED
state write could succeed -> save-history / confirmation path could still fail -> route returned 500 -> health trusted old cloudLastSaved ledger -> UI cried wolf

FIX
fresh Supabase-backed board state -> counts as strongest cloud proof -> health uses effective cloud timestamp -> save-history mirror becomes non-fatal

JENN OS RULE
strongest proof wins: state > ledger > summary copy
if the evidence layers disagree, report the disagreement and repair the ordering