Skills as Loadable Context
My first real AI project bombed because I delivered percentages instead of recommendations. Now I write down what went wrong and load it into every future build.
I built a tariff exposure analysis that produced a beautiful heatmap of exposure by country. Percentages. Bar charts. The client CFO looked at it and said: "We already know China is expensive. What am I paying you for?"
That was the exposure trap — showing how much something matters without saying what to do about it. Every metric needs to terminate in an action. "Supplier X has High tariff exposure" is decoration. "Supplier X sources from China at 33.4% ETR, current contract is $2.1M/yr, tariff cost $701K, alternative USMCA supplier identified at $2.4M/yr, net savings $401K by switching, recommend renegotiate by Q2" is a deliverable.
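The "terminate in an action" rule can be sketched as a tiny function. This is a hypothetical illustration using the Supplier X numbers from above, not code from the actual pipeline; `recommend` and its parameters are names I made up:

```python
# Hypothetical sketch: a metric that terminates in an action.
# Figures follow the Supplier X example (assumed, not a real API).

def recommend(contract_usd: float, etr: float, alt_contract_usd: float) -> str:
    """Turn a tariff-exposure metric into a switch/stay recommendation."""
    tariff_cost = contract_usd * etr            # e.g. $2.1M * 33.4% = $701K
    landed_cost = contract_usd + tariff_cost    # what the current supplier really costs
    savings = landed_cost - alt_contract_usd    # positive => switching wins
    if savings > 0:
        return (f"Switch: tariff cost ${tariff_cost:,.0f}/yr, "
                f"net savings ${savings:,.0f}/yr. Renegotiate.")
    return f"Stay: alternative costs ${-savings:,.0f}/yr more."

print(recommend(2_100_000, 0.334, 2_400_000))
# → Switch: tariff cost $701,400/yr, net savings $401,400/yr. Renegotiate.
```

The point isn't the code, it's the return type: a sentence with a verb in it, not a percentage.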
I wrote that failure down in a markdown file. Then I wrote down how my boss validates data (she divides two numbers and checks if the ratio matches her expectation — simple arithmetic that catches 80% of errors). Then I wrote down five ways AI adoption projects fail. Each file is 30-50 lines. Each one loads in 2 seconds.
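My boss's validation trick is simple enough to write down as code. A minimal sketch, assuming a 10% tolerance (her actual threshold is in her head, not in any file); `ratio_check` is my name for it:

```python
# Sketch of the boss's sanity check: divide two numbers,
# see if the ratio lands near what she expects. (Hypothetical helper.)

def ratio_check(numerator: float, denominator: float,
                expected: float, tolerance: float = 0.10) -> bool:
    """True if numerator/denominator is within tolerance of expected."""
    ratio = numerator / denominator
    return abs(ratio - expected) / expected <= tolerance

# Tariff cost over contract value should land near the quoted ETR:
ratio_check(701_400, 2_100_000, expected=0.334)  # returns True
```

Two divisions and a comparison, and it catches most of the errors that matter.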
Now before any build, I check the skills index and load the relevant ones. The pipeline reads the skill, and its output format changes. Instead of producing heatmaps, it produces renegotiation playbooks. Instead of presenting a final deck, it delivers incrementally so the stakeholder can sign off at each step.
The system has 10 skills across 5 categories: stakeholders (how specific people validate), domains (how specific problem spaces fail), methodology (how to do a specific kind of work), failure-modes (a specific failure with its fix), and interlocutor (cross-model verification).
CLAUDE.md is a suggestion. Skills are stronger than suggestions — they change what gets built, not just how it looks.