If a number is on the page, it needs a source.
Portfolio metrics look harmless until they drift. One FTI showcase count said 71. Another said 72. Another said 88. An older deck still said 64+. The fix was not proofreading harder. The fix was giving every public number a generator, a shared owner, and a way to fail loudly.
At a glance:
- Public FTI label, backed by a raw count of 131
- Vela musings
- Doldol designs
- Audit surfaces in sync
- Interactive FTI tools
The bug was architectural, not editorial.
I kept noticing a small embarrassment on the site. The portfolio card for the FTI training portal would say one page count. The work page would say another. The case study carried its own variation. A stakeholder deck still had an older number altogether. Every one of those counts had been true at some point. None of them were a trustworthy public answer anymore.
That kind of bug is easy to underestimate because it does not crash the build. It quietly weakens the site instead. The more detailed the project pages become, the more the numbers start doing reputational work. If they look improvised, the craftsmanship underneath the project starts looking improvised too.
So the requirement changed. The question stopped being "what count should I type here?" and became "what system decides this count, and how can I inspect it later?"
Principle
Public metrics should behave more like derived data than marketing copy.
The copy can stay warm. The number itself needs provenance.
Figure 1: Before the fix, one project had five believable counts.
The problem was not that anyone picked a bad number. The problem was that the number had been copied into too many places and stopped having a single owner.
The fix is a tiny supply chain.
I did not need a dashboard empire. I needed a short path from source repo to public card.
The pattern is deliberately boring. Each metric family gets a generator. The generator writes a machine-readable snapshot. Public pages import a shared data module instead of carrying their own literals. Then an audit step checks whether those consumers are still pointing at the canonical source.
For the FTI portal, that means the showcase count now comes from route rules written against the real training portal repo. For Vela, it means the work card reads musings, building entries, tools, and domains from the site structure. For Doldol, it means the portfolio card reads real page counts and the flash design library directly from the client repo.
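The supply chain above can be sketched in a few lines. This is an illustrative TypeScript shape, not the site's actual code: the metric identifier, the `MetricSnapshot` interface, and `generateSnapshot` are all assumed names.

```typescript
// Hypothetical sketch of the generator step: a generator computes a raw
// count from the source repo's rules and writes a machine-readable
// snapshot that public pages consume instead of hard-coded literals.

interface MetricSnapshot {
  metric: string;      // stable identifier, e.g. "fti.showcasePages"
  raw: number;         // the exact count the generator produced
  source: string;      // which repo and rule produced it
  generatedAt: string; // ISO timestamp, for staleness checks
}

function generateSnapshot(
  metric: string,
  raw: number,
  source: string
): MetricSnapshot {
  return { metric, raw, source, generatedAt: new Date().toISOString() };
}

// In a real pipeline the snapshot would be written to disk, e.g.
// fs.writeFileSync("data/metrics.json", JSON.stringify(snapshots, null, 2)),
// and the audit step would read it back.
const snapshot = generateSnapshot(
  "fti.showcasePages",
  131,
  "training portal repo route rules"
);
```

The point of the snapshot is that it carries its own provenance: a card that imports it can always answer "where did this number come from, and when."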
Figure 2: The maintenance loop is small on purpose.
A trustworthy number needs three things.
One
A counting rule
Not just a result. The site needs to remember what is included, what is excluded, and why the public label is rounded the way it is.
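A counting rule like that can be made explicit rather than implied. Here is a minimal sketch, with assumed names throughout (`CountRule`, `publicLabel`, the path patterns); the rounding behavior matches the post's own example, where a raw 131 is published as "130+".

```typescript
// A counting rule that remembers its own provenance, not just its result:
// what is included, what is deliberately excluded, and why.

interface CountRule {
  include: RegExp;  // which paths count toward the metric
  exclude: RegExp[]; // what is left out, visibly and on purpose
  note: string;     // why the rule is shaped this way
}

const ftiShowcaseRule: CountRule = {
  include: /^training\/.+\.html$/,
  exclude: [/draft/, /archive/],
  note: "Public pages only; drafts and archived lessons excluded.",
};

function countPages(paths: string[], rule: CountRule): number {
  return paths.filter(
    (p) => rule.include.test(p) && !rule.exclude.some((rx) => rx.test(p))
  ).length;
}

// The public label rounds down to the nearest ten and appends "+",
// so a raw count of 131 is published as "130+".
function publicLabel(raw: number): string {
  return `${Math.floor(raw / 10) * 10}+`;
}
```

Because the rule is data, the site can render it next to the number: the label and its justification travel together.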
Two
A shared import path
The visible card should import a canonical module. If the same project metric is typed in three components, drift is only a matter of time.
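The shared-import idea is the simplest piece to sketch. The numbers below mirror the examples in this post, but the module shape (`METRICS`, `showcaseLabel`) is an assumption:

```typescript
// One canonical module exports the raw metric; every card derives its
// display string from that export instead of carrying its own literal.

export const METRICS = {
  ftiShowcaseRaw: 131,
  velaProjects: 19,
  doldolDesigns: 234,
} as const;

// Components never type "130+" themselves; they derive it.
export function showcaseLabel(): string {
  return `${Math.floor(METRICS.ftiShowcaseRaw / 10) * 10}+`;
}

// A portfolio card and a case study both call showcaseLabel(), so the
// two surfaces cannot disagree unless the shared module itself changes.
```

The design choice is that drift becomes structurally impossible for any consumer that imports the module; the only remaining failure mode is a consumer that bypasses it, which is exactly what the audit step exists to catch.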
Three
An inspection surface
A private page that says what is in sync, what is drifting, and which files are responsible turns maintenance into a quick glance instead of archaeology.
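The inspection surface can be as small as a comparison between the canonical value and whatever literals the consumer files actually carry. Everything here, including the consumer list and the `DriftReport` shape, is a hypothetical sketch:

```typescript
// Illustrative audit check: compare each consumer's hard-coded metric
// literal against the canonical value and report which files drifted.

interface DriftReport {
  file: string;      // which file is responsible
  found: number;     // the literal that file is carrying
  canonical: number; // the value the generator says is true
  inSync: boolean;
}

function auditConsumers(
  canonical: number,
  consumers: { file: string; found: number }[]
): DriftReport[] {
  return consumers.map(({ file, found }) => ({
    file,
    found,
    canonical,
    inSync: found === canonical,
  }));
}

// A private audit page would render this report: quiet rows for in-sync
// files, loud rows naming exactly which file holds a stale literal.
const report = auditConsumers(131, [
  { file: "components/PortfolioCard.tsx", found: 131 },
  { file: "pages/case-study.tsx", found: 88 },
]);
```

Failing loudly here means the stale `88` is named alongside the file that carries it, which is what turns maintenance into a glance instead of archaeology.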
Live examples
- FTI training pages: 130+ public label
- Raw FTI training count: 131
- Vela project inventory: 19 projects
- Doldol flash library: 234 designs
This is maintenance, but it is also voice.
I care a lot about how these pages look, but the visual confidence only works when the underlying claims feel earned. A clean metric card with a stale number is just a better-designed way to be wrong.
The more I work on the site, the more I think of maintenance as editorial practice. An archive becomes legible when it keeps its receipts. A portfolio becomes more impressive when the numbers can defend themselves. Even a playful tracker or showcase page gets stronger the moment it can explain where its counts came from.
That is the version of automation I trust most: not the kind that hides the work, but the kind that makes the work inspectable.