TOOL · Mar 2026

Six Data Explorers from Scratch

FTI economists were spending hours navigating BLS and Census websites to pull public data for background research. I built six single-file tools that do the same thing in seconds.

tools · data-explorers · cross-model · public-data · consulting

The problem was mundane but expensive: every background research memo at FTI starts with an analyst manually pulling data from federal websites. Census population tables, FRED economic indicators, BLS employment stats, CFPB complaint data, EDGAR financial filings, BTS transportation metrics. Each site has its own query interface, its own export format, its own quirks. An experienced analyst can pull a county-level Census table in 20 minutes. A junior analyst takes an hour and sometimes grabs the wrong vintage.

So I built six self-contained HTML data explorers — one per source. Each file is a single HTML document with all the data embedded as JavaScript objects. No server. No API calls at runtime. No login. You open the file and the data is there. The architecture is deliberately simple: an `el()` DOM helper handles all rendering, SheetJS handles Excel export, and every explorer follows the same pattern. Census is 1,770 lines. FRED is 1,961. The biggest is under 2,000.
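The single-file idea can be sketched in a few lines. The schema and values below are illustrative, not the actual embedded data; the point is that everything lives in one JS object, so lookups are synchronous and work offline.

```javascript
// Hypothetical sketch of the single-file pattern: all data ships inside
// the HTML as a plain JS object. No fetch(), no API key, no server.
// Schema and rows here are illustrative, not the real embedded dataset.
const EMBEDDED = {
  vintage: "example vintage",
  rows: [
    { fips: "06037", name: "Los Angeles County, CA" },
    { fips: "17031", name: "Cook County, IL" },
  ],
};

// A lookup is just a synchronous scan of the embedded rows.
function lookupByFips(fips) {
  return EMBEDDED.rows.find((r) => r.fips === fips) ?? null;
}
```

The trade-off is deliberate: the data is frozen at build time (hence the vintage labels on the dashboard), in exchange for zero runtime dependencies.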

The feature I’m proudest of is cross-variable analysis. Each explorer has a tab where you can compare two variables on dual-axis time charts, generate scatter plots with collision-free labels, and see Pearson correlation coefficients. EDGAR and CFPB have derived metrics — net margin, debt ratio, year-over-year growth — that would take an analyst 30 minutes to calculate manually. BTS has route-level scatter with linear regression trend lines. These aren’t dashboards for executives. They’re workbenches for analysts.
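The correlation piece is standard math and small enough to show in full. This is a generic Pearson implementation, not the explorers' actual source, but any version of the feature reduces to something like it:

```javascript
// Pearson correlation coefficient between two equal-length numeric series.
// Returns a value in [-1, 1]; NaN if either series has zero variance.
function pearson(xs, ys) {
  const n = xs.length;
  const mean = (a) => a.reduce((s, v) => s + v, 0) / n;
  const mx = mean(xs);
  const my = mean(ys);
  let num = 0, dx2 = 0, dy2 = 0;
  for (let i = 0; i < n; i++) {
    const dx = xs[i] - mx;
    const dy = ys[i] - my;
    num += dx * dy;
    dx2 += dx * dx;
    dy2 += dy * dy;
  }
  return num / Math.sqrt(dx2 * dy2);
}
```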

The adversarial review was the validation moment. I had ChatGPT review the Census Explorer code, and it found two bugs Claude had missed: a silent column drop when a category had no data, and a missing NAME column guard that would have crashed on Alaska borough entries. Both would have produced plausible-looking output with wrong numbers. No error, no warning, just a table that looks correct but has silently dropped rows. That's the worst kind of bug in consulting: it ships.
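To make the failure mode concrete, here is a hypothetical sketch of the kind of guard the review called for. The function name, the header-plus-array-rows shape, and the specific checks are my assumptions for illustration, not the shipped fix; the principle is to fail loudly rather than misalign:

```javascript
// Hypothetical guard sketch (assumed shape: a header row of column names,
// then data rows as parallel arrays, as the Census API returns them).
// Rows that don't align with the header are dropped *loudly*, and a
// missing NAME column aborts instead of producing misaligned output.
function rowsToObjects(header, rows) {
  const nameIdx = header.indexOf("NAME");
  if (nameIdx === -1) {
    throw new Error("NAME column missing from header; refusing to guess alignment");
  }
  const dropped = [];
  const kept = rows.filter((row, i) => {
    const ok = row.length === header.length && row[nameIdx] != null;
    if (!ok) dropped.push(i);
    return ok;
  });
  if (dropped.length > 0) {
    console.warn(`Dropped ${dropped.length} malformed row(s) at indices:`, dropped);
  }
  return kept.map((row) => Object.fromEntries(header.map((h, j) => [h, row[j]])));
}
```

The warning is the whole point: the original bug produced a clean-looking table, and the fix is to make every dropped row visible.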

Every explorer has expandable provenance panels showing the exact source query, a worked example of the calculation, and a URL you can click to verify against the original federal website. File import accepts CSV and Excel with auto-column detection per data model. A unified dashboard (`index.html`) shows all six tools with capability pills and data vintage so you know exactly how fresh the embedded data is.
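Auto-column detection on import can be sketched as header matching against each data model's expected fields. The alias map and normalization rule below are my assumptions, not the explorers' actual code, but they show the shape of the technique:

```javascript
// Hedged sketch of auto-column detection: normalize imported headers and
// match them against per-field alias lists from the data model.
// The alias map passed in is hypothetical, not the shipped schema.
const normalize = (h) => h.toLowerCase().replace(/[^a-z0-9]/g, "");

function detectColumns(headers, expected) {
  // expected: { canonicalField: [list of header aliases], ... }
  const mapping = {};
  for (const [field, aliases] of Object.entries(expected)) {
    const wanted = aliases.map(normalize);
    const idx = headers.findIndex((h) => wanted.includes(normalize(h)));
    if (idx !== -1) mapping[field] = idx; // unmatched fields are simply absent
  }
  return mapping;
}
```

Normalizing both sides means "State FIPS", "state_fips", and "STATE-FIPS" all resolve to the same field, which is most of what "auto-detection" has to handle in practice.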

// Each explorer follows the same pattern:
// 1. Data embedded as JS objects (no API calls needed)
// 2. el() DOM helper for all rendering
// 3. Cross-variable analysis tab
// 4. Provenance detail panels
// 5. Excel export via SheetJS
// 6. File import with auto-column detection

// Six tools:
// Census_Explorer.html  (1770 lines)
// FRED_Explorer.html    (1961 lines)
// BLS_Explorer.html     (1846 lines)
// CFPB_Explorer.html    (1400 lines)
// EDGAR_Explorer.html   (1715 lines)
// BTS_Explorer.html     (903 lines)
// index.html            (265 lines - unified dashboard)