# Intelligence Layer — Overview
Most terminal UIs are a bag of heuristics. A magic number says “coalesce resizes after 50 ms.” Another magic number says “show this hint after the user idles for 1.5 seconds.” A third says “switch to a full redraw if more than 40% of cells changed.” The numbers are plausible, the authors meant well, and the system still thrashes the moment a workload drifts off the spreadsheet that produced them.
FrankenTUI takes a different posture: every non-trivial decision inside the frame loop is cast as a statistical problem, solved with a principled estimator, and written to a JSONL evidence sink so you can re-run the decision offline. Thresholds are quantiles, schedules are optimal policies, alerts are test martingales. The numbers become consequences of the data, not tuning dials.
This page is the index to that machinery. Each linked page explains one technique, motivates it against the naive alternative, shows the math, and points at the subsystem that consumes it.
## The unifying idea
Five recurring concerns drive the design of every component in this layer:
- Calibrated uncertainty — a TUI never has more than a few hundred samples of anything. Bayesian posteriors (Beta over rates, Normal-Normal over heights) give usable uncertainty from a handful of observations; frequentist point estimates do not.
- Anytime-valid testing — the runtime “peeks” at every frame. A fixed-α test breaks under repeated looks. E-processes and conformal prediction keep their guarantees under continuous monitoring.
- Cost-aware decisions — the cheapest rendering strategy depends on the workload. The diff, presenter, and degradation cascade all pick a strategy by minimizing an explicit cost model rather than applying a static rule.
- Bounded starvation — background effects still need to make progress. Smith’s rule with aging gives an optimal weighted-completion schedule plus a provable wait-time bound.
- Auditable reasoning — every decision emits a `ftui-evidence-v1` JSONL record. If the palette ranked the wrong command, the evidence ledger shows exactly which Bayes factors pushed it up.
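The calibrated-uncertainty point can be made concrete with a Beta-Bernoulli sketch. This is a minimal illustration, not FrankenTUI's actual API; the struct and method names are hypothetical:

```rust
/// Beta posterior over a Bernoulli rate (e.g. "did this frame dirty a row?").
/// Starts from a uniform Beta(1, 1) prior; each observation is one update.
struct BetaRate {
    alpha: f64, // pseudo-count of successes
    beta: f64,  // pseudo-count of failures
}

impl BetaRate {
    fn new() -> Self {
        Self { alpha: 1.0, beta: 1.0 }
    }

    fn observe(&mut self, hit: bool) {
        if hit { self.alpha += 1.0 } else { self.beta += 1.0 }
    }

    /// Posterior mean of the rate.
    fn mean(&self) -> f64 {
        self.alpha / (self.alpha + self.beta)
    }

    /// Posterior variance — shrinks as evidence accumulates, which is
    /// exactly the calibrated uncertainty a point estimate lacks.
    fn variance(&self) -> f64 {
        let n = self.alpha + self.beta;
        self.alpha * self.beta / (n * n * (n + 1.0))
    }
}

fn main() {
    let mut rate = BetaRate::new();
    // Seven observations: five hits, two misses.
    for &hit in &[true, true, false, true, true, false, true] {
        rate.observe(hit);
    }
    println!("mean={:.3} var={:.5}", rate.mean(), rate.variance());
}
```

After only seven samples the posterior already carries a usable spread; a frequentist point estimate of 5/7 says nothing about how much to trust itself.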
Mental model: think of FrankenTUI’s runtime as a control plant instrumented with probes. Bayesian estimators turn noisy probes into posteriors. Change detectors watch posteriors for regime shifts. Conformal layers wrap risk bounds around predictions. Control theory closes the loop on frame budget. Everything upstream writes evidence so the decisions are inspectable after the fact.
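The "control theory closes the loop on frame budget" part of that picture can be sketched as a textbook PI controller. The gains and knob semantics here are assumptions for illustration, not FrankenTUI's actual controller:

```rust
/// PI controller on the frame budget: the proportional term reacts to the
/// current overshoot, the integral term removes steady-state error.
struct PiController {
    kp: f64,       // proportional gain
    ki: f64,       // integral gain
    integral: f64, // accumulated error
}

impl PiController {
    /// Returns a signed adjustment to a quality knob (effect detail,
    /// particle count, ...); negative means "shed load".
    fn step(&mut self, budget_ms: f64, measured_ms: f64) -> f64 {
        let err = budget_ms - measured_ms; // positive = headroom
        self.integral += err;
        self.kp * err + self.ki * self.integral
    }
}

fn main() {
    let mut pi = PiController { kp: 0.5, ki: 0.1, integral: 0.0 };
    // A frame that blew a 16 ms budget yields a negative adjustment.
    let adj = pi.step(16.0, 20.0);
    println!("adjustment = {adj:.2}");
}
```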
## Map of techniques
### Bayesian inference — what to believe
| Page | What it estimates | Where it’s used |
|---|---|---|
| Command-palette ledger | Bayes factors per command | widgets/command-palette |
| Hint ranking | utility per keybinding | Help overlay, focus hints |
| Diff strategy | change rate | render/diff |
| Capability detection | Beta posterior per capability | core/capabilities |
| Height prediction | Normal-Normal posterior per row class | widgets/virtualized-list |
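The Normal-Normal row above boils down to a conjugate update with known measurement noise. A minimal sketch — the struct name, field names, and prior values are illustrative, not the virtualized list's real interface:

```rust
/// Normal-Normal conjugate update for one row class's height, assuming a
/// known measurement-noise variance sigma2. Each measured height pulls the
/// posterior mean toward the data by precision weighting.
struct HeightPosterior {
    mu: f64,     // posterior mean height (cells)
    tau2: f64,   // posterior variance of the mean
    sigma2: f64, // assumed measurement noise variance
}

impl HeightPosterior {
    fn observe(&mut self, height: f64) {
        // Posterior precision is the sum of prior and data precisions.
        let precision = 1.0 / self.tau2 + 1.0 / self.sigma2;
        self.mu = (self.mu / self.tau2 + height / self.sigma2) / precision;
        self.tau2 = 1.0 / precision;
    }
}

fn main() {
    // Vague prior: rows of this class are about 3 cells tall, variance 4.
    let mut post = HeightPosterior { mu: 3.0, tau2: 4.0, sigma2: 1.0 };
    post.observe(5.0);
    // Mean moves toward the measurement; variance shrinks.
    println!("mu={:.2} tau2={:.2}", post.mu, post.tau2);
}
```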
### Change detection — when the world shifts
| Page | What it detects | Where it’s used |
|---|---|---|
| BOCPD | Steady ↔ burst resize regimes | runtime/overview |
| CUSUM | Mouse jitter · allocation drift | Hover stabilizer · budget |
| Alpha-investing | mFDR across concurrent monitors | Runtime alert bus |
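As a reference point for the CUSUM row, a minimal one-sided CUSUM looks like this (the parameters and names are illustrative, not the hover stabilizer's actual values):

```rust
/// One-sided CUSUM: accumulates excursions of a signal above its target,
/// minus an allowance k; alarms when the running sum crosses threshold h.
struct Cusum {
    s: f64, // running statistic
    k: f64, // allowance (roughly half the shift worth detecting)
    h: f64, // decision threshold
}

impl Cusum {
    fn step(&mut self, x: f64, target: f64) -> bool {
        // Clamp at zero: in-control samples cannot build negative credit.
        self.s = (self.s + x - target - self.k).max(0.0);
        self.s > self.h
    }
}

fn main() {
    let mut c = Cusum { s: 0.0, k: 0.5, h: 4.0 };
    // Quiet signal: statistic stays pinned at zero.
    for _ in 0..3 { assert!(!c.step(0.0, 0.0)); }
    // Sustained +2 shift: s climbs by 1.5 per step, alarms on the third.
    assert!(!c.step(2.0, 0.0));
    assert!(!c.step(2.0, 0.0));
    assert!(c.step(2.0, 0.0));
    println!("alarm raised");
}
```

A single +2 spike followed by quiet decays back toward zero, which is why CUSUM separates transient jitter from a real drift.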
### Conformal prediction — calibrated risk
| Page | What it bounds | Where it’s used |
|---|---|---|
| Vanilla conformal | Residual quantile alerts | Generic threshold gating |
| Mondrian conformal | Per-bucket frame-time upper CI | operations/frame-budget |
| Rank confidence | Gap-based p-value on top-k | Palette tie-breaking |
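The vanilla-conformal row reduces to a finite-sample quantile of calibration residuals. A minimal split-conformal sketch (the function name is hypothetical, and exchangeability of the residuals is assumed):

```rust
/// Split conformal: with n calibration residuals, adding the
/// ceil((n+1)(1-alpha))-th smallest residual to a prediction covers a fresh
/// point with probability at least 1 - alpha.
fn conformal_margin(mut residuals: Vec<f64>, alpha: f64) -> f64 {
    residuals.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let n = residuals.len();
    // The (n+1) correction is what makes the guarantee finite-sample exact.
    let rank = (((n as f64 + 1.0) * (1.0 - alpha)).ceil() as usize).min(n);
    residuals[rank - 1]
}

fn main() {
    // Nine calibration residuals, alpha = 0.1: rank ceil(10 * 0.9) = 9,
    // so the margin is the largest residual.
    let residuals: Vec<f64> = (1..=9).map(|i| i as f64).collect();
    let margin = conformal_margin(residuals, 0.1);
    println!("upper bound = prediction + {margin}");
}
```

The Mondrian variant in the table runs this same computation per bucket (e.g. per widget class) so a slow bucket cannot hide behind a fast one.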
### Cross-cutting machinery
## Heuristic vs. principled — a side-by-side
**Heuristic TUI**

```rust
// The usual story.
if change_pct > 0.4 {
    full_redraw();
} else {
    dirty_row_diff();
}
```

One magic number. No defence when terminals get wider, scrollback gets shorter, or the workload starts drifting. Tuning drift shows up as p99 regressions weeks later.
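The principled version replaces the 0.4 with an explicit cost comparison driven by the estimated change rate. A minimal sketch with an illustrative linear cost model — the per-row costs and function name are assumptions, not FrankenTUI's actual interface:

```rust
/// Choose the cheaper expected strategy under the current estimate of the
/// change rate, instead of hard-coding a 40% cutoff. Costs are per-row and
/// purely illustrative.
fn pick_strategy(change_rate: f64, rows: f64) -> &'static str {
    let full_cost_per_row = 1.0;  // unconditional rewrite of every row
    let scan_cost_per_row = 0.1;  // a diff must at least inspect each row
    let dirty_cost_per_row = 1.5; // dirty rows cost extra to patch

    let full = rows * full_cost_per_row;
    let diff = rows * scan_cost_per_row + change_rate * rows * dirty_cost_per_row;
    if diff <= full { "dirty_row_diff" } else { "full_redraw" }
}

fn main() {
    // The break-even point falls out of the cost model —
    // (1.0 - 0.1) / 1.5 = 0.6 here — instead of being a hand-tuned 0.4,
    // and it moves automatically when the costs are re-measured.
    println!("{}", pick_strategy(0.2, 100.0));
    println!("{}", pick_strategy(0.9, 100.0));
}
```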
## Evidence sink — the ground truth for every decision
Every page in this section ends with a “How to debug” subsection that names the JSONL event emitted by that subsystem. The envelope is `ftui-evidence-v1` and writes land in `FTUI_EVIDENCE_SINK` (stdout or a file).
```shell
# Enable evidence to a file
FTUI_EVIDENCE_SINK=/tmp/ftui.jsonl cargo run -p ftui-demo-showcase

# Inspect the last 5 diff decisions:
jq -c 'select(.schema=="diff_decision")' /tmp/ftui.jsonl | tail -5
```

See runtime/evidence-sink for the envelope schema and the full event catalog in reference/telemetry-events.
## How to read this section
### Start with the overview you’re on

You’re here. Skim the tables above to find the subsystem that’s bothering you or that interests you.
### Follow one page through to the evidence
Each page walks the same arc: what goes wrong naively → mental model → formula → Rust interface → how to debug. Read one end-to-end before fanning out — the sink event name gives you a scalpel for live investigation.
### Cross-link to the consuming subsystem

Every technique is anchored in a consumer. If you are debugging a resize glitch, you want BOCPD and runtime/overview. If you are tuning p99 frame time, you want Mondrian conformal and operations/frame-budget.
### Keep the math-at-a-glance table open as a cheat sheet
intelligence/math-at-a-glance is the
single table that maps every technique to a one-line formula and a
performance impact. Use it to orient yourself before diving into a page.
## What this layer does not promise
Principled ≠ magical. A calibrated posterior tells you what the data says; it cannot invent signal that is not in the data. If your palette corpus is unrepresentative, Bayes factors will happily rank the wrong command. If your calibration window is shorter than the longest regime, conformal bounds will over-cover and then catastrophically under-cover. The evidence sink exists precisely so you can catch these cases — stare at the ledger when a decision surprises you.
## Cross-references

- runtime/evidence-sink — where every decision lands as JSONL.
- operations/frame-budget — how the conformal + PI + degradation pipeline cooperates.
- reference/telemetry-events — full schema of every emitted event.