Command palette
The command palette is FrankenTUI’s flagship search widget, and it is one of the few places in the codebase where the intelligence layer is visible at the widget surface. Instead of a hand-tuned fuzzy-match score, the palette ranks candidates by posterior probability computed from a Bayesian evidence ledger — each piece of evidence (match type, word boundary, position, gap penalty, tag match, title length) contributes a Bayes factor, and the product plus a prior yields the final score.
Because the scoring is explicit, every ranked result carries its ledger. You can dump the ledger to JSONL, audit it in a debugger, and reason about why one candidate outranked another — no opaque heuristic black box.
The maths behind this page is covered in detail on bayesian-inference / command-palette-ledger (the intelligence-layer companion). This page focuses on the widget.
What the palette is
CommandPalette is a modal-style overlay that:
- Takes a query string from a text input.
- Scores every candidate command against the query.
- Ranks candidates by posterior probability.
- Computes a rank-stability indicator (via conformal prediction).
- Renders the top N results.
Sources:
- Widget: command_palette/mod.rs
- Scorer: command_palette/scorer.rs
The probability model
The scorer treats “is this candidate relevant?” as a Bayesian hypothesis:
Posterior odds convert to a probability in [0, 1] via

p = odds / (1 + odds)

That probability is the palette score. Results are sorted descending.
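As a minimal sketch in plain Rust (not the library's code), the odds-to-probability mapping is:

```rust
/// Convert posterior odds to a probability in [0, 1].
/// Illustrative sketch of the mapping described above.
fn odds_to_probability(odds: f64) -> f64 {
    odds / (1.0 + odds)
}

fn main() {
    // 9:1 prior odds (a Prefix match) correspond to p = 0.9.
    println!("{}", odds_to_probability(9.0));
}
```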
Prior odds from match type
Every candidate is first classified by MatchType
(scorer.rs:55).
The match type sets the prior odds:
| MatchType | Prior odds | p (approx) | When it fires |
|---|---|---|---|
| Exact | 99:1 | 0.99 | Full string match |
| Prefix | 9:1 | 0.90 | Query is a prefix of the title |
| WordStart | 4:1 | 0.80 | Query matches word-start boundaries |
| Substring | 2:1 | 0.67 | Contiguous substring anywhere |
| Fuzzy | 1:3 | 0.25 | Non-contiguous character match |
| NoMatch | 0 | 0.00 | Rejected |
The MatchType::prior_odds() table lives at
scorer.rs:77–85.
These priors are not arbitrary — they encode the empirical observation that exact matches are almost always what the user wants, and fuzzy matches need independent corroborating evidence before they can outrank a substring.
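A hypothetical mirror of that table in code (the real table lives at scorer.rs:77–85; the names and structure here are illustrative):

```rust
/// Illustrative stand-in for the scorer's MatchType prior-odds table.
#[derive(Clone, Copy)]
enum MatchType { Exact, Prefix, WordStart, Substring, Fuzzy, NoMatch }

impl MatchType {
    fn prior_odds(self) -> f64 {
        match self {
            MatchType::Exact => 99.0,      // 99:1 -> p ~ 0.99
            MatchType::Prefix => 9.0,      // 9:1  -> p = 0.90
            MatchType::WordStart => 4.0,   // 4:1  -> p = 0.80
            MatchType::Substring => 2.0,   // 2:1  -> p ~ 0.67
            MatchType::Fuzzy => 1.0 / 3.0, // 1:3  -> p = 0.25
            MatchType::NoMatch => 0.0,     // rejected outright
        }
    }
}

fn main() {
    let odds = MatchType::Prefix.prior_odds();
    // Prior probability before any evidence factors are applied.
    println!("{}", odds / (1.0 + odds));
}
```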
Evidence factors
After the prior is set, the ledger accumulates per-evidence Bayes factors.
Each factor multiplies the odds. Representative factors
(scorer.rs:118–144):
| Evidence | Typical Bayes factor | Intuition |
|---|---|---|
| WordBoundary | ≈ 2.0 | The match aligns with a word start |
| Position | varies | Earlier matches are more informative |
| GapPenalty | < 1.0 | Gaps between matched characters reduce confidence |
| TagMatch | ≈ 3.0 | Query also matches a metadata tag (category, alias) |
| TitleLength | varies | Shorter titles are more specific |
The final posterior is

posterior_odds = prior_odds × ∏ BF_i
posterior = posterior_odds / (1 + posterior_odds)
and lives in Evidence::posterior_probability
(scorer.rs:238):
```rust
pub fn posterior_probability(&self) -> f64 {
    let prior = self.prior_odds().unwrap_or(1.0);
    let bf: f64 = self.entries.iter()
        .filter(|e| e.kind != EvidenceKind::MatchType)
        .map(|e| e.bayes_factor)
        .product();
    let posterior_odds = prior * bf;
    posterior_odds / (1.0 + posterior_odds)
}
```

JSONL ledger output
Because every factor is explicit, the whole ledger serialises cleanly. The
to_jsonl method at
scorer.rs:260
emits one line per candidate:
```json
{"title":"Open file...","match_type":"Prefix","prior_odds":9.0,
 "entries":[
   {"kind":"WordBoundary","bayes_factor":2.0,"rationale":"matched at word start"},
   {"kind":"Position","bayes_factor":0.75,"rationale":"match at pos 2"},
   {"kind":"TitleLength","bayes_factor":1.3,"rationale":"short title"}
 ],
 "posterior":0.946}
```

Paste that into a log and you have a complete audit trail of why this
candidate was ranked where it was. The
DecisionCard widget can render it in-app.
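To see the numbers line up, here is a dependency-free Rust sketch that rebuilds the ledger above and emits the same JSONL shape. The struct and method names are illustrative, not scorer.rs's actual types. Note the factors multiply out to 9.0 × 2.0 × 0.75 × 1.3 = 17.55 odds, giving a posterior of about 0.946:

```rust
// Illustrative stand-in for the scorer's evidence ledger; field and
// method names mirror the docs, not necessarily scorer.rs exactly.
struct Entry { kind: &'static str, bayes_factor: f64, rationale: &'static str }

struct Ledger {
    title: &'static str,
    match_type: &'static str,
    prior_odds: f64,
    entries: Vec<Entry>,
}

impl Ledger {
    fn posterior(&self) -> f64 {
        let bf: f64 = self.entries.iter().map(|e| e.bayes_factor).product();
        let odds = self.prior_odds * bf;
        odds / (1.0 + odds)
    }

    /// One JSONL line per candidate, hand-rolled to stay dependency-free.
    fn to_jsonl(&self) -> String {
        let entries: Vec<String> = self.entries.iter()
            .map(|e| format!(
                r#"{{"kind":"{}","bayes_factor":{:?},"rationale":"{}"}}"#,
                e.kind, e.bayes_factor, e.rationale))
            .collect();
        format!(
            r#"{{"title":"{}","match_type":"{}","prior_odds":{:?},"entries":[{}],"posterior":{:.3}}}"#,
            self.title, self.match_type, self.prior_odds,
            entries.join(","), self.posterior())
    }
}

fn example_ledger() -> Ledger {
    Ledger {
        title: "Open file...",
        match_type: "Prefix",
        prior_odds: 9.0,
        entries: vec![
            Entry { kind: "WordBoundary", bayes_factor: 2.0, rationale: "matched at word start" },
            Entry { kind: "Position", bayes_factor: 0.75, rationale: "match at pos 2" },
            Entry { kind: "TitleLength", bayes_factor: 1.3, rationale: "short title" },
        ],
    }
}

fn main() {
    println!("{}", example_ledger().to_jsonl());
}
```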
Conformal rank confidence
Posterior probability tells you how relevant each candidate is on its own. It does not tell you whether the top result is meaningfully ahead of the runner-up. A query with two candidates at 0.42 and 0.41 is a dead heat; one at 0.90 and 0.55 is decisive.
ConformalRanker
(scorer.rs:973)
adds a RankConfidence to each item with a RankStability tag:
| Stability | Meaning |
|---|---|
| Stable | Confidence interval clearly separates from the neighbours |
| Marginal | Intervals overlap but medians are ordered |
| Unstable | Overlap so severe that the ranking could swap |
The palette uses this to decide whether to commit to a top result or ask the user for more characters. It is also what the VoiDebugOverlay renders when you want to inspect how confident the palette is.
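A toy classifier in plain Rust, assuming each candidate's confidence interval is given as a (lo, hi) pair; this is illustrative only, not ConformalRanker's actual logic:

```rust
#[derive(Debug, PartialEq)]
enum RankStability { Stable, Marginal, Unstable }

/// Hypothetical classifier: compare the top candidate's interval
/// against the runner-up's and tag the ranking.
fn classify(top: (f64, f64), next: (f64, f64)) -> RankStability {
    if top.0 > next.1 {
        // Intervals do not overlap at all: the ranking is safe.
        RankStability::Stable
    } else {
        // Intervals overlap: ordered midpoints keep it Marginal,
        // otherwise the ranking could swap.
        let top_mid = (top.0 + top.1) / 2.0;
        let next_mid = (next.0 + next.1) / 2.0;
        if top_mid > next_mid { RankStability::Marginal } else { RankStability::Unstable }
    }
}

fn main() {
    // 0.90 vs 0.55 style gap: clearly separated intervals.
    println!("{:?}", classify((0.85, 0.95), (0.50, 0.60)));
}
```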
Worked example
A user types op file into the palette against these candidates:
| Candidate | MatchType | Key factors | Posterior |
|---|---|---|---|
| Open File | WordStart | WordBoundary×2, TitleLength↑ | 0.93 |
| Open Folder | WordStart | WordBoundary×2, no file tag | 0.84 |
| Compile | Fuzzy | Gap penalty, position mid-string | 0.17 |
The top two are both WordStart matches, but “Open File” wins because the
literal file also matches a tag (BF ≈ 3.0). The rank confidence shows
Stable — the margin is ~0.09 on posterior scale, well above overlap.
Integration
```rust
use ftui_widgets::command_palette::CommandPalette;

pub struct Model {
    pub palette: CommandPalette,
    pub palette_open: bool,
}

impl Model {
    fn open_palette(&mut self) {
        self.palette_open = true;
        self.palette.reset();
    }

    // Not the `Model` trait `update` — this is a plain helper that takes
    // a raw `Event`. The real trait method takes `Self::Message` and is
    // typically implemented by bridging `Event` through a `From<Event>`
    // impl on your `Msg` enum, as shown in [hello-tick](/getting-started/hello-tick).
    fn handle_event(&mut self, event: Event) {
        if self.palette_open {
            if self.palette.handle_event(&event) {
                if let Some(cmd) = self.palette.chosen() {
                    execute(cmd);
                    self.palette_open = false;
                }
            }
        }
    }

    fn view(&self, frame: &mut ftui_render::frame::Frame) {
        render_main(frame);
        if self.palette_open {
            self.palette.render(frame);
        }
    }
}
```

You typically pair this with a modal stack push so focus is trapped to the palette input.
Performance
Scoring N candidates is O(N × L) where L is average query length. The
palette’s underlying index is an adaptive radix tree
(adaptive_radix.rs)
so prefix matches short-circuit to O(L). With typical command sets (100 –
10,000 entries), the palette updates in sub-millisecond time per keystroke
on a cold cache, well inside a 16ms frame budget.
Pitfalls
- Mixing scoring systems. If you extend the palette with external scores, convert them to Bayes factors (log-odds, not raw weights). Multiplying a BF by a heuristic score silently re-weights the whole posterior.
- Forgetting tag evidence. Commands without tags can’t benefit from the ~3.0 factor; their ranking will lag semantically similar tagged commands.
- Showing all results. At 10,000 candidates, even a sorted list is overwhelming. The palette caps visible results (default 20); pair it with RankStability to decide where to cut off.
- Running the ranker blind on every keystroke. The palette caches the last query's trie cursor, so incremental typing updates quickly. Invalidate the cache only when the query is cleared.
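For the first pitfall, one reasonable convention (an assumption for illustration, not the palette's API) converts an external score in (0, 1) into a Bayes factor by taking its odds relative to a neutral baseline, so that a baseline score contributes no evidence:

```rust
/// Fold an external score in (0, 1) into the ledger as a Bayes factor
/// by converting it to odds, rather than multiplying raw weights.
/// `baseline` is the score a neutral candidate would get (BF = 1 there).
fn score_to_bayes_factor(score: f64, baseline: f64) -> f64 {
    let odds = score / (1.0 - score);
    let base_odds = baseline / (1.0 - baseline);
    odds / base_odds
}

fn main() {
    // A score at the baseline contributes no evidence either way.
    println!("{}", score_to_bayes_factor(0.5, 0.5)); // 1
    // Scores above the baseline multiply the odds up.
    println!("{}", score_to_bayes_factor(0.8, 0.5)); // 4
}
```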
Where next
- Bayesian: command palette ledger - the math in depth: prior calibration, sequential Bayes, the JSONL evidence schema, and how RankStability is computed via conformal prediction.
- Hint ranking - the same Bayesian machinery applied to inline hints.
- Modal stack - open the palette as a modal with focus trap and backdrop.
- Widgets overview - how this piece fits in widgets.