
Evidence grep patterns

Every non-trivial runtime decision emits one line of JSON onto an append-only ledger. events.jsonl is that ledger. Because it is structured, jq and ripgrep will get you to the failing decision in seconds.

Source: ftui-runtime’s evidence sink (schema ftui-evidence-v1), surfaced via meta/events.jsonl under every doctor_frankentui artifacts run root.

Orientation

Every line has these common fields:

```json
{
  "schema_version": "ftui-evidence-v1",
  "run_id": "doctor_happy_seed0",
  "seq": 4812,
  "timestamp_ms": 67200,
  "event": "<event_type>"
  /* event-specific payload */
}
```
  • run_id — stable ID from determinism fixtures.
  • seq — monotonically increasing sequence number, unique within a run.
  • timestamp_ms — deterministic (from DeterminismFixture::now_ms) when E2E_DETERMINISTIC=1, wall-clock otherwise.
  • event — the event type. Every recipe below filters on this.
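
When a recipe outgrows a one-liner, the envelope is easy to consume from a script. A minimal Python sketch (the sample lines and the `envelopes` helper are invented for illustration; the envelope fields match the schema above):

```python
import json

# Two invented ledger lines that follow the ftui-evidence-v1 envelope.
SAMPLE = """\
{"schema_version":"ftui-evidence-v1","run_id":"demo","seq":1,"timestamp_ms":0,"event":"diff_decision","strategy":"full"}
{"schema_version":"ftui-evidence-v1","run_id":"demo","seq":2,"timestamp_ms":16,"event":"degradation_event","from_tier":"Full","to_tier":"SimpleBorders"}
"""

ENVELOPE = ("run_id", "seq", "timestamp_ms", "event")

def envelopes(lines):
    """Yield just the common envelope fields from each ledger line."""
    for raw in lines.splitlines():
        event = json.loads(raw)
        yield {k: event[k] for k in ENVELOPE}

print([e["event"] for e in envelopes(SAMPLE)])
# → ['diff_decision', 'degradation_event']
```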

Pretty-printing tip. Most of the snippets below pipe to jq -c (compact). Drop the -c for a multi-line view when you want to read a single event in detail.

diff_decision

Emitted by the runtime when it chooses a diff strategy for the current frame (full repaint vs row-based vs cell-based vs adaptive).

Fields: strategy, frame_idx, dirty_rows, dirty_cells, predicted_cost_us, chosen_cost_us, baseline_cost_us.

```shell
# All diff decisions in order
jq -c 'select(.event=="diff_decision")' meta/events.jsonl

# Histogram of chosen strategies
jq -r 'select(.event=="diff_decision") | .strategy' meta/events.jsonl | sort | uniq -c

# Frames where the chosen strategy cost more than predicted (surprise penalty)
jq -c 'select(.event=="diff_decision" and .chosen_cost_us > .predicted_cost_us*1.5)
       | {seq,frame_idx,strategy,predicted:.predicted_cost_us,chosen:.chosen_cost_us}' \
  meta/events.jsonl
```

Background: runtime/rollout/lanes and intelligence/bayesian-inference/diff-strategy.

resize_decision

Emitted by the BOCPD-backed resize coalescer. Resize events arrive faster than they can be usefully honoured; BOCPD decides which to service and which to coalesce.

Fields: from_cols, from_rows, to_cols, to_rows, coalesced_count, bocpd_run_length, changepoint_probability, action ("service" | "coalesce" | "defer").

```shell
# All serviced resizes
jq -c 'select(.event=="resize_decision" and .action=="service")' meta/events.jsonl

# Resize storms: chains of coalesced events
jq -c 'select(.event=="resize_decision" and .coalesced_count > 5)' meta/events.jsonl

# Changepoint probability over time (for plotting)
jq -r 'select(.event=="resize_decision") | [.timestamp_ms,.changepoint_probability] | @tsv' \
  meta/events.jsonl
```

Background: BOCPD.

conformal_frame_guard

Per-frame budget prediction and verdict. See frame budget.

Fields: tier, predicted_p95_us, budget_us, headroom_us, verdict ("hold" | "breach"), frame_idx.

```shell
# Every breach
jq -c 'select(.event=="conformal_frame_guard" and .verdict=="breach")' meta/events.jsonl

# Breach count per tier
jq -r 'select(.event=="conformal_frame_guard" and .verdict=="breach") | .tier' \
  meta/events.jsonl | sort | uniq -c

# p95 vs budget time series
jq -r 'select(.event=="conformal_frame_guard")
       | [.timestamp_ms,.tier,.predicted_p95_us,.budget_us,.headroom_us] | @tsv' \
  meta/events.jsonl
```
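
To find where each tier first tipped over, a short script can be clearer than chained jq. A sketch (the sample events and the `first_breach_per_tier` helper are invented; field names follow the schema above):

```python
# Invented guard events shaped like conformal_frame_guard payloads.
LINES = [
    {"event": "conformal_frame_guard", "tier": "Full", "frame_idx": 10,
     "predicted_p95_us": 900, "budget_us": 1000, "headroom_us": 100, "verdict": "hold"},
    {"event": "conformal_frame_guard", "tier": "Full", "frame_idx": 11,
     "predicted_p95_us": 1200, "budget_us": 1000, "headroom_us": -200, "verdict": "breach"},
]

def first_breach_per_tier(events):
    """Return {tier: frame_idx} for the first breach verdict seen in each tier."""
    first = {}
    for e in events:
        if e.get("event") == "conformal_frame_guard" and e["verdict"] == "breach":
            first.setdefault(e["tier"], e["frame_idx"])
    return first

print(first_breach_per_tier(LINES))  # → {'Full': 11}
```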

degradation_event

Tier transitions in the degradation cascade.

Fields: from_tier, to_tier, reason, consecutive_safe_frames (on upgrade).

```shell
# All tier transitions in order
jq -c 'select(.event=="degradation_event")' meta/events.jsonl

# Downgrades only (reason usually "conformal_frame_guard_breach")
jq -c 'select(.event=="degradation_event")
       | select(.to_tier != .from_tier and (.from_tier | IN("Full","SimpleBorders","NoColors")))
       | select(.reason | test("breach|trigger"))' \
  meta/events.jsonl

# Count transitions by reason
jq -r 'select(.event=="degradation_event") | .reason' meta/events.jsonl | sort | uniq -c
```

voi_decision

Value-of-information sampling. The runtime asks “would paying the cost of observing X actually improve the decision I’m about to make?” and emits its reasoning.

Fields: decision_kind, observation, prior_entropy, posterior_entropy, info_gain_nats, cost, chose_to_observe.

```shell
# Observations skipped because cost exceeded expected value
jq -c 'select(.event=="voi_decision" and .chose_to_observe==false)' meta/events.jsonl

# Info gain distribution
jq -r 'select(.event=="voi_decision") | .info_gain_nats' meta/events.jsonl \
  | awk '{s+=$1; n+=1} END {print "mean:", s/n, "n:", n}'
```
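
A quick sanity check on VoI behaviour is that expected gain should be high on the events where the runtime chose to observe and low on the ones it skipped. A sketch with invented sample payloads (the `mean_gain_by_choice` helper is not part of the runtime):

```python
from statistics import mean

# Invented voi_decision payloads; field names follow the schema above.
EVENTS = [
    {"event": "voi_decision", "info_gain_nats": 0.8, "cost": 0.1, "chose_to_observe": True},
    {"event": "voi_decision", "info_gain_nats": 0.05, "cost": 0.5, "chose_to_observe": False},
    {"event": "voi_decision", "info_gain_nats": 0.6, "cost": 0.2, "chose_to_observe": True},
]

def mean_gain_by_choice(events):
    """Mean expected info gain, split by whether the runtime paid to observe."""
    observed = [e["info_gain_nats"] for e in events if e["chose_to_observe"]]
    skipped = [e["info_gain_nats"] for e in events if not e["chose_to_observe"]]
    return mean(observed), mean(skipped)

observed_mean, skipped_mean = mean_gain_by_choice(EVENTS)
print(round(observed_mean, 3), round(skipped_mean, 3))  # → 0.7 0.05
```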

Background: intelligence/voi-sampling.

queue_select

Effect queue selection under backpressure. The scheduler picks the next effect to run using SRPT + Smith’s rule + aging — this event records which effect won and why.

Fields: selected_effect_id, candidate_count, policy, effective_priority, queue_depth, aging_bonus.

# Selections where aging was decisive jq -c 'select(.event=="queue_select" and .aging_bonus > 0)' meta/events.jsonl # Queue depth over time (useful for backpressure studies) jq -r 'select(.event=="queue_select") | [.timestamp_ms,.queue_depth] | @tsv' \ meta/events.jsonl

Background: intelligence/scheduling.

eprocess_reject

E-process / anytime-valid rejection. When the runtime’s sequential test rejects a null (e.g. “strategy A is no better than strategy B”), this event records the rejection with the e-value that justified it.

Fields: test_name, e_value, alpha, null_hypothesis, observations_count.

```shell
# All rejections (these are where the runtime actually changed behaviour)
jq -c 'select(.event=="eprocess_reject")' meta/events.jsonl

# Rejections with e-value breakdown
jq -r 'select(.event=="eprocess_reject") | [.test_name,.e_value,.observations_count] | @tsv' \
  meta/events.jsonl
```
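
For an anytime-valid test in the Ville sense, a rejection at level α should be backed by an e-value of at least 1/α. Assuming the ledger's `alpha` field is the test level, a sketch that flags rejections violating that invariant (sample events and the helper name are invented):

```python
# Invented eprocess_reject payloads; field names follow the schema above.
EVENTS = [
    {"event": "eprocess_reject", "test_name": "diff_ab", "e_value": 25.0, "alpha": 0.05},
    {"event": "eprocess_reject", "test_name": "resize",  "e_value": 12.0, "alpha": 0.05},
]

def inconsistent_rejections(events):
    """Rejections whose e-value does not clear the 1/alpha threshold."""
    return [e["test_name"] for e in events
            if e["event"] == "eprocess_reject" and e["e_value"] < 1.0 / e["alpha"]]

print(inconsistent_rejections(EVENTS))  # → ['resize']
```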

Background: intelligence/e-processes.

Cross-event recipes

Correlate a degradation with its cause

```shell
# For every degradation_event, print it preceded by the most recent conformal_frame_guard
awk '/"event":"conformal_frame_guard"/ {prev=$0; next}
     /"event":"degradation_event"/     {print prev; print}' meta/events.jsonl
```
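
If you do this pairing often, a small script is easier to trust than a stream hand-off; a sketch over invented sample lines (the `degradations_with_cause` helper is illustrative, not part of the runtime):

```python
import json

# Invented ledger excerpt: a hold, a breach, then the degradation it caused.
LINES = """\
{"seq":1,"event":"conformal_frame_guard","verdict":"hold"}
{"seq":2,"event":"conformal_frame_guard","verdict":"breach"}
{"seq":3,"event":"degradation_event","from_tier":"Full","to_tier":"SimpleBorders"}
"""

def degradations_with_cause(lines):
    """Pair each degradation_event with the most recent conformal_frame_guard before it."""
    last_guard, pairs = None, []
    for raw in lines.splitlines():
        event = json.loads(raw)
        if event["event"] == "conformal_frame_guard":
            last_guard = event
        elif event["event"] == "degradation_event":
            pairs.append((last_guard, event))
    return pairs

[(guard, degradation)] = degradations_with_cause(LINES)
print(guard["verdict"], degradation["to_tier"])  # → breach SimpleBorders
```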

Rate-limit a noisy stream

```shell
# Only keep the first occurrence of each (event, tier) combination per second
jq -nc 'foreach inputs as $e ({seen: {}};
    ([($e.timestamp_ms/1000 | floor), $e.event, ($e.tier // "")] | tostring) as $k
    | {seen: (.seen + {($k): true}), new: (.seen | has($k) | not), ev: $e};
    select(.new) | .ev)' meta/events.jsonl
```
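
The same per-second dedup reads more easily as a script; a sketch with invented sample events (the `first_per_second` helper is illustrative):

```python
import json

# Invented events: two in the same second with the same (event, tier), one later.
LINES = """\
{"timestamp_ms":100,"event":"conformal_frame_guard","tier":"Full"}
{"timestamp_ms":900,"event":"conformal_frame_guard","tier":"Full"}
{"timestamp_ms":1100,"event":"conformal_frame_guard","tier":"Full"}
"""

def first_per_second(lines):
    """Keep only the first event per (second, event, tier) bucket."""
    seen, kept = set(), []
    for raw in lines.splitlines():
        event = json.loads(raw)
        key = (event["timestamp_ms"] // 1000, event["event"], event.get("tier", ""))
        if key not in seen:
            seen.add(key)
            kept.append(event)
    return kept

print([e["timestamp_ms"] for e in first_per_second(LINES)])  # → [100, 1100]
```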

Pull the event ledger around a specific frame

```shell
# Frames 40–44 in context (5 events before, 5 after the first matching line)
awk '/"frame_idx":4[0-4][^0-9]/ {start = (NR > 5) ? NR - 5 : 1; print start; exit}' \
    meta/events.jsonl \
  | xargs -I {} sed -n '{},+10p' meta/events.jsonl \
  | jq -c .
```

Detect an “event storm”

```shell
# 100 ms bins with more than 50 events of one type (candidate noise or an infinite loop)
jq -r '[.timestamp_ms/100|floor, .event] | @tsv' meta/events.jsonl \
  | sort | uniq -c | awk '$1 > 50'
```
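
The same binning in Python, when you want the offending buckets as data rather than text (sample events and the `storm_bins` helper are invented):

```python
from collections import Counter

# Invented burst: 60 diff_decision events inside one 100 ms bin, plus one stray event.
EVENTS = [{"timestamp_ms": 5, "event": "diff_decision"}] * 60 + \
         [{"timestamp_ms": 250, "event": "resize_decision"}]

def storm_bins(events, per_bin=50, bin_ms=100):
    """Return the (bin, event type) buckets holding more than per_bin events."""
    counts = Counter((e["timestamp_ms"] // bin_ms, e["event"]) for e in events)
    return {key: n for key, n in counts.items() if n > per_bin}

print(storm_bins(EVENTS))  # → {(0, 'diff_decision'): 60}
```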

Ripgrep, when jq is overkill

For a first-glance scan of a huge ledger, rg is faster:

```shell
# Every line mentioning a breach
rg '"verdict":"breach"' meta/events.jsonl

# Every degradation between two specific tiers
rg '"from_tier":"Full","to_tier":"SimpleBorders"' meta/events.jsonl

# Context around the first error
rg -n '"severity":2|"verdict":"breach"' meta/events.jsonl | head
```

Pitfalls

Don’t trust a “stable” recipe on an evolving schema. The ledger’s schema_version field is authoritative. When it bumps, re-verify your recipes against the new shape.

Sequence numbers are monotonic, not contiguous across runs. seq resets each run. Don’t concatenate two events.jsonl files and expect seq ordering to be correct — use (run_id, seq) as the composite key.
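
The composite key makes merging ledgers mechanical; a sketch with invented run IDs (the `merge_runs` helper is illustrative):

```python
# Two invented per-run ledgers; note seq restarts at 1 in each run.
RUN_A = [{"run_id": "a", "seq": 1}, {"run_id": "a", "seq": 2}]
RUN_B = [{"run_id": "b", "seq": 1}]

def merge_runs(*ledgers):
    """Concatenate ledgers and order by the (run_id, seq) composite key."""
    merged = [event for ledger in ledgers for event in ledger]
    return sorted(merged, key=lambda e: (e["run_id"], e["seq"]))

print([(e["run_id"], e["seq"]) for e in merge_runs(RUN_A, RUN_B)])
# → [('a', 1), ('a', 2), ('b', 1)]
```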

Wall-clock timestamps are nondeterministic. If E2E_DETERMINISTIC=1 wasn’t set, your timestamp_ms values are machine-local. Prefer seq for ordering across runs.