Validation notes

Calibration and claim boundaries

This page exists to make the product easier to trust. It explains what is calibrated and published today, what confidence labels mean, and where the public site deliberately stops short of making stronger claims.

Current calibration state

KPI score interpretation

Published

RoastIQ scores are paired with named KPI definitions so teams know what the number is meant to represent before acting on it.

Benchmark context

Published

Benchmark labels provide context for whether a result looks above, around, or below the bar for comparable work.
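The above/around/below framing amounts to banding a score against a benchmark range. A minimal sketch, assuming hypothetical thresholds and label strings (the real benchmark bands and wording are not published on this page):

```python
def benchmark_label(score: float, low: float, high: float) -> str:
    """Classify a score against a benchmark band.

    `low` and `high` are illustrative placeholders for a benchmark's
    lower and upper bounds; RoastIQ's actual bands are not specified here.
    """
    if score > high:
        return "above the bar"
    if score < low:
        return "below the bar"
    return "around the bar"
```

For example, with an assumed band of 60-75, a score of 82 would read as "above the bar" and a score of 68 as "around the bar".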

Confidence language

Published

Confidence labels signal how heavily a team should lean on a diagnostic when making a decision.

Public agreement coefficients

Not yet published

This site does not currently publish panel-agreement coefficients or external validation percentages, and it should not imply otherwise.

How teams should use the outputs

Use the report to narrow decisions, not to pretend uncertainty has disappeared. A strong RoastIQ result can support scale, a weak result can save rework, and a mixed result can point to the exact area that needs another iteration or a deeper test.

  • Use benchmark context to interpret a score, not as a substitute for strategy.
  • Use confidence labels to decide how quickly the team should act.
  • Escalate to human research when the commercial risk is high or the decision remains ambiguous.
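The guidance above can be sketched as a toy decision rule. The label names, risk levels, and mapping below are illustrative assumptions for clarity, not RoastIQ's published policy:

```python
def next_step(confidence: str, commercial_risk: str) -> str:
    """Suggest a next step from a confidence label and a risk level.

    Both inputs use hypothetical values ("high"/"medium"/"low");
    the actual label vocabulary is defined by the product, not here.
    """
    # High commercial risk or ambiguity overrides the label entirely.
    if commercial_risk == "high":
        return "escalate to human research"
    # Otherwise, the confidence label sets the pace of action.
    if confidence == "high":
        return "act on the diagnostic"
    if confidence == "medium":
        return "iterate, then re-test"
    return "treat as a hypothesis and gather more evidence"
```

The point of the sketch is the ordering: risk is checked before confidence, so a confident diagnostic never short-circuits escalation on a high-stakes decision.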

Related references