KPI score interpretation
Published RoastIQ scores are paired with named KPI definitions so teams know what the number is meant to represent before acting on it.
Validation notes
This page exists to make the product easier to trust. It explains what is calibrated and published today, what confidence labels mean, and where the public site deliberately stops short of making stronger claims.
Benchmark labels provide context for whether a result looks above, around, or below the bar for comparable work.
Confidence labels signal how heavily a team should lean on a diagnostic when making a decision.
This site does not currently publish panel-agreement coefficients or external validation percentages, and it should not imply otherwise.
Use the report to narrow decisions, not to pretend uncertainty has disappeared. A strong RoastIQ result can support a decision to scale, a weak result can save rework, and a mixed result can point to the exact area that needs another iteration or a deeper test.
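The decision pattern above can be sketched in code. This is a minimal illustration only: the field names, label values, and score scale are assumptions for the example, not the real RoastIQ schema or API.

```python
from dataclasses import dataclass


@dataclass
class KpiResult:
    """Hypothetical shape of one report entry; fields are illustrative."""
    kpi: str          # named KPI definition the score maps to
    score: float      # numeric RoastIQ score (scale assumed for the example)
    benchmark: str    # "above", "around", or "below" the bar
    confidence: str   # "high", "medium", or "low"


def next_step(result: KpiResult) -> str:
    """Narrow the decision without pretending uncertainty is gone."""
    # A low-confidence diagnostic should not drive the call on its own.
    if result.confidence == "low":
        return f"run a deeper test on {result.kpi} before acting"
    if result.benchmark == "above":
        return "supports scaling this work"
    if result.benchmark == "below":
        return f"iterate on {result.kpi} before investing further"
    # "around the bar" is the mixed case: it points at the next iteration.
    return f"mixed signal: revisit {result.kpi} in the next iteration"
```

For example, a high-confidence result above the bar maps to "supports scaling this work", while any low-confidence result routes to a deeper test first, matching the intent of the confidence labels described above.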