Methodology

One system. Two layers.
Transparent about both.

RoastIQ scores the creative against market benchmarks. BuyerLens explains why specific buyer segments resist. Each layer has its own method, its own limits, and its own evidence standard.

What goes in

One video or image. The platform, category, and region you're targeting. That's it.

🎬 Creative asset: MP4, MOV, GIF, PNG, JPG. Up to 60 seconds.

📱 Platform context: Instagram Reels, TikTok, YouTube Shorts, etc.

🌍 Market context: category (FMCG, DTC, Tech) + region (UK, US, EU).
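Put together, the three inputs above amount to a small request payload. A hypothetical sketch (the keys and allowed values are illustrative, not RoastIQ's published API schema):

```python
# Hypothetical diagnostic request. Keys and values are illustrative,
# not RoastIQ's actual API schema.
ALLOWED_EXT = (".mp4", ".mov", ".gif", ".png", ".jpg")

request = {
    "asset": "summer_launch_v3.mp4",  # one video or image, up to 60 seconds
    "platform": "instagram_reels",    # or "tiktok", "youtube_shorts", ...
    "category": "DTC",                # FMCG, DTC, Tech
    "region": "UK",                   # UK, US, EU
}

def valid(req):
    # Mirrors the input constraints stated above.
    return (req["asset"].lower().endswith(ALLOWED_EXT)
            and req["category"] in {"FMCG", "DTC", "Tech"}
            and req["region"] in {"UK", "US", "EU"})
```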

How it works

01 Extract (~5s)
Frames, audio waveform, transcript, and visual features, all extracted in parallel.

02 Analyze (~60s)
Visual attention (TranSalNet), brand signal detection, copy scoring, audio analysis.

03 Score (~15s)
5 raw signals → sub-KPI families → 5 main KPIs → composite → verdict.

04 Benchmark (~10s)
Score positioned against platform + category norms from ad transparency data.
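The four stages above can be sketched as a simple pipeline. The stage functions below are stand-ins with illustrative return values (the actual extraction and analysis internals are not published here); only the stage order and rough roles mirror the text:

```python
# Minimal pipeline sketch. Each stage is a placeholder; numbers are
# illustrative, not real model outputs.

def extract(asset):                # ~5s: frames, waveform, transcript, features
    return {"frames": [], "audio": [], "transcript": "", "features": {}}

def analyze(extracted):            # ~60s: attention, branding, copy, audio
    return {"attention": 71, "branding": 64, "copy": 58, "audio": 80}

def score(signals):                # ~15s: signals -> sub-KPIs -> KPIs -> composite
    return {"composite": sum(signals.values()) / len(signals)}

def benchmark(scores, platform, category):  # ~10s: position vs. platform/category norms
    scores["percentile_vs_norms"] = 62      # illustrative placeholder
    return scores

def run_diagnostic(asset, platform, category):
    return benchmark(score(analyze(extract(asset))), platform, category)
```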

Outputs: The 5-KPI Framework

Each creative is scored across five perception families. The composite determines the verdict.

Beat the Skip (25%)

Inputs: Attention (55%) + Skip retention (45%)

Get Noticed (20%)

Inputs: Branding (60%) + Emotion (40%)

Brand Impact (20%)

Inputs: Branding (45%) + Brand lift (30%) + Emotion (25%)

Sell Proposition (20%)

Inputs: Conversion (60%) + CTA (40%)

Build Brand (15%)

Inputs: Brand lift (55%) + Branding (30%) + Clarity (15%)

Scale: composite ≥ 70, no KPI < 55

Sharpen: composite 55-69, OR one KPI < 45

Rebuild: composite < 55, OR two+ KPIs < 45
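Using the weights listed above, the composite and verdict reduce to straightforward arithmetic. A sketch (signal scores are illustrative inputs on a 0-100 scale; the tie-break order when the Sharpen and Rebuild rules overlap is an assumption):

```python
# KPI weights and input mixes as listed above (0-100 scale throughout).
KPI_WEIGHTS = {
    "Beat the Skip": 0.25, "Get Noticed": 0.20, "Brand Impact": 0.20,
    "Sell Proposition": 0.20, "Build Brand": 0.15,
}

KPI_INPUTS = {
    "Beat the Skip":    {"attention": 0.55, "skip_retention": 0.45},
    "Get Noticed":      {"branding": 0.60, "emotion": 0.40},
    "Brand Impact":     {"branding": 0.45, "brand_lift": 0.30, "emotion": 0.25},
    "Sell Proposition": {"conversion": 0.60, "cta": 0.40},
    "Build Brand":      {"brand_lift": 0.55, "branding": 0.30, "clarity": 0.15},
}

def kpis(signals):
    # Each KPI is a weighted blend of its input signals.
    return {kpi: sum(signals[s] * w for s, w in mix.items())
            for kpi, mix in KPI_INPUTS.items()}

def verdict(kpi_scores):
    composite = sum(kpi_scores[k] * w for k, w in KPI_WEIGHTS.items())
    below_45 = sum(1 for v in kpi_scores.values() if v < 45)
    if composite >= 70 and min(kpi_scores.values()) >= 55:
        return composite, "Scale"
    if composite < 55 or below_45 >= 2:
        return composite, "Rebuild"
    return composite, "Sharpen"  # composite 55-69, or one KPI < 45
```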

When to use RoastIQ

Before any media budget is committed — the diagnostic should arrive first

When you have a near-final cut and need a go/hold/fix decision

When the team disagrees about the creative and needs a shared reference point

When you want to compare two variants before choosing which to scale

What RoastIQ does NOT claim

Does not predict in-market success — no published outcome correlation yet

Heatmaps show model-predicted visual attention, not measured eye tracking

Attribute detection is ~85% accurate, not 100%

Benchmark pool coverage varies by platform and category

Scores are model predictions, not measurements

Related reading

From the Science series:

The Pre-Post Gap: Why creative testing after launch is too late

Three layers of scoring: from raw signals to KPI families to creative verdict

Visual attention prediction vs. eye tracking: what we can and can't claim

The decision loop

1. RoastIQ: Score the creative. Get the 5-KPI verdict + benchmark context.

2. BuyerLens: If a KPI is weak, understand which buyers resist and why.

3. Edit: Apply the fix direction. Keep what scored well. Change what didn't.

4. Rerun: Upload the new cut. Compare scores. Confirm the fix landed.

See the methodology in action

Run a free diagnostic →