Methodology

How SaliencyLab Produces Enterprise-Grade Creative Signals

SaliencyLab combines multimodal model outputs, structured scoring rules, and benchmark confidence metadata. Results are designed for decision support before media spend, not post-campaign attribution.

1. Analysis Pipeline

  • Input ingestion for image/video creatives and optional transcript context.
  • The default video path combines a hybrid FFmpeg frame contract, Google Video Intelligence shot/label extraction, and Gemini semantic synthesis.
  • Visual and language analysis produces core metric primitives (attention, clarity, branding, emotion, CTA).
  • Perception layer generates diagnostics, attention decay, and drop-point explanations.
  • Enterprise layer enriches payload with pillar scores, skip prediction, KPI families, and matrix classification.
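The pipeline stages above can be sketched as a chain of transforms over a payload dict. All function names, field names, and scores below are illustrative assumptions for this sketch, not SaliencyLab's actual API.

```python
def ingest(creative: dict) -> dict:
    """Stage 1: accept an image/video creative plus optional transcript context."""
    return {"creative": creative, "transcript": creative.get("transcript")}

def analyze(payload: dict) -> dict:
    """Stage 2: core metric primitives from visual/language analysis (placeholder 0-100 scores)."""
    payload["metrics"] = {"attention": 72, "clarity": 65, "branding": 80,
                          "emotion": 58, "cta": 61}
    return payload

def perceive(payload: dict) -> dict:
    """Stage 3: perception diagnostics — attention decay and drop-point explanations."""
    payload["perception"] = {"drop_points_s": [3.0], "attention_decay": "steep after 3s"}
    return payload

def enrich(payload: dict) -> dict:
    """Stage 4: enterprise enrichment (skip prediction, matrix zone; placeholder values)."""
    payload["enterprise"] = {"skip_risk": "high", "matrix_zone": "missed"}
    return payload

# Each stage adds one layer to the payload, so later layers can read earlier ones.
result = enrich(perceive(analyze(ingest({"type": "video", "transcript": "voiceover text"}))))
```

The chained-stages shape mirrors the list above: each layer only enriches the payload and never mutates the layers beneath it.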

2. Scoring Framework

RoastIQ

Composite score = (0.25 × Attention) + (0.20 × Clarity) + (0.20 × Branding) + (0.20 × Emotion) + (0.15 × CTA). Weights sum to 100%.
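A minimal sketch of the weighted sum, using the published weights. The function name and the 0-100 score range are assumptions for illustration.

```python
# RoastIQ weights as stated in the scoring framework (sum to 1.0).
WEIGHTS = {"attention": 0.25, "clarity": 0.20, "branding": 0.20,
           "emotion": 0.20, "cta": 0.15}

def roastiq(scores: dict) -> float:
    """Weighted composite of the five metric primitives (each assumed 0-100)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Example: 0.25*80 + 0.20*70 + 0.20*60 + 0.20*75 + 0.15*50 = 68.5
score = roastiq({"attention": 80, "clarity": 70, "branding": 60,
                 "emotion": 75, "cta": 50})
```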

Enterprise Pillars

Brand, Creative, and Behavioral pillar scores provide executive-level decomposition for faster decision-making.

Skip x Impact Matrix

Beat the Skip and Brand Impact scores map each creative into one of four opportunity zones: goal, missed, wasted, or avoid.
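The two-axis classification can be sketched as a quadrant lookup. The 50-point threshold and the assignment of zone names to quadrants are illustrative assumptions; the source does not state the actual cutoffs or mapping.

```python
def matrix_zone(beat_the_skip: float, brand_impact: float,
                threshold: float = 50.0) -> str:
    """Classify a creative into a Skip x Impact opportunity zone.

    The 50-point threshold and the quadrant-to-zone mapping below are
    assumptions for illustration, not the product's actual rules.
    """
    if beat_the_skip >= threshold and brand_impact >= threshold:
        return "goal"    # holds attention AND lands the brand
    if beat_the_skip < threshold and brand_impact >= threshold:
        return "missed"  # brand message is strong, but viewers skip first
    if beat_the_skip >= threshold and brand_impact < threshold:
        return "wasted"  # attention is held, but brand impact is weak
    return "avoid"       # weak on both axes
```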

3. Benchmark Confidence

Each benchmark card includes sample count, source type, platform norm version, and confidence level. This guards against overconfidence when data density is low or when heuristic estimates stand in for observed data.
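One way to picture a benchmark card is as a small record plus a downgrade rule. The field names, confidence labels, and sample-count thresholds here are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkCard:
    """Confidence metadata attached to a benchmark (field names illustrative)."""
    sample_count: int
    source_type: str           # e.g. "observed" vs "heuristic" (assumed labels)
    platform_norm_version: str
    confidence: str            # "high" / "medium" / "low"

def card_confidence(sample_count: int, source_type: str) -> str:
    """Downgrade confidence when data density is low or the source is heuristic.

    The 30/300 thresholds are illustrative assumptions.
    """
    if source_type == "heuristic" or sample_count < 30:
        return "low"
    return "high" if sample_count >= 300 else "medium"

card = BenchmarkCard(sample_count=450, source_type="observed",
                     platform_norm_version="2024.2",
                     confidence=card_confidence(450, "observed"))
```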

4. Validation and Governance

  • Model responses are schema-normalized before persistence.
  • Analysis payloads include model version, confidence estimate, and generation timestamp.
  • Benchmark metadata is versioned to support reproducibility and audits.
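The governance steps above can be sketched as a normalization gate that rejects payloads missing the required metadata before persistence. The key names and the strict whitelist behavior are assumptions for this sketch.

```python
# Governance fields the section says every analysis payload must carry.
# Key names are illustrative assumptions.
REQUIRED_KEYS = {"model_version", "confidence", "generated_at"}

def normalize(raw: dict) -> dict:
    """Schema-normalize a model response before persistence (illustrative).

    Fails fast on missing governance fields rather than persisting an
    unauditable payload; unknown keys are dropped to keep the schema stable.
    """
    missing = REQUIRED_KEYS - raw.keys()
    if missing:
        raise ValueError(f"payload missing governance fields: {sorted(missing)}")
    return {k: raw[k] for k in REQUIRED_KEYS}
```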

5. Known Limitations

Forecasts are decision-support signals, not guaranteed outcomes. Performance still depends on media buying, audience selection, offer quality, and competitive dynamics.