Research Philosophy
Why We Exist
Understanding why advertising fails before traditional research can measure it.
The Problem
Most Ads Fail Before Research Can Measure Them
Advertising does not fail because it lacks recall. It does not fail because sentiment scores are low. It fails because attention collapses in the first seconds of exposure.
The brain filters advertising before conscious evaluation begins. Visual salience determines whether an ad gets processed or dismissed. Perceptual filtering happens faster than any survey can capture.
By the time traditional research measures outcomes—recall, lift, brand sentiment—the failure has already occurred. The ad was filtered out before it could form a memory.
"The moment attention drops, there is nothing left to measure."
The Blind Spot
Research Measures Outcomes. It Doesn't Explain Perception.
Traditional research excels at what it was designed to do: validate outcomes. Recall studies confirm whether a message was remembered. Brand lift studies measure changes in perception after exposure. Sentiment analysis captures emotional response.
These methods confirm whether an ad worked. They do not explain why attention dropped in the first 1.5 seconds. They cannot model the perceptual filtering that precedes conscious evaluation.
This is not a criticism. It is a scope distinction.
| Traditional Research | SaliencyLab |
|---|---|
| "Did they remember it?" | "Did they see it long enough to form a memory?" |
| "Did the message resonate?" | "Did the brain process it before filtering it out?" |
| "What was the emotional response?" | "What triggered attention in the first second?" |
Why SaliencyLab Exists
We Model How Ads Are Perceived, Not How They Are Rated
SaliencyLab is a perception intelligence system. We do not replace traditional research. We operate upstream of it.
We explain why attention drops before validation studies begin. We model the first-second mechanisms that determine whether an ad gets processed or filtered. Our taxonomy of perception signals—attention-drop causes, ad-recognition triggers, story signals—enables causal explanation, not just correlation.
Key distinction:
Other tools answer: "What performed?"
SaliencyLab answers: "Why did it fail or succeed before performance was measured?"
How This Changes Decisions
Reduce Failure Before Testing
SaliencyLab sits upstream of validation. Use perception diagnostics to filter creative concepts before expensive testing. Identify attention collapse risks before media spend. Complement—not replace—brand lift, recall, and sentiment studies with causal mechanism data.
Decision workflow:
1. Run perception diagnostics on creative concepts.
2. Filter out concepts with attention collapse risks before testing.
3. Validate the surviving concepts with brand lift, recall, and sentiment studies.
4. Commit media spend to validated creative.
Complementary, Not Competing
| Dimension | Traditional Research | SaliencyLab |
|---|---|---|
| Core Question | Did the ad work? | Why did attention drop? |
| Unit of Analysis | Recalled messages, sentiment ratings | Perception signals in first seconds |
| Timing in Process | Post-exposure validation | Pre-testing / upstream diagnostics |
| Methodology | Surveys, panels, brand lift studies | AI perception modeling, attention prediction |
| Output Type | Scores, lift percentages, recall rates | Causal mechanisms, perception taxonomy |
| Primary Value | Confirms outcomes | Explains mechanisms |
| Optimal Use Case | Final validation before scale | Filtering concepts before validation |
| Limitations | Cannot explain first-second perception | Does not replace validation metrics |
Traditional research validates outcomes. SaliencyLab diagnoses perceptual mechanisms. The two are complementary.
"We don't measure reaction.
We explain perception."