Why we score recommendations
Each monitoring cycle produces 3-15 recommendations per brand — content gaps, competitor positioning shifts, technical fixes. Without prioritization, brands cannot distinguish between a gap that affects one low-volume prompt and a gap that spans dozens of high-intent prompts where competitors dominate. Impact Score answers “which recommendation should I act on first?” by combining four measurable factors into a single priority signal.

The four input factors
Impact Score is computed from four inputs, each capturing a different dimension of recommendation urgency.

| Factor | Abbrev. | Scale | What it measures |
|---|---|---|---|
| Query Volume Weight | QV | 1-10 | How many tracked prompts are affected by this gap. More affected prompts = higher weight. Floor: 3. |
| Competitive Gap Size | CG | 1-10 | How large the visibility gap is between the brand and its closest competitor on the affected prompts. See bidirectional logic below. |
| Fix Feasibility | FF | 1-5 | How actionable the fix is with available resources. Technical fixes (robots.txt, schema) score higher than long-term authority campaigns. |
| Intent Value | IV | 1-5 | How commercially valuable the affected prompts are. Commercial and transactional intent prompts score higher than purely informational ones. |
Brand-relative normalization
Impact Score is brand-relative, not universal. A score of 60 means “60% of the maximum achievable impact for THIS brand’s current data maturity” — not a universal benchmark comparable across brands. The formula normalizes each recommendation’s raw factor score against the maximum raw score currently achievable for the brand.

Worked example: two brands, same recommendation quality
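One way to picture the normalization is the sketch below. The exact production formula is not reproduced in this documentation, so the simple-sum form, the brand-capped QV ceiling, and the function name are assumptions made for illustration only:

```python
def impact_score(qv: int, cg: int, ff: int, iv: int, qv_max_brand: int) -> float:
    """Hypothetical brand-relative normalization (not the production formula).

    The raw score sums the four factors. The denominator replaces the
    universal QV ceiling (10) with the QV ceiling this brand can currently
    reach, so low-data-maturity brands can still score near 100.
    """
    raw = qv + cg + ff + iv
    max_achievable = qv_max_brand + 10 + 5 + 5  # universal maxima for CG, FF, IV
    return round(100 * raw / max_achievable, 1)
```

Under these assumptions, the worked example that follows (QV = 5 with a brand ceiling of 5, CG = 10, FF = 4, IV = 5) would score 96.0, while the same recommendation for a brand whose data maturity allows QV up to 10 would score 80.0.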
Brand A has 15 pipeline runs (QV = 5). A recommendation has CG = 10, FF = 4, IV = 5.

Why brand-relative
A universal 0-100 scale makes HIGH impact scores mathematically impossible for low-data-maturity brands. Their recommendations would all cluster at LOW despite being genuinely urgent. Brand-relative scoring ensures every brand sees a meaningful prioritization signal within its own data context.

Competitive Gap: bidirectional logic
CG treats offensive and defensive scenarios differently. The question changes depending on whether the brand is trailing or leading a competitor on the affected prompts.

Offensive (brand trailing competitor)
When the brand’s visibility is lower than the competitor’s, CG reflects distance-to-catch-up — larger gaps produce higher urgency.

| Visibility gap | CG score | Interpretation |
|---|---|---|
| < 10 percentage points | 1 | Nearly caught up |
| 10-30 pp | 3 | Moderate gap |
| 30-50 pp | 5 | Significant gap |
| 50-70 pp | 7 | Large gap |
| > 70 pp | 10 | Dominant competitor, maximum urgency |
Defensive (brand leading competitor)
When the brand’s visibility is higher than the competitor’s, the question shifts from “how far behind am I?” to “how exposed am I?” CG reflects proximity-to-vulnerability — how close the nearest competitor is to catching up.

| Lead margin | CG score | Interpretation |
|---|---|---|
| < 10 pp | 7 | Competitor closing in — high urgency |
| 10-20 pp | 5 | Moderate exposure |
| 20-30 pp | 3 | Reasonable buffer |
| > 30 pp | 1 | Comfortable lead — low urgency |
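The two tables above can be sketched as a single lookup. The thresholds are transcribed directly from the tables; how exact boundary values (e.g. a gap of exactly 10 pp) are bucketed is an assumption, as is the function name:

```python
def competitive_gap_score(brand_visibility: float, competitor_visibility: float) -> int:
    """Map a visibility difference (in percentage points) to a CG score (1-10).

    Offensive (brand trailing): distance-to-catch-up, larger gap = higher urgency.
    Defensive (brand leading): proximity-to-vulnerability, smaller lead = higher urgency.
    """
    gap = competitor_visibility - brand_visibility
    if gap > 0:  # offensive: brand trails the competitor
        if gap < 10:
            return 1   # nearly caught up
        if gap <= 30:
            return 3   # moderate gap
        if gap <= 50:
            return 5   # significant gap
        if gap <= 70:
            return 7   # large gap
        return 10      # dominant competitor, maximum urgency
    lead = -gap        # defensive: brand leads (a tie is treated as fully exposed)
    if lead < 10:
        return 7       # competitor closing in
    if lead <= 20:
        return 5       # moderate exposure
    if lead <= 30:
        return 3       # reasonable buffer
    return 1           # comfortable lead
```

For example, a brand at 20% visibility against a competitor at 80% (a 60 pp offensive gap) scores 7, while the same brand leading by 60 pp scores 1.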
Strategic alignment multiplier
When a recommendation’s gap topic matches the brand’s declared positioning (stored in brand_intent.target_attributes), the Impact Score is multiplied by 1.5.
Example: A skincare brand declares “retinol” and “vitamin C serums” as target attributes. A recommendation addressing a retinol content gap receives a 1.5x multiplier because the gap aligns with how the brand wants to be known. A recommendation about “sunscreen SPF” — important but not declared as a positioning priority — does not receive the multiplier.
Matching logic: Both the gap’s topic and the brand’s declared target attributes are tokenized into significant words (longer than 3 characters, excluding stop words). Any word-level overlap triggers the multiplier. The match is intentionally broad — strategic alignment is a directional signal, not an exact-match filter.
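The matching logic described above can be sketched as follows. The tokenization rule (words longer than 3 characters, stop words excluded, any overlap fires) comes from the text; the stop-word list shown is an illustrative subset, and the function names are assumptions:

```python
import re

# Illustrative subset only; the production stop-word list is an assumption
STOP_WORDS = {"with", "from", "this", "that", "have", "your"}

def significant_tokens(text: str) -> set:
    """Lowercased words longer than 3 characters, with stop words removed."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {w for w in words if len(w) > 3 and w not in STOP_WORDS}

def strategic_multiplier(gap_topic: str, target_attributes: list) -> float:
    """Return 1.5 on any word-level overlap, else 1.0 (intentionally broad)."""
    attribute_tokens = set()
    for attribute in target_attributes:
        attribute_tokens |= significant_tokens(attribute)
    return 1.5 if significant_tokens(gap_topic) & attribute_tokens else 1.0
```

Replaying the skincare example: “retinol content gap” shares the token “retinol” with the declared attributes and fires the multiplier, while “sunscreen SPF” shares nothing (“SPF” is only 3 characters) and does not.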
Low-query-count safeguard
If the strategic multiplier fires AND the recommendation is supported by fewer than 5 queries, the final score is capped at 59 (just below the HIGH threshold). This prevents single-query edge cases from surfacing as HIGH priority based on strategic boost alone. The recommendation still appears — it is scored MEDIUM rather than HIGH until more query data supports the signal.

When this fires in practice: approximately one recommendation per brand cycle, typically on niche positioning topics where the brand has declared a target attribute but the prompt library contains only 1-3 relevant prompts. Once the prompt library grows to cover the topic with 5+ prompts, the cap no longer applies.

Tier thresholds
Impact Scores map to three priority tiers displayed in the dashboard.

| Tier | Score range | Meaning |
|---|---|---|
| HIGH | 60-100 | Act on this recommendation first. High query volume, significant competitive gap, actionable fix, valuable intent. |
| MEDIUM | 30-59 | Important but not urgent. Address after HIGH-tier items, or when HIGH-tier items are already in progress. |
| LOW | 0-29 | Low urgency. The gap exists but is small, affects few prompts, or involves a difficult fix. Monitor rather than act immediately. |
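The tier mapping above, combined with the low-query-count safeguard described earlier, can be sketched as one final step. The thresholds and the 59-point cap come from the doc; the function shape and the ordering of the cap before tiering are assumptions:

```python
def finalize_score(score: float, multiplier_fired: bool, query_count: int):
    """Apply the low-query-count cap, then map the score to a priority tier."""
    if multiplier_fired and query_count < 5:
        score = min(score, 59)  # strategic boost alone cannot reach HIGH
    if score >= 60:
        tier = "HIGH"
    elif score >= 30:
        tier = "MEDIUM"
    else:
        tier = "LOW"
    return score, tier
```

A boosted recommendation backed by only 3 queries is capped to 59 and lands in MEDIUM; the same score with 5+ supporting queries stays HIGH.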
Production tier distribution
The following table shows the tier distribution for four production brands after the April 2026 recalibration. Each row reflects the brand’s current data maturity and competitive position.

| Brand | Before recalibration | After recalibration | Notes |
|---|---|---|---|
| Zero-state brand | 1 HIGH / 4 MED / 2 LOW | 3 HIGH / 3 MED / 1 LOW | Growing brand, more recommendations surfaced as HIGH |
| Dominant brand | 0 HIGH / 5 MED / 0 LOW | 0 HIGH / 1 MED / 4 LOW | Dominant brand — defensive logic correctly produces LOW scores |
| Scaling brand | 1 HIGH / 3 MED / 9 LOW | 5 HIGH / 5 MED / 3 LOW | Low-query-count cap fired once (strategic multiplier capped at 59) |
| Mid-range brand | 0 HIGH / 7 MED / 0 LOW | 0 HIGH / 4 MED / 3 LOW | Mid-range brand, improved differentiation between MED and LOW |
What the Impact Score does not capture
Impact Score is a prioritization tool, not a prediction engine. Specific boundaries:

- Does not predict exact visibility lift. The estimated_improvement field on each recommendation is a separate, rougher estimate. Impact Score ranks recommendations against each other; it does not quantify the outcome of acting on them.
- Does not account for implementation cost in time or money. Fix Feasibility (FF) is a 1-5 proxy based on fix type, not a project plan. A technical fix scoring FF = 5 might still require engineering scheduling.
- Does not incorporate historical success rate. Whether past recommendations of the same type produced results is not yet factored in. This is planned future work.
- Not comparable across brands. A score of 60 for Brand A is not equivalent to a score of 60 for Brand B. Each brand’s score is relative to its own maximum possible score. This is the defining property of brand-relative scoring.
Related reading
- Mention rate — the metric feeding Query Volume Weight
- Refresh cadence — when Impact Scores are recomputed
- What we don’t do — scope boundaries including Impact Score
- Close competitor gaps — how to act on high-impact recommendations
- Competitor gap — the visibility gap CG measures
Frequently asked questions
Why did my recommendation's score change recently?
Impact Scores are recomputed on each narrative refresh cycle. Scores change when the underlying inputs change — new pipeline data affecting QV, competitor visibility shifts affecting CG, or prompt library updates changing the query count. A score shift of 5-10 points between cycles is normal.
Why are all my recommendations scored LOW?
Two common causes. First, the brand may be dominant in its category — defensive CG logic assigns low urgency scores when the brand leads competitors by 30+ percentage points. This is correct behavior, not a bug. Second, the brand may have very few pipeline runs — but the QV floor of 3 should prevent extreme clustering. If all recommendations are LOW and the brand is not dominant, contact support for a diagnostic review.
How often is the Impact Score recomputed?
On every narrative refresh cycle: twice weekly for Pro plan (Tuesday and Friday at 02:00 UTC), weekly for Starter plan (Sunday at 02:00 UTC). See refresh cadence for the full pipeline schedule.
Can I override the Impact Score for a specific recommendation?
Not currently. The scoring formula is applied uniformly. If you believe a recommendation is more or less urgent than the score suggests, use it as a starting point and apply your own business context — the score does not know your team’s capacity, current sprint priorities, or strategic timing.
What is the difference between Impact Score and estimated_improvement?
Impact Score is a relative prioritization ranking (0-100, brand-relative). It answers “which recommendation should I work on first?” The
estimated_improvement field is a rough prediction of how much mention rate or share of voice might change if the recommendation is implemented. They measure different things — one is priority, the other is predicted outcome.

Why does the strategic multiplier only fire sometimes?
The strategic multiplier requires a word-level match between the recommendation’s gap topic and the brand’s declared
target_attributes. If target attributes are not populated, or if the gap topic does not overlap with any declared attribute, the multiplier does not fire. Brands that populate their target attributes thoroughly see strategic alignment on approximately 75% of recommendations.

Does my plan tier affect Impact Score calculation?
Plan tier does not directly affect the scoring formula. However, plan tier determines the narrative refresh cadence (Pro: twice-weekly, Starter: weekly), which affects how quickly Impact Scores reflect new pipeline data. Pro brands see updated scores 2-3 days sooner than Starter brands after each pipeline run.
Last updated: 2026-04-21