In traditional SEO, the goal is Position 1. In AI search, there is no Position 1. There is only cited or not cited, first mention or fifth mention, positive framing or neutral framing. The metrics are completely different.
Why Traditional Metrics Fail in AI Search
Marketing teams are used to a clear performance metric: rank. Position 1 means you win. Position 10 means you're invisible. The entire SEO industry was built on this single scoreboard.
AI search doesn't work this way. When a user asks ChatGPT 'What's the best project management tool for a small team?', ChatGPT doesn't return a ranked list of ten results. It synthesises an answer — sometimes naming one product, sometimes naming three, sometimes comparing five. There is no rank. There is only presence or absence, and, if present, the context and framing of that presence.
This requires a completely different measurement framework.
The Four Metrics That Actually Matter
1. Visibility Score
The percentage of tracked prompts where your brand appears in the AI's answer — across all platforms. If you're tracking 100 prompts and appear in 34 of them, your Visibility Score is 34%. This is your headline metric. Everything else is detail behind it.
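The arithmetic is simple enough to sketch. Here's a minimal Python version, assuming a hypothetical PromptResult record (the name and shape are illustrative, not any particular tool's schema) that stores, for each tracked prompt, the platform it ran on and the brands named in the answer:

```python
from dataclasses import dataclass, field

@dataclass
class PromptResult:
    # Hypothetical record shape for illustration, not a real monitoring API.
    prompt: str                  # the tracked prompt sent to the AI platform
    platform: str                # e.g. "chatgpt", "perplexity", "gemini"
    brands_mentioned: list[str] = field(default_factory=list)  # brands, in order of mention

def visibility_score(results: list[PromptResult], brand: str) -> float:
    """Percentage of tracked prompts whose answer mentions the brand at all."""
    if not results:
        return 0.0
    hits = sum(1 for r in results if brand in r.brands_mentioned)
    return 100.0 * hits / len(results)
```

With 100 tracked prompts and 34 answers naming your brand, visibility_score returns 34.0, matching the 34% in the example above.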
2. Share of Voice
Of all brand mentions in AI answers across your category, what percentage belong to your brand? Visibility Score tells you how often you appear; Share of Voice tells you how you're doing relative to your competitive set. A brand with 40% visibility but 12% Share of Voice is being outcompeted by brands showing up more consistently. A brand with 15% visibility but 28% Share of Voice is punching above its weight.
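Continuing the sketch above, Share of Voice divides your brand's mentions by all brand mentions in the same answer set (again, an assumed data shape rather than a specific tool's API):

```python
def share_of_voice(results: list[PromptResult], brand: str) -> float:
    """Your brand's mentions as a percentage of all brand mentions in the answer set."""
    total_mentions = sum(len(r.brands_mentioned) for r in results)
    if total_mentions == 0:
        return 0.0
    our_mentions = sum(r.brands_mentioned.count(brand) for r in results)
    return 100.0 * our_mentions / total_mentions
```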
3. Average Position
When AI does mention your brand in an answer that also mentions competitors, where do you tend to appear? First mention in an AI answer carries significantly more weight than fifth mention — the same way Position 1 in Google matters more than Position 7, even though both are 'on the first page'. Tracking your average mention position tells you whether you're leading the answer or being added as an afterthought.
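One way to compute it, still using the hypothetical record above and assuming brands_mentioned preserves the order in which brands appear in each answer:

```python
def average_position(results: list[PromptResult], brand: str) -> float | None:
    """Mean 1-based position of the brand's first mention, averaged over
    the answers that include it. Returns None if the brand never appears."""
    positions = [
        r.brands_mentioned.index(brand) + 1   # .index() finds the first mention
        for r in results
        if brand in r.brands_mentioned
    ]
    return sum(positions) / len(positions) if positions else None
```

An average near 1.0 means you tend to lead the answer; an average near 4 or 5 means you're the afterthought.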
4. Sentiment Score
AI answers don't just mention brands — they frame them. 'HubSpot is excellent for teams that need an all-in-one solution' is different from 'HubSpot can be expensive for small businesses'. Both are citations. Only one is an endorsement. Tracking the sentiment of every brand mention gives you an early warning system for reputation issues and a signal for content gaps — if AI consistently pairs your brand with a specific limitation, that's a content brief.
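Classifying each mention as positive, neutral, or negative is its own problem (typically an NLP model or an LLM acting as judge). Once the labels exist, though, one simple convention, sketched here as an assumption rather than a standard, collapses them into a single net score:

```python
from collections import Counter

def sentiment_score(labels: list[str]) -> float:
    """Net sentiment in [-1, 1]: +1 per "positive" label, -1 per "negative",
    0 per "neutral", divided by total mentions. Label names are assumptions."""
    counts = Counter(labels)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return (counts["positive"] - counts["negative"]) / total
```

A score drifting from +0.4 toward 0.0 over a quarter is exactly the early-warning signal described above.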
Platform-Level Breakdown
These four metrics should be tracked per AI platform, not just in aggregate. A brand might have 40% visibility on ChatGPT and 8% on Perplexity. That's not a single visibility story — it's two very different stories requiring two different content and technical strategies. Perplexity weights external source citations heavily; ChatGPT leans more on entity recognition and training data; Google Gemini is heavily influenced by traditional search signals. The platform breakdown shows you where your gaps actually are.
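In code terms, the breakdown is just the same metrics grouped by the platform field from the earlier sketch:

```python
from collections import defaultdict

def visibility_by_platform(results: list[PromptResult], brand: str) -> dict[str, float]:
    """Visibility Score computed separately for each AI platform."""
    by_platform: defaultdict[str, list[PromptResult]] = defaultdict(list)
    for r in results:
        by_platform[r.platform].append(r)
    return {p: visibility_score(rs, brand) for p, rs in by_platform.items()}
```

A result like {'chatgpt': 40.0, 'perplexity': 8.0} is the two-stories case described above, and the same grouping works for Share of Voice, Average Position, and Sentiment.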
Setting Benchmarks
For brands new to AI search monitoring, these are typical benchmarks by maturity stage:
- Early stage (0-3 months in) — Visibility 5-15% · Share of Voice 3-8% · Mostly neutral sentiment
- Growing (3-12 months) — Visibility 20-35% · Share of Voice 10-18% · Mostly positive sentiment
- Category leader — Visibility 40%+ · Share of Voice 20%+ · Consistently cited first or second
All four metrics — Visibility Score, Share of Voice, Average Position, and Sentiment — are tracked daily per AI platform in the Hema Dashboard. The competitive ranking table gives you an instant view of where every tracked brand stands across all four dimensions.