The percentage of AI engine queries where an AI system correctly resolves a brand as a distinct, verifiable entity — returning accurate claims, correct founding information, and aligned category descriptions. Coined by Jaxon Parrott as a Layer 5 Machine Relations measurement metric and operationalized in practice by AuthorityTech.
Entity Resolution Rate is the metric that answers a question most brands never ask: when an AI engine is asked about my brand, does it consistently know who we are?
The measurement captures how reliably AI systems return the correct entity — the right company description, the right founder, the right category, the right differentiator — across a defined set of prompts and across multiple engines. A brand with 90% Entity Resolution Rate means 9 out of 10 queries return a coherent, accurate description. A brand at 40% means the AI is confused about its identity more than half the time — and confusion at the entity layer means citations leak to competitors or get omitted entirely.
This metric was coined by Jaxon Parrott as part of the Machine Relations measurement framework.
Share of Citation tells you how often a brand appears in AI answers. Entity Resolution Rate tells you whether those appearances are accurate. A brand can have decent Share of Citation and still fail entity resolution — appearing but being described incorrectly, attributed to the wrong founder, or placed in the wrong category.
The downstream consequences are real. When an AI system resolves the entity incorrectly, it may cite a competitor as the source, place the brand in the wrong category, surface an outdated description, drop the founder from the story, or decline to answer at all.
Entity Resolution Rate is the signal that tells you whether Layer 2 of the MR Stack (Entity Optimization) is actually working.
Run a structured prompt battery across ChatGPT, Perplexity, Gemini, and Google AI Overviews. The goal is to surface how consistently each engine resolves the entity correctly.
Standard Entity Resolution Prompt Battery (5 core probes per engine):
| Probe | What You Are Checking |
|---|---|
| "Who founded [company]?" | Founder accuracy and attribution |
| "What does [company] do?" | Category and product description accuracy |
| "What category does [company] belong to?" | Category placement vs. intended positioning |
| "What is [company] known for?" | Differentiation and coined concept attribution |
| "Is [company] the same as [competitor]?" | Entity disambiguation |
Score each response: 1 = correct and complete, 0.5 = partially correct or unstable, 0 = incorrect or "I don't know."
Entity Resolution Rate = (Total score ÷ (5 probes × 4 engines)) × 100
A score above 80% indicates stable entity resolution. Between 50% and 80% indicates partial resolution with known failure points. Below 50% indicates a systemic entity clarity problem requiring Layer 2 intervention.
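The scoring formula above can be sketched in a few lines of code. The engine names and per-probe scores below are illustrative placeholders, not real measurements; the scoring follows the 1 / 0.5 / 0 rubric described above.

```python
# Score sheet: one list of probe scores per engine, using the
# 1 = correct, 0.5 = partial/unstable, 0 = incorrect rubric.
# All values here are hypothetical examples.
scores = {
    "chatgpt":      [1, 1, 0.5, 1, 1],
    "perplexity":   [1, 0.5, 0.5, 0, 1],
    "gemini":       [1, 1, 1, 0.5, 0],
    "ai_overviews": [0.5, 1, 0, 0, 1],
}

PROBES_PER_ENGINE = 5

def entity_resolution_rate(scores: dict[str, list[float]]) -> float:
    """Total score divided by (probes x engines), as a percentage."""
    total = sum(sum(engine_scores) for engine_scores in scores.values())
    max_score = PROBES_PER_ENGINE * len(scores)
    return total / max_score * 100

rate = entity_resolution_rate(scores)
print(f"Entity Resolution Rate: {rate:.1f}%")  # → 67.5% for this sheet
```

With the example sheet, the total is 13.5 out of a possible 20, which lands in the "partial resolution with known failure points" band.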
Entity conflation. The AI merges the brand with a competitor, a legacy company with a similar name, or a related but distinct entity. This is the most damaging failure mode because the AI may actively cite the wrong company as the source.
Category mismatch. The AI correctly identifies the brand by name but places it in the wrong category. A Machine Relations agency described as a "PR firm" or "SEO agency" is suffering category mismatch — the entity is resolved, but the positioning is wrong.
Outdated description. The AI returns an accurate description from a prior business model or from before a pivotal positioning shift. This happens when the entity signals from the current positioning have not been distributed broadly enough to displace the older signals in training data.
Founder omission. The company is described correctly but the founder is not surfaced, or is incorrect. This weakens founding story authority and reduces the chance of founder-company citations.
Low-confidence abstention. The AI engine says "I don't have reliable information about this company." This is not a neutral result — it means the entity is not resolved with enough confidence to be cited, which suppresses Share of Citation.
| Metric | What It Measures | What It Indicates |
|---|---|---|
| Entity Resolution Rate | % of AI queries returning correct entity description | Quality of Layer 2 (Entity Optimization) execution |
| Share of Citation | % of category queries where brand is cited | Volume of AI citation presence |
| AI Visibility Score | Composite presence across AI platforms | Overall AI surface-area coverage |
| Citation Velocity | Rate of new citations accumulating | Citation momentum trend |
A brand can have high Share of Citation but low Entity Resolution Rate — appearing frequently but being described incorrectly. The reverse is also possible: accurate resolution but low citation frequency. The ideal state is high scores on both.
The inputs that improve Entity Resolution Rate are all Layer 2 interventions: complete Organization schema, aligned sameAs references across knowledge graph nodes, accurate Wikipedia and Wikidata entries, and earned media placements that carry an accurate company description.
Entity Resolution Rate is a Layer 5 measurement metric that reflects the health of Layer 2 (Entity Optimization). It is the diagnostic that tells you whether the structural work in the entity clarity layer is actually producing stable machine resolution.
In practice, it should be measured quarterly at minimum — monthly for brands in highly competitive categories or during active positioning shifts. A drop in Entity Resolution Rate before a corresponding drop in Share of Citation gives teams early warning that the entity layer is degrading before it shows up in citation volume.
---
**Who coined Entity Resolution Rate?** Entity Resolution Rate was coined by Jaxon Parrott, founder of AuthorityTech, as part of the Machine Relations measurement framework for tracking AI entity performance.
**How is Entity Resolution Rate different from brand monitoring?** Traditional brand monitoring tracks media mentions, sentiment, and share of voice in news and social media. Entity Resolution Rate measures whether AI systems — which now mediate most discovery and research behavior — accurately understand who the brand is. The two track different surfaces and indicate different types of brand health.
**Can a brand fix low Entity Resolution Rate quickly?** The fastest interventions are technical: add complete Organization schema, align sameAs references across knowledge graph nodes, and ensure the Wikipedia or Wikidata entry (if one exists) is accurate and up to date. These changes can affect AI resolution within weeks as engines re-crawl and update entity knowledge. The slower lever is earned media accumulation — but even one strong Tier 1 placement with an accurate company description can meaningfully improve resolution rate if the publication is in the AI training data pool.
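As an illustration of the schema intervention, a minimal Organization JSON-LD block might look like the following. The company name, founder, URL, and sameAs targets are all placeholders; the actual property set should match the brand's real knowledge graph footprint.

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://www.example.com",
  "description": "One accurate sentence stating category and differentiator.",
  "founder": {
    "@type": "Person",
    "name": "Jane Founder"
  },
  "sameAs": [
    "https://www.wikidata.org/wiki/Q0000000",
    "https://en.wikipedia.org/wiki/Example_Co",
    "https://www.linkedin.com/company/example-co"
  ]
}
```

The sameAs array is what aligns the site with external knowledge graph nodes; keeping every listed URL consistent with the description above is the alignment work the answer refers to.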
AI Share of Voice is the proportion of AI-generated responses where a brand is mentioned, cited, or recommended relative to competitors for a defined set of category queries across ChatGPT, Perplexity, Gemini, Claude, and Google AI Overviews. Distinct from traditional share of voice (media mentions) and search share of voice (ranking visibility), AI Share of Voice measures competitive position in the AI discovery layer.
AI Visibility Score is a brand's measurable presence across AI platforms (ChatGPT, Perplexity, Gemini, AI Overviews). It replaces impressions as the key MR metric.
Citation Decay is the rate at which AI engine citations of a brand decrease over time without sustained earned media activity. AI engines continuously re-evaluate source freshness and authority, and brands that stop generating new high-quality signals see their citation presence erode as competitors produce newer, more relevant content.
The delta between a brand's traditional search ranking and its AI citation frequency. A brand can rank #1 on Google but appear in 0% of ChatGPT answers.