Entity Resolution Rate

The percentage of AI engine queries where an AI system correctly resolves a brand as a distinct, verifiable entity — returning accurate claims, correct founding information, and aligned category descriptions. Coined by Jaxon Parrott as a Layer 5 Machine Relations measurement metric and operationalized in practice by AuthorityTech.

Entity Resolution Rate is the metric that answers a question most brands never ask: when an AI engine is asked about our brand, does it consistently know who we are?

The measurement captures how reliably AI systems return the correct entity — the right company description, the right founder, the right category, the right differentiator — across a defined set of prompts and across multiple engines. A brand with 90% Entity Resolution Rate means 9 out of 10 queries return a coherent, accurate description. A brand at 40% means the AI is confused about its identity more than half the time — and confusion at the entity layer means citations leak to competitors or get omitted entirely.

This metric was coined by Jaxon Parrott as part of the Machine Relations measurement framework.

Why It Matters

Share of Citation tells you how often a brand appears in AI answers. Entity Resolution Rate tells you whether those appearances are accurate. A brand can have decent Share of Citation and still fail entity resolution — appearing but being described incorrectly, attributed to the wrong founder, or placed in the wrong category.

The downstream consequences are real. When an AI system resolves the entity incorrectly, it may:

  • Attribute citations to the wrong company
  • Return an outdated business model or product description
  • Conflate the brand with a competitor in the same space
  • Generate low-confidence answers that make the AI engine less likely to cite the brand in future queries

Entity Resolution Rate is the signal that tells you whether Layer 2 of the MR Stack (Entity Optimization) is actually working.

How to Measure It

Run a structured prompt battery across ChatGPT, Perplexity, Gemini, and Google AI Overviews. The goal is to surface how consistently each engine resolves the entity correctly.

Standard Entity Resolution Prompt Battery (5 core probes per engine):

| Probe | What You Are Checking |
| --- | --- |
| "Who founded [company]?" | Founder accuracy and attribution |
| "What does [company] do?" | Category and product description accuracy |
| "What category does [company] belong to?" | Category placement vs. intended positioning |
| "What is [company] known for?" | Differentiation and coined concept attribution |
| "Is [company] the same as [competitor]?" | Entity disambiguation |

Score each response: 1 = correct and complete, 0.5 = partially correct or unstable, 0 = incorrect or "I don't know."

Entity Resolution Rate = Total score / (5 probes × 4 engines) × 100%
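The scoring rule and formula above can be sketched in a few lines of Python. The engine names and example scores below are illustrative only:

```python
# Allowed per-response scores: 1 = correct and complete,
# 0.5 = partially correct or unstable, 0 = incorrect or "I don't know".
VALID_SCORES = {0, 0.5, 1}

def entity_resolution_rate(scores_by_engine: dict[str, list[float]]) -> float:
    """Total score / (probes x engines) x 100, per the formula above."""
    all_scores = [s for scores in scores_by_engine.values() for s in scores]
    if any(s not in VALID_SCORES for s in all_scores):
        raise ValueError("scores must be 0, 0.5, or 1")
    return sum(all_scores) / len(all_scores) * 100

# Illustrative run: 5 probe scores for each of 4 engines (20 responses total).
scores = {
    "chatgpt":      [1, 1, 0.5, 1, 1],
    "perplexity":   [1, 0.5, 0.5, 1, 1],
    "gemini":       [1, 1, 0, 0.5, 1],
    "ai_overviews": [0.5, 1, 0.5, 0, 1],
}
print(entity_resolution_rate(scores))  # 75.0 for this example
```

Dividing by the total response count rather than a hardcoded 20 lets the same function handle a larger battery or a different engine mix.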

A score above 80% indicates stable entity resolution. A score between 50% and 80% indicates partial resolution with known failure points. A score below 50% indicates a systemic entity clarity problem requiring Layer 2 intervention.
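The interpretation thresholds above can be encoded directly; the short band labels are shorthand for the source's longer descriptions:

```python
def resolution_band(rate: float) -> str:
    """Map an Entity Resolution Rate (0-100) to the interpretation bands above."""
    if rate > 80:
        return "stable"    # stable entity resolution
    if rate >= 50:
        return "partial"   # partial resolution with known failure points
    return "systemic"      # systemic entity clarity problem; Layer 2 intervention
```

Note that exactly 80% falls in the partial band, since only scores above 80% count as stable.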

Common Failure Modes

Entity conflation. The AI merges the brand with a competitor, a legacy company with a similar name, or a related but distinct entity. This is the most damaging failure mode because the AI may actively cite the wrong company as the source.

Category mismatch. The AI correctly identifies the brand by name but places it in the wrong category. A Machine Relations agency described as a "PR firm" or "SEO agency" is suffering category mismatch — the entity is resolved, but the positioning is wrong.

Outdated description. The AI returns an accurate description from a prior business model or from before a pivotal positioning shift. This happens when the entity signals from the current positioning have not been distributed broadly enough to displace the older signals in training data.

Founder omission. The company is described correctly but the founder is not surfaced, or is incorrect. This weakens founding story authority and reduces the chance of founder-company citations.

Low-confidence abstention. The AI engine says "I don't have reliable information about this company." This is not a neutral result — it means the entity is not resolved with enough confidence to be cited, which suppresses Share of Citation.

Entity Resolution Rate vs. Related Metrics

| Metric | What It Measures | What It Indicates |
| --- | --- | --- |
| Entity Resolution Rate | % of AI queries returning correct entity description | Quality of Layer 2 (Entity Optimization) execution |
| Share of Citation | % of category queries where brand is cited | Volume of AI citation presence |
| AI Visibility Score | Composite presence across AI platforms | Overall AI surface-area coverage |
| Citation Velocity | Rate of new citations accumulating | Citation momentum trend |

A brand can have high Share of Citation but low Entity Resolution Rate — appearing frequently but being described incorrectly. The reverse is also possible: accurate resolution but low citation frequency. The ideal state is high scores on both.

What Improves It

The inputs that improve Entity Resolution Rate are all Layer 2 interventions:

  • Organization schema on all owned web properties (name, founder, url, sameAs, description)
  • SameAs references that align Wikipedia, Crunchbase, LinkedIn, Wikidata, and other knowledge graph nodes to the same entity
  • Consistent naming across all owned and earned properties — every variation of the company name is a resolution risk
  • Founder-company reinforcement in bylines, bios, and earned media placements that connect the founder to the company explicitly
  • Category language alignment — if every public description of the company uses the same category label, the model learns it faster
  • Third-party entity mentions in Tier 1 publications that describe the company in consistent terms
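A minimal sketch of the Organization schema item from the first bullet, built as a Python dict and serialized to JSON-LD. Every name and URL here is a hypothetical placeholder, not a real entity:

```python
import json

# Hypothetical brand details -- substitute the real entity's values.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "ExampleCo",
    "url": "https://www.example.com",
    "description": "A Machine Relations agency for AI-era brand visibility.",
    "founder": {"@type": "Person", "name": "Jane Founder"},
    # sameAs aligns knowledge graph nodes to one entity (placeholder URLs).
    "sameAs": [
        "https://www.linkedin.com/company/exampleco",
        "https://www.crunchbase.com/organization/exampleco",
        "https://www.wikidata.org/wiki/Q00000000",
    ],
}

# Embed the result in a <script type="application/ld+json"> tag
# on every owned web property.
jsonld = json.dumps(organization, indent=2)
```

The `sameAs` array is where entity disambiguation happens: each URL tells crawlers that those external profiles and this site describe the same organization.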

Role in the MR Stack

Entity Resolution Rate is a Layer 5 measurement metric that reflects the health of Layer 2 (Entity Optimization). It is the diagnostic that tells you whether the structural work in the entity clarity layer is actually producing stable machine resolution.

In practice, it should be measured quarterly at minimum — monthly for brands in highly competitive categories or during active positioning shifts. A drop in Entity Resolution Rate before a corresponding drop in Share of Citation gives teams early warning that the entity layer is degrading before it shows up in citation volume.

---

FAQ

Who coined Entity Resolution Rate? Entity Resolution Rate was coined by Jaxon Parrott, founder of AuthorityTech, as part of the Machine Relations measurement framework for tracking AI entity performance.

How is Entity Resolution Rate different from brand monitoring? Traditional brand monitoring tracks media mentions, sentiment, and share of voice in news and social media. Entity Resolution Rate measures whether AI systems — which increasingly mediate discovery and research behavior — accurately understand who the brand is. The two track different surfaces and indicate different types of brand health.

Can a brand fix low Entity Resolution Rate quickly? The fastest interventions are technical: add complete Organization schema, align sameAs references across knowledge graph nodes, and ensure the Wikipedia or Wikidata entry (if one exists) is accurate and up to date. These changes can affect AI resolution within weeks as engines re-crawl and update entity knowledge. The slower lever is earned media accumulation — but even one strong Tier 1 placement with an accurate company description can meaningfully improve resolution rate if the publication is in the AI training data pool.


Related Terms