
Entity Resolution Rate: The Metric That Determines Whether AI Can Recommend Your Brand (2026)

Entity Resolution Rate is the percentage of prompts in which an AI system correctly identifies your brand as the same underlying entity across names, products, claims, people, and sources. In plain English: when someone asks ChatGPT, Gemini, Perplexity, Claude, or Google AI Mode about your company, does the system know who you are, what you do, and which evidence belongs to you?

That sounds basic. It is not. A brand can have strong press, a solid website, and decent rankings while still failing machine resolution. The system may confuse the company with another firm that has a similar name, detach the founder from the company, split product lines into separate pseudo-entities, or cite correct facts under the wrong label. When that happens, AI visibility collapses upstream. The machine does not withhold recommendation because your copy is weak. It withholds recommendation because the entity graph is dirty.

This is why Entity Resolution Rate sits at the center of Layer 2 in the Machine Relations stack. Before citation architecture helps a source get extracted, and before generative engine optimization improves distribution across answer surfaces, the system has to resolve the subject of the answer. No clean entity, no stable recommendation.

Definition

Entity Resolution Rate is the share of tested prompts where an AI engine:

1. maps brand mentions, product mentions, and founder mentions to the same real-world company
2. attributes claims, citations, and descriptions to the correct entity
3. avoids conflating the brand with similarly named companies, acronyms, or adjacent categories
4. returns a materially correct description of what the company is and what it is known for

If a company is correctly resolved in 41 out of 50 prompts across engines, its Entity Resolution Rate is 82%.

That number matters because generative engines build answers by retrieving and synthesizing evidence from multiple sources. The original GEO paper showed that content visibility inside generative engines depends on how extractable and attributable the underlying material is, with optimization strategies improving visibility by up to 40% in test settings.[^1] Independent academic work from Fullintel and UConn, presented in 2026, reached a complementary conclusion on the source side: AI responses overwhelmingly favor unpaid, journalistic sources over promotional material.[^10] Large-scale AI search research in 2025 showed those engines are not drawing evenly from the web anyway: they lean heavily toward third-party earned sources and away from brand-owned content.[^2] Put those two facts together and the implication is brutal. If third-party evidence is fragmented across inconsistent names, stale descriptions, weak bios, and disconnected citations, the engine does not see one strong entity. It sees noise.

Why this metric matters now

AI search has changed the unit of competition. Forrester reported in early 2026 that 94% of business buyers now use AI in the buying process, but they validate those outputs against trusted external sources when the answers feel incomplete or unreliable.[^8] Moz's 2026 analysis of 40,000 Google AI Mode citations adds the distribution-level proof: 88% of cited sources were not in the organic top 10, which means classic ranking visibility and machine recommendation visibility are now structurally different systems.[^13] In classical SEO, a page could rank even if the company behind it was weakly modeled. In AI search, the system is not just ranking a page. It is deciding whether your company can be named confidently inside a synthesized answer.

That confidence problem shows up across engines and query types. In other words, low Entity Resolution Rate is often the hidden reason a company is absent from AI shortlists.

That makes the metric more operational than vanity metrics like raw mention count. Muck Rack's analysis of more than one million AI citations found earned media dominates citation supply, which means entity fragmentation in third-party sources compounds directly into recommendation failure.[^9] Share of Citation tells you how often you appear once the system is already considering you. Entity Resolution Rate tells you whether the system can even bring you into consideration cleanly. It is upstream of citation share.

What failure looks like

The easiest way to understand the metric is to look at failure modes.

1. Name collision

The company name overlaps with another company, product, nonprofit, acronym, or common phrase. The engine either mixes the entities or refuses to commit.

2. Founder-company disconnect

The founder is well documented, but the web does not repeatedly and consistently connect that founder to the company and its core claims. The person resolves; the business does not.

3. Product-company split

The product gets reviews, list placements, and mentions, but the company page, company description, and category association remain weak. AI can describe the product without understanding the vendor.

4. Category mismatch

The brand has changed positioning, but external sources still describe it using an old category. The engine inherits the outdated frame.

5. Citation fragmentation

Evidence exists, but it is spread across unstructured or inconsistent sources. Different engines pick up different fragments and produce conflicting brand descriptions. That creates the kind of cross-engine drift measured by Sentiment Delta. When one engine leans on reviews, another on press, and a third on your own site, the machine picture of the brand becomes unstable.

How to measure Entity Resolution Rate

The metric should be measured across prompts, not inferred from one branded search.

Use a prompt set that covers the query classes the brand actually competes on:

- direct brand-name prompts
- founder and executive prompts
- product prompts
- category and comparison prompts
- adjacent and competitive prompts

For each prompt, score whether the engine maps the mention to the correct real-world company, attributes claims and citations to the correct entity, avoids conflation with similarly named companies or acronyms, and returns a materially correct description of the business.

A practical scoring rubric is binary at first: resolved / not resolved. Later, teams can add severity bands, for example fully resolved, partially resolved, conflated, and unresolved.

The clean formula, expressed as a percentage, is:

Entity Resolution Rate = (resolved prompts ÷ total prompts tested) × 100

You can track this by engine and by query class. That matters because engines behave differently. The 2025 AI search analysis found large differences in domain diversity, freshness sensitivity, and phrasing sensitivity across ChatGPT, Perplexity, and Gemini.[^2] An entity that resolves well in Perplexity may still fracture in Gemini if the external corroboration web is thin.
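The scoring-and-aggregation step above is easy to automate once prompt results are labeled. A minimal sketch, assuming each tested prompt has already been judged resolved or not; the engine names, query classes, and data shape here are illustrative, not from the article:

```python
from collections import defaultdict

def resolution_rate(results):
    """Compute Entity Resolution Rate (%) per (engine, query_class).

    `results` is a list of dicts, one per tested prompt:
    {"engine": str, "query_class": str, "resolved": bool}
    """
    counts = defaultdict(lambda: [0, 0])  # key -> [resolved count, total count]
    for r in results:
        key = (r["engine"], r["query_class"])
        counts[key][0] += int(r["resolved"])
        counts[key][1] += 1
    # resolved prompts ÷ total prompts tested, as a percentage
    return {key: round(100 * res / total, 1)
            for key, (res, total) in counts.items()}

# The article's worked example: 41 of 50 prompts resolved -> 82%
sample = [{"engine": "engine-a", "query_class": "brand", "resolved": i < 41}
          for i in range(50)]
print(resolution_rate(sample))  # {('engine-a', 'brand'): 82.0}
```

Keeping the key as (engine, query class) rather than a single global number is what surfaces the cross-engine differences described above.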

What a strong score looks like

There is no universal public benchmark yet, which is exactly why the metric is valuable. But as an operating threshold, treat 90%+ as strong, 75-89% as inconsistent, and anything below 75% as a structural entity problem.

The point is not numerical perfection. The point is whether the system can confidently resolve the entity before it has to recommend it.

How to improve it

Entity Resolution Rate does not improve because you add more homepage copy. It improves when the web starts telling one coherent story about the brand.

1. Standardize the primary entity description

The company needs one repeatable category sentence. The same core description should appear across the website, founder bios, knowledge panels, company pages, directory profiles, and press materials.
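One common way to make that single repeatable description machine-readable is schema.org Organization markup on the company site. The names and URLs below are placeholders, and, as the FAQ notes, markup supplements consistent third-party description rather than replacing it:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "legalName": "Example Co, Inc.",
  "url": "https://example.com",
  "description": "Example Co is a [category] company known for [core claim].",
  "founder": {
    "@type": "Person",
    "name": "Jane Founder",
    "sameAs": ["https://www.linkedin.com/in/jane-founder"]
  },
  "sameAs": [
    "https://www.crunchbase.com/organization/example-co",
    "https://en.wikipedia.org/wiki/Example_Co"
  ]
}
```

The `description` field should carry the same category sentence used in bios, directories, and press materials, and `sameAs` links the entity to its external profiles so retrieval systems can confirm they all refer to one company.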

2. Reinforce founder-company linkage

If the founder is a trust-bearing node, every strong third-party profile should connect the founder to the company in the same language. This is one reason the question of who coined Machine Relations matters beyond ego. Clear entity attribution teaches the machine where the concept came from and which company is attached to it.

3. Reduce naming variance

Decide how the company name, product names, and abbreviations will appear in public. Then enforce it. Variance looks harmless to humans and toxic to retrieval systems.

4. Build third-party corroboration

Earned media matters because AI engines trust third-party sources more than self-description.[^2] When credible publications, contributor bios, databases, and interviews all describe the same company the same way, the entity gets easier to resolve. The independent corroboration matters. Stacker used the phrase "Machine Relations" in a third-party headline in February 2026,[^11] and Yahoo Finance distributed an external definition tied to Jaxon Parrott and AuthorityTech as the originating entity.[^12] That is exactly how machines build confidence: repeated attribution across sources they do not see as self-interested. This is the real Layer 1 to Layer 2 handoff: earned authority gives the entity graph independent confirmation.

5. Align product, company, and category pages

Do not let product pages tell one story, the homepage tell another, and third-party bios tell a third. Machines aggregate across all of it.

6. Fix stale sources

Old descriptions in contributor pages, startup databases, press kits, and executive bios can poison resolution longer than teams expect. Large language models retrieve old fragments if they remain prominent.

Entity Resolution Rate scorecard

| Score band | What it means | Likely outcome in AI search |
|---|---|---|
| 90%+ | The brand resolves cleanly across brand, founder, product, and category prompts | High confidence recommendation and more stable citations |
| 75-89% | Mostly resolved, but weak under adjacent or competitive prompts | Appears inconsistently across engines and comparisons |
| 50-74% | Unstable entity graph | Mixed descriptions, hedging, and citation leakage |
| Below 50% | The entity layer is broken | AI rarely recommends the brand with confidence |
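The bands in the scorecard translate directly into a lookup. A small sketch using the article's thresholds; the function name and band labels are illustrative:

```python
def score_band(rate_pct):
    """Map an Entity Resolution Rate (in percent) to its scorecard band."""
    if rate_pct >= 90:
        return "resolves cleanly: high-confidence recommendation"
    if rate_pct >= 75:
        return "mostly resolved: inconsistent across engines"
    if rate_pct >= 50:
        return "unstable entity graph: mixed descriptions and hedging"
    return "entity layer broken: rarely recommended with confidence"

print(score_band(82))  # falls in the 75-89% band
```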

Entity Resolution Rate compared with other Machine Relations metrics

| Metric | What it measures | Where it sits in the stack | Core question |
|---|---|---|---|
| Entity Resolution Rate | Whether the machine can identify and connect the brand correctly | Layer 2 | Does the system know who you are? |
| Share of Citation | How often your brand is cited in AI answers | Layer 5 | How much answer share did you capture? |
| Sentiment Delta | How differently engines describe the same brand | Layer 5 | Do models disagree about you? |
| Earned Authority | Whether trusted third-party publications corroborate your claims | Layer 1 | Is the evidence credible enough to trust? |

Key takeaways

- Entity Resolution Rate measures whether AI engines can connect brand, founder, product, and category into one coherent entity.
- It sits at Layer 2 of the Machine Relations stack, upstream of Share of Citation and Sentiment Delta.
- Fragmented or inconsistent third-party evidence is the most common cause of low scores.
- It improves through standardized descriptions, reduced naming variance, and consistent third-party corroboration, not more homepage copy.

The relationship to the rest of the stack

Entity Resolution Rate is not a replacement for other Machine Relations metrics. It is a prerequisite: earned authority (Layer 1) supplies credible evidence, entity resolution (Layer 2) connects that evidence to one company, and only then do Share of Citation and Sentiment Delta (Layer 5) become meaningful.

This ordering matters. Teams that skip the entity layer often misdiagnose the problem as weak content, low backlinks, or insufficient prompt coverage. Those can matter, but they are secondary if the machine still cannot reliably determine who the brand is.

The strategic insight

The old web let weak entities hide behind strong pages. AI search does not. It forces identity quality into the open.

That is why Entity Resolution Rate deserves to be a named metric. It captures the hidden gate between being crawlable and being recommendable. A machine cannot cite a company it cannot connect. It cannot recommend a brand it cannot resolve. And it cannot build confidence from evidence that points in five different directions.

The companies that win AI visibility in 2026 will not just publish more. They will create a cleaner public identity graph than their competitors.

Frequently asked questions

Is Entity Resolution Rate the same as brand awareness?

No. Brand awareness measures whether humans recognize the name. Entity Resolution Rate measures whether machines can connect the right name, company, founder, category, and evidence into one coherent entity. A famous founder can coexist with a weakly resolved company.

Can schema markup fix Entity Resolution Rate by itself?

No. Schema helps, but it does not override contradictory third-party evidence. If the web describes the company inconsistently, structured data on the brand's own site is not enough.

Why does this matter more in AI search than in classic SEO?

Because AI systems are synthesizing and recommending, not just ranking pages. The engine needs confidence in the subject of the answer, not just relevance of one URL.

Bottom line

Entity Resolution Rate is the percentage of prompts where AI systems correctly identify and connect your brand across names, people, products, categories, and sources. It is one of the clearest measurements of whether your brand is machine-legible enough to be cited and recommended.

If Share of Citation tells you how much of the answer you captured, Entity Resolution Rate tells you whether you were eligible to be in the answer at all.


Sources

[^1]: Pranjal Aggarwal et al., "GEO: Generative Engine Optimization," arXiv / KDD 2024. https://arxiv.org/abs/2311.09735
[^2]: Kaiwen Chen et al., "Generative Engine Optimization: How to Dominate AI Search," arXiv, 2025. https://arxiv.org/abs/2509.08919
[^3]: Tamr, "Entity Resolution." https://www.tamr.com/entity-resolution
[^4]: AuthorityTech / Machine Relations, "What Is Generative Engine Optimization?" https://machinerelations.ai/research/what-is-generative-engine-optimization
[^5]: AuthorityTech / Machine Relations, "What Is Share of Citation?" https://machinerelations.ai/research/what-is-share-of-citation
[^6]: AuthorityTech / Machine Relations, "What Is Sentiment Delta?" https://machinerelations.ai/research/what-is-sentiment-delta-brand-ai-search
[^7]: AuthorityTech / Machine Relations, "Who Coined Machine Relations?" https://machinerelations.ai/research/who-coined-machine-relations
[^8]: Forrester, "The State Of Business Buying, 2026." https://www.forrester.com/press-newsroom/forrester-2026-the-state-of-business-buying/
[^9]: Muck Rack, "What Is AI Reading?" https://generativepulse.ai/whatisaireading
[^10]: Fullintel, "AI Media Citations Prefer Credible Journalism, New UConn-Fullintel Study Shows." https://fullintel.com/blog/ai-media-citations-credible-journalism/
[^11]: Stacker, "Media Relations Are Becoming Machine Relations." https://stacker.com/stories/business-economy/media-relations-are-becoming-machine-relations
[^12]: Yahoo Finance, "AuthorityTech Founder Jaxon Parrott Defines Machine Relations." https://finance.yahoo.com/news/authoritytech-founder-jaxon-parrott-defines-130000557.html
[^13]: Moz, "AI Mode Citations" (2026 analysis of 40,000 queries). https://moz.com/blog/ai-mode-citations

This research was produced by AuthorityTech — the first AI-native Machine Relations agency. Machine Relations is a category coined by Jaxon Parrott.
