The research validating why earned media is the primary foundation for AI-mediated brand discovery.
239% median lift
Stacker-distributed stories achieved a median 239% lift in AI citations compared to owned content, with 97% earning at least one citation versus 82% of owned stories[3].
MR relevance: This recent study reinforces Machine Relations by demonstrating earned media distribution as a proven tactic for tripling AI visibility and citations.
View study →
239% median lift in AI search visibility; 97% of Stacker-distributed stories earned AI citations vs. 82% for owned content[2]
Earned media distribution produces a median 239% lift in AI search visibility, with 97% of distributed stories earning at least one AI citation compared to 82% for owned content[2]
MR relevance: Directly validates that earned media is the primary driver of AI engine citations and brand discovery, demonstrating measurable lift in GEO (Generative Engine Optimization)[2]
View study →
Angela Dwyer · IPRRC Academic Conference
89% of AI citations are earned media; 95% unpaid — IPRRC presentation
47% of all AI citations in responses came from journalistic sources. 89%+ of cited links were earned media. 95% were unpaid.
MR relevance: Direct empirical validation of Layer 1 (Earned Authority) as the foundation of Machine Relations. Presentation at an academic conference lends the findings scholarly weight.
View study →
OtterlyAI Research · OtterlyAI
73% of sites block AI crawlers — invisible to citation systems by default
73% of sites have technical barriers blocking AI crawler access. Community platforms and brand domains capture outsized citation share. Wikipedia maintains strong performance across ChatGPT.
MR relevance: Validates Layers 2-3 (Entity Clarity + Citation Architecture) — technical accessibility is a prerequisite for citation, independent of content quality.
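The accessibility finding above can be checked directly: a site's robots.txt reveals which AI crawlers it blocks. A minimal sketch using Python's standard library, assuming the widely published crawler user-agent names (GPTBot, ClaudeBot, PerplexityBot, Google-Extended); the sample robots.txt is hypothetical.

```python
from urllib.robotparser import RobotFileParser

# Common AI crawler user-agents (a partial, illustrative list)
AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def blocked_ai_bots(robots_txt: str, url: str = "https://example.com/") -> list:
    """Return the AI crawlers that this robots.txt disallows for the URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [bot for bot in AI_BOTS if not parser.can_fetch(bot, url)]

# Hypothetical robots.txt that blocks OpenAI's crawler but allows others
sample = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
print(blocked_ai_bots(sample))  # → ['GPTBot']
```

Running this kind of check against your own domain shows whether you are among the 73% that are invisible to citation systems by default.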
View study →
Signal Genesys Research · Signal Genesys
179.5M citation records: distribution at scale drives AI citation reach
179.5 million citation records, 6.1 million unique domains, 6 LLM platforms. 88.4% domain citation coverage. Perplexity drives the largest citation volume of the tested platforms.
MR relevance: Direct measurement of earned media distribution → AI citation rates. Validates the Layer 1 → Layer 4 connection in the MR stack.
View study →
Yext Research
17.2M citations analyzed: no single tactic dominates all AI platforms
17.2 million distinct AI citations analyzed in Q4 2025. No single optimization strategy works across ChatGPT, Gemini, Perplexity, and Claude; model-specific citation behavior requires a multi-surface approach.
MR relevance: Validates Layer 4 (Distribution Across Answer Surfaces) — different AI engines cite different sources, requiring a systematic cross-platform approach.
View study →
82% earned media; 5x press release growth
Earned media accounts for 82% of generative AI citations, with press releases seeing a 5x increase since July 2025, driven by higher citation rates in ChatGPT and Gemini.[1]
MR relevance: This validates Machine Relations by showing earned media dominates AI citations, making it essential for brands to optimize for AI engine recommendations and visibility.
View study →
325% citation lift
Earned media distribution across third-party news outlets raises AI citation rates from 8% to 34% across multiple LLMs, an increase of up to 325%.[3]
MR relevance: Demonstrates how earned media distribution amplifies brand discovery in AI responses, core to Machine Relations strategies for entity optimization and AI SEO.
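The lift figure above is a relative change: moving from an 8% to a 34% citation rate is a 325% increase. A quick sketch of that arithmetic:

```python
def relative_lift(before: float, after: float) -> float:
    """Percentage increase of `after` relative to `before`."""
    return (after - before) / before * 100.0

# Citation rate rising from 8% to 34% of distributed stories
print(round(relative_lift(8, 34)))  # → 325
```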
View study →
91% of professionals report using generative AI; 73% for idea generation; 68% for writing/content refinement
91% of PR professionals use generative AI in workflows, with 73% for idea generation and 68% for content refinement, signaling AI's central role in adaptive communications.
MR relevance: This demonstrates PR's shift toward AI-enabled strategies, aligning with Machine Relations' focus on AI SEO and measurement for earning engine citations.
View study →
>95% nonpaid mentions; 27% earned media
Over 95% of links cited by LLMs are nonpaid mentions, with 27% coming directly from earned media, positioning PR ahead of traditional SEO for AEO and reputation in AI search[6].
MR relevance: This reinforces Machine Relations by highlighting earned media's dominance in AI citations and the need for AEO strategies to drive brand discovery over traditional SEO.
View study →
84% recognize GEO; 42% choose GEO
A Fractl survey reveals GEO as the most recognized (84%) and preferred (42%) term among marketers for AI-era brand visibility, outpacing AEO (61% recognition, 14% preference) and AISEO (60% recognition, 16% preference), highlighting ongoing naming fragmentation[3].
MR relevance: This survey validates the Machine Relations thesis by demonstrating widespread recognition of GEO within a fragmented naming landscape, underscoring the need for a unifying discipline focused on earning AI citations and entity optimization.
View study →
The 2026 AEO/GEO Benchmarks Report provides industry benchmarks for AI visibility, urging measurement of citations and alignment of AEO/GEO with SEO strategies[3].
MR relevance: Establishes empirical benchmarks for AI citations as KPIs, directly supporting Machine Relations' focus on earning and measuring AI engine recommendations[3].
View study →
680 million citations analyzed across ChatGPT, Google AI Overviews, and Perplexity from August 2024 through June 2025
Analysis of 680 million AI citations across ChatGPT, Google AI Overviews, and Perplexity reveals that different AI platforms cite fundamentally different sources, with no universal top source for citations—only patterns shaped by intent, platform, industry vertical, and time.
MR relevance: Validates Machine Relations thesis that different models cite different sources, requiring platform-specific citation strategies rather than monolithic AI search optimization.
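The platform-divergence finding can be illustrated with a small sketch: given citation records, the most-cited domain differs by engine. The records below are hypothetical and are not drawn from the study.

```python
from collections import Counter

# Hypothetical citation records: (platform, cited_domain)
records = [
    ("ChatGPT", "wikipedia.org"), ("ChatGPT", "reuters.com"),
    ("ChatGPT", "wikipedia.org"), ("Perplexity", "reddit.com"),
    ("Perplexity", "reddit.com"), ("Perplexity", "reuters.com"),
    ("AI Overviews", "youtube.com"), ("AI Overviews", "youtube.com"),
]

def top_source_by_platform(rows):
    """Most-cited domain per platform, showing that no single
    source dominates across engines."""
    by_platform = {}
    for platform, domain in rows:
        by_platform.setdefault(platform, Counter())[domain] += 1
    return {p: c.most_common(1)[0][0] for p, c in by_platform.items()}

print(top_source_by_platform(records))
# → {'ChatGPT': 'wikipedia.org', 'Perplexity': 'reddit.com', 'AI Overviews': 'youtube.com'}
```

This is the analytical shape behind a platform-specific citation strategy: measure each engine's source preferences separately rather than optimizing for a single aggregate ranking.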
View study →
Muck Rack Research Team · GlobeNewswire
82% of AI citations = earned media (1M+ citation dataset)
82% of all AI-cited links are earned media. 95% unpaid. Press releases grew 5x but still represent only 1% of total AI citations.
MR relevance: Core data source for MR Layer 1. 1M+ citations analyzed across GPT, Gemini, Claude.
View study →
94% of citations from non-paid sources; press releases 5x growth
94% of AI citations originate from non-paid earned media sources, with press release citations growing 5x between July and December 2025 across ChatGPT and Gemini[3].
MR relevance: This supports the Machine Relations thesis that earned media dominates AI recommendations, driving brand visibility through non-paid third-party placements over owned or paid channels.
View study →
82% of AI citations from earned media; 94% from non-paid sources
Analysis of over one million AI citations found that 82% came from earned media sources and 94% from non-paid sources, with AI models relying more heavily on earned media and journalism for brand discovery questions.[2]
MR relevance: Demonstrates at scale that AI engines systematically prioritize earned media citations over owned content, particularly for brand discovery scenarios where users seek category leaders.
View study →
Mahe Chen, Xiaoxuan Wang, Kaiwen Chen, Nick Koudas · arXiv / ACM 2025
Systematic earned media bias confirmed across ChatGPT, Perplexity, Gemini — Chen et al. 2025 (5M+ data points)
AI search engines show 'systematic and overwhelming bias towards Earned media (third-party, authoritative sources) over Brand-owned and Social content.' Researchers conclude the primary GEO strategy is to 'dominate earned media to build AI-perceived authority.'
MR relevance: A GEO research paper independently concludes that earned media dominance is the mechanism for AI citation — which is Machine Relations Layer 1 (Earned Authority). GEO researchers and Machine Relations practitioners converge on the same truth from different starting points.
View study →
Klaudia Jaźwińska · Columbia Journalism Review Tow Center
Only 49% of answers from eight leading AI engines contained any citation; just 31% of those links pointed to the original publisher
Systematic testing of eight generative search tools found that only 49% of answers contained any citation at all, and just 31% of those links pointed to the original publisher, with premium chatbots providing more confidently incorrect answers than free counterparts.
MR relevance: Demonstrates critical gap in AI citation reliability and attribution accuracy, underscoring the need for deliberate Machine Relations strategies to ensure brand visibility and correct attribution in AI-generated answers.
View study →
Aggarwal et al. · SIGKDD 2024
30-40% improvement in AI citation rates from structured, statistic-rich content
Adding verifiable statistics to content improves AI citation rates by 30-40%. Citing credible sources increases probability of being cited. Structure matters more than volume.
MR relevance: Validates Citation Architecture (Layer 3) and the foundational role of credible third-party sources (Layer 1 — Earned Authority) in AI citation behavior.
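As a rough illustration of the statistic-rich, well-cited content finding, a content audit might count explicit statistics and citations before publication. This heuristic is our own sketch, not the methodology from Aggarwal et al.:

```python
import re

def geo_signals(text: str) -> dict:
    """Count explicit percentage statistics and bracketed citations --
    an illustrative heuristic, not the paper's method."""
    stats = re.findall(r"\d+(?:\.\d+)?%", text)
    cites = re.findall(r"\[\d+\]", text)
    return {"statistics": len(stats), "citations": len(cites)}

print(geo_signals("Distribution lifted AI citation rates 239% [3]."))
# → {'statistics': 1, 'citations': 1}
```

Content scoring higher on both counts carries more of the verifiable, attributable material that the study associates with higher AI citation rates.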
View study →