LLMs cite between 2 and 7 domains in a typical response. ChatGPT averages 3.86 sources per answer. Perplexity averages 7.42. Google AI Overviews cast the widest net, with most responses citing 6 to 14 sources. That is the entire playing field. There is no page 2 of ChatGPT results. No position 11. You are either in the answer or you do not exist.
(The industry calls this practice GEO, for Generative Engine Optimization; you will also see AEO, Answer Engine Optimization, LLM SEO, and AI SEO. Different names, same discipline: structuring content so AI engines cite it.)
The concentration is extreme
The distribution of AI citations follows a power law that makes traditional search look democratic by comparison.
An analysis of roughly 36 million AI Overviews by Digital Bloom found that the top 5 domains capture 38% of all citations. The top 10 capture 54%. The top 20 capture 66%. Wikipedia alone accounts for 11.2% of all AI Overview mentions, followed by YouTube at 9.5% and Reddit at 5.8%.
A 13-week study of 230,000 prompts and 100 million citations confirmed the pattern. Before September 2025, Wikipedia and Reddit appeared in roughly 55-60% of ChatGPT responses each, about five times more than the next most-cited domains.
Each engine is searching a different web
The concentration problem is compounded by the fact that these engines do not all search the same index.
ChatGPT retrieves from Bing. A study of 500+ citations found that 87% of ChatGPT's search citations match Bing's top 10 organic results. Only 56% matched Google's results for the same queries. If you have only ever optimized for Google, you may be invisible to ChatGPT's retrieval pipeline entirely.
Google AI Overviews pull from Google's index, but not in the way you would expect. The overlap between AI Overview citations and traditional organic top-10 rankings has dropped from 76% to 38% in under a year, per a study of 863,000 keywords. A BrightEdge analysis puts the overlap even lower at 17% for some query categories. Google's AI is increasingly sourcing from outside its own top results.
Claude retrieves via Brave Search. Research on Claude's citation behavior found an 86.7% overlap between Claude's cited results and Brave's top organic results. If your site does not appear in Brave Search results, Claude will not find it regardless of your Google or Bing rankings.
Perplexity runs its own index alongside partnerships with multiple search providers. It re-crawls aggressively, and half of the content it cites is less than 13 weeks old.
Yext analyzed 6.8 million citations across 1.6 million responses from Gemini, ChatGPT, and Perplexity and found only 11% domain overlap between ChatGPT and Perplexity. That means 89% of citation opportunities are platform-specific. A brand visible on one engine should assume it is invisible on the others until proven otherwise.
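The overlap figure can be made concrete. Here is a minimal sketch of a domain-overlap check between two engines' citation exports; the sample URLs and the intersection-over-union metric are our illustration, not Yext's actual methodology:

```python
from urllib.parse import urlparse

def cited_domains(urls):
    """Reduce a list of cited URLs to a set of domains (www-stripped)."""
    return {urlparse(u).netloc.removeprefix("www.") for u in urls}

def domain_overlap(urls_a, urls_b):
    """Domain overlap between two engines' citations, as intersection over union."""
    a, b = cited_domains(urls_a), cited_domains(urls_b)
    return len(a & b) / len(a | b)

# Illustrative data only, not real citation exports.
chatgpt = ["https://www.wikipedia.org/x", "https://reddit.com/r/a", "https://vendor-a.com/p"]
perplexity = ["https://reddit.com/r/b", "https://vendor-b.com/q", "https://news-site.com/n"]

print(f"domain overlap: {domain_overlap(chatgpt, perplexity):.0%}")  # → domain overlap: 20%
```

Run the same comparison on real exports per engine pair, and a low number is your signal that each platform needs its own audit.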
We see this divergence constantly in CiteGap audits. A D2C wellness brand we assessed recently was cited by Perplexity for 6 out of 8 target queries but appeared in zero ChatGPT responses. Same brand, same content, completely different outcomes by engine. The retrieval index difference meant the content was never even found by ChatGPT's pipeline.
Being cited changes everything
A September 2025 study across 3,119 search terms and 42 organizations quantified what being in the answer is actually worth:
- Brands cited in AI Overviews earn 35% more organic clicks (0.70% CTR vs. 0.52%)
- Cited brands see 91% more paid clicks (7.89% CTR vs. 4.14%)
- Non-cited brands on the same queries suffer a 65% organic CTR decline year-over-year
The gap between "cited" and "not cited" is binary, not incremental.
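The lift percentages above follow directly from the reported CTRs; a quick arithmetic check:

```python
def lift(cited_ctr, non_cited_ctr):
    """Relative CTR lift of cited over non-cited brands, in percent."""
    return (cited_ctr / non_cited_ctr - 1) * 100

print(f"organic lift: {lift(0.70, 0.52):.0f}%")  # → organic lift: 35%
print(f"paid lift: {lift(7.89, 4.14):.0f}%")     # → paid lift: 91%
```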
This is the mention-link gap playing out at scale. AI engines may know your brand exists, but if your content does not meet their citation criteria, the click goes to whoever does.
Citation slot stickiness: why early matters more than perfect
Traditional search gives you time. You can publish content, build backlinks over months, and climb from position 15 to position 3. AI search does not work that way because of a property we call citation slot stickiness.
SE Ranking's research shows that over 60% of cited domains and 80% of cited URLs change between identical queries run at different times. On the surface, that suggests slots are still volatile. But when CiteGap re-audits the same brand after 3-4 weeks, we see a more nuanced pattern: the slots are volatile at the URL level (which specific page gets cited) but increasingly sticky at the domain level (which domains hold the citation positions). Once a domain locks a citation slot for a query, it tends to hold it even as the specific cited URL rotates.
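The URL-versus-domain distinction can be expressed as two retention rates between audit snapshots. A minimal sketch; the metric definition and sample data are our illustration of the idea, not CiteGap's published methodology:

```python
from urllib.parse import urlparse

def retention(prev_urls, curr_urls, key=lambda u: u):
    """Fraction of previously cited items still cited in the re-audit."""
    prev = {key(u) for u in prev_urls}
    curr = {key(u) for u in curr_urls}
    return len(prev & curr) / len(prev) if prev else 0.0

def domain(url):
    return urlparse(url).netloc

# Illustrative snapshots: cited URLs for one query, three weeks apart.
audit_1 = ["https://reviews.example/pricing", "https://brand.example/plans"]
audit_2 = ["https://reviews.example/pricing-2025", "https://brand.example/plans"]

url_stickiness = retention(audit_1, audit_2)             # 0.5 — one URL rotated out
domain_stickiness = retention(audit_1, audit_2, domain)  # 1.0 — both domains held
```

High domain-level retention with low URL-level retention is exactly the "sticky domain, rotating page" pattern described above.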
The mechanism is straightforward. AI engines build retrieval patterns based on which sources consistently provide structured, fact-dense content for a topic cluster. A domain that holds a citation slot accumulates more of the signals that keep it there: the page gets more traffic, more external references, more freshness signals from updates. The switching cost for the AI engine goes up over time. Displacing an entrenched domain is not like outranking someone on Google, where you can brute-force it with backlinks. You have to provide content that is measurably better on the specific signals that engine weights for that specific query type.
We measure this stickiness directly in CiteGap's competitor displacement analysis. For a B2B fintech company, our initial audit showed a third-party review site holding the pricing citation slot across ChatGPT and Perplexity. The re-audit three weeks later showed the same domain still holding the slot, with the same URLs in 4 of 6 query variations. That is high stickiness. The audit identified four product pages with specific content format gaps and gave the content team a targeted restructuring plan: page-level diagnostics showing exactly what the review site had that the brand's pages lacked. The window to displace the entrenched competitor was narrower than the brand expected, and it would have been narrower still if they had waited another quarter.
What the math tells you to do
The 2-7 domain constraint means every query is a zero-sum competition for a handful of slots. Brands that are still ignoring AI search visibility are forfeiting those slots to whoever shows up first.
In a 10-blue-links world, you could rank #8 and still get traffic. In AI search, the gap between "cited" and "not cited" is binary. And because each engine retrieves from a different index, a brand's visibility picture is actually three or more separate problems, not one.
CiteGap audits measure not just who holds each citation slot but how entrenched they are. A competitor with high citation slot stickiness (same domain holding the same slot across re-audits) requires a different displacement strategy than one with low stickiness (rotating domains, no clear incumbent). Knowing the entrenchment level tells you which slots are winnable now and which require a longer content investment to crack.
The mention-link gap is the most common entry point: the AI already recognizes your brand, but the content format gap means the click goes elsewhere. And because stickiness compounds, every month a competitor holds a citation slot is a month their position gets harder to displace. The brands that measure their competitive entrenchment now and act on it have a structurally different outcome than brands that wait for the slots to calcify.
FAQ
How many sources does ChatGPT cite per response? ChatGPT averages 3.86 citations per response, based on an analysis of 230,000 prompts. Perplexity averages 7.42 citations. Google AI Overviews typically cite 6-14 sources. The exact count varies by query complexity, but the range is consistently narrow compared to traditional search's 10 blue links per page.
Is AI search really winner-take-most? Yes. The top 10 cited domains capture 54% of all AI Overview citations, and the top 20 capture 66% (Digital Bloom, 2025). Only 11% of domains are cited across both ChatGPT and Perplexity (Yext). The concentration is more extreme than traditional search, where at least page 2 exists as a fallback.
Does being cited in AI Overviews actually increase traffic? A study of 3,119 keywords across 42 organizations found that brands cited in AI Overviews earn 35% more organic clicks and 91% more paid clicks compared to non-cited brands on the same queries. The effect is binary: you are either benefiting from the citation or losing traffic to it.
Can I optimize for all AI engines at once? Not effectively. Yext's analysis of 6.8 million citations found only 11% domain overlap between ChatGPT and Perplexity. Claude adds another layer of divergence since it retrieves via Brave Search instead of Google, favoring a different source pool entirely. Each engine has different source preferences, retrieval logic, and content format biases. You need to audit and optimize per engine, starting with the one your audience uses most.
Is it too late to start optimizing for AI search? No, but citation slot stickiness is increasing. SE Ranking found that over 60% of cited domains and 80% of cited URLs change between runs of the same query, but CiteGap re-audits show domain-level positions stabilizing. The window to enter the citation rotation is open now. As more brands optimize and positions calcify, displacing entrenched competitors will require more effort. Starting now is meaningfully easier than starting in six months.
CiteGap measures citation slot stickiness across ChatGPT, Google AI, and Claude, showing which competitor positions are entrenched and which slots are still open to displacement. Request a consultation.