
Topical authority in AI search: why 15 deep posts beat 50 thin ones

AI engines do not evaluate your pages one at a time. They evaluate whether your domain has depth on the topic. A site with 15 well-structured pages covering one subject from multiple angles will get cited more often than a site with 50 thin posts spread across unrelated topics. The mechanism behind this is called query fan-out, and it explains why content volume alone does not translate to AI visibility.

(The practice of optimizing for these citations goes by several names: GEO (Generative Engine Optimization), AEO (Answer Engine Optimization), LLM SEO, or AI SEO. This post explains the specific role topical depth plays in earning citations across ChatGPT, Google AI, and Claude.)

How query fan-out works

When someone asks Google AI Mode or ChatGPT a question, the engine does not run a single search. It decomposes the query into 8-12 parallel sub-queries, each targeting a different angle of the topic. Google calls this query fan-out. A question like "best vitamin C serum for oily skin" generates sub-queries covering ingredient safety (niacinamide vs ascorbic acid), concentration levels by skin type, brand comparisons, dermatologist recommendations, and price-per-ml analysis.

Each sub-query retrieves its own candidate sources. The engine synthesizes an answer from all of them and decides which domains to cite.
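The mechanism can be sketched as a toy model. Everything here is illustrative: the sub-query angles, the domain index, and the vote-counting heuristic are invented for demonstration, not Google's actual pipeline. The point it shows is that a domain covering many sub-topics surfaces in many retrieval passes, while a thin domain surfaces in one.

```python
# Toy model of query fan-out: one head query expands into sub-queries,
# each sub-query retrieves matching domains, and domains that surface
# across many retrieval passes accumulate more "citation" weight.
from collections import Counter

def fan_out(query: str) -> list[str]:
    """Return illustrative sub-queries for one head query."""
    angles = ["ingredient safety", "concentration by skin type",
              "brand comparison", "dermatologist recommendations",
              "price per ml"]
    return [f"{query} {angle}" for angle in angles]

def retrieve(sub_query: str, index: dict[str, set[str]]) -> list[str]:
    """Domains whose covered sub-topics match this sub-query."""
    return [domain for domain, topics in index.items()
            if any(topic in sub_query for topic in topics)]

# Hypothetical index: which sub-topics each domain covers.
index = {
    "deep-site.com": {"ingredient safety", "concentration",
                      "brand comparison", "dermatologist", "price"},
    "thin-site.com": {"brand comparison"},
}

votes = Counter()
for sq in fan_out("best vitamin C serum for oily skin"):
    for domain in retrieve(sq, index):
        votes[domain] += 1

print(votes.most_common())
# The deep site matches all five retrieval passes; the thin site matches one.
```

A real retrieval system scores semantic relevance rather than substring matches, but the aggregation logic is the same: coverage across sub-queries compounds into authority.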

This is where topical depth becomes a citation signal. If your site covers the main topic and the sub-topics that fan-out queries generate, you appear in multiple retrieval passes. The AI treats that as evidence of authority. If you only cover the surface-level query, you appear in one pass (maybe) and compete against every other surface-level page.

The data: fan-out coverage and citation odds

A Surfer SEO study of 10,000 keywords and roughly 33,000 fan-out queries measured the relationship between topical coverage and AI Overview citations. The headline finding: pages that covered both the main query and its fan-out sub-queries were 161% more likely to be cited in AI Overviews than pages covering the head term alone.

If you rank for the head term but not for the related sub-queries, you are leaving more than half of potential citations on the table. And 68% of cited pages did not even appear in the top 10 organic results. Traditional SEO rank is not a prerequisite. Topical depth is.

Why depth beats volume

Foglift's Q1 2026 benchmark of 4,217 brands quantified the depth-vs-volume tradeoff: 50 deep, well-structured pages outperform 500 thin pages by 3.2x in AI citation rate. The same study found that brands with comprehensive JSON-LD markup score 23 points higher on average, and pages with FAQ sections are 2.8x more likely to be cited.

The reason thin content fails is structural. A 300-word post with two paragraphs of marketing copy and a CTA gives an AI engine nothing to extract. No statistics to quote, no comparison to reference, no specific claim to verify. When the fan-out sub-query hits that page, the retrieval system scores it low and moves on.

Thin content also fragments your domain's authority signal. Fifty posts that each touch on "skincare routines" but never go deep (no data, no expert references, no FAQ sections) produce 50 shallow signals instead of one strong one. A competitor with 15 pages containing comparison tables, sourced statistics, and structured Q&A looks like the expert.

What "deep" actually means for AI citation

Depth is not word count. A 3,000-word page padded with filler is still thin to an AI engine. What depth means in practice:

Semantic density. Fact-dense throughout, with specific verifiable claims at regular intervals. Not "our products are high quality" but "our serum contains 15% L-ascorbic acid at pH 3.5, compared to the industry average of 10% (dermatology review data)." AI engines can extract and cite the second. The first is invisible.

Sub-topic coverage. Each page should answer the main question and 2-3 related questions fan-out sub-queries would generate. If your page about "vitamin C serums" also addresses ingredient interactions, skin type suitability, and application routines by concern, it matches multiple retrieval passes instead of one.

Structured extractability. FAQ sections, comparison tables, and clear heading hierarchies let AI engines pull specific answers from specific sections. Citations pull disproportionately from the opening sections of a page. If the opening is a brand story instead of a direct answer, the citation goes elsewhere.

Schema markup. Article schema, FAQ schema, and product schema give AI engines machine-readable context about what each section contains. BrightEdge found that sites with author schema are 3x more likely to appear in AI answers.
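As a concrete illustration, here is a minimal FAQPage JSON-LD block of the kind that would go in a page's head. The question and answer text are placeholders; a real implementation should mirror the visible FAQ content on the page exactly.

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What concentration of vitamin C suits oily skin?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Formulations in the 10-15% L-ascorbic acid range are commonly recommended; consult a dermatologist for your skin type."
      }
    }
  ]
}
```

Embedded in a `<script type="application/ld+json">` tag, this tells an AI engine exactly which question a section answers, making the answer extractable during fan-out retrieval.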

What this looks like in practice

We audited a mid-size D2C skincare brand with 47 blog posts across ingredient guides, skin concern routines, and product comparisons. Their ChatGPT citation rate was 9%. The diagnosis: they touched acne care in 12 posts, but each was 400-600 words with no statistics, no sourced claims, and no FAQ sections. The AI engine's fan-out queries for "best face wash for acne-prone skin" generated sub-queries about ingredient safety, pH levels, skin type suitability, dermatologist recommendations, and price comparisons. None of their content answered those sub-queries with extractable depth.

Their competitor, a smaller brand with 14 pages, had built a content cluster around acne care. A pillar page, supported by pages covering ingredient breakdowns, skin type routines, outcome data, and product comparison tables. Each page had sourced dermatological references and structured Q&A. That competitor was cited by ChatGPT, Google AI, and Claude for queries the larger brand's 47 posts could not win.

What we observe in topically authoritative sites

In CiteGap audits, we map which fan-out sub-queries each page covers and where the gaps are. The sites that score highest on topical depth tend to share a recognizable profile.

They concentrate their content rather than spreading it thin. Instead of covering dozens of unrelated topics, they go deep on a few clusters where they have genuine expertise. They cover not just the head query but the sub-queries AI engines generate during fan-out. And they consolidate rather than fragment: the sites earning citations have fewer, more comprehensive pages rather than many thin ones competing with each other for the same citation slots.

The specific pattern varies by industry and by engine. A skincare brand's topical authority looks different from an edtech platform's. The hard part is not knowing that depth matters. It is knowing which sub-queries your content misses, which pages are fragmenting your authority signal, and where the gaps are relative to the competitors currently winning the citations. That diagnosis is page-level and engine-level, not a universal formula.

Structured data and freshness. The 23-point score advantage from comprehensive JSON-LD markup is one of the highest-return technical signals. And pages that go stale lose citation authority even if the underlying topic is evergreen. Topical authority is not a one-time build, especially for topics where data or recommendations change.

FAQ

Does topical authority matter for AI search citations? Yes. A study of 10,000 keywords found that pages covering both the main query and fan-out sub-queries are 161% more likely to be cited in AI Overviews. AI engines use query fan-out to decompose each search into 8-12 sub-queries, and domains with topical depth appear in multiple retrieval passes.

How many pages do I need on a topic to get cited? There is no fixed number, but the pattern from Foglift's 4,217-brand benchmark is clear: 50 deep pages outperform 500 thin ones by 3.2x in citation rate. Quality and coverage of sub-topics matter more than page count. A cluster of 10-20 well-structured pages covering a topic comprehensively is a reasonable starting point.

Is it better to have many short posts or fewer in-depth pages? Fewer deep pages. AI engines need extractable facts, comparison structures, and FAQ sections to cite your content. Short posts without statistics or structured data give retrieval systems nothing to work with. Consolidating thin posts into comprehensive pages with sourced data consistently improves citation rates.

What is query fan-out and why should I care? Query fan-out is the process where AI engines decompose a user's question into 8-12 parallel sub-queries covering different angles. Google AI Mode, ChatGPT, and Claude all use variations of this. If your site covers only the surface-level query, you appear in one retrieval pass. If you cover the sub-topics too, you appear in many, and your citation odds increase 161%.

Does schema markup affect topical authority in AI search? Schema markup does not create topical authority, but it amplifies it. Brands with comprehensive JSON-LD markup score 23 points higher on AI visibility benchmarks (Foglift Q1 2026). Article, FAQ, and Product schema make your content machine-readable, which helps AI engines extract and cite specific sections during fan-out retrieval.


CiteGap maps your fan-out query coverage across ChatGPT, Google AI, and Claude, showing exactly where your topical depth earns citations and where competitors are filling gaps you have not covered. Request a consultation.

Want to know if AI engines cite your brand?

CiteGap audits your visibility across ChatGPT, Google AI, and Claude.

Request a Consultation