Half of all content Perplexity cites was published in the current year. Not the current decade. Not the last few years. This year. A study of 5,000+ URLs across ChatGPT, Perplexity, and AI Overviews found that 65% of AI bot activity targets content from the past 12 months, with Perplexity showing the most extreme recency bias at 50% of citations drawn from the current year alone.
This is a fundamentally different content model from traditional SEO. In SEO, a well-built evergreen page can rank for two to three years without major updates. In AI search, content that once stayed relevant for 24-36 months now loses ground in six to nine months. The practice of adapting to this, known variously as GEO (Generative Engine Optimization), AEO (Answer Engine Optimization), LLM SEO, or AI SEO, requires treating content as a living asset, not a publish-and-forget artifact.
The freshness gap between AI search and traditional SEO
Traditional Google search has always had a mild freshness signal. But AI engines have turned it into a primary filter.
An analysis of 17 million citations across seven AI platforms found that AI-cited content is 25.7% fresher on average than content ranking in Google organic results. The average age of an AI-cited URL is 1,064 days (about 2.9 years). The average age of a Google organic result is 1,432 days (3.9 years). That 368-day gap is the freshness advantage AI engines structurally prefer.
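The 25.7% figure follows directly from those two averages, assuming the study measured the gap relative to the organic baseline. A quick check:

```python
# Average content age, in days, from the 17M-citation analysis above.
ai_cited_age = 1064   # AI-cited URLs (~2.9 years)
organic_age = 1432    # Google organic results (~3.9 years)

gap_days = organic_age - ai_cited_age
freshness_advantage = gap_days / organic_age  # fraction fresher vs organic

print(gap_days)                              # 368
print(round(freshness_advantage * 100, 1))   # 25.7
```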
ChatGPT shows a strong bias of its own, consistently favoring newer content over what appears in Google's top results. Perplexity is even more aggressive, pulling half its citations from the current year.
The one exception is Google AI Overviews, which actually cites content that is 16 days older on average than organic results. Google AI still leans on its traditional index. But ChatGPT, Perplexity, and Claude do not share that bias. If your AI search visibility strategy is built around Google alone, you are missing the engines where freshness matters most.
Why AI engines structurally favor new content
This is not an arbitrary preference. The freshness bias is built into how AI retrieval works at the system level.
Query expansion adds time signals automatically. When someone asks ChatGPT a question, the system breaks it into multiple sub-queries before retrieving sources. Research on ChatGPT's fan-out behavior found that research-oriented sub-queries include year qualifiers (like "2026") at meaningful volume. The system injects freshness filters even when the user didn't ask for them.
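None of the engines publish their fan-out logic, but the behavior described above can be sketched as a toy query expander that injects a year qualifier into research-oriented sub-queries. The function name and heuristics are illustrative assumptions, not ChatGPT's actual implementation:

```python
from datetime import date

def expand_query(query: str) -> list[str]:
    """Toy fan-out: derive sub-queries from one user query and
    inject a freshness qualifier. Purely illustrative; real
    engines use far richer expansion heuristics."""
    year = date.today().year
    return [
        query,                           # original intent
        f"{query} comparison",           # research-oriented variant
        f"{query} {year}",               # explicit year qualifier
        f"best {query} {year} review",   # research variant + year
    ]

# The user never typed a year, yet half the retrieval queries
# now carry one, which filters results toward recent pages.
print(expand_query("crm software"))
```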
Retrieval ranking weights modification dates. Perplexity's retrieval system heavily biases toward content with recent "Last Modified" dates. If your page was last updated in 2024 and a competitor published on the same topic last week, the competitor wins the citation even if your domain authority is higher.
Training data creates recency expectations. AI models are trained on data that includes temporal context. When a user asks about "best CRM software," the model expects current pricing, current features, and current comparisons. A page from 18 months ago with outdated pricing gets skipped because the model can detect the temporal mismatch.
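Taken together, these signals behave like relevance and authority discounted by a recency decay. The formula below is a hypothetical model, not any engine's published ranking function (the 90-day half-life is an assumption echoing the citation window discussed later in this piece), but it reproduces the pattern where a fresher, lower-authority page beats a stale, higher-authority one:

```python
import math

def citation_score(relevance: float, authority: float,
                   days_since_update: int, half_life_days: int = 90) -> float:
    """Hypothetical model: authority-weighted relevance discounted
    by exponential decay on last-modified age. At half_life_days,
    the decay factor is exactly 0.5."""
    decay = math.exp(-math.log(2) * days_since_update / half_life_days)
    return relevance * authority * decay

# High-authority page, stale for ~14 months (425 days)
incumbent = citation_score(relevance=0.9, authority=0.9, days_since_update=425)
# Lower-authority competitor, updated 7 days ago
challenger = citation_score(relevance=0.9, authority=0.5, days_since_update=7)

print(incumbent < challenger)  # True: freshness outweighs the authority edge
```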
The result is what MarTech calls a "built-in decay timer." In traditional SEO, your content decays slowly. In AI search, the decay is measured in weeks.
Platform-by-platform freshness patterns
Each engine treats freshness differently, which is why optimizing for one engine is not enough.
| Engine | Citations from current year | Freshness bias vs Google organic |
|---|---|---|
| Perplexity | 50% | Strongest overall |
| AI Overviews | 44% | Slight preference for older, established content |
| ChatGPT | 31% | Strongly favors newer content over organic results |
Source: Studies of 5,000+ URLs and 17M citations, both cited above.
Perplexity's 50% figure is striking. It means that for every two citations Perplexity makes, one of them points to content published this year. If your last content update was eight months ago, you are competing for the other half of citations, against every other page that also hasn't been updated.
Claude's freshness behavior is less studied than the others, but it retrieves via Brave Search (not Google), and its citation patterns show similar recency preferences.
The industry-level variation matters too. Financial services content shows extreme recency bias, with AI bots almost exclusively targeting content from the past two years. Travel content follows a similar pattern. Energy and education content has a broader window because the underlying topics change more slowly.
The SEO "set and forget" trap
In traditional SEO, evergreen content is a legitimate strategy. A well-built guide to "how mortgage rates work" can rank on page one for years with minimal updates. Top-performing evergreen content holds a top-10 Google ranking for two years or more before experiencing noticeable traffic decline.
That mental model does not transfer to AI search, and the failure mode is invisible.
We audited a multi-city diagnostic lab chain that had comprehensive test information pages ranking well on Google. Some of those pages had not been substantively updated in 14 months. Google still ranked them on page one. But when we ran their target queries through ChatGPT, Google AI, and Claude, those same pages appeared in zero AI responses. A smaller competitor with thinner but recently updated content (published three months prior, with current pricing, updated reference ranges, and new test panel information) was getting cited instead.
The brand's marketing team had no idea. They were tracking Google rankings (strong), checking their brand mentions on ChatGPT occasionally (saw their name, assumed they were fine), and had no systematic way to know that citation slots were shifting to fresher competitor pages week by week. This is what makes freshness decay dangerous: it is a slow leak with no dashboard to show it is happening. By the time you notice the impact, the competitor has held the citation slot long enough that their page becomes the engine's default source.
This is the freshness penalty pattern playing out. The brand had authority but stale content. The competitor had less authority but current information. In traditional SEO, authority wins that matchup. In AI search, freshness wins.
What "updating content" actually means for AI engines
Changing the publish date and adding "Updated for 2026" to the title does not work. AI engines and Google evaluate actual content changes, not metadata manipulation.
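One way to see why date-only changes fail: an engine can fingerprint the substantive body text, so editing metadata leaves the content signature untouched. The hashing approach below is a minimal sketch of the idea, not any engine's actual change-detection method:

```python
import hashlib

def content_fingerprint(body_text: str) -> str:
    """Hash the normalized body text only, ignoring metadata such
    as publish dates. Illustrative; engines use their own methods."""
    normalized = " ".join(body_text.split()).lower()
    return hashlib.sha256(normalized.encode()).hexdigest()

page_v1 = {"date": "2024-03-01", "body": "Our test panel costs $49."}
page_v2 = {"date": "2026-02-01", "body": "Our test panel costs $49."}  # date bump only
page_v3 = {"date": "2026-02-01", "body": "Our test panel now costs $59."}

# A date change alone produces an identical fingerprint...
print(content_fingerprint(page_v1["body"]) == content_fingerprint(page_v2["body"]))  # True
# ...while a substantive update does not.
print(content_fingerprint(page_v1["body"]) == content_fingerprint(page_v3["body"]))  # False
```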
In CiteGap audits, we flag which pages have fallen outside the citation window and what specifically has gone stale. The interventions that move citation rates are not generic "update your content" advice. They are page-specific: this page cites a 2024 study when a 2026 version exists; that comparison page carries pricing data from 18 months ago while a competitor published current pricing last month. Each staleness signal is different, and each requires a different fix.
The difference between a productive refresh and busywork is knowing which pages are losing citation slots to fresher competitors and what on each page triggered the loss. CiteGap's content comparison shows exactly what the winning competitor page has that yours does not, so the refresh targets the actual gap rather than applying blanket updates across the site.
Why "refresh quarterly" is the wrong framing
The roughly 90-day citation window on competitive topics is real. But treating it as a calendar ("refresh everything every quarter") creates two problems: you waste effort updating pages that are not competing for citation slots, and you miss pages where a competitor just published a fresher alternative last week.
The hard question is not "how often should I refresh?" It is "which of my pages are losing citation slots to fresher competitors right now, what specifically triggered each loss, and will my planned changes actually recover the citation or just change words?"
These are diagnostic questions, not content strategy questions. They require comparing your page against the competitor page that is currently winning the citation, across each engine independently. A page losing its ChatGPT citation to a competitor with more current pricing data needs different work than a page losing its Perplexity citation to a competitor with a newer publication date. In CiteGap audits, we surface exactly which pages have fallen outside the citation window, which competitor URLs replaced them, and what content differences the engine is weighting. That turns freshness from a vague quarterly mandate into a prioritized list of specific interventions with measurable before/after citation rates.
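The triage described above can be sketched as a simple prioritization: only pages currently losing a citation slot qualify, ranked by how much fresher the winning competitor page is. The field names and data shape here are hypothetical, for illustration only:

```python
def refresh_priority(pages: list[dict]) -> list[dict]:
    """Rank pages for refresh work. Pages not losing a citation
    slot are excluded; the rest are ordered by freshness deficit
    (how much staler your page is than the winning competitor).
    Field names are hypothetical, not a real audit schema."""
    losing = [p for p in pages if p["citation_lost_to"] is not None]

    def freshness_deficit(p: dict) -> int:
        return p["last_updated_days_ago"] - p["competitor_updated_days_ago"]

    return sorted(losing, key=freshness_deficit, reverse=True)

pages = [
    {"url": "/pricing", "citation_lost_to": "/rival-pricing",
     "last_updated_days_ago": 540, "competitor_updated_days_ago": 7},
    {"url": "/about", "citation_lost_to": None,  # not competing for a slot
     "last_updated_days_ago": 900, "competitor_updated_days_ago": 0},
    {"url": "/guide", "citation_lost_to": "/rival-guide",
     "last_updated_days_ago": 120, "competitor_updated_days_ago": 30},
]

for p in refresh_priority(pages):
    print(p["url"])  # /pricing first: largest freshness deficit
```

Note that /about never enters the queue despite being the stalest page on the site; age alone is not the trigger, losing a slot is.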
The mention-link gap adds another layer. Some pages are still mentioned by name but the link now points to a fresher competitor page. The brand's name is in the AI response, so a spot-check looks fine. But the traffic goes elsewhere. This is the most common freshness failure mode we see, and it is invisible without systematic tracking.
FAQ
How often should I update content for AI search visibility? The 90-day window matters, but the cadence depends on which pages are actively losing citation slots and what competitors are publishing. A study of 5,000+ URLs found 65% of AI bot activity targets content from the past 12 months, with Perplexity pulling 50% of its citations from the current year alone. Blanket quarterly refreshes waste effort on pages that are not competing for citations.
Does changing the publish date without updating content work? No. AI engines and Google both evaluate actual content changes, not metadata dates. Changing only the date without substantive updates produces no freshness benefit and can harm trust if the content still references outdated information.
Is content freshness more important than domain authority for AI citations? In many cases, yes. An analysis of 17 million citations found AI engines cite content 25.7% fresher on average than Google organic results. A thinner but current page frequently beats a comprehensive but stale one, especially on ChatGPT and Perplexity.
Which AI engine cares most about content freshness? Perplexity shows the strongest freshness bias, with 50% of citations from the current year. ChatGPT follows with a strong preference for recently updated content. Google AI Overviews is the least freshness-sensitive, actually preferring slightly older established content.
Does this mean evergreen content is dead? Not dead, but it needs a different maintenance model. In traditional SEO, evergreen content can rank for two to three years without updates. In AI search, the effective shelf life is closer to 90 days for competitive topics. The concept can be evergreen, but the data points, comparisons, and examples need to stay current. The challenge is knowing which specific elements have gone stale and whether your refresh actually recovered the citation.
CiteGap's audit shows which of your pages have lost citation slots to fresher competitor content, what specifically went stale on each page, and what the winning competitor page has that yours does not. Request a consultation.