Researchers fooled AI models with fake dates to boost visibility

Adding fake publication dates to online content can dramatically boost its visibility across leading AI models, a team from Waseda University discovered. This seems to confirm that tools like ChatGPT systematically favor newer content over older, equally relevant material.
Why we care. AI models seem to reward timestamps more than quality. That means your older high-quality content could vanish from AI search results unless it’s regularly updated – apparently, regardless of whether those updates are substantial or artificial.
How they did it. Researchers added fake publication dates to passages from standardized test collections with no other changes. Then they asked seven major AI models – including GPT-4o, GPT-3.5, LLaMA-3, and Qwen-2.5 – to rank the results.
Every model preferred the newer-dated text.
- Top-10 results shifted 1–5 years newer on average.
- Individual passages jumped up to 95 ranking positions.
- 1 in 4 relevance decisions flipped based solely on the date.
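Here is a minimal sketch of that setup in Python: two copies of the same passage, differing only in a prepended date, put to an LLM judge. The query, passage, prompt wording, and model choice are our own illustration, not the paper's code, and it assumes the openai client with an API key in your environment:

```python
# Illustrative date-injection test -- not the researchers' actual code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUERY = "health benefits of regular exercise"
PASSAGE = "Regular exercise lowers the risk of cardiovascular disease and improves mood."

def with_date(passage: str, date: str) -> str:
    """Prepend a publication date -- the only change made to the text."""
    return f"Published: {date}\n{passage}"

# Identical passages; only the claimed publication date differs.
passage_a = with_date(PASSAGE, "2016-03-01")
passage_b = with_date(PASSAGE, "2024-03-01")

prompt = (
    f"Query: {QUERY}\n\n"
    f"Passage A:\n{passage_a}\n\n"
    f"Passage B:\n{passage_b}\n\n"
    "Which passage is more relevant to the query? Answer with 'A' or 'B' only."
)

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)  # a recency-biased judge tends to say 'B'
```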
The “seesaw effect.” Across all models, top-ranked content skewed younger, while older material systematically sank:
- Ranks 1–10: 0.8–4.8 years fresher.
- Ranks 61–100: up to 2 years older.
Even highly authoritative older sources – academic papers, medical research, or detailed guides – lost visibility to more recent, often less credible content.
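To make the seesaw concrete, here is a hypothetical analysis sketch: compare the mean injected publication year per rank bucket before and after the model reranks. The file name and column names are assumptions for illustration, not the paper's data:

```python
# Hypothetical seesaw measurement: does the top of the ranking get "younger"
# while the bottom gets "older" after the LLM reranks?
import pandas as pd

# Assumed columns: passage_id, year (injected date), rank_before, rank_after
df = pd.read_csv("rerank_results.csv")

def mean_year(frame: pd.DataFrame, rank_col: str, lo: int, hi: int) -> float:
    """Mean injected publication year of passages ranked between lo and hi."""
    return frame.loc[frame[rank_col].between(lo, hi), "year"].mean()

for lo, hi in [(1, 10), (61, 100)]:
    shift = mean_year(df, "rank_after", lo, hi) - mean_year(df, "rank_before", lo, hi)
    print(f"Ranks {lo}-{hi}: mean year shift = {shift:+.1f}")  # positive = fresher
```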
Bias. Here are the models that fell for it (and those that didn’t):
- Most biased: Meta’s LLaMA-3-8B, which showed 25% reversal rates and nearly 5-year shifts.
- Least biased: Alibaba’s Qwen-2.5-72B, with only 8% reversals and minimal year shifts.
- OpenAI’s GPT-4o and GPT-4 fell in the middle, showing measurable but smaller recency bias.
The backstory. Earlier this year, independent researcher Metehan Yesilyurt found the setting “use_freshness_scoring_profile: true” in ChatGPT’s configuration files – evidence that OpenAI’s reranking system explicitly favors recent content.
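OpenAI hasn't documented what that flag actually does, so the mechanics are speculation. As a generic illustration only, a freshness scoring profile could blend an exponential age decay into a relevance score – which is enough to reproduce the behavior the study measured. Every weight below is an arbitrary assumption:

```python
# Speculative illustration of a freshness-weighted rerank score.
# Not OpenAI's implementation -- the weights and half-life are made up.
from datetime import date

def freshness_score(relevance: float, published: date,
                    today: date | None = None,
                    half_life_days: float = 365.0) -> float:
    """Blend relevance with an age decay so newer documents score higher."""
    today = today or date.today()
    age_days = (today - published).days
    decay = 0.5 ** (age_days / half_life_days)  # halves every half_life_days
    return relevance * (0.7 + 0.3 * decay)      # 0.7/0.3 split is arbitrary

# Two equally relevant documents; only the claimed age differs.
print(freshness_score(0.9, date(2024, 6, 1), today=date(2025, 6, 1)))  # ~0.765
print(freshness_score(0.9, date(2016, 6, 1), today=date(2025, 6, 1)))  # ~0.631
```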
What’s next. Yesilyurt warned of a looming “temporal arms race”:
- Publishers can boost rankings by faking “Updated for 2025” labels.
- AI systems will respond by detecting superficial edits (a toy version of such a check follows this list).
- Ultimately, the bias could reward recency over quality.
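As a toy version of that detection step – our illustration, not any known implementation – a system could fingerprint the date-stripped text: if two versions hash identically once dates are removed, the "update" was cosmetic:

```python
# Toy check for superficial "Updated for 2025"-style edits:
# hash the content with date tokens stripped and compare versions.
import hashlib
import re

DATE_RE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b|\b(?:19|20)\d{2}\b")

def content_fingerprint(text: str) -> str:
    """SHA-256 of the text with ISO-date and year tokens removed."""
    stripped = DATE_RE.sub("", text)
    return hashlib.sha256(stripped.encode()).hexdigest()

v1 = "Our complete guide to on-page SEO. Updated for 2023."
v2 = "Our complete guide to on-page SEO. Updated for 2025."

# Identical fingerprints => only the dates changed.
print(content_fingerprint(v1) == content_fingerprint(v2))  # True
```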
What they’re saying. In response to the report, Chris Long, co-founder at Nectiv, wrote on LinkedIn:
- “What I will say is that freshness updates are probably one of the more scalable on-page improvements you can make. For years we’ve known it’s helped with traditional search. Now we can see that updating content likely impacts AI visibility as well.”
Rich Tatum, fractional SEO and AI solutions architect at Edgy Labs, said on LinkedIn:
- “Freshness and recency make sense as a relevance signal for LLM models, and an archive of sources with high trust (academic papers) would logically make ‘naive’ LLMs treat such freshness signals there as highly salient and trustworthy.
- Sadly our profession will naturally abuse those signals until future LLMs have that naïveté trained out of them. And those standard freshness signals will become mere noise.”
Which spurred this interesting reply from Rand Fishkin, cofounder of SparkToro:
- “I don’t think it’s necessarily sad to abuse signals that LLMs use. I used to be very anti-spam against Google, and then I saw how Google abused every manipulative, political thing in this world to gain and hold monopoly power. The large language model AI tool providers feel no different, and I see no reason why we should adopt a set of arbitrary ethics simply to acquiesce to their whims.
- What have the big tech companies and AI tool providers done that makes them deserving of respect for their systems? I cannot name a thing.”
Bottom line. In AI search, fresh beats factual – at least for now. If your content isn’t new, it’s already invisible.
The report. I Found It in the Code, Science Proved It in the Lab: The Recency Bias That’s Reshaping AI Search by Metehan Yesilyurt