Your AI citations are decaying.
We study what survives.

Citation rot is real, and it eats at your bottom line. Dozens of tools can tell you whether you're being cited. Only we can tell you why your content persists and what to change when it doesn't.

Quoted is the only AEO research company studying what makes citations persist across all major AI engines. The result is a prescriptive playbook specific to your company, your vertical, and your buyers.

Now accepting partners for our In-House AEO Research beta program. Limited spots.

~60% of AI citations turn over every month.

SEO rankings compounded—the longer you ranked, the stronger you got. AI citations decay. They rot. And they rot because four forces are moving at once:

01

The model gets updated

GPT-5.2 cited you. GPT-5.3 doesn't. New version, new preferences, no warning. Every update reshuffles which sources get referenced—and updates ship constantly.

02

The search index changes

What's in ChatGPT's search index this week is not what was there last week. Pages get added, removed, and re-ranked behind the scenes—independent of the model itself.

03

A competitor publishes something better

Someone publishes a more useful page on your topic and takes your slot. Unlike SEO, there's no gradual decline—you're cited or you're not. One better page can displace you overnight.

04

People ask different questions

The same buyer phrases their question differently this month and gets routed to different sources. Small changes in wording change which content gets surfaced.

The critical problem: you can't tell which of these caused your drop.

Without knowing the cause, you can't fix it. Today you call your SEO agency, who shrugs. You check a visibility tool, which shows you the drop but not the why or the what-to-do.

[Illustrative chart: citation turnover across AI engines over 12 weeks, showing the share of original citations still present versus new citations replacing them.]

Visibility is becoming commoditized. Prescription is the gap.

At least five platforms can answer "are you being cited?" That's table stakes. The question that actually matters—why content persists and what to change when it doesn't—has no competitor with a published methodology. That's where Quoted operates.

Capability                                Visibility platforms   Content agencies   Quoted
Track whether you're cited                Yes                    No                 Yes
Explain why citations change              No                     No                 Yes
Longitudinal persistence data             No                     No                 Yes
Prescriptive playbook (what to change)    No                     Generic            Data-backed
Content feature analysis                  No                     Intuition          Statistical
Published research methodology            No                     No                 Yes
Intelligence compounds over time          No                     No                 Yes

Visibility platforms (Profound, Scrunch, BrightEdge Generative AI features) show you the drop. Content agencies rewrite your pages based on best practices. Quoted runs the longitudinal research that tells you why content persists and what specifically to change—backed by data, not intuition.

Longitudinal research, not one-time audits

We treat AI citation optimization as an ongoing research problem, not a project with an end date. Here's why that matters:

1

We study your citation landscape continuously

Every week, we run a defined set of prompts—the questions your buyers are actually asking AI engines—across ChatGPT, Perplexity, Google AI Overviews, Claude, and Gemini. We record which URLs get cited, in what position, and track how that changes over time.
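As an illustration, the week-over-week turnover behind a figure like "~60% of citations turn over monthly" can be computed from simple weekly snapshots. Everything below—the snapshot shape, the URLs, and the `turnover` helper—is a hypothetical sketch, not Quoted's actual pipeline:

```python
# Hypothetical weekly snapshots: {week: {(engine, prompt): [cited URLs, in order]}}.
# In practice these would come from running each prompt against each engine.
snapshots = {
    1: {("chatgpt", "best siem tools"): ["a.com/x", "b.com/y", "c.com/z"]},
    2: {("chatgpt", "best siem tools"): ["a.com/x", "d.com/w", "e.com/v"]},
}

def cited(week):
    """Flatten one week's snapshot into a set of (engine, prompt, url) citations."""
    return {
        (engine, prompt, url)
        for (engine, prompt), urls in snapshots[week].items()
        for url in urls
    }

def turnover(week_a, week_b):
    """Fraction of week_a's citations no longer present in week_b."""
    before, after = cited(week_a), cited(week_b)
    if not before:
        return 0.0
    return len(before - after) / len(before)

print(turnover(1, 2))  # roughly 0.67: two of three week-1 citations dropped
```

Recording position alongside each URL (as the tracking described above does) would let the same structure distinguish a citation that vanished from one that merely slipped down the list.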

2

The data compounds into intelligence

After 4 weeks, you see trends. After 12 weeks, you see patterns. After 6 months, you have a proprietary dataset that tells you exactly which content characteristics persist in citations and which decay. This isn't a report someone writes from intuition—it's statistical analysis from thousands of tracked data points.

3

Your playbook gets sharper every month

Early recommendations are directional—"content with these characteristics tends to persist." By month three, they're specific—"for these 15 prompts your buyers ask most, here's the exact content gap between you and the brands getting cited." By month six, the playbook is a competitive weapon.

Project Glassbox

A longitudinal study of what makes content persist in AI citations

AI answer engines are black boxes. Everyone can see what gets cited. We study why it gets cited—and why it stops. Project Glassbox is the only ongoing research initiative focused on the structural and contextual factors that determine citation persistence.

Every week, we track which content gets cited, analyze dozens of structural features across every cited URL, and measure which characteristics correlate with longevity. The result is a growing dataset of citation contributors—the specific content attributes that make AI engines choose one source over another.

  • 5 AI engines tracked weekly
  • 40+ content features analyzed per URL
  • 2 industry verticals active
  • Ongoing weekly data collection since launch

Most companies guess at what AI engines want. We've been measuring the contributors—the content features that actually predict whether a citation persists or decays. The findings from Glassbox inform every client playbook we build.

We analyze 40+ content features to find what makes citations stick

Each week we scrape every cited URL, score it across dozens of structural features, and measure which of those features correlate with persistence over time. Here's what that research produces:

  • Schema Markup (2.3x): URLs with FAQ schema showed 2.3x citation persistence in cybersecurity prompts over 12 weeks.
  • Source Attribution (+67%): Pages citing primary data sources showed 67% higher citation persistence than those without attribution.
  • Recency Signal (−41%): Pages without a visible publish or update date lost citations 41% faster across all engines tracked.
  • Entity Clarity (3.1x): Pages that defined key terms within the first 200 words persisted 3.1x longer in comparison prompts.
  • Engine Disagreement (38%): Only 38% of cited URLs appeared in more than one engine for the same prompt. Engines disagree more than they agree.
  • Answer-First Structure (+72%): Pages leading with a direct answer in the first paragraph showed 72% higher initial citation rates across all engines.
  • Competitor Displacement (7 days): Average time for a new competitor page to fully displace an existing citation, once it enters the search index.
  • Citation Decay Rate (~60%): Average monthly citation turnover across all tracked prompts and engines. The baseline that drives everything we do.
Read Our Full Methodology →

Illustrative findings. Actual client findings are specific to their vertical and competitive landscape.
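To make a lift figure like "2.3x" concrete: it can be read as the ratio of mean citation lifetime for pages with a given feature versus pages without it. The records and `persistence_lift` helper below are a hypothetical sketch with made-up numbers, not Glassbox data:

```python
# Hypothetical per-URL observations: how many weeks a citation persisted, and
# whether the page carried a given feature (e.g. FAQ schema). Illustrative only.
observations = [
    {"url": "a.com/x", "has_faq_schema": True,  "weeks_persisted": 10},
    {"url": "b.com/y", "has_faq_schema": True,  "weeks_persisted": 8},
    {"url": "c.com/z", "has_faq_schema": False, "weeks_persisted": 4},
    {"url": "d.com/w", "has_faq_schema": False, "weeks_persisted": 2},
]

def persistence_lift(obs, feature):
    """Mean persistence of pages with the feature divided by those without it."""
    with_f = [o["weeks_persisted"] for o in obs if o[feature]]
    without = [o["weeks_persisted"] for o in obs if not o[feature]]
    if not with_f or not without:
        return None  # can't compare if either group is empty
    return (sum(with_f) / len(with_f)) / (sum(without) / len(without))

print(persistence_lift(observations, "has_faq_schema"))  # 3.0, i.e. a "3.0x" lift
```

A real analysis would control for confounders (vertical, prompt type, domain authority) before attributing the lift to the feature itself, which is why the sample sizes and segmentation behind each published figure matter.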

A research dashboard and a strategic playbook—both improving every week

Live citation intelligence

A dashboard showing your citation landscape in real time:

  • Which of your URLs are currently being cited, by which engines, for which prompts
  • Citation persistence trends—what's holding, what's fading, what's new
  • Competitor citation tracking—who else shows up when your buyers ask AI
  • Content feature analysis—the structural patterns that correlate with citation longevity in your vertical

Updated weekly. Accessible anytime.

Periodic playbook reports

Monthly or quarterly strategic reports with deeper analysis and specific recommendations:

  • Which content gaps are costing you citations on high-value prompts
  • What your cited competitors are doing structurally that you aren't
  • Priority list of content to create or restructure, ranked by citation impact
  • Longitudinal trends—how your citation footprint is evolving over time

Written for marketing leaders, not data scientists.

Now Accepting Partners

In-House AEO Research Beta Program

Launch your own in-house citation research study—powered by Quoted's infrastructure and methodology. A six-month research engagement designed for content teams and agencies who want to be first movers in AEO intelligence.

Research infrastructure

  • Up to 100 tracked prompts across your buyer journey
  • Weekly citation tracking across 5 AI engines
  • Third-party verified live citation research
  • Full access to raw data exports
  • Live dashboard with citation persistence and competitor tracking

Strategic deliverables

  • Two quarterly deep-dive reports with longitudinal analysis
  • Voice and content structure guidelines informed by citation data
  • Content feature analysis—what persists and why
  • Prioritized content recommendations ranked by citation impact
  • Ongoing prompt refinement as your market evolves

Six-month commitment. Longitudinal research requires time to produce statistically meaningful findings. By month three, your playbook is data-backed. By month six, it's a proprietary competitive asset.

The best time to start tracking was six months ago.
The second best time is now.

Every week of data collection makes your playbook more precise. The companies that start tracking AI citations today will have six months of longitudinal intelligence by the time their competitors realize they need it.

This isn't a tool you try for a month and cancel. It's an in-house AEO research program that gets more valuable the longer it runs.