Citations per answer
The average number of distinct source citations each AI search engine includes per answer. 28-day rolling window, updated daily.
| Rank | AI model | Average citations per answer |
|---|---|---|
| 1 | Google AI Mode | 16.73 |
| 2 | Google AI Overviews | 12.87 |
| 3 | ChatGPT | 11.49 |
| 4 | Gemini | 5.66 |
| 5 | Perplexity | 2.71 |
The density of citations in an AI answer shapes the game for content marketers. More citations per answer means more "slots" per query, and a better chance of being part of the answer. Google AI Mode leads this ranking by a wide margin, while Perplexity trails with fewer than three citations per answer; the gap between Google AI Overviews and ChatGPT is narrowing.
Built on the LLM Pulse dataset.
For every answer we capture, we count the distinct source citations it contains. The denominator is answers with at least one citation. The ranking shows the per-model average across a 28-day rolling window, rebuilt daily.
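The aggregation above can be sketched in a few lines. This is a minimal illustration, not the LLM Pulse pipeline itself; the record fields (`model`, `captured_at`, `citations`) are hypothetical:

```python
from collections import defaultdict
from datetime import datetime, timedelta, timezone

def citations_per_answer(answers, window_days=28, now=None):
    """Average distinct citations per answer, per model, over a rolling window.

    `answers` is an iterable of dicts with hypothetical fields:
    {"model": str, "captured_at": datetime, "citations": [url, ...]}.
    Answers with zero citations are excluded from the denominator.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=window_days)
    totals = defaultdict(lambda: [0, 0])  # model -> [citation_sum, answer_count]
    for a in answers:
        if a["captured_at"] < cutoff:
            continue  # outside the rolling window
        distinct = set(a["citations"])  # count each source once
        if not distinct:
            continue  # denominator: answers with at least one citation
        totals[a["model"]][0] += len(distinct)
        totals[a["model"]][1] += 1
    return {m: round(s / n, 2) for m, (s, n) in totals.items()}
```

Rebuilding daily then just means re-running this over the latest 28 days of captured answers.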
More citations per answer means more visibility opportunities per query, and a more competitive SERP-equivalent.
If your content is earning citations on ChatGPT but not on Perplexity, this ranking is one of the reasons: Perplexity simply offers far fewer slots per answer.
Powered by the LLM Pulse dataset
This page is the public tip of the LLM Pulse iceberg. Internally we track millions of AI answers every week across every major AI search engine, rolled up to the citation, brand-mention, and sentiment level. Point it at your own domain in under a minute.