Query fan-out

How much do AI search engines fan out?

How many related sub-queries each AI search engine fans out to per answer.

Updated Apr 18, 2026

Average fan-out per answer — last 90 days


Average number of fan-out sub-queries per AI answer. 28-day rolling window.

AI models ranked by query fan-out

The average number of fan-out sub-queries each model generates per answer.

Rank  AI model  Average fan-out queries per answer
1     ChatGPT   1.01

What this report covers

Query fan-out is how modern AI search engines — especially ChatGPT and Google AI Mode — turn a single user prompt into a series of sub-queries that run in parallel. The more a model fans out, the more opportunities your content has to appear in its answers.

Built on the LLM Pulse dataset.

How we measure this

  • For every answer that includes fan-out sub-queries (captured via the model's own reasoning trace), we record the count.

  • The ranking shows each model's average count over a 28-day rolling window, rebuilt daily.
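The measurement above can be sketched in a few lines. This is a minimal illustration, not the production pipeline: the `records` format (model name, answer date, fan-out count) is a hypothetical flat export of the underlying dataset.

```python
from datetime import date, timedelta
from collections import defaultdict

def average_fanout(records, window_end, window_days=28):
    """Average fan-out sub-queries per answer, per model, over a
    rolling window ending at `window_end`.

    `records` is an iterable of (model, answer_date, fanout_count)
    tuples -- a hypothetical flat export, not the real schema.
    """
    window_start = window_end - timedelta(days=window_days)
    totals = defaultdict(lambda: [0, 0])  # model -> [sum of counts, answers]
    for model, answer_date, fanout in records:
        # Only answers that actually include fan-out sub-queries count.
        if window_start < answer_date <= window_end and fanout > 0:
            totals[model][0] += fanout
            totals[model][1] += 1
    return {model: total / n for model, (total, n) in totals.items()}
```

Re-running this daily with `window_end` set to the current date gives the "rebuilt daily" rolling average.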

Why this matters

If a model fans out to 6 sub-queries per prompt, your optimisation target is no longer "rank for this one query" — it is "rank for the half-dozen sub-queries that prompt generates".

LLM Pulse surfaces the exact fan-out queries for every prompt you track, so you can optimise for the full intent tree — not just the surface query.

Powered by the LLM Pulse dataset

The industry's leading AI citation dataset — pointed at your brand

This page is the public tip of the LLM Pulse iceberg. Internally we track millions of AI answers every week across every major AI search engine, broken down by citation, brand mention and sentiment. Point it at your own domain in under a minute.

  • Millions of AI answers analyzed every week
  • Five AI search engines tracked — ChatGPT, Perplexity, Gemini, AI Mode, AI Overviews
  • The deepest public dataset of AI citations anywhere