A user types: 'What's the best CRM for a fintech startup?' ChatGPT doesn't just answer that question. It expands it into 8-15 related sub-queries, runs all of them, synthesises the results, and then answers. The brands that appear across those sub-queries win the answer.
What Actually Happens When You Ask AI a Question
The response you see in ChatGPT, Perplexity, or Gemini is the end product of a process most people never think about. Before composing an answer, AI models with web access — which now includes all the major platforms — generate a set of background queries to retrieve current, relevant information.
This expansion process is called a query fanout. It's the AI equivalent of a researcher who, before answering a question, pulls out twelve related reference books rather than just one.
The original prompt — 'What's the best CRM for a fintech startup?' — might expand into queries like:
- best CRM software fintech companies 2026
- CRM tools for financial services startups
- CRM comparison fintech HubSpot Salesforce alternatives
- CRM with compliance features financial services
- lightweight CRM early stage fintech
- CRM reviews fintech founders Reddit
The AI retrieves sources for each of these queries, synthesises the results, and builds its answer from the combined information. A brand that appears consistently across multiple fanout queries — not just the original prompt — has a dramatically higher probability of being cited in the final answer.
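The relationship between fanout coverage and citation odds can be sketched as a simple tally. This is an illustrative sketch only: the sub-queries and brand names below are hypothetical example data, not real retrieval results.

```python
from collections import Counter

# Hypothetical retrieval results: for each fanout sub-query, the brands
# that appeared in its retrieved sources (illustrative data, not real).
fanout_results = {
    "best CRM software fintech companies 2026": ["HubSpot", "Salesforce", "Attio"],
    "CRM tools for financial services startups": ["HubSpot", "Attio", "Pipedrive"],
    "CRM with compliance features financial services": ["Salesforce", "HubSpot"],
    "lightweight CRM early stage fintech": ["Attio", "Pipedrive", "HubSpot"],
}

# A brand's chance of being cited in the final answer rises with the
# number of distinct fanout queries it shows up in.
coverage = Counter(
    brand for brands in fanout_results.values() for brand in set(brands)
)

for brand, hits in coverage.most_common():
    print(f"{brand}: appears in {hits}/{len(fanout_results)} fanout queries")
```

A brand that surfaces in all four sub-queries dominates the tally even if it was never the strongest match for the original prompt alone.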
Why This Matters More Than Traditional Keyword Research
Traditional keyword research tells you what people type into Google. Query fanout analysis tells you what questions AI is running on their behalf — the background layer of queries that users never see and traditional tools never capture.
These two sets of queries are often very different. A user might type a 5-word phrase into ChatGPT. The fanout generates 10 specific, longer-tail queries across multiple related topics. If you're optimising only for the original prompt, you're missing the 10 queries that actually determine whether you get cited.
How Fanout Queries Differ by Platform
Each AI platform has its own fanout behaviour. Perplexity is aggressive — it generates more sub-queries and surfaces more sources. ChatGPT with browsing is more selective — it generates fewer queries but weights source authority more heavily. Gemini incorporates Google's index and tends to favour content that already ranks well in traditional search.
A word-level diff comparison across platforms — showing exactly which words each AI adds or removes from your prompt when expanding it — reveals these platform-specific patterns and shows you precisely where your content needs to be positioned for each engine.
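A word-level diff of this kind can be approximated with Python's standard `difflib`, comparing the original prompt against one expanded sub-query. The prompt and fanout query below are taken from the example earlier in this article; the diff logic is a minimal sketch, not Hema's implementation.

```python
import difflib

prompt = "best CRM for a fintech startup".split()
fanout = "best CRM software fintech companies 2026".split()

# ndiff marks each word as kept ("  "), added ("+ "), or dropped ("- ")
# when moving from the original prompt to the expanded query.
diff = list(difflib.ndiff(prompt, fanout))

added = [w[2:] for w in diff if w.startswith("+ ")]
removed = [w[2:] for w in diff if w.startswith("- ")]
print("added:", added)
print("removed:", removed)
```

Running the same comparison per platform exposes the patterns described above, e.g. which engines consistently append a year or swap vague qualifiers for concrete category terms.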
How to Use This in Your Content Strategy
The practical application is straightforward:
- For any important category query, map out the likely fanout — what related questions would an AI generate when trying to answer it?
- Create content that directly answers the fanout queries, not just the original prompt
- Identify which fanout queries your competitors are currently being cited for that you are not — these are your highest-value content gaps
- Structure each piece of content around a single, specific question so the AI can extract it cleanly when it runs that fanout sub-query
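The competitor-gap step above is a set difference. The sketch below assumes you already have, per brand, the set of fanout queries it is cited for; the query strings are illustrative examples, not real tracking data.

```python
# Hypothetical citation sets: which fanout queries each brand is
# currently cited for (in practice, gathered from tracked AI responses).
competitor_citations = {
    "CRM tools for financial services startups",
    "CRM with compliance features financial services",
    "lightweight CRM early stage fintech",
}
our_citations = {
    "CRM tools for financial services startups",
}

# Queries a competitor is cited for that you are not: these are the
# highest-value content gaps to target first.
content_gaps = sorted(competitor_citations - our_citations)
for query in content_gaps:
    print(query)
```

Each gap then maps directly to the final step in the list: one piece of content, structured around that one specific question.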
Hema's Query Fanouts feature shows you the exact word-level diff for every prompt you track — how each AI platform expands your query, what words it adds, what it removes, and how the fanout varies platform by platform. It's the only tool that makes this layer visible.