

Fanout queries

Fanout queries are automatically generated variations of a base prompt used to measure how robustly LLMs mention a brand across phrasings.
  • One idea, many phrasings—so you are not fooled by a lucky wording.

  • Essential before trusting a single headline LLM-Score.

Definition

Fanout queries are controlled paraphrases and angle variations around the same user intent—"best CRM for SMB" vs "CRM tools for small teams" vs "affordable CRM with invoicing." They exist because models are sensitive to lexical choice. A brand that appears in one phrasing but vanishes in another has brittle visibility, not a durable win.

How it's computed

Getllmspy generates a fanout set from your base organic prompts or category templates, then runs each variant across the same set of models. Aggregation looks at mention stability, rank-like placement when lists appear, and sentiment drift across variants.
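A minimal sketch of how such aggregation might work, assuming a simple case-insensitive substring match counts as a mention; `mention_rate`, `stability`, and the sample data are illustrative, not Getllmspy's actual pipeline:

```python
from statistics import mean

def mention_rate(answers: list[str], brand: str) -> float:
    """Fraction of answers that mention the brand (case-insensitive)."""
    return sum(brand.lower() in a.lower() for a in answers) / len(answers)

def stability(rates: list[float]) -> float:
    """1.0 = brand appears equally often in every phrasing; 0.0 = max spread."""
    return 1.0 - (max(rates) - min(rates))

# One list of model answers per fanout variant of the same buyer intent.
fanout_answers = {
    "best CRM for SMB": ["AcmeCRM tops the list...", "Try AcmeCRM or..."],
    "CRM tools for small teams": ["Consider AcmeCRM first...", "Popular picks are..."],
    "affordable CRM with invoicing": ["Options include...", "Look at..."],
}

rates = [mention_rate(a, "AcmeCRM") for a in fanout_answers.values()]
print(round(mean(rates), 2), round(stability(rates), 2))  # → 0.5 0.0
```

Here the average mention rate looks respectable, but the stability score exposes that the brand only shows up under one phrasing.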

How to read it

High stability means your positioning is encoded in multiple ways models recognize. Wild swings across fanouts mean you should tighten definitions, add corroborating sources, or fix conflicting pages that teach the model different stories.

When to use

  • Before/after a site migration or rebranding.
  • When a competitor suddenly wins a niche phrase.
  • When marketing wants proof beyond a single screenshot.

When fanouts matter most

If your LLM-Score jumps between runs without obvious product changes, you may have wording-luck instead of durable presence.

Example of brittle visibility:

  • “Best brand monitoring tool for ChatGPT” → brand appears in 9/10 answers
  • “Tool to track mentions in neural assistants” → brand appears in 2/10 answers

Same buyer intent, different vocabulary. Fanouts expose that gap so you can tighten definitions, synonyms, and corroborating sources.