GEO & LLM metrics: glossary of terms
Working definitions for GEO (Generative Engine Optimization) and LLM brand monitoring — the metrics and practices behind Getllmspy reports, from LLM-Score and Share of Voice to RAG, fanout queries, and hallucinations. Each term links to related entries: browse by section below or jump to A–Z.
Metrics
Scores and measurements
LLM-Score
LLM-Score is a 0–100 metric showing how correctly and how often large language models mention your brand fo…
Read →
GPI™ (Generative Presence Index)
GPI™ (Generative Presence Index) is Getllmspy's aggregated index combining mention rate, correctness, senti…
Read →
Share of Voice (SoV)
Share of Voice in LLMs is the percentage of answers in your category where a brand is mentioned, measured a…
Read →
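As an illustration of the idea, Share of Voice can be sketched as the fraction of category answers that mention a brand, expressed as a percentage. The answer texts and brand names below are hypothetical, and real reports match mentions more robustly than a substring check:

```python
def share_of_voice(answers, brand):
    """Percentage of answers that mention `brand` (case-insensitive)."""
    if not answers:
        return 0.0
    mentions = sum(1 for a in answers if brand.lower() in a.lower())
    return round(100 * mentions / len(answers), 1)

# Hypothetical LLM answers for one category
answers = [
    "For CRM software, most teams pick Acme or Globex.",
    "Acme is a popular choice for small businesses.",
    "Globex and Initech both offer free tiers.",
    "Initech is known for its integrations.",
]
print(share_of_voice(answers, "Acme"))  # 50.0 (2 of 4 answers)
```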
Prompt Win Rate
Prompt Win Rate is the share of scripted prompts where your brand wins the evaluation rule — for example co…
Read →
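A minimal sketch of the mechanic: score each scripted prompt's answer against an evaluation rule, then report the share of passes. The rule used here (brand mentioned before any competitor) is just one possible example, and all names are hypothetical:

```python
def wins(answer, brand, competitors):
    """Example rule: brand appears, and before any listed competitor."""
    text = answer.lower()
    pos = text.find(brand.lower())
    if pos == -1:
        return False
    rival_positions = [text.find(c.lower()) for c in competitors]
    return all(pos < p for p in rival_positions if p != -1)

def prompt_win_rate(answers, brand, competitors):
    if not answers:
        return 0.0
    passed = sum(wins(a, brand, competitors) for a in answers)
    return round(100 * passed / len(answers), 1)

# Hypothetical answers to four scripted prompts
answers = [
    "Acme leads the category, ahead of Globex.",  # win: Acme first
    "Globex is often compared with Acme.",        # loss: Globex first
    "Acme is a solid pick.",                      # win: no rival present
    "Globex dominates this space.",               # loss: Acme absent
]
print(prompt_win_rate(answers, "Acme", ["Globex"]))  # 50.0
```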
LLM Visibility
LLM Visibility is an umbrella term for how often and how favourably large language models surface a brand —…
Read →
Sentiment scoring
Sentiment scoring classifies each LLM mention of your brand as positive, neutral, or negative to track repu…
Read →
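The positive/neutral/negative bucketing can be illustrated with a toy keyword-lexicon tagger. Real systems typically use an LLM or a trained classifier; the lexicon and sentences below are invented for illustration only:

```python
# Tiny illustrative lexicons — not a production sentiment model
POSITIVE = {"great", "reliable", "recommended", "best", "excellent"}
NEGATIVE = {"buggy", "slow", "expensive", "worst", "unreliable"}

def sentiment(mention_text):
    """Classify one brand mention as positive, neutral, or negative."""
    words = {w.strip(".,!?'").lower() for w in mention_text.split()}
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(sentiment("Acme is reliable and highly recommended."))  # positive
print(sentiment("The Acme app is slow and buggy."))           # negative
print(sentiment("Acme was founded in 2012."))                 # neutral
```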
Practices
Disciplines and workflows
GEO — Generative Engine Optimization
GEO (Generative Engine Optimization) is the practice of making your brand correctly mentioned and cited in …
Read →
AEO (Answer Engine Optimization)
AEO (Answer Engine Optimization) is the practice of optimizing content to appear inside answer-first surfac…
Read →
llms.txt
llms.txt is a simple text file (often at /llms.txt) that tells LLM crawlers which paths matter for training…
Read →
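For orientation, here is a minimal sketch of one commonly proposed llms.txt layout (an H1 title, a blockquote summary, and sections of annotated links). The format is an informal community convention rather than a standard, and the site, paths, and descriptions below are entirely hypothetical:

```
# Example Co

> Example Co makes billing software for small teams.

## Docs

- [Product overview](https://example.com/docs/overview.md): what the product does
- [Pricing](https://example.com/pricing.md): current plans and limits

## Optional

- [Changelog](https://example.com/changelog.md)
```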
Entity optimization (for AI answers)
Entity optimization is the discipline of shaping how machines recognise your brand as a structured object —…
Read →
LLM SEO
LLM SEO is optimization work aimed at making large language models mention and cite your brand correctly in…
Read →
Brand visibility check
A brand visibility check is a one-shot measurement of how often and how correctly LLMs mention your brand a…
Read →
Mechanics
How answers are produced and tested
RAG (Retrieval-Augmented Generation)
RAG lets an LLM answer using retrieved documents (your site, help center, PDFs) instead of relying only on …
Read →
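A minimal sketch of the retrieve-then-answer flow: pick the documents most relevant to the question (here by naive word overlap, where real systems use embeddings and a vector index), then prepend them to the prompt so the model answers from your content rather than from memory alone. The documents are hypothetical:

```python
def retrieve(question, docs, k=2):
    """Return the k docs sharing the most words with the question (naive)."""
    q_words = set(question.lower().split())
    return sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(question, docs):
    """Assemble a grounded prompt from the retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Hypothetical brand documents
docs = [
    "Acme's free plan includes 3 users and community support.",
    "Acme was founded in 2012 in Berlin.",
    "The Pro plan adds SSO and priority support.",
]
print(build_prompt("What does the free plan include?", docs))
```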
Hallucination
A hallucination is a confident but incorrect statement produced by an LLM about your brand, product, or pri…
Read →
Fanout queries
Fanout queries are automatically generated variations of a base prompt used to measure how robustly LLMs me…
Read →
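The expansion step can be sketched as filling a handful of phrasing templates from a base topic, so visibility is measured across many wordings instead of one. The templates, category, and audiences below are hypothetical examples of how variants might be built:

```python
from itertools import product

def fanout_queries(category, audiences, intents):
    """Expand one base topic into deduplicated prompt variants."""
    templates = [
        "best {cat} for {aud}",
        "what {cat} do {aud} use to {intent}?",
        "is there a {cat} that can {intent}?",
    ]
    queries = set()
    for aud, intent in product(audiences, intents):
        for t in templates:
            # str.format ignores placeholders a template doesn't use
            queries.add(t.format(cat=category, aud=aud, intent=intent))
    return sorted(queries)

qs = fanout_queries("CRM", ["startups", "agencies"], ["track leads"])
for q in qs:
    print(q)
# 5 unique variants: the audience-free template collapses to one query
```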
Citation
A citation in LLM answers is an explicit mention of a source domain or URL that backs up the statement abou…
Read →
Organic prompt
An organic prompt is a real-user phrasing of a question, as opposed to a synthetic prompt designed by the b…
Read →
Model coverage
Model coverage is the list of LLMs a monitoring tool queries — more models means more complete visibility.
Read →
Prompt pack
A prompt pack is the curated set of questions a brand tracks against LLMs to measure its LLM-Score over time.
Read →
Dated snapshot
A dated snapshot is a timestamped copy of an LLM answer, used as evidence and for comparing visibility acro…
Read →
A–Z
AEO (Answer Engine Optimization)
AEO (Answer Engine Optimization) is the practice of optimizing content to appear inside answer-first surfac…
Read →
Brand visibility check
A brand visibility check is a one-shot measurement of how often and how correctly LLMs mention your brand a…
Read →
Citation
A citation in LLM answers is an explicit mention of a source domain or URL that backs up the statement abou…
Read →
Dated snapshot
A dated snapshot is a timestamped copy of an LLM answer, used as evidence and for comparing visibility acro…
Read →
Entity optimization (for AI answers)
Entity optimization is the discipline of shaping how machines recognise your brand as a structured object —…
Read →
Fanout queries
Fanout queries are automatically generated variations of a base prompt used to measure how robustly LLMs me…
Read →
GEO — Generative Engine Optimization
GEO (Generative Engine Optimization) is the practice of making your brand correctly mentioned and cited in …
Read →
GPI™ (Generative Presence Index)
GPI™ (Generative Presence Index) is Getllmspy's aggregated index combining mention rate, correctness, senti…
Read →
Hallucination
A hallucination is a confident but incorrect statement produced by an LLM about your brand, product, or pri…
Read →
LLM SEO
LLM SEO is optimization work aimed at making large language models mention and cite your brand correctly in…
Read →
LLM Visibility
LLM Visibility is an umbrella term for how often and how favourably large language models surface a brand —…
Read →
LLM-Score
LLM-Score is a 0–100 metric showing how correctly and how often large language models mention your brand fo…
Read →
llms.txt
llms.txt is a simple text file (often at /llms.txt) that tells LLM crawlers which paths matter for training…
Read →
Model coverage
Model coverage is the list of LLMs a monitoring tool queries — more models means more complete visibility.
Read →
Organic prompt
An organic prompt is a real-user phrasing of a question, as opposed to a synthetic prompt designed by the b…
Read →
Prompt pack
A prompt pack is the curated set of questions a brand tracks against LLMs to measure its LLM-Score over time.
Read →
Prompt Win Rate
Prompt Win Rate is the share of scripted prompts where your brand wins the evaluation rule — for example co…
Read →
RAG (Retrieval-Augmented Generation)
RAG lets an LLM answer using retrieved documents (your site, help center, PDFs) instead of relying only on …
Read →
Sentiment scoring
Sentiment scoring classifies each LLM mention of your brand as positive, neutral, or negative to track repu…
Read →
Share of Voice (SoV)
Share of Voice in LLMs is the percentage of answers in your category where a brand is mentioned, measured a…
Read →