Glossary

Model coverage

Model coverage is the list of LLMs a monitoring tool queries; the more models a run covers, the more complete its visibility.
  • Skipping regional stacks (e.g., YandexGPT, GigaChat) underestimates real buyer journeys.

  • Coverage interacts directly with GPI™ and Share of Voice.

Definition

Model coverage is the explicit set of large language models and assistants your monitoring run queries—global vendors, open-weight stacks, and regional players. Incomplete coverage creates false negatives: a brand can look "fine" in ChatGPT while disappearing in Alice or GigaChat. Coverage is a product choice, not a vanity stat.
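The "explicit set" framing can be sketched as a plain declaration. This is a minimal illustration, not Getllmspy's actual schema; the grouping, model names, and helper function are all assumptions for the example:

```python
# Hypothetical coverage declaration: an explicit list of the models a
# monitoring run queries, grouped by vendor class. Names are illustrative.
COVERAGE = {
    "global": ["gpt-4o", "claude-3-5-sonnet", "gemini-1.5-pro"],
    "open_weight": ["llama-3-70b", "mistral-large"],
    "regional": ["yandexgpt", "gigachat"],  # stacks Western-only tools often skip
}

def all_models(coverage):
    """Flatten the declaration into the full list of models to query."""
    return [model for group in coverage.values() for model in group]
```

Keeping the set explicit is what makes coverage auditable: if `regional` is empty, the false negatives described above are a configuration choice you can see, not a silent blind spot.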

How it's computed

Each check declares which endpoints fired successfully. Reports show per-model blocks so you can see dropouts or timeouts. When a model is offline or rate-limited, the run records the gap so you do not mistake missing data for zero mentions.

How to read it

Expand coverage when revenue spans multiple regions or when a new assistant gains share in your niche. Shrink it only if a model is irrelevant to your buyers, and document the rationale so KPIs stay honest.

When to use

  • Procurement comparing Getllmspy to Western-only tools.
  • CIS go-to-market checks.
  • Quarterly roadmap reviews with engineering.