
GET /api/v1/checks/:id

Report: Demo brand

Check completed
Trend window (6 weeks)
Data snapshot
Demo · Multi-model · 2 risks
58
Overall LLM-Score

Weighted visibility across models (mock + spec formula §6).
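The weighted score can be sketched roughly as below. This is a hypothetical illustration only: the real formula lives in spec §6 and is not reproduced here, and the per-model weights are invented for the example.

```python
# Hypothetical sketch of a weighted LLM-Score. The actual formula is defined
# in spec §6; these model weights are illustrative assumptions, not real ones.
MODEL_WEIGHTS = {
    "ChatGPT": 0.30, "Claude": 0.20, "Gemini": 0.20,
    "Perplexity": 0.15, "Grok": 0.10, "DeepSeek": 0.05,
}

def llm_score(visibility: dict[str, float]) -> float:
    """Weighted average of per-model visibility values (each 0-100)."""
    total_weight = sum(MODEL_WEIGHTS.values())
    weighted = sum(w * visibility.get(model, 0.0)
                   for model, w in MODEL_WEIGHTS.items())
    return round(weighted / total_weight, 1)
```

A missing model simply contributes 0, which matches the all-zero "Visibility by model" panel in this demo snapshot.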

Visibility by model

ChatGPT
0%
Claude
0%
Gemini
0%
Perplexity
0%
Grok
0%
DeepSeek
0%

Key metrics

Share of voice, trend, recommendation placement, and factual risk in one view. SoV is recomputed from mock model answers.

Share of AI Voice
mention share
67%

Per model: share of queried models where the brand appears in at least one answer in this check. A detailed breakdown is filled from the run data.
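The per-model share described above can be sketched as a small function. This is a minimal mock: the answer data and the plain-substring brand matching are assumptions, not the production matching logic.

```python
def share_of_voice(answers_by_model: dict[str, list[str]], brand: str) -> float:
    """Percent of queried models where the brand appears in at least one answer.

    Naive case-insensitive substring matching; a real pipeline would use
    entity extraction rather than raw substrings.
    """
    if not answers_by_model:
        return 0.0
    hits = sum(
        any(brand.lower() in answer.lower() for answer in answers)
        for answers in answers_by_model.values()
    )
    return round(100.0 * hits / len(answers_by_model), 1)
```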

SoV % by week (demo): W1 52 · W2 58 · W3 55 · W4 68 · W5 64 · W6 72
SoV growth over 4 weeks
mock trend
+12 pp.

Started at 58% → now 70% (illustrative demo trend).

SoV % · bars by week

SoV % by week (demo): W1 58 · W2 62 · W3 66 · W4 70
Average answer position
rank
2.4

Average brand rank in recommendation lists: 1 is first; lower is better.

Rank in list (1 = top)

Hallucinations & fact-check
risk
2

Disputed or unverified brand claims: 2 of 54 extracted snippets (mock). Watch threshold is usually ≥5 per period.

Cases per week · dashed = threshold 5

Suspicious snippets per week (demo): W1 4 · W2 2 · W3 3 · W4 1 · W5 2

Visibility trend

History from visibility_history. Y-axis: mock index 0–100.

Lines: ChatGPT · Claude · Gemini, weeks 1–6.

Sentiment & competitors

Sentiment

  • positive ~55%
  • neutral ~27%
  • negative ~18%

Prompt per spec §5.3 → JSON sentiment output.

Competitors

Brand                Visibility  Type
Your brand           72
Skyscanner           64          compared
Островок             48          mentioned
Яндекс.Путешествия   58          recommended

Hallucination detector

This block lists AI claims about your brand that don't match what people can verify on your official site and pricing. It is not that the model failed at random; it is a prompt to fix plan names, prices, and product copy so customers and models see the same facts.

Verification per spec §5.2.

In the product spec, §5.2 defines how answers are checked against your site: plans, features, numbers. This demo uses illustrative examples of mismatches so you can see how the live product would flag them.

wrong_price · critical
wrong_price means the model cited a price or plan name that isn’t in your published pricing (e.g. a made-up tier). critical is the serious tier: users might trust the wrong number. Fix the pricing page and make plan names explicit on the site.
Non-existent plan named “Business Lite”.
fake_feature · warning
fake_feature is when the model adds a capability (e.g. an integration) your site doesn’t confirm. warning is softer than critical but still matters: either document it on the site or brief support with accurate answers.
“1C integration” mentioned with no proof on the website.

Model quotes

# Model Brand Sentiment Full quote

Sub-queries from one prompt

A user asks one question, but the model often leans on several semantic branches when answering: pricing, fees, alternatives, adjacent topics. That branching is query fan-out: not one monolithic reply, but multiple sub-questions inside a single prompt.

Why it’s in the report: you can see which branches mention your brand and which are still on-topic for you but drop the brand name — typical gaps to close with content and prompt strategy.

Below is a demo with three sub-queries from one check.

Branch summary
Branch 1 — yes · Branch 2 — no · Branch 3 — no

Sub-query wording                      Brand in this branch
best flight aggregator 2026            yes
metasearch fee comparison              no
how to earn miles with a credit card   no
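The yes/no column above can be derived per branch. A minimal sketch, assuming each branch reduces to one answer string and that plain substring matching stands in for real entity detection:

```python
def branch_coverage(branch_answers: dict[str, str], brand: str) -> dict[str, bool]:
    """For each sub-query branch, whether its answer mentions the brand."""
    return {
        query: brand.lower() in answer.lower()
        for query, answer in branch_answers.items()
    }
```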

Prompt gaps

What this is: query wordings where model answers surface competitors or clear substitutes but not your brand. They may extend beyond the core prompt pack in your check; the list is built from topic overlap and signals in answers (in production, per product rules).

Why it matters: it is a ready backlog for content, landing pages, ad clusters, and internal briefs, so you cover demand where competitors are already being compared side by side but you go unnamed.

Demo: a few queries from a sample run.
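The gap rule described above (competitors named, brand absent) can be sketched as a filter. The query data, competitor list, and substring matching are assumptions for illustration, not the production rules.

```python
def prompt_gaps(answers_by_query: dict[str, str], brand: str,
                competitors: list[str]) -> list[str]:
    """Queries whose answers name at least one competitor but not the brand."""
    gaps = []
    for query, answer in answers_by_query.items():
        text = answer.lower()
        if brand.lower() not in text and any(c.lower() in text for c in competitors):
            gaps.append(query)
    return gaps
```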

Citation sources

Source drift vs last month: 54% (mock).

• vc.ru — 18%
• Отзовик — 14%
• Official blog — 9%

AEO & Shopping

AEO Content Score

61 / 100

• llms.txt — no
• Schema.org — yes
• Sitemap — yes
• Wikipedia — no

Shopping visibility

0

No ChatGPT Shopping cards detected (mock).

Crawlers & technical audit

Crawler visits

• GPTBot — 1240
• ClaudeBot — 780
• PerplexityBot — 210

Checklist

robots.txt
Cloudflare / bots
llms.txt
Schema.org

Recommendation plan

Rules per spec §10.

1. Fix hallucinations (critical): update public descriptions and reference pages.
2. Create llms.txt (important): lets AI crawlers unambiguously map the brand to its products.
3. Unblock AI crawlers (important): PerplexityBot under Disallow reduces visibility in Perplexity.
4. Close 2 prompt gaps (medium): build pages for queries where competitors appear but you do not.
5. Add Schema.org (medium): structured data raises the chance of correct citation.
6. Wikipedia & Shopping (good): long-term boosters of AI visibility.

Actions

New check · To history

PDF is a stub in this prototype.

Daily report runs

Paid plan

Run this check every day automatically and get a fresh snapshot without clicking “New check”.

Pricing

Requires a Pro subscription or higher. In this prototype the button only shows a hint.