Process Overview

From URL
to full report
in 48 hours

We run 92 structured queries across ChatGPT, Perplexity, Gemini, and Claude using official production APIs — then deliver a scored, prioritized audit report that tells you exactly where you stand and what to fix first.

Queries per analysis 92
AI engines measured 4
Checklist items audited 36
Turnaround time 48h
Site pages crawled up to 100

Four steps,
one complete picture

01
Submit ~5 min · You

Submit Your Brand URL

Purchase your audit tier, then submit your brand URL via the confirmation form. We confirm: brand name, primary market (EN / KR / global), target language, and up to 3 competitor URLs you'd like tracked in the competitive map.

What you provide
Primary URL · Brand name · Market · Language preference · Optional: 3 competitor URLs
What happens next
Confirmation email sent within 1 hour · Pipeline queued · Crawl begins automatically
02
Crawl ~20 min · Automated

Adaptive Site Crawl

Our crawler visits up to 100 pages across your domain, extracting brand context, product descriptions, pricing signals, schema markup, meta structure, and technical AEO indicators. A brand context profile is built from the crawl and used to generate the query set for your specific industry and positioning. Both the technical checks and the query generation are sketched after this step's details.

Technical checks (auto)
llms.txt · robots.txt · sitemap.xml · schema markup · meta descriptions · canonical URLs · Core Web Vitals signals
Brand context extracted
Product category · Target users · Key features · Pricing tier · Competitor mentions · Brand tonality
Query generation
23 brand-specific queries generated from crawl context: brand_general · category_discovery · competitor_comparison · use_case · pricing
Tool stack
Firecrawl adaptive crawl · Custom brand_context extractor · NLP categorization pipeline
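For the technically curious: the file presence checks are simple to reason about. Here is a minimal Python sketch, assuming plain HTTP checks against the standard well-known paths. The production pipeline uses Firecrawl and custom extractors, so treat this as illustrative only:

```python
# Minimal sketch: presence checks for llms.txt, robots.txt, and
# sitemap.xml. Illustrative only -- the paths below are the standard
# well-known locations, not a documented API of this service.
import requests

CHECK_PATHS = ["/llms.txt", "/robots.txt", "/sitemap.xml"]

def run_presence_checks(base_url: str) -> dict[str, bool]:
    """Return pass/fail for each well-known technical AEO file."""
    results = {}
    for path in CHECK_PATHS:
        try:
            resp = requests.get(base_url.rstrip("/") + path, timeout=10)
            results[path] = resp.status_code == 200
        except requests.RequestException:
            results[path] = False
    return results

if __name__ == "__main__":
    print(run_presence_checks("https://example.com"))
```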
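Query generation works the same way conceptually: templates per category, filled from the crawled brand context. The five category names below come from the pipeline description above; the template wordings, per-category counts, and brand values are illustrative assumptions, not the production query set:

```python
# Hypothetical sketch of template-based query generation. Category
# names match the pipeline description; everything else is a placeholder.
BRAND_CONTEXT = {  # extracted during the crawl step
    "brand": "ExampleCo",
    "category": "project management software",
    "competitor": "RivalApp",
    "use_case": "remote team planning",
}

TEMPLATES = {
    "brand_general": ["What is {brand}?", "Is {brand} any good?"],
    "category_discovery": ["What is the best {category}?"],
    "competitor_comparison": ["{brand} vs {competitor}: which is better?"],
    "use_case": ["What should I use for {use_case}?"],
    "pricing": ["How much does {brand} cost?"],
}

def generate_queries(ctx: dict) -> list[tuple[str, str]]:
    """Expand each category's templates with crawled brand context."""
    return [
        (category, template.format(**ctx))
        for category, templates in TEMPLATES.items()
        for template in templates
    ]

for category, query in generate_queries(BRAND_CONTEXT):
    print(f"[{category}] {query}")
```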
03
Analysis ~30 min · Automated

92-Query AI Analysis

Each of the 23 generated queries is sent independently to all 4 AI engine APIs. Responses are captured raw, then parsed for brand mentions, competitor mentions, sentiment polarity, position ranking, and response length. Results are aggregated into per-engine scores and a weighted overall AEO score (0–100). The parsing and scoring steps are sketched below.

Engines queried
OpenAI GPT-4o · Perplexity sonar-pro (live web) · Gemini 2.0 Flash (Google grounding) · Claude Sonnet 4.5 (Anthropic)
Per-response metrics
Brand mentioned: Y/N · Mention position · Sentiment (positive / neutral / negative) · Competitor co-mentions · Response length
Scoring formula
Technical 25% · Content 35% · Entity & Brand 25% · Authority & Trust 15% → weighted 0–100 score
Why official APIs
Web UIs give non-deterministic, personalized results. APIs give reproducible, baseline responses — the same query returns a comparable, scoreable result every run.
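A minimal sketch of the per-response parsing step, covering mention detection, mention position, competitor co-mentions, and response length. Sentiment scoring is omitted here, and all brand names are placeholders:

```python
# Minimal sketch of per-response parsing. The real pipeline ranks
# mention position among all brands named in the answer; here we use
# the simpler character offset of the first mention (-1 if absent).
def parse_response(text: str, brand: str, competitors: list[str]) -> dict:
    lowered = text.lower()
    idx = lowered.find(brand.lower())
    return {
        "brand_mentioned": idx != -1,
        "mention_position": idx,
        "competitor_co_mentions": [
            c for c in competitors if c.lower() in lowered
        ],
        "response_length": len(text),
    }

sample = "For project planning, RivalApp and ExampleCo are both solid picks."
print(parse_response(sample, "ExampleCo", ["RivalApp", "OtherTool"]))
```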
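And the published weighting applied to four per-category scores, each on a 0–100 scale. The example inputs are placeholders, not real audit data:

```python
# The published category weights, applied to per-category scores.
WEIGHTS = {
    "technical": 0.25,
    "content": 0.35,
    "entity_brand": 0.25,
    "authority_trust": 0.15,
}

def overall_score(category_scores: dict[str, float]) -> float:
    """Weighted 0-100 AEO score from the four category scores."""
    return round(
        sum(WEIGHTS[k] * category_scores[k] for k in WEIGHTS), 1
    )

print(overall_score({
    "technical": 40,       # e.g., missing llms.txt and schema markup
    "content": 65,
    "entity_brand": 30,
    "authority_trust": 55,
}))  # -> 48.5
```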
04
Delivery Within 48h · Email

Full Report Delivered

Your audit report arrives as a standalone HTML file by email — no login required, no SaaS dashboard. Open in any browser. 8 sections, interactive score simulator, collapsible fix guides, verbatim AI response logs. Everything needed to understand the problem and start fixing it immediately.

Report format
Standalone HTML · No login required · Mobile responsive · Print-ready · Permanent — download and keep
8 sections included
Priority Actions · Score Overview · Competitive Map · Engine Analysis · Brand Perception · AI Response Logs · API Signals · Full Checklist

Four engines,
one audit

ChatGPT

OpenAI's flagship model. The most widely used AI assistant globally. Brand recognition in GPT-4o is driven by training data coverage — high-authority web content, documentation, Wikipedia, and press coverage directly improve ChatGPT visibility.
23
Queries run
GPT-4o
Model version
Training
Data source
Mention rate · Avg position · Sentiment polarity · Competitor co-mention frequency · Response verbatim log captured
Perplexity

Real-time web search + AI synthesis. Perplexity actively crawls the web when answering queries — meaning your current web presence matters more here than training data. A 0% Perplexity visibility score means your live web signals are insufficient even with search active.
23
Queries run
sonar-pro
Model version
Live web
Data source
Mention rate · Citation domain detection · Live search query capture (API metadata) · Response verbatim log captured
Gemini

Google's multimodal AI with Search Grounding enabled. Gemini cross-references Google's index in real time. Brands without Google Knowledge Panel entries or strong Google-indexed authority pages tend to score 0% here.
23
Queries run
2.0 Flash
Model version
Google Search
Data source
Mention rate · Google grounding signal · web_search_queries metadata captured · Response verbatim log captured
Claude

Anthropic's conversational AI. Claude relies primarily on training data — Wikipedia articles, GitHub READMEs, academic papers, and tech media coverage drive recognition. Brands known only in niche circles tend to score 0% here without third-party editorial mentions.
23
Queries run
Sonnet 4.5
Model version
Training
Data source
Mention rate · Entity confidence signals · stop_reason + token metadata captured · Response verbatim log captured
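A sketch of how per-response records like those above can roll up into the per-engine figures (mention rate, average mention position). The record shape is an assumption matching the parsing sketch earlier, and the inputs are placeholders:

```python
# Illustrative per-engine aggregation of parsed response records.
from statistics import mean

def engine_summary(parsed: list[dict]) -> dict:
    """Mention rate and average mention position for one engine."""
    mentioned = [r for r in parsed if r["brand_mentioned"]]
    return {
        "queries": len(parsed),
        "mention_rate": len(mentioned) / len(parsed) if parsed else 0.0,
        "avg_position": mean(r["mention_position"] for r in mentioned)
        if mentioned else None,
    }

runs = [
    {"brand_mentioned": True, "mention_position": 22},
    {"brand_mentioned": False, "mention_position": -1},
    {"brand_mentioned": True, "mention_position": 5},
]
print(engine_summary(runs))  # mention_rate ~0.67, avg_position 13.5
```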

Eight sections,
zero guesswork

01
Priority Actions
Top 5 fixes sorted by score impact. Each shows exact implementation steps, estimated time, and expected point gain. Start here — nothing else matters until these are done.
📊
02
Score Overview + Simulator
Current score breakdown by category. Interactive simulator shows projected score after Critical fixes → full remediation. Baseline → achievable potential, visualized (the simulator's arithmetic is sketched after this list).
🗺️
03
Competitive Brand Map
Every brand mentioned in AI responses to your category queries. Threat level classified (High / Medium / Low). See who AI recommends instead of you.
🔍
04
Per-Engine Analysis
Individual breakdown for ChatGPT, Perplexity, Gemini, and Claude. Mention rate, average position, #1 rate, and diagnostic notes per engine.
🪞
05
Brand Perception
Keywords AI engines associate with your brand. Positive vs. neutral vs. not-recognized status. If AI describes you wrong, you see it here — and can fix it.
💬
06
Verbatim AI Response Logs
Unedited, raw responses from all 4 engines. Brand mentions highlighted. Expandable full-length view. See exactly what users receive when asking AI about you.
📡
07
API Signal Metadata
API-only data not visible in consumer interfaces: Perplexity citation domains, Gemini's live Google search queries, Claude stop_reason codes, confidence signals (citation-domain extraction is sketched after this list).
08
Full 36-Item Checklist
Technical + Content + Entity + Authority. Pass / Fail / Manual review status. Each failing item has a collapsible fix guide. The complete reference document.
🎯
Bonus
Score Improvement Roadmap
A consolidated table of all failing items, sorted by score impact, estimated effort, and recommended sequence. Ready to hand to your dev or content team.
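For reference, the simulator arithmetic from section 02 is straightforward: add each selected fix's estimated point gain to the baseline, capped at 100. Item names, severities, and point values below are illustrative, not drawn from a real report:

```python
# Hypothetical sketch of the score simulator's arithmetic.
def simulate(baseline: float, fixes: list[dict], severity: str) -> float:
    """Projected score after applying all fixes of a given severity."""
    gain = sum(f["points"] for f in fixes if f["severity"] == severity)
    return min(100.0, baseline + gain)

fixes = [
    {"item": "Add llms.txt",              "severity": "critical", "points": 6},
    {"item": "Add Product schema",        "severity": "critical", "points": 8},
    {"item": "Rewrite meta descriptions", "severity": "medium",   "points": 3},
]
print(simulate(48.5, fixes, "critical"))  # 48.5 -> 62.5
```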
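And a sketch of the citation-domain extraction behind section 07, assuming the input is a list of citation URLs pulled from an engine's API metadata. The exact response fields vary by engine and are not documented here:

```python
# Sketch: count how often each domain is cited across responses.
# The input URL list is an assumed shape, e.g. Perplexity citations.
from collections import Counter
from urllib.parse import urlparse

def citation_domains(urls: list[str]) -> Counter:
    """Tally citation counts per domain."""
    return Counter(urlparse(u).netloc for u in urls)

print(citation_domains([
    "https://example.com/docs/intro",
    "https://en.wikipedia.org/wiki/Example",
    "https://example.com/pricing",
]))
```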

Frequently
asked questions

How is this different from SEO tools like Ahrefs or SEMrush?
SEO tools measure search engine rankings. AEO measures AI engine visibility — how AI chatbots respond to questions about your brand. These are entirely different systems. AI engines don't use keyword rankings; they use training data, entity recognition, and live web retrieval.
Why do you use official APIs instead of the web interfaces?
Web UIs give personalized, session-dependent results. APIs provide reproducible baseline responses — the same query always returns a comparable result. This makes scoring meaningful and tracking over time valid. Web UI screenshots are not measurable data.
How long until I see results after fixing checklist items?
Technical fixes (llms.txt, schema markup) can show results within 1–4 weeks as AI crawlers re-index. Content and entity fixes take 2–8 weeks. Authority fixes (press coverage, Wikipedia) take longer — typically 1–3 months before new training data is incorporated.
What happens if my site is mostly in Korean?
We support Korean-language brands. Queries are generated in the brand's primary language, and AI responses are captured in that language. All scoring, analysis, and report sections are produced in English by default with Korean data preserved verbatim.
Can I get a re-run after implementing fixes?
Yes — Growth and Agency plans include re-audits. For Starter tier, you can purchase an additional audit at the standard rate. We recommend re-running 4–8 weeks after implementing Critical fixes to measure score movement.
Is the report file mine to keep?
Yes. You receive a standalone HTML file by email. No login, no expiry, no SaaS platform required. Download, save, and share it with your team. It works in any browser and is fully self-contained.

Ready to see your score?

Submit your URL today. Full 8-section report delivered within 48 hours. No setup, no SaaS login required.

View Pricing & Get Started → See Sample Report