How we measure AI visibility
AI visibility is still a new category, which makes it easy to describe the work as broader, cleaner or more certain than it really is. We think the better approach is to be precise.
What this page covers
This page explains how we measure AI visibility at Tilio, what we track, what we do not claim to track, and how that turns into practical website actions. It is here to show that our measurement is structured, comparative and commercially useful, not vague.
If you want a defined starting point, begin with an AI Visibility Audit. If you already know you need ongoing support, you can explore our AEO agency service.
Definition
What AI visibility measurement means
AI visibility measurement is the process of checking how often your brand, pages and competitors appear across a defined set of prompts on selected AI search surfaces.
In practice, that means we are not looking at one keyword and one ranking position in the way traditional SEO tools often do. We are looking at a structured prompt set, grouped by topic and intent, and assessing how your business shows up in AI-generated answers.
That includes questions such as:
- is your brand mentioned at all
- is your website cited as a source
- which pages are being cited
- which competitors are being recommended instead
- how visibility changes by platform, topic or prompt group
- whether your positioning is clear, weak or inconsistent
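To make that concrete, each of those questions can be captured as a small structured record per prompt and per platform. The sketch below is illustrative only; the field names are hypothetical and this is not the schema of any specific tool:

```python
from dataclasses import dataclass, field

@dataclass
class PromptCheck:
    """One prompt, run once, on one AI search surface (illustrative fields)."""
    prompt: str            # the buyer question asked
    platform: str          # e.g. "chatgpt", "perplexity", "google_ai_overviews"
    intent_group: str      # the topic/intent bucket the prompt belongs to
    brand_mentioned: bool  # was the brand named anywhere in the answer?
    cited_urls: list = field(default_factory=list)  # own pages used as sources
    competitors_recommended: list = field(default_factory=list)

check = PromptCheck(
    prompt="What are the best options for X?",
    platform="perplexity",
    intent_group="shortlist_comparison",
    brand_mentioned=True,
    cited_urls=["https://example.com/pricing"],
    competitors_recommended=["Competitor A"],
)
```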
The goal is not to create a vanity score. The goal is to understand how visible, citable and understandable your business is when real buyer questions are asked in AI-led search.
Tracking
What we track
We track AI visibility in a structured way, not as a general impression.
Our core measurement usually includes:
- tracked prompts across agreed themes, services or commercial topics
- grouped intent, so prompts can be reviewed by category rather than one by one
- brand mentions in AI-generated answers
- citations to your domain and key URLs
- cited pages, so we can see which parts of the site are actually being used
- competitor benchmarking across the same prompt sets
- platform-level differences across the environments we monitor
- movement over time, so reporting is not based on a one-off snapshot
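As a rough illustration of how those pieces combine, per-prompt results like the PromptCheck records sketched above can be rolled up into mention and citation rates per intent group and platform. Again, this is a minimal sketch, not our production pipeline:

```python
from collections import defaultdict

def rates_by_group(checks):
    """Roll per-prompt results up into rates per (intent group, platform)."""
    totals = defaultdict(lambda: {"runs": 0, "mentions": 0, "citations": 0})
    for c in checks:
        t = totals[(c.intent_group, c.platform)]
        t["runs"] += 1
        t["mentions"] += int(c.brand_mentioned)    # brand named in the answer
        t["citations"] += int(bool(c.cited_urls))  # at least one own page cited
    return {key: {"mention_rate": t["mentions"] / t["runs"],
                  "citation_rate": t["citations"] / t["runs"]}
            for key, t in totals.items()}
```

Comparing those rates over time, and against competitors on the same prompt sets, is what turns a one-off snapshot into movement.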
Where relevant, we also review whether important commercial pages are easy to retrieve, easy to understand and easy to cite. That is where measurement becomes useful. It stops being a dashboard exercise and starts becoming page-level prioritisation.
Our monitoring layer is powered by Profound's platform, but the reporting and actions clients receive are shaped around Tilio's own prompt sets, analysis and recommendations.
Limits
What we do not claim to measure
This is the most important part of the methodology. We do not claim to measure all AI visibility everywhere, and we do not think anyone should pretend they can.
More specifically, we do not:
- claim to see every answer every user sees across every AI platform
- claim to measure the full prompt universe for your category
- treat one sampled visibility score as a complete market truth
- guarantee that a mention will lead directly to traffic or revenue
- claim that a citation always means influence, or that no citation means no influence
- present AI search as a perfectly stable environment where the same prompt always returns the same result
- collapse all platforms into one neat number without context
That is why our methodology is prompt-based, grouped, comparative and directional. It is designed to produce useful decision-making signals, not false precision.
The point is not to overclaim. It is to understand enough, consistently enough, to improve the pages and topics that matter.
Platforms
Platforms included
We focus on three platforms because they are highly relevant to buyer discovery, comparison behaviour and answer-led search.
Google AI Overviews
Where AI-generated answers appear at the top of search results, often before a user visits any website.
ChatGPT
Where people ask direct buyer questions such as “who should I hire”, “what should I choose” and “what are the best options”.
Perplexity
A citation-led AI search engine where your pages can be referenced as sources for high-intent queries.
We do not present this as universal measurement of every AI environment. It is measurement across the platforms we actively monitor, using defined prompt sets and structured comparisons. A methodology is more useful when it is clear about where it applies.
Prompt design
Prompt selection and intent grouping
Prompt selection is one of the most important parts of the whole process. A weak prompt set creates weak reporting, even if the dashboard looks polished.
We usually group prompts around a few practical types of intent:
Category understanding
Prompts that test whether AI can describe what your category is and who it serves.
Service or solution discovery
Prompts around what types of business, product or service exist to solve a given problem.
Shortlist and comparison behaviour
Prompts that produce ranked lists, comparisons or recommendations within a category.
Alternatives and competitor evaluation
Prompts that ask for alternatives to a known brand or category leader.
Pricing and fit questions
Prompts that surface cost, constraints or suitability for a buyer's situation.
Trust, proof and credibility checks
Prompts that test whether a brand or claim can be verified or substantiated.
That matters because AI visibility is rarely one-query deep. A commercial decision often sits across a cluster of prompts rather than one exact phrase.
Grouping prompts this way helps us avoid two common mistakes. The first is overfocusing on a tiny number of vanity queries. The second is treating all prompts as equally valuable when they clearly are not. The result is a reporting structure that is easier to interpret and easier to act on.
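In code terms, a prompt set is just those groups made explicit. The category and prompts below are invented placeholders to show the shape, not a real client set:

```python
# Hypothetical prompt set, grouped by the intent types described above.
PROMPT_SET = {
    "category_understanding": ["What does an AEO agency actually do?"],
    "discovery": ["What kinds of services help a brand appear in AI answers?"],
    "shortlist_comparison": ["Who are the best AEO agencies in the UK?"],
    "alternatives": ["What are the alternatives to <known brand>?"],
    "pricing_fit": ["How much does AI visibility monitoring cost for a small firm?"],
    "trust_proof": ["Can <brand>'s claims about X be verified?"],
}
```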
Signal layers
Mentions, citations and positioning
These are related, but they are not the same thing.
Mention
Your brand is named in the response.
Citation
The answer includes a source from your website and uses that source as part of the response.
Positioning
How your business is described. You might be mentioned, but described vaguely. You might be cited, but for the wrong page. You might appear often, but mainly in lower-value prompts rather than higher-intent ones.
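The first two layers can be checked mechanically; the third cannot. A simplified sketch of the mechanical checks, assuming you have the answer text and its list of source URLs (naive matching, for illustration only):

```python
from urllib.parse import urlparse

def classify(answer_text: str, source_urls: list[str],
             brand: str, own_domain: str) -> dict:
    """Mention and citation checks only. Positioning, i.e. how the brand is
    described and on which prompts, still needs human review."""
    # Naive substring match; real matching must handle brand variants.
    mentioned = brand.lower() in answer_text.lower()
    cited = [u for u in source_urls
             if (urlparse(u).hostname or "").endswith(own_domain)]
    return {"mentioned": mentioned, "cited": bool(cited), "cited_urls": cited}
```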
That is why we separate these layers in reporting. A useful AI visibility report should not just say you were visible. It should help answer questions like:
- were we mentioned or absent
- were we cited from our own site
- which pages were cited most often
- were competitors cited more often than we were
- are we being positioned accurately
- which topics or prompt groups are weakest
That is where measurement becomes commercially useful rather than cosmetic.
Actions
How this becomes website actions
Measurement only matters if it changes decisions.
Once we know where visibility is strong, weak or inconsistent, we turn that into practical website actions. That often includes:
- improving service pages that are too vague or hard to summarise
- strengthening pricing pages and buyer FAQs
- adding clearer comparison or alternatives content
- tightening internal linking between commercial pages and supporting proof
- reducing inconsistent language across the site
- improving the pages most likely to earn citations
- fixing gaps where competitors are repeatedly winning the same prompt groups
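One way to order that work, sketched here purely as an illustration: score each prompt group by the citation gap against competitors, weighted by how close the group sits to a buying decision. The weights and numbers below are invented, and this is not a fixed formula we apply:

```python
# Illustrative only: larger competitor gaps on higher-intent groups come first.
INTENT_WEIGHT = {"shortlist_comparison": 3, "pricing_fit": 3,
                 "alternatives": 2, "discovery": 2,
                 "category_understanding": 1, "trust_proof": 1}

def priority(group: str, own_rate: float, competitor_rate: float) -> float:
    gap = max(competitor_rate - own_rate, 0.0)  # how far behind we are
    return gap * INTENT_WEIGHT.get(group, 1)

groups = {"shortlist_comparison": (0.10, 0.55), "trust_proof": (0.40, 0.45)}
ranked = sorted(groups, key=lambda g: priority(g, *groups[g]), reverse=True)
# -> "shortlist_comparison" first: a bigger gap on a higher-intent group
```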
For clients on ongoing support, this does not stop at reporting. It also feeds directly into content production, content optimisation and prioritised actions each month, so the output is not just insight. It is progress.
In other words, the output is not “here is your score, good luck”. The output is a clearer view of what to improve first, what can wait, and which pages are most likely to influence AI visibility in a commercially meaningful way.
Deliverables
What clients actually receive
The exact format depends on whether you start with an audit or ongoing support, but the output is designed to be practical.
If you start with an AI Visibility Audit, the focus is usually on baseline visibility, current gaps and the highest-priority next steps.
If you move into ongoing support, clients typically receive:
- tracking across Google AI Overviews, ChatGPT and Perplexity
- 100 tracked prompts, monitored daily
- competitor benchmarking
- English-language tracking
- 3 content pieces per month, either new or updated, written to be cited
- 3 content optimisations per month, focused on improving existing pages based on monitoring
- 4 prioritised actions per week, tied to specific visibility opportunities
- an onboarding call
- a monthly report covering findings, trends and next steps
- Google Analytics integration for AI traffic attribution
- unlimited domains
That means clients receive more than visibility data alone. They receive structured measurement, competitor context, content actions, page improvements and reporting that helps turn AI visibility into a practical workflow.