18 April 2026

Jack F, Answer Engine Optimisation Specialist

Claude search visibility: how brands get mentioned and cited

Claude doesn’t work in quite the same way as ChatGPT Search or Google AI Overviews. Here’s what’s public about how it finds content, what seems to trigger inclusion, and what to do next.


“How can I get my business mentioned in Claude?”

It’s a question we’re hearing more and more at Tilio.

Claude can search the live web, fetch a page directly from a URL a user gives it, run agentic Research across the web and connected sources, and cite uploaded documents.

Showing up in Claude answers can therefore mean several things: being surfaced in web search, being quoted from a directly fetched page, being cited from an uploaded document, or being used inside a Research session that also pulls from connected sources like Google Docs or Gmail.

That already makes Claude a slightly different visibility problem from ChatGPT Search and Google AI Overviews. Anthropic publishes a fair amount about how Claude searches, fetches and cites, but it doesn’t publish a public ranking formula for how sources get selected.

So the useful move is to work from what Anthropic has made public, then separate what is documented from what is only a reasonable inference. Google and OpenAI are both more explicit in different ways about their own search-layer mechanics.

What appearing in Claude actually means

In practice, there are a few different ways content can show up in Claude.

The first is web search. Claude can search the live web and return answers with direct citations to the sources it used. The second is web fetch. If a user gives Claude a URL while web search is enabled, Claude can retrieve the full content of that page and analyse it directly. The third is Research mode, where Claude runs multiple searches that build on each other and can also pull from connected internal sources. The fourth is document citation, where Claude cites uploaded PDFs, text files or custom content documents.

That matters because the route into Claude changes the optimisation problem. A page that is hard to surface in live search might still perform well when someone pastes the URL directly into Claude. A page that works well in Google might still be weak at sentence-level citation if the copy is dense, vague or difficult to quote cleanly. And in enterprise contexts, public web content can end up competing with connected internal sources.

Claude uses different bots for different jobs

Anthropic is unusually clear that it uses more than one bot, and that’s one of the most useful technical details for publishers.

Anthropic says it uses ClaudeBot for model training data collection, Claude-User for user-directed retrieval when someone asks Claude a question, and Claude-SearchBot to improve the quality of search results. It also says blocking Claude-User can reduce your site’s visibility for user-directed search, while blocking Claude-SearchBot can reduce visibility and accuracy in search results. So blocking one Anthropic bot isn’t the same as blocking all Claude-related visibility.
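To make that concrete, here's roughly what a robots.txt might look like for a site that wants to opt out of training-data collection but stay visible in Claude's search and retrieval. The user-agent tokens are the bot names Anthropic publishes; the policy itself is purely illustrative, and it's worth checking Anthropic's current documentation before copying anything like this.

```
# Opt out of model training data collection
User-agent: ClaudeBot
Disallow: /

# Allow user-directed retrieval when someone asks Claude a question
User-agent: Claude-User
Allow: /

# Allow the crawler that improves search result quality
User-agent: Claude-SearchBot
Allow: /
```

The point of the split is exactly what Anthropic describes: each directive targets one job, so you can decline training without also declining visibility.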

That’s already different from ChatGPT Search. OpenAI’s public guidance is simpler: inclusion depends on allowing OAI-SearchBot to crawl the site and on allowing traffic from OpenAI’s published IPs. Google’s public guidance is different again: for AI Overviews, a page needs to be indexed and eligible to be shown with a snippet in Google Search, and there are no additional technical requirements beyond the normal Search requirements.

So one of the first practical questions isn’t “how do we optimise for Claude?” It’s “are the relevant Anthropic bots actually allowed to do their jobs?” If Claude-User or Claude-SearchBot is blocked, it becomes much harder for Claude to use your content in the first place.

How Claude finds and cites content

Anthropic’s public documentation gives a fairly clear picture of the citation layer, even if it doesn’t give a ranking formula.

For web search, Anthropic says Claude decides when to search based on the prompt, the API executes the searches and provides Claude with the results, and that process can repeat multiple times during a single request. At the end of the turn, Claude returns an answer with cited sources, and citations are always enabled for web search results. Those citation objects include the source URL, title and a short cited_text span. So Claude isn’t only linking to pages. It’s grounding statements in specific pieces of text from the returned results.

That span-based behaviour matters. It suggests Claude’s citation layer works at a smaller unit than “this page is relevant”. Anthropic’s citations documentation reinforces the same pattern for documents. It says uploaded documents are chunked to define citation granularity, and for PDFs and plain text documents the chunking is sentence-based. That makes Claude unusually citation-led. It isn’t only selecting pages. It’s selecting quotable text spans inside those pages or documents.
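One way to internalise the span-based model is to look at what a publisher would actually get back. The response shape below is a hypothetical sketch, modelled only on the fields named above (url, title, cited_text); the real Anthropic API schema is richer and may differ.

```python
# Hypothetical response blocks, modelled on the citation fields the
# public docs name (url, title, cited_text). Not the real API schema.
response_blocks = [
    {
        "text": "Claude can fetch pages directly.",
        "citations": [
            {
                "url": "https://example.com/docs/web-fetch",
                "title": "Web fetch overview",
                "cited_text": "Claude can retrieve the full content of a page.",
            }
        ],
    },
    {"text": "It also runs multi-step Research.", "citations": []},
]


def cited_spans(blocks):
    """Collect every (url, cited_text) pair an answer grounds itself on."""
    spans = []
    for block in blocks:
        for c in block.get("citations", []):
            spans.append((c["url"], c["cited_text"]))
    return spans


print(cited_spans(response_blocks))
```

Notice that the unit of grounding is the quoted span, not the page: a page only "wins" a citation if some specific sentence on it was worth quoting.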

This is also where direct fetch makes Claude different. Anthropic says that when web search is enabled and a user provides a direct URL, Claude can retrieve the full page content into its context window and analyse it directly. That means a page can enter Claude even if the user already knows the URL and bypasses a broader search step. Google AI Overviews don’t work like that, and OpenAI’s public consumer guidance focuses much more on search plus citations than on direct full-page fetch as a distinct publisher pathway.

Research mode adds another layer again. Anthropic says Research is agentic, runs multiple searches that build on each other, and can use both the web and connected internal context such as Google Docs, Gmail and Calendar. That means the content Claude ends up citing can be the result of a much longer retrieval chain than a single visible prompt suggests.

Why Claude is different from ChatGPT Search and Google AI Overviews

Claude, ChatGPT Search and Google AI Overviews all use the web, but they don’t expose the same mechanics to publishers.

Google’s public model is the most SEO-native. AI Overviews are part of Google Search, they use Google’s core ranking systems, pages need to be indexed and snippet-eligible, and the same foundational SEO best practices still apply. Google also says AI Overviews and AI Mode can use query fan-out, which means multiple related searches may sit behind one visible query.

ChatGPT Search sits somewhere in the middle. OpenAI says ChatGPT may automatically search the web, may work with external search providers, and may rewrite a user’s question into one or more targeted queries before running follow-up searches. So for ChatGPT, the public guidance is still largely search-and-crawl oriented, even if query rewriting is more conversational than classic SEO.

Claude is different because the public documentation leans more heavily into citation mechanics, direct fetch, Research, and multiple bots with different jobs. Publicly, Anthropic tells you more about how Claude retrieves and cites than about how it ranks the open web. That means appearing in Claude is slightly less like “ranking in a search engine” and slightly more like “being a source Claude can discover, fetch, quote and trust in the right context.”

The inclusion signals most likely to help

Anthropic doesn’t publish a checklist of ranking factors for Claude web search, so nobody can honestly say “do these five things and Claude will cite you.” What can be said, based on the public docs, is that some conditions are much more likely to help than others.

The first is accessibility. If Claude-User and Claude-SearchBot are blocked, or if important content is hidden behind login walls, script-heavy interfaces or fragile delivery setups, Claude has a harder time retrieving it. That’s table stakes.

The second is text-first extractability. Claude’s public citation mechanics are span-oriented. It cites specific text from search results, and its document citation system works at sentence level. That suggests pages are easier to cite when the important information exists in clean, direct, textual form rather than buried in images, oversized components or vague copy. Google’s own AI guidance makes a similar point from a different angle by recommending that important content be available in text form.

The third is query alignment across subquestions. Because Claude’s web search can repeat searches inside a single request, and Research can build multiple searches on top of each other, pages that answer only one narrow phrasing are easier to miss. Pages that cover the obvious supporting questions, comparisons and follow-ups are more likely to stay relevant across the retrieval chain.

The fourth is source clarity. Anthropic’s web search result objects include the page URL, title and page age. That doesn’t prove a formal freshness factor, but it does show that title clarity and recency metadata are part of the source objects Claude is working with. If your title is vague or your page looks stale in contexts where freshness matters, that’s unlikely to help.

The fifth is citation fidelity. This is the practical question of whether Claude can quote your page accurately without distorting the point. Pages that make precise claims in short, stand-alone sentences are simply easier to cite faithfully than pages that bury the meaning in long, abstract paragraphs. Claude’s citation architecture makes that more important than many teams realise.
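Because Anthropic says document chunking is sentence-based for PDFs and plain text, a useful thought experiment is to split your own copy into sentences and ask whether each one stands alone. The splitter below is a deliberately naive sketch; whatever Claude does internally is certainly more robust, but the exercise is the same.

```python
import re


def sentence_chunks(text):
    """Naive sentence splitter, to illustrate sentence-level citation
    granularity. Each chunk is a candidate quotable span: if a sentence
    only makes sense with the surrounding paragraph, it cites badly."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]


para = "Our plan costs $49 per month. It includes unlimited seats. Support is 24/7."
print(sentence_chunks(para))
```

Each of those three sentences survives being lifted out of context, which is exactly the property that makes copy easy to cite faithfully. A sentence like "It's the best option for most teams because of this" fails the same test.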

What tends to work well in Claude

The pages most likely to do well in Claude aren’t always the flashiest pages on the site. They’re usually the pages that are easiest to fetch, easiest to parse and easiest to quote.

That often means strong technical explainers, comparison pages, clear process or pricing pages, useful FAQs and reference content that answers practical questions directly. It also means having public pages that survive direct fetch well, because Claude can pull a full page into context when a user provides the URL.

This is also where measurement platforms can help. A good platform won’t tell you “Claude likes this sentence.” But it can help you separate mentions from citations, compare Claude against ChatGPT Search and Google AI Overviews, identify which page types show up most often, and see whether structural or content changes are actually improving visibility over time. That’s the difference between guessing and working from signal. Google and OpenAI both frame their own AI search surfaces as dynamic and multi-query, which makes that kind of comparative measurement more useful, not less.

What to do next if Claude matters to you

Start with the basics. Check whether Claude-User and Claude-SearchBot are allowed. Make sure your important pages are public, text-first and easy to fetch. Tighten page titles and section headings. Rewrite vague copy into cleaner, more quotable statements. Build out the pages that answer the practical questions your buyers ask, not just the head terms your SEO tool likes.
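Checking the bot basics doesn't need a platform: Python's standard library can parse a robots.txt and tell you what each user agent is allowed to fetch. The robots.txt below is a sample policy for illustration; the user-agent strings are the ones Anthropic publishes, and in practice you'd point RobotFileParser at your live file instead of an inline string.

```python
from urllib import robotparser

# Sample policy: block training collection, allow search and retrieval.
# In practice, load your real file with rp.set_url(...) and rp.read().
robots_txt = """\
User-agent: ClaudeBot
Disallow: /

User-agent: Claude-User
Allow: /

User-agent: Claude-SearchBot
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

for agent in ("ClaudeBot", "Claude-User", "Claude-SearchBot"):
    allowed = rp.can_fetch(agent, "https://example.com/pricing")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

If that check shows Claude-User or Claude-SearchBot blocked on pages you care about, fix that before worrying about anything structural.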

Then measure Claude separately. Don’t assume a page that works in Google AI Overviews will behave the same way in Claude, or that a page ChatGPT cites will necessarily show up in Claude Research. The retrieval pathways are too different for that. Our guides on how we measure AI visibility and how to get found in AI search go deeper into the measurement side of that.

FAQs about appearing in Claude

Does Anthropic publish ranking factors for Claude web search?
No. Anthropic documents how Claude searches, fetches and cites, and how site owners can allow or block ClaudeBot, Claude-User and Claude-SearchBot. It doesn't publish a ranking formula for how sources get selected.

Do direct URLs matter in Claude?
Yes. When web search is enabled and a user provides Claude with a direct URL, Claude can retrieve the full page and analyse it directly. That makes page structure and extractability matter even when a user bypasses a broader search step.

Is appearing in Claude the same as appearing in ChatGPT Search or Google AI Overviews?
No. Google AI Overviews are tied to Google’s index and snippet eligibility. ChatGPT Search is tied to web search plus query rewriting and OAI-SearchBot crawl access. Claude combines web search, direct fetch, Research and a citation layer that is unusually focused on specific text spans and sources.