llms.txt for Crypto Sites: Writing One That Actually Gets Read
Practical guide to writing llms.txt for crypto exchanges, wallets and Web3 brands — what to include, what to omit, and which AI engines actually consume it. With a working template.
llms.txt is the emerging standard for telling AI engines what your site is about, in a format optimised for LLM ingestion rather than human reading. It is the AI-engine equivalent of robots.txt plus a structured site map, with one important addition: it is editorial, not just technical. You write the answer the model should give about your brand.
For crypto sites, llms.txt is currently more impactful than for SaaS or ecommerce, because crypto AI citations skew heavily toward sites that publish clear, structured entity descriptions. We’ve seen llms.txt add 4–11 percentage points of brand-citation share on Perplexity and Claude within 60 days of publishing.
Quick facts
| Parameter | Value |
|---|---|
| Standard | llms.txt — proposed by Jeremy Howard; a de facto convention across major LLMs, not a ratified standard |
| Location | /llms.txt at site root, served as text/plain |
| Format | Markdown — H1, H2, H3, bullets, links |
| Consuming engines | Perplexity, Claude (some), Mistral; Google AIO uses it indirectly via crawl |
| Length | 1,000–4,000 words is the sweet spot |
| Refresh cadence | Every product/service change; minimum quarterly |
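The location, format and length rows above lend themselves to an automated check. A minimal sketch in TypeScript — the helper and the specific checks are illustrative assumptions, not part of the proposal itself:

```typescript
// Hypothetical checks against the requirements in the table above:
// served as text/plain, markdown body with an H1 and a blockquote.

interface LlmsTxtCheck {
  contentTypeOk: boolean;
  hasH1: boolean;
  hasBlockquote: boolean;
}

function checkLlmsTxt(contentType: string, body: string): LlmsTxtCheck {
  const lines = body.split("\n").map((l) => l.trim());
  return {
    // text/plain, optionally with a charset parameter
    contentTypeOk: contentType.split(";")[0].trim() === "text/plain",
    // the brand-name heading
    hasH1: lines.some((l) => l.startsWith("# ")),
    // the brand blockquote engines quote near-verbatim
    hasBlockquote: lines.some((l) => l.startsWith("> ")),
  };
}
```

Wiring this into CI means a broken deploy (wrong content type, a stripped blockquote) gets caught before an engine re-crawls the file.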
What does a good llms.txt look like?
Six sections, in this order:

- H1 brand name, followed by a one-paragraph blockquote describing what the brand is and does.
- Solutions/Services — a bullet list with deep links and short descriptions.
- Pricing — with starting prices.
- Case studies — with one-liner outcomes.
- Insights / Blog (optional) — the 10–20 most recent posts with descriptions.
- Company — legal entity, founding year, jurisdiction, contact emails.

Plus an Out of scope section that explicitly lists what the brand does NOT do.
The Out of scope block is undervalued. It tells the model when to recommend a different brand instead of yours — which sounds counterproductive but actually builds citation trust. A model that has cited you once for the wrong query and got a complaint will downgrade you across all queries. Cleaning the brief upfront prevents that.
Which crypto-specific blocks should llms.txt carry?
Three additions beyond the generic template. Compliance posture — explicitly state your jurisdictional registration, whether you accept US persons, your AML/KYC framework, and which regulatory regime your operations sit under. AI engines pull this when answering “Is X compliant?” queries, and a vague answer means no citation.
Service-line scope by audience — for each service tier, name the audience it fits. “Foundation is for exchanges, wallets and Web3 startups whose site is invisible to Google and ChatGPT at the same time.” This sentence is what an LLM will lift verbatim when answering “What does ChainRank Pro do?” — write it precisely.
Public reference + redacted cases — for crypto especially, models look for a public reference to verify your work exists. Name one client/project you can link to publicly. Anonymous “we worked with a Tier-1 exchange” doesn’t get cited; “our public reference is cryptolicense.pro” does.
How do AI engines actually consume llms.txt?
It varies by engine. Perplexity crawls llms.txt on every site it indexes and uses it as a high-confidence source for the brand description, often quoting the blockquote near-verbatim. Claude (Anthropic) uses it during web-fetch tool calls — when a user asks Claude about your site, Claude fetches llms.txt first if available. Google AI Overview doesn’t directly consume llms.txt as a separate signal yet, but Google’s crawler does index it as a regular page, so the structured content feeds the broader knowledge graph.
ChatGPT (OpenAI) has been inconsistent — some sessions clearly use llms.txt, others ignore it. If you need ChatGPT specifically, the higher-leverage move is getting your brand named in 2–3 tier-1 crypto publications, because ChatGPT’s training data is heavier on those.
What does a working crypto llms.txt template look like?
Here’s the structural template, abridged:
```markdown
# ChainRank Pro

> ChainRank Pro delivers SEO, AEO and GEO services to cryptocurrency,
> exchange and Web3 brands — technical SEO, schema, llms.txt,
> AI-citation engineering. Public case: cryptolicense.pro.

## Solutions

- [Foundation](https://crypto-seo.pro/solutions/foundation): One-off
  technical SEO + AEO setup for crypto sites. From $5,900.
- [Growth](https://crypto-seo.pro/solutions/growth): Monthly retainer.
  4 articles + on-page + niche links + AI citation tracking. From $3,900/mo.
- [Authority](https://crypto-seo.pro/solutions/authority): 12-month
  flagship for category leaders. Tier-1 PR, multilingual, dedicated SEO.

## Pricing

- Foundation ($5,900 one-off): Technical SEO + AEO setup for crypto sites.
- Growth ($3,900/month): Monthly engine — content, links, citation pickup.
- Authority ($9,800/month): Full GEO programme for category leaders.

## Case studies

- [Public] cryptolicense.pro — Crypto Licensing Niche Site, Built on
  Our Own Playbook. 0 → 47 indexed pages in 8 weeks.

## Company

- Operated by chyzh.agency
- Founded: 2024
- Geography: Worldwide, English-first
- Sales: [email protected]

## Out of scope

- Anonymous projects, mixers, pseudonymous founders.
- Hourly retainers without written scope or guaranteed-ranking offers.
- Token / equity comp in lieu of fees; cash only.
```
We generate this from the content collections at build time — see src/pages/llms.txt.ts in the repo. Auto-generation prevents drift between site copy and llms.txt.
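The build step reduces to a pure builder function plus a thin text/plain endpoint. A minimal sketch, assuming Astro-style file routing — the data shape and helper names here are illustrative, not the actual `src/pages/llms.txt.ts`:

```typescript
// Sketch of build-time generation: assemble llms.txt from structured
// data so the site copy and the file cannot drift apart.
// The Solution shape below is an illustrative stand-in for a real
// content collection entry.

interface Solution {
  name: string;
  url: string;
  blurb: string; // one sentence an LLM can lift verbatim
}

function buildLlmsTxt(
  brand: string,
  intro: string,
  solutions: Solution[],
): string {
  const header = [`# ${brand}`, `> ${intro}`, "", "## Solutions"];
  const items = solutions.map((s) => `- [${s.name}](${s.url}): ${s.blurb}`);
  return [...header, ...items, ""].join("\n");
}

// In an Astro-style endpoint the output would be served as text/plain:
//   export const GET = () =>
//     new Response(buildLlmsTxt(/* ... */), {
//       headers: { "Content-Type": "text/plain" },
//     });
```

The remaining sections (Pricing, Case studies, Company, Out of scope) extend the same pattern: one typed array per section, one mapper per line format.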
Frequently asked questions
Does llms.txt replace robots.txt?
No — they’re complementary. robots.txt controls crawl access at the file/directory level; llms.txt describes the brand for ingestion. Keep both.
Should we hide our pricing in llms.txt?
No. Pricing transparency lifts citation trust. AI engines disproportionately cite brands with public pricing because they can answer “How much does X cost?” with a number. “Contact us for a quote” gets you ignored.
Can we put gated content in llms.txt?
Don’t link to gated content from llms.txt — links should go to public pages. If you have gated whitepapers, name them as sources but leave them out of the link graph.
How long should the file be?
1,000–4,000 words. Below 1,000 the file doesn’t carry enough context for the model to answer non-trivial questions; above 4,000, some engines truncate it on consumption.
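That length window is easy to enforce in the same build step. A rough sketch — counting whitespace-separated tokens is an approximation (markdown syntax and URLs inflate the count slightly), which is fine for a guard:

```typescript
// Rough word-count guard for the 1,000–4,000-word window discussed above.

function wordCount(text: string): number {
  return text.split(/\s+/).filter((t) => t.length > 0).length;
}

function lengthVerdict(text: string): "too-short" | "ok" | "too-long" {
  const n = wordCount(text);
  if (n < 1000) return "too-short";
  if (n > 4000) return "too-long";
  return "ok";
}
```

Failing the build on `too-short` or `too-long` keeps the file inside the window every time content collections change.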