# Cited Docs

> Reference knowledge base for Generative Engine Optimization (GEO). Built by Cited.

## Docs

- [2026 Changelog](https://docs.getcited.in/changelog/2026.md): Documentation and methodology changes shipped in 2026.
- [Changelog](https://docs.getcited.in/changelog/index.md): A record of significant changes to Cited's documentation, methodology, and tracked platforms.
- [What sources LLMs cite](https://docs.getcited.in/concepts/foundations/ai-citation-sources.md): The types of web content that LLMs prefer to cite — and what makes a source more likely to be referenced.
- [Citations vs mentions](https://docs.getcited.in/concepts/foundations/citations-vs-mentions.md): The difference between a citation (linked source) and a mention (brand name without link) in AI-generated answers.
- [GEO vs SEO](https://docs.getcited.in/concepts/foundations/geo-vs-seo.md): How Generative Engine Optimization differs from traditional Search Engine Optimization — and why both matter.
- [Query intent taxonomy](https://docs.getcited.in/concepts/foundations/intent-taxonomy.md): How Cited classifies prompts by intent type — informational, commercial, navigational, transactional — and why it matters for AI visibility.
- [Non-determinism in LLM responses](https://docs.getcited.in/concepts/foundations/non-determinism.md): Why the same prompt produces different answers across runs — and how Cited accounts for this in measurement.
- [Average position](https://docs.getcited.in/concepts/metrics/average-position.md): Average position tracks where your brand typically appears in the citation list of AI-generated answers.
- [Citation rate](https://docs.getcited.in/concepts/metrics/citation-rate.md): Citation rate measures the percentage of AI answers that include a direct link to your domain as a source.
- [Mention rate](https://docs.getcited.in/concepts/metrics/mention-rate.md): Mention rate measures the percentage of AI-generated answers that reference your brand across tracked prompts.
- [Sentiment in AI answers](https://docs.getcited.in/concepts/metrics/sentiment.md): How AI platforms characterize your brand — positive, neutral, or negative — when mentioning it in responses.
- [Share of voice in AI search](https://docs.getcited.in/concepts/metrics/share-of-voice.md): Share of voice measures how often your brand appears in AI answers relative to competitors for the same prompts.
- [How ChatGPT search works](https://docs.getcited.in/concepts/platforms/how-chatgpt-search-works.md): How ChatGPT generates answers, selects sources, and decides which brands to mention — and what this means for AI visibility.
- [How Claude search works](https://docs.getcited.in/concepts/platforms/how-claude-search-works.md): How Anthropic's Claude generates answers, its approach to source attribution, and what brands should know about Claude's AI visibility behavior.
- [How Gemini search and AI Overviews work](https://docs.getcited.in/concepts/platforms/how-gemini-search-works.md): How Google's Gemini generates AI answers, how it relates to Google Search, and what brands should know about AI Overviews.
- [How Grok search works](https://docs.getcited.in/concepts/platforms/how-grok-search-works.md): How xAI's Grok generates answers using X (Twitter) data and web search, and what brands should know about Grok's emerging role in AI visibility.
- [How Perplexity ranks sources](https://docs.getcited.in/concepts/platforms/how-perplexity-ranks-sources.md): How Perplexity selects, ranks, and cites sources in its AI-generated answers — and why it matters for citation rate measurement.
- [Configure competitors](https://docs.getcited.in/get-started/configure-competitors.md): Add competitor brands to Cited for share of voice, gap analysis, and competitive tracking across AI platforms.
- [Configure positioning](https://docs.getcited.in/get-started/configure-positioning.md): Set your brand's target attributes so Cited can prioritize recommendations that align with how you want to be known.
- [Create your first brand](https://docs.getcited.in/get-started/create-your-first-brand.md): Set up your first brand in Cited — what to prepare, what happens at each step, and when to expect your first AI visibility data.
- [Invite your team](https://docs.getcited.in/get-started/invite-your-team.md): Add team members to your Cited workspace so marketing, content, and SEO teams can collaborate on AI visibility.
- [Understanding your first report](https://docs.getcited.in/get-started/understanding-your-first-report.md): How to read your Cited dashboard — what each section shows, which metrics matter most, and where to focus first.
- [What is Cited](https://docs.getcited.in/get-started/what-is-cited.md): Cited is a GEO platform that tracks how brands appear in AI-generated answers across ChatGPT, Perplexity, Gemini, Claude, and Grok.
- [AI Overview](https://docs.getcited.in/glossary/ai-overview.md): Google's AI-generated answer box that appears at the top of some search results, synthesizing information from multiple sources.
- [AI Shelf](https://docs.getcited.in/glossary/ai-shelf.md): The set of brands an AI platform mentions in response to a category query — the digital equivalent of retail shelf space.
- [Answer Engine](https://docs.getcited.in/glossary/answer-engine.md): An AI platform that generates synthesized answers to questions rather than returning a list of links.
- [Authority Signal](https://docs.getcited.in/glossary/authority-signal.md): Any indicator that a source is trustworthy and expert on a topic — editorial bylines, domain reputation, citation by other authoritative sources.
- [Average Position](https://docs.getcited.in/glossary/average-position.md): The mean rank at which a brand appears in AI citation lists — lower is better.
- [Brand Mention](https://docs.getcited.in/glossary/brand-mention.md): Any appearance of a brand name in an AI-generated response — the atomic unit of AI visibility measurement.
- [Citation](https://docs.getcited.in/glossary/citation.md): A clickable link to a source URL in an AI-generated answer — distinct from a mention, which is a brand name without a link.
- [Citation Rate](https://docs.getcited.in/glossary/citation-rate.md): The percentage of AI responses that include a direct link to a brand's domain as a source.
- [Citation Source](https://docs.getcited.in/glossary/citation-source.md): The specific web page or domain an AI platform links to when citing a source in its response.
- [Competitor Gap](https://docs.getcited.in/glossary/competitor-gap.md): A query where a competitor is mentioned by AI platforms but your brand is not — the highest-leverage optimization target.
- [Content Freshness](https://docs.getcited.in/glossary/content-freshness.md): How recently a page's content was published or updated — a signal that influences AI citation probability.
- [Crawl Budget](https://docs.getcited.in/glossary/crawl-budget.md): The number of pages a search engine or AI crawler will fetch from a site in a given time period.
- [E-E-A-T](https://docs.getcited.in/glossary/e-e-a-t.md): Experience, Expertise, Authoritativeness, Trustworthiness — Google's content quality framework, increasingly relevant for AI visibility.
- [Editorial Citation](https://docs.getcited.in/glossary/editorial-citation.md): A citation to an editorial publication rather than a brand's own domain — the most common citation pattern in AI-generated answers.
- [GEO](https://docs.getcited.in/glossary/geo.md): Generative Engine Optimization — the practice of optimizing brand visibility in AI-generated search answers.
- [GEO Confidence Score](https://docs.getcited.in/glossary/geo-confidence-score.md): A measure of how statistically reliable a brand's AI visibility metrics are, based on sample size and response variance.
- [GEO Score](https://docs.getcited.in/glossary/geo-score.md): A composite assessment of a website's technical readiness for AI discoverability — distinct from mention rate, which measures outcomes.
- [Grounding](https://docs.getcited.in/glossary/grounding.md): The process by which an LLM connects its generated text to specific factual sources or retrieved content.
- [Hallucination](https://docs.getcited.in/glossary/hallucination.md): When an LLM generates information that is factually incorrect or fabricated, not grounded in real sources.
- [Impact Score](https://docs.getcited.in/glossary/impact-score.md): A 0-100 score Cited assigns to each recommendation indicating priority relative to the brand's own data maturity.
- [Glossary](https://docs.getcited.in/glossary/index.md): A-Z glossary of Generative Engine Optimization (GEO) terminology — definitions for every metric, concept, and technique in AI visibility.
- [Intent Type](https://docs.getcited.in/glossary/intent-type.md): The category of purpose behind a user's query — informational, commercial, navigational, or transactional.
- [LLM](https://docs.getcited.in/glossary/llm.md): Large Language Model — the AI models that power ChatGPT, Gemini, Claude, Perplexity, and Grok.
- [llms.txt](https://docs.getcited.in/glossary/llms-txt.md): A plain-text file at a website's root that tells AI crawlers which pages to prioritize — the robots.txt equivalent for LLMs.
- [Mention Rate](https://docs.getcited.in/glossary/mention-rate.md): The percentage of tracked prompts where an AI platform names a brand in its generated answer.
- [Model Deprecation](https://docs.getcited.in/glossary/model-deprecation.md): When an AI provider retires or replaces a model version, potentially changing brand visibility behavior.
- [Non-Branded Query](https://docs.getcited.in/glossary/non-branded-query.md): A search prompt that asks about a category or product type without naming any specific brand — the hardest test of AI visibility.
- [Non-Determinism](https://docs.getcited.in/glossary/non-determinism.md): The property of LLMs that causes the same prompt to produce different responses across runs.
- [Perplexity Sonar](https://docs.getcited.in/glossary/perplexity-sonar.md): Perplexity's API model for programmatic search queries — available in Sonar (standard) and Sonar Pro (advanced) variants.
- [Position Bias](https://docs.getcited.in/glossary/position-bias.md): The tendency for users and LLMs to give disproportionate attention and credibility to items listed first.
- [Prompt Engineering](https://docs.getcited.in/glossary/prompt-engineering.md): The practice of crafting specific inputs to LLMs to produce desired outputs — in GEO, the art of writing queries that reflect real customer language.
- [Query Archetype](https://docs.getcited.in/glossary/query-archetype.md): A category of query structure — problem-first, context-specific, budget-anchored, comparison, recommendation-seeking, or feature-curious.
- [Query Intent](https://docs.getcited.in/glossary/query-intent.md): The underlying purpose of a user's query — what they are trying to accomplish when asking an AI platform a question.
- [RAG](https://docs.getcited.in/glossary/rag.md): Retrieval-Augmented Generation — the technique of combining LLM generation with real-time web retrieval.
- [Recency Bias](https://docs.getcited.in/glossary/recency-bias.md): The tendency of LLMs — especially retrieval-augmented ones — to favor recently published or updated content.
- [Schema Markup](https://docs.getcited.in/glossary/schema-markup.md): Structured data embedded in web pages that helps search engines and LLMs understand content type and meaning.
- [Sentiment](https://docs.getcited.in/glossary/sentiment.md): How an AI platform characterizes a brand when mentioning it — positive, neutral, negative, or mixed.
- [SERP](https://docs.getcited.in/glossary/serp.md): Search Engine Results Page — the traditional list of ranked links returned by a search engine, now being supplemented by AI-generated answers.
- [Share of Voice](https://docs.getcited.in/glossary/share-of-voice.md): A brand's percentage of total AI mentions within its category — the primary competitive visibility metric.
- [Source Affinity](https://docs.getcited.in/glossary/source-affinity.md): The observed preference of a specific AI platform for certain types or categories of sources.
- [Strategic Alignment Multiplier](https://docs.getcited.in/glossary/strategic-alignment-multiplier.md): A 1.5x multiplier applied to an Impact Score when the recommendation's gap topic matches the brand's declared positioning.
- [Structured Data](https://docs.getcited.in/glossary/structured-data.md): Machine-readable information embedded in web pages using standards like Schema.org — helps LLMs parse and cite content accurately.
- [Topic Taxonomy](https://docs.getcited.in/glossary/topic-taxonomy.md): A hierarchical classification of topics and subtopics used to organize queries and analyze brand visibility by subject area.
- [Cited Index Benchmarks — Methodology](https://docs.getcited.in/methodology/benchmarks.md): How Cited publishes empirical benchmark ranges for AI visibility metrics, what they measure, and what they don't.
- [Data freshness and statistical confidence](https://docs.getcited.in/methodology/data-freshness.md): How to interpret the recency and reliability of Cited's visibility data — what the numbers mean and when to trust them.
- [GEO Score Methodology](https://docs.getcited.in/methodology/geo-score.md): How the GEO Score measures your website's technical readiness for AI search engines across 3 pillars and 15 signals.
- [How we score recommendation impact](https://docs.getcited.in/methodology/impact-scoring.md): The brand-relative 0-100 scoring formula Cited uses to prioritize recommendations.
- [How we extract mentions and sentiment](https://docs.getcited.in/methodology/mention-extraction.md): How Cited identifies brand mentions and classifies sentiment from AI-generated responses — the parsing pipeline that turns raw text into structured visibility data.
- [How we generate queries](https://docs.getcited.in/methodology/query-generation.md): How Cited creates the prompts used to measure brand visibility — consumer-authentic, jargon-free, intent-classified queries generated from brand and competitor intelligence.
- [Refresh cadence and pipeline schedule](https://docs.getcited.in/methodology/refresh-cadence.md): When Cited's data pipeline runs, how often metrics are refreshed, and what determines the freshness of your dashboard data.
- [Schema markup and platform tradeoffs](https://docs.getcited.in/methodology/schema-platform-tradeoffs.md): Why Cited's docs do not currently implement custom JSON-LD schema — and what we do instead to maintain LLM discoverability.
- [What we don't do and why](https://docs.getcited.in/methodology/what-we-dont-do.md): Deliberate scope boundaries — the things Cited intentionally does not measure, track, or claim, and the reasoning behind those decisions.
- [Which LLMs we track and why](https://docs.getcited.in/methodology/which-llms-we-track.md): The five AI platforms Cited monitors, which specific models are used, and the reasoning behind platform selection.
- [Close competitor gaps in AI search](https://docs.getcited.in/playbooks/competitor-gap-workflow.md): A systematic workflow for identifying where competitors outperform you in AI visibility — and what to do about each gap.
- [Improve your Perplexity citations](https://docs.getcited.in/playbooks/improve-perplexity-citations.md): A step-by-step guide to increasing how often Perplexity cites your domain as a source — the most actionable citation improvement workflow.
- [Fix your llms.txt and robots.txt](https://docs.getcited.in/playbooks/llms-txt-and-robots.md): How to configure llms.txt and robots.txt so AI crawlers can discover and index your content — the technical prerequisites for AI visibility.
- [Win editorial coverage that LLMs cite](https://docs.getcited.in/playbooks/win-editorial-coverage.md): How to earn mentions and citations in the editorial publications that AI platforms trust most — a PR strategy for the AI search era.
- [Write content that gets cited](https://docs.getcited.in/playbooks/write-citable-content.md): Content patterns that increase the likelihood of being mentioned and cited by AI platforms — definition-first writing, entity-first sentences, and structured answers.

## OpenAPI Specs

- [openapi](https://docs.getcited.in/api-reference/openapi.json)