We measure AI search visibility the way it should be measured.
Rectifies tracks how your brand is cited by ChatGPT, Perplexity, Claude, and Google AI Mode. Every number carries a confidence interval. Every methodology change is logged publicly.
We track citations.
Real-time, multi-engine panel covering ChatGPT, Perplexity, Claude (grounded + baseline), Google AI Mode (via Gemini grounding), and Gemini direct.
We measure with rigour.
Bootstrap confidence intervals on every published metric. Mixed-effects logistic regression with random effects per engine.
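For readers who want to see what "a confidence interval on every metric" means in practice, here is a minimal sketch of a percentile bootstrap for citation share (the fraction of sampled prompts whose answer cites the brand). The data and function name are illustrative, not our production pipeline:

```python
import random

def bootstrap_ci(outcomes, n_boot=10_000, alpha=0.05, seed=42):
    """Percentile-bootstrap CI for the mean of 0/1 citation outcomes."""
    rng = random.Random(seed)
    n = len(outcomes)
    # Resample with replacement n_boot times; record each resample's mean.
    means = sorted(sum(rng.choices(outcomes, k=n)) / n for _ in range(n_boot))
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return sum(outcomes) / n, (lo, hi)

# Hypothetical panel: 100 prompts, 18 of which cited the brand.
outcomes = [1] * 18 + [0] * 82
share, (lo, hi) = bootstrap_ci(outcomes)
```

The point estimate alone ("18% citation share") hides the sampling noise; the interval is what makes the number defensible.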
We publish in the open.
Quarterly Hugging Face dataset with a citable DOI. Code on GitHub. Methodology versioned with a changelog.
Design partners
amvia.co.uk
B2B telecoms
“Citation share rose from 11% to 18% (95% CI: 14–22%) over 12 weeks.”
Read the case study
comparefibre.co.uk
B2C/SMB fibre comparison
“Tracked the post-Sept 2025 ChatGPT/Reddit collapse and rebuilt our content strategy inside three weeks.”
Read the case study
surfaceloop.com
EASM cybersecurity
“Identified Claude as the only engine citing competitor documentation consistently.”
Read the case study
Why our methodology is public
Every metric on this site has a definition, a sample size, a model version, and a confidence interval. The /methodology page is versioned. The /changelog records every change. Our prompt sets are published verbatim. Our evaluation code is on GitHub. Our quarterly dataset is on Hugging Face with a DOI.
This is what makes our numbers citable — and what makes them defensible when your CFO asks how they were calculated.
Read the methodology (v1.0)
Design Partner
£750/mo
Founding cohort. 20 prompts, five engines, monthly report.
Professional
£2,500/mo
For in-house growth and SEO teams. 100 prompts, all engines, weekly dashboard.
Enterprise
£5,000+/mo
For agencies and enterprise teams measuring across multiple brands.
We are not the AEO platform with the prettiest dashboard. We are the AEO platform whose numbers your CFO can defend.