The Master Leaderboard
Click any column header to re-sort. The AI Visibility Score (0–100) blends citation counts from ChatGPT, Gemini, Perplexity and Google AI Overviews, sourced from Ahrefs Brand Radar. Click any centre name to visit its site.
Top 10 in each AI engine
These are the share-worthy mini-rankings. Note how dramatically the leaders shift between engines. The brand winning ChatGPT isn't always the one Google's AI Overview cites.
Top 10 in ChatGPT (GPT-5.4)
Top 10 in Gemini (Gemini 2.5 Pro)
Top 10 in Perplexity (Sonar Pro)
Cited in Google AI Overview (Brand Radar)
The data, plotted
Two charts that tell the story at a glance: how AI visibility distributes across the 30, and whether traditional SEO authority (DR) actually correlates with AI visibility.
AI Visibility Score distribution
Each bar is one centre, sorted by score. The gap between leaders and laggards is the entire point.
Domain Rating × AI Visibility
If high DR guaranteed AI mentions, this would be a tidy diagonal. It isn't. High DR centres can still score zero.
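The DR-versus-visibility relationship in that second chart can also be checked numerically. A minimal sketch, using invented (DR, AI Visibility Score) pairs purely for illustration, with a hand-rolled Pearson correlation:

```python
# Invented (DR, AI Visibility Score) pairs for illustration only --
# a high-DR centre can still score zero, which drags correlation down.
data = [(72, 0), (65, 40), (58, 75), (34, 10), (8, 55), (22, 0)]

def pearson(pairs):
    """Pearson correlation coefficient of a list of (x, y) pairs."""
    n = len(pairs)
    xs, ys = [p[0] for p in pairs], [p[1] for p in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# With this toy data the coefficient sits near zero: DR alone
# does not predict AI visibility.
print(round(pearson(data), 2))
```

Swap in the real 30-centre figures and the same check reproduces the scatter chart's verdict in one number.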
What the data tells us
Six patterns we noticed while pulling this data, useful both for centre owners benchmarking themselves and for parents trying to make sense of the noise.
66 AI Overview citations for Miracle Learning Centre, and DR is just 8
Authority isn't backlinks. Miracle Learning Centre has the second-highest Google AI Overview citation count on the list (66 responses) despite having only DR 8, driven entirely by deep, authoritative chemistry/physics explainer pages that AI engines treat as primary source material.
31 ChatGPT citations for EduKate Singapore, the highest of any centre
The ChatGPT crown belongs to EduKate. EduKate Singapore beats every other tuition brand on ChatGPT (31 cited responses vs MindChamps' 17 and BlueTree's 9). EduKate's evergreen English & creative-writing reading lists are heavily indexed by ChatGPT.
8 centres scored 30/100 or higher
The top tier is concentrated. Just over a quarter of tracked centres have meaningful AI visibility. The remaining 22 are essentially invisible to parents who search via AI, including some with strong organic traffic and DR.
10 centres with zero AI citations across all 4 engines
One in three is invisible. Ten of the 30 tracked multi-subject centres show zero citations across ChatGPT, Gemini, Perplexity and Google AI Overview combined, including Future Academy, Crucible, KRTC and JustEdu. None of these brands shows up when parents ask AI for recommendations.
Zenith's monthly organic traffic, top of the list, ranks #5 on AI
Volume players don't always win AI. Zenith Education Studio drives roughly 4× the organic traffic of any other tracked centre, yet ranks only #5 on AI Visibility. Volume buys leads from classical search; AI visibility buys positioning in the new search layer.
DR of Eye Level, and 0 AI citations on the SG site
Global DR doesn't translate to local AI visibility. Eye Level Singapore inherits the DR of its global parent domain (myeyelevel.com) but the SG-specific /Singapore/ subpath is essentially absent from AI engines' citation graph. AI visibility is local; DR is global.
How we built the score
Total transparency on inputs, weighting and limitations. Because a tracker is only as credible as its workings.
How the 30 were selected
We pulled the highest-traffic SG-targeting tuition centre domains from Ahrefs SERPs across "tuition centre Singapore", "best tuition Singapore", "math tuition Singapore", "english tuition Singapore", "science tuition Singapore", "chinese tuition Singapore" and "PSLE tuition". Aggregators (SmileTutor, Sassymama, parent blogs) and tutor matching platforms were excluded.
SEO data
Pulled 4 May 2026 via Ahrefs Site Explorer / batch-analysis: Domain Rating (DR), organic keywords ranked in SG, estimated organic traffic / month, total backlinks & referring domains.
AI citation data
All citation counts come from Ahrefs Brand Radar > cited-domains, queried per centre with a cited_domain_subdomains filter. Same source that powers the "AI citations" widget on the Site Explorer overview page.
Score formula
AI Visibility Score = ChatGPT score + Gemini score + Perplexity score + AI Overview score (max 25 each, total 100). Each engine's score is normalised by a reference count: min(responses, ref) / ref × 25. Reference points: ChatGPT 10, Gemini 5, Perplexity 10, AI Overview 50. Anything at or above the reference scores full marks; below it scales linearly.
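The formula above can be sketched in a few lines of Python. Reference counts are as stated; the example centre's response counts are illustrative, not real tracker data:

```python
# AI Visibility Score: each engine contributes up to 25 points,
# scaled linearly against a per-engine reference count.
REF = {"chatgpt": 10, "gemini": 5, "perplexity": 10, "ai_overview": 50}

def engine_score(responses: int, ref: int) -> float:
    # At or above the reference count -> full 25 points; below -> linear scale.
    return min(responses, ref) / ref * 25

def ai_visibility_score(responses: dict) -> float:
    # Sum the four engine scores; maximum is 100.
    return sum(engine_score(responses.get(e, 0), r) for e, r in REF.items())

# Illustrative centre: 31 ChatGPT, 3 Gemini, 4 Perplexity, 66 AI Overview responses.
print(ai_visibility_score(
    {"chatgpt": 31, "gemini": 3, "perplexity": 4, "ai_overview": 66}
))  # 25 + 15 + 10 + 25 = 75.0
```

Note the clamping: 31 ChatGPT responses and 310 ChatGPT responses both earn the full 25 points, so the score rewards presence across engines rather than dominance in one.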
Why responses, not pages?
Each engine reports two related numbers in Brand Radar: pages (unique URLs from the domain that AI engines have cited) and responses (number of distinct AI responses citing any of those pages). We use responses as the score input. It captures actual citation frequency rather than content footprint.
Limitations
Brand Radar's prompt coverage isn't exhaustive. It samples prompts AI engines are commonly asked, not every possible parent query. Citation counts skew higher for evergreen content (e.g. PSLE explainers cited across many parenting prompts) versus niche commercial content. Refresh cadence: quarterly.
About the tracker
Why does AI visibility matter for tuition centres?
How is the AI Visibility Score calculated?
How can a tuition centre improve its AI Visibility Score?
How often is the data refreshed?
Can my centre be added to the tracker?
Why do some high-DR centres score zero?
Is your centre on this list?
Want to see your full AI visibility profile, exact prompt-level test results, and a 90-day plan to climb the rankings? Get a free AI Visibility Audit from e-Alchemists. No obligation.