How a Specialty Thyroid Surgery Center Outranks Mayo Clinic, MD Anderson & Cleveland Clinic in AI Search
-
Ask any AI visibility tool how the Clayman Thyroid Center at the Hospital for Endocrine Surgery performs, and you get this:
SEMrush AI Visibility Score: 26 / 100 — Low. "Rarely mentioned in LLM outputs compared to competitors."
Six months prior, that same score was 11.
On the surface, this looks like failure. A score labeled Low by a leading SEO platform. Barely visible. Rarely mentioned.
Except it isn't failure. And understanding why is the entire point of this case study.
-
The SEMrush AI Visibility score of 26 out of 100 is not wrong. It is measuring something real. But what it's measuring has almost nothing to do with revenue.
Across the full universe of thyroid-related keywords, this practice appears less frequently than WebMD, Healthline, the Mayo Clinic patient education library, and large academic medical centers with entire content teams publishing informational articles at scale. These aren't competitors. They're publishers. A patient reading "what is hypothyroidism" on Healthline is not choosing between Healthline and a thyroid surgeon. They aren't even close to making that decision yet.
And here's what makes it worse: 86% of the mentions SEMrush tracks come from AI Overview — Google's broad search layer that pulls from the entire web. The remaining visibility is almost entirely from ChatGPT. The rest of the widely used AI platforms patients turn to when making surgical decisions aren't meaningfully represented at all.
The result is a score that looks like a verdict. It isn't. It's a tool handing you mountains of data with no way to tell you which of it matters, which queries are sending patients to your competitors, or what to do about any of it.
-
The Clayman Thyroid Center is the highest-volume thyroid and parathyroid surgical practice in the United States. Patients travel from across the country — and internationally — specifically to be treated here.
The question worth asking wasn't "what is your AI visibility score?" It was: when a patient who needs thyroid surgery opens ChatGPT and asks who they should see — does this practice show up?
To answer that, we built a custom library of high-intent surgical queries and tested every one across six major AI platforms. Every query was assigned an Opportunity Score before testing began — calculated by multiplying Intent, Revenue Value, and Search Volume — so we knew exactly which queries were worth winning.
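For illustration, here's a minimal sketch of that calculation in Python. The 1–4 rating scale for each factor is an assumption; the case study only states that the score is the product of the three factors with a maximum of 64, which is consistent with three 1–4 ratings.

```python
# Hypothetical sketch: Opportunity Score = Intent x Revenue Value x Search Volume.
# The 1-4 scale per factor is an assumption (4 x 4 x 4 = 64, the stated maximum).

def opportunity_score(intent: int, revenue: int, volume: int) -> int:
    """Combine three 1-4 ratings into a 1-64 Opportunity Score."""
    for name, value in (("intent", intent), ("revenue", revenue), ("volume", volume)):
        if not 1 <= value <= 4:
            raise ValueError(f"{name} must be rated 1-4, got {value}")
    return intent * revenue * volume

# Example: a query like "best hospital for thyroid surgery" might rate
# 4 for intent, 4 for revenue value, and 3 for search volume.
print(opportunity_score(4, 4, 3))  # 48
```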
Then we went looking.
-
9 queries. 6 platforms. Every result scored.
Each query was scored 0–3 per platform based on visibility:
0 — Not mentioned
1 — Mentioned in a list
2 — Positive mention with context
3 — Top recommendation
+1 bonus — Cited or linked
Maximum possible score per query: 24 points across 6 platforms (a maximum of 4 points per platform, including the citation bonus).
Queries were weighted by Opportunity Score — the product of Intent × Revenue × Volume — with a maximum of 64. This produced a Revenue-Weighted AI Visibility Score (R-AVS) that reflects not just how visible the practice is, but how visible it is where it matters most.
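To make the weighting concrete, here is a minimal sketch of how per-query visibility scores and Opportunity Scores could roll up into the two headline numbers. The normalization to a 0–100 scale is an assumption; the exact formula behind the published scores isn't spelled out here.

```python
# Minimal sketch of the scoring described above, under stated assumptions:
# each query earns 0-3 per platform plus a +1 citation bonus (max 4 x 6 = 24),
# and both scores are normalized to 0-100. The exact normalization used in
# the case study is not published, so this is illustrative only.

def avs(visibility: list[int]) -> float:
    """Unweighted AI Visibility Score: mean per-query score against the 24-point cap."""
    return 100 * sum(visibility) / (24 * len(visibility))

def r_avs(visibility: list[int], opportunity: list[int]) -> float:
    """Revenue-Weighted AVS: per-query scores weighted by Opportunity Score (max 64)."""
    weighted = sum(v * w for v, w in zip(visibility, opportunity))
    return 100 * weighted / (24 * sum(opportunity))

# Toy data: strong visibility on the highest-opportunity queries.
scores  = [22, 20, 8]    # per-query visibility out of 24
weights = [64, 48, 10]   # per-query Opportunity Scores out of 64
print(round(avs(scores)))             # 69
print(round(r_avs(scores, weights)))  # 84
```

In the toy data, the weighted score lands above the unweighted one precisely because the highest-opportunity queries score best, which is the same pattern behind the headline numbers below.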
-
Headline Scores:
62 / 100 — AI Visibility Score (AVS)
68 / 100 — Revenue-Weighted Score (R-AVS)
~60% — AI Share of Voice on surgical queries
The R-AVS exceeding the AVS is the most important finding. It means this practice is more visible on high-revenue, high-intent queries than on average — the ideal outcome. They're not just visible. They're visible where it counts.
-
The data is compelling on its own. But what really validated this work was something much simpler.
Patients started telling the practice they found it through ChatGPT and Gemini. Not Google. Not a referral. AI.
That's not a fluke. It makes complete sense when you see the coverage numbers. If you're the top recommendation on "best hospital for thyroid surgery" across six AI platforms, patients are going to find you. They're going to call. Some of them are going to become surgical cases worth tens of thousands of dollars in revenue.
Six months ago the SEMrush score was 11. It's now 26 — a 136% increase. But the number that actually matters is the coverage rate on the queries that drive surgical volume. On those, coverage runs from 88% to 100%. That's not an accident. That's the result of focused, intentional work on the content and signals that AI platforms use to form recommendations.
-
What This Means for Your Practice
If your practice is like most specialty providers, you are being evaluated and recommended — or not — by AI systems millions of times per month. The patients who find you through AI are typically high-intent. They've done their research. They know what they need. They're ready to book.
The question is not whether AI visibility matters. It does. The question is whether you're visible on the queries that send those patients to you specifically — the high-intent, high-revenue queries where your competitors are either winning or where the space is still wide open.
A generic AI visibility score won't tell you that. The right analysis will.
The Takeaway?
Six months of strategic GEO work. Content rebuilt around the queries that actually drive revenue. Authority signals strengthened across every major platform.
But when we went to measure the results, the software said AI Visibility was "low."
I knew that wasn't right. Patients were calling to say they found the practice through ChatGPT and Gemini. Something wasn't adding up.
That disconnect — between what the software was telling me and what was actually happening in the real world — is exactly how Lowfruit was born.
We built a methodology to measure what the tools couldn't. And when we did, the picture was completely different.
Top recommendation on all six major AI platforms. Ahead of Mayo Clinic, MD Anderson, and Cleveland Clinic.