We Tracked 1,885 Pages Adding Schema. AI Citations Barely Moved.


We kicked off this study by analyzing 6 million URLs and found that schema markup is much more common on pages cited by AI than on pages that aren’t.

AI-cited pages were almost three times more likely to have JSON-LD than non-cited pages.

That’s a big gap, and the kind of stat that gets shared in LinkedIn carousels and conference slides as proof that schema is an AI visibility lever.

But we weren’t satisfied with that finding, since it could easily have been correlation, not causation.

Schema markup tends to live on better-maintained, more technically sophisticated sites, and those same sites publish stronger content, build more authority, earn more links, and do all the other things that get pages cited.

Schema could be doing real work, but it could also just be riding the wave of every other signal.

So we couldn’t actually answer the question SEOs really care about: if I add schema to my page, will I get cited more by AI?

To find out, we ran a second study designed to isolate the effect of adding schema.

Here’s what we found.

We tracked 1,885 web pages that added JSON-LD schema between August 2025 and March 2026, matched them against 4,000 control pages, and measured citation changes across Google AI Overviews, AI Mode, and ChatGPT.
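For intuition on what “matched” means here: each treated page is compared against control pages that looked similar before the schema was added, so that any shared trend cancels out. The study doesn’t spell out the matching features in this section, so the nearest-neighbor sketch below (with made-up features) is illustrative only:

```python
import numpy as np

def match_controls(treated_feats: np.ndarray, control_feats: np.ndarray) -> list[int]:
    """For each treated page, return the index of the most similar control page."""
    matches = []
    for t in treated_feats:
        # Euclidean distance between this treated page and every control page
        dists = np.linalg.norm(control_feats - t, axis=1)
        matches.append(int(dists.argmin()))
    return matches

# Made-up features per page: [pre-period daily citations, word count / 100]
treated = np.array([[250.0, 18.0], [90.0, 7.0]])
controls = np.array([[240.0, 17.0], [95.0, 8.0], [600.0, 30.0]])
print(match_controls(treated, controls))  # -> [0, 1]
```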

Adding schema produced no major uplift in citations on any platform.

| AI source | Effect on citations | Verdict |
| --- | --- | --- |
| Google AIO | −4.6% | Small but statistically significant decline relative to matched controls (both groups were declining together, but treated pages fell slightly faster) |
| Google AI Mode | +2.4% | Statistically indistinguishable from zero |
| ChatGPT | +2.2% | Statistically indistinguishable from zero |

These percentages come from our most reliable analysis (a matched difference-in-differences [DiD] test).

In this test, treated pages performed slightly better than control pages on average for both AI Mode and ChatGPT, but the differences are small enough that they could easily be random noise across thousands of URLs.
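If you haven’t met difference-in-differences before, here’s a minimal sketch of the idea (column names and numbers are invented; the study’s actual model is surely more involved):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy panel: average daily citations per page before/after the schema change.
# All numbers are invented for illustration.
df = pd.DataFrame({
    "citations": [260, 240, 245, 232, 250, 246, 255, 252],
    "treated":   [1,   1,   1,   1,   0,   0,   0,   0],
    "post":      [0,   1,   0,   1,   0,   1,   0,   1],
})

# OLS with an interaction term: the coefficient on treated:post is the DiD
# estimate, i.e. how much treated pages' change differed from the controls'
# change over the same window.
model = smf.ols("citations ~ treated * post", data=df).fit()
print(model.params["treated:post"])  # -> -13.0 in this toy example
```

That interaction coefficient is what the percentages above summarize: the treated group’s change, net of the trend it shares with the controls.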

AI Overviews showed a 4.6% decline, which is small but statistically significant relative to matched control pages.

But that isn’t quite the full story—we’ll get into that in the next section.

So, overall, we can’t tell whether schema did a tiny bit of good or nothing at all.

A closer look at the AI Overviews decline

AI Overview citations on treated pages fell by 4.6% relative to control pages, and the result is “statistically significant” (the odds of seeing a gap this large by pure chance are about 1 in 2,500).

But before anyone reads this as “adding schema hurts your AI Overview citations”, there are two things you need to bear in mind:

  1. The absolute size is small. We’re talking about an average loss of around 12 daily citations per page, in a sample where most pages were getting hundreds.
  2. Both treated and matched control pages were already on a steep downward trajectory before schema was added—the kind of decline you’d expect from AI Overviews pulling back from these specific types of content for reasons unrelated to schema (e.g. a Google update changing what gets surfaced, the content getting stale, or Google not having recrawled the page recently).

That said, if adding schema had no effect on citations either way, we’d expect treated pages and matched controls to decline together at the same rate (which is broadly what we see for AI Mode and ChatGPT).

The fact that treated pages declined slightly more suggests schema had a small negative effect—but it could also reflect other factors.

We can’t tell which one it is from this data alone.
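For a sense of what that 1-in-2,500 figure means: it’s roughly the probability of seeing a gap this large if schema had no effect at all. The write-up doesn’t specify the exact test used, but a generic permutation test over simulated data illustrates the logic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-page citation changes (post minus pre). These numbers are
# illustrative only, not the study's data.
treated = rng.normal(-12, 60, size=1885)   # treated pages lost ~12/day
control = rng.normal(0, 60, size=4000)

observed = treated.mean() - control.mean()

# Permutation test: if group labels were meaningless, how often would a
# gap at least this large appear by chance?
pooled = np.concatenate([treated, control])
n_t = len(treated)
hits = 0
n_perm = 10_000
for _ in range(n_perm):
    rng.shuffle(pooled)
    gap = pooled[:n_t].mean() - pooled[n_t:].mean()
    if abs(gap) >= abs(observed):
        hits += 1
print(f"p ≈ {hits / n_perm:.4f}")  # "1 in 2,500" corresponds to p ≈ 0.0004
```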

How we isolated the effect of adding schema

Using Brand Radar, Xibeijia pulled a few million URLs cited in AI Overviews.

She then retrieved the HTML history from our crawler database and labeled whether each URL contained JSON-LD.
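The labeling step itself is simple in principle. We don’t know the exact crawler logic, but a minimal check for JSON-LD in stored HTML (using BeautifulSoup; the function name is ours) might look like this:

```python
from bs4 import BeautifulSoup

def has_json_ld(html: str) -> bool:
    """Return True if the page contains at least one JSON-LD script block."""
    soup = BeautifulSoup(html, "html.parser")
    return soup.find("script", type="application/ld+json") is not None

print(has_json_ld('<script type="application/ld+json">{"@type": "Article"}</script>'))  # True
```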


