What 107,000 pages reveal about Core Web Vitals and AI search

As AI-led search becomes a real driver of discovery, an old assumption is back with new urgency. If AI systems infer quality from user experience, and Core Web Vitals (CWV) are Google’s most visible proxy for experience, then strong CWV performance should correlate with strong AI visibility.

The logic makes sense.

Faster page loads result in smoother experiences, increased user engagement, and improved behavioral signals, which AI systems (supposedly) reward.

But logic is not evidence.

To test this properly, I analysed 107,352 webpages that appear prominently in Google AI Overviews and AI Mode, examining the distribution of Core Web Vitals at the page level and comparing them against patterns of performance in AI-driven search and answer systems. 

The aim was not to confirm whether performance “matters”, but to understand how it matters, where it matters, and whether it meaningfully differentiates in an AI context.

What emerged was not a simple yes or no, but a more nuanced conclusion that challenges the prevailing assumptions behind how many teams currently prioritise technical optimisation in the AI era.

Why distributions matter more than scores

Most Core Web Vitals reporting is built around thresholds and averages. Pages pass or fail. Sites are summarized with mean scores. Dashboards reduce thousands of URLs into a single number.

The first step in this analysis was to step away from that framing entirely.

When Largest Contentful Paint was visualized as a distribution, the pattern was immediately clear. The dataset exhibited a heavy right skew. 

Median LCP values clustered in a broadly acceptable range, while a long tail of extreme outliers extended far beyond it. A relatively small proportion of pages were horrendously slow, but they exerted a disproportionate influence on the average.

Cumulative Layout Shift showed a similar issue. The majority of pages recorded near-zero CLS, while a small minority exhibited severe instability. 

Again, the mean suggested a site-wide problem that did not reflect the lived reality of most pages.
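
To make the skew concrete, here is a minimal sketch with invented numbers (not the study data), showing how a small slow tail drags the mean far away from the median:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical LCP values in seconds: most pages load around 2s,
# but a small minority are extremely slow (the right tail).
typical = rng.normal(loc=2.0, scale=0.4, size=9500).clip(min=0.5)
tail = rng.uniform(low=8.0, high=30.0, size=500)
lcp = np.concatenate([typical, tail])

print(f"median LCP: {np.median(lcp):.2f}s")  # ~2.0s, the typical page
print(f"mean LCP:   {np.mean(lcp):.2f}s")    # ~2.9s, pulled up by 5% of pages
```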

This matters because AI systems do not reason over averages, if they reason over user engagement metrics at all.

They evaluate individual documents, templates, and passages of content. A site-wide CWV score is an abstraction created for reporting convenience, not a signal consumed by an AI model.

Before correlation can even be discussed, one thing becomes clear. Core Web Vitals are not a single signal; they are a distribution of behaviors across a mixed population of pages.

Correlations

Because the data was uneven and not normally distributed, a standard Pearson correlation was not suitable. Instead, I used a Spearman rank correlation, which assesses whether higher-ranking pages on one measure also tend to rank higher or lower on another, without assuming a linear relationship.
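
For readers who want to reproduce the approach, the core step looks something like the sketch below. The column names and values are illustrative, not the study's actual schema:

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-page measurements: LCP in seconds alongside an
# AI-visibility score (higher = more prominent in AI answers).
df = pd.DataFrame({
    "lcp_seconds": [1.8, 2.3, 2.1, 9.5, 1.6, 14.2, 2.7, 3.1],
    "ai_visibility": [0.82, 0.74, 0.79, 0.21, 0.88, 0.09, 0.66, 0.61],
})

# Spearman ranks both variables first, so it tolerates the heavy
# right skew that rules out a standard Pearson correlation.
rho, p_value = spearmanr(df["lcp_seconds"], df["ai_visibility"])
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```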

This matters because, if Core Web Vitals were closely linked to AI performance, pages that perform better on CWV would also tend to perform better in AI visibility, even if the relationship were weak.

I found a small negative relationship. It was present, but limited. For Largest Contentful Paint, the correlation ranged from -0.12 to -0.18, depending on how AI visibility was measured. For Cumulative Layout Shift, it was weaker still, typically between -0.05 and -0.09.

These relationships are visible when you look at large volumes of data, but they are not strong in practical terms. Crucially, they do not suggest that faster or more stable pages are consistently more visible in AI systems. Instead, they point to a more subtle pattern.

The absence of upside, and the presence of downside

The data do not support the claim that improving Core Web Vitals beyond basic thresholds improves AI performance. Pages with good CWV scores did not reliably outperform their peers in AI inclusion, citation, or retrieval.

However, the negative correlation is instructive.

Pages sitting in the extreme tail of CWV performance, particularly for LCP, were far less likely to perform well in AI contexts. 

These pages tended to exhibit lower engagement, higher abandonment, and weaker behavioral reinforcement signals. Those second-order effects are precisely the kinds of signals AI systems rely on, directly or indirectly, when learning what to trust.

This reveals the true shape of the relationship.

Core Web Vitals do not act as a growth lever for AI visibility. They act as a constraint.

Good performance does not create an advantage. Severe failure creates a disadvantage.

This distinction is easy to miss if you examine only pass rates or averages. It becomes apparent when examining distributions and rank-based relationships.

Why ‘passing CWV’ is not a differentiator

One reason the positive correlation many expect does not appear is simple. Passing Core Web Vitals is no longer rare.

In this dataset, the majority of pages already met recommended thresholds, especially for CLS. When most of the population clears a bar, clearing it does not distinguish you. It merely keeps you in contention.
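
A rough pass-rate check against Google's published "good" thresholds (LCP at 2.5 seconds or less, CLS at 0.1 or less) makes the point. The per-page values below are invented, not drawn from the dataset:

```python
import numpy as np

# Illustrative per-page metrics; the thresholds are Google's published
# "good" cut-offs (LCP <= 2.5s, CLS <= 0.1).
lcp = np.array([1.8, 2.1, 2.4, 2.3, 1.6, 9.5, 2.2, 2.0, 14.2, 2.4])
cls = np.array([0.02, 0.05, 0.00, 0.31, 0.01, 0.08, 0.04, 0.00, 0.12, 0.03])

print(f"LCP pass rate: {np.mean(lcp <= 2.5):.0%}")  # 80%
print(f"CLS pass rate: {np.mean(cls <= 0.1):.0%}")  # 80%
```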

AI systems are not selecting between pages because one loads in 1.8 seconds and another in 2.3 seconds. They are selecting between pages because one explains a concept clearly, aligns with established sources, and satisfies the user’s intent, whereas the other does not.

Core Web Vitals ensure that the experience does not actively undermine those qualities. They do not substitute for them.

Reframing the role of Core Web Vitals in AI strategy

The implication is not that Core Web Vitals are unimportant. It is that their role has been misunderstood.

In an AI-led search environment, Core Web Vitals function as a risk-management tool, not a competitive strategy. They prevent pages from falling out of contention due to poor experience signals.

This reframing has practical consequences for developing an AI visibility strategy.

Chasing incremental CWV gains across already acceptable pages is unlikely to deliver returns in AI visibility. It consumes engineering effort without changing the underlying selection logic AI systems apply.

Targeting the extreme tail, however, does matter. Pages with severely degraded performance generate negative behavioral signals that can suppress trust, reduce reuse, and weaken downstream learning signals.
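
In practice, that means triaging by percentile rather than polishing averages. A minimal sketch, assuming per-URL LCP field measurements (the URLs and values here are hypothetical):

```python
import pandas as pd

# Hypothetical per-URL field data; surface the slowest 5% of pages
# (the extreme tail) for engineering attention first.
pages = pd.DataFrame({
    "url": [f"/page-{i}" for i in range(10)],
    "lcp_seconds": [1.9, 2.2, 2.0, 2.4, 1.7, 2.1, 12.8, 2.3, 22.5, 2.0],
})

cutoff = pages["lcp_seconds"].quantile(0.95)
tail = pages[pages["lcp_seconds"] > cutoff]
print(tail[["url", "lcp_seconds"]])  # the pages worth fixing first
```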

The objective is not to make everything perfect. It is to ensure that the content you want AI systems to rely on is not compromised by avoidable technical failure.

Why this matters

As AI systems increasingly mediate discovery, brands are seeking controllable levers. Core Web Vitals feel attractive because they are measurable, familiar, and actionable.

The risk is mistaking measurability for impact.

This analysis suggests a more disciplined approach. Treat Core Web Vitals as table stakes. Eliminate extreme failures. 

Protect your most important content from technical debt. Then shift focus back to the factors AI systems actually use to infer value, such as clarity, consistency, intent alignment, and behavioral validation.

Core Web Vitals: A gatekeeper, not a differentiator

Based on an analysis of 107,352 AI-visible webpages, the relationship between Core Web Vitals and AI performance is real, but limited.

There is no strong positive correlation. Improving CWV beyond baseline thresholds does not reliably improve AI visibility.

However, a measurable negative relationship exists at the extremes. Severe performance failures are associated with poorer AI outcomes, mediated through user behavior and engagement.

Core Web Vitals are therefore best understood as a gate, not a signal of excellence.

In an AI-led search landscape, this clarity matters.
