What Is the True Role of CWV in AI Search?

We’re joined today by Anastasia Braitsik, a global leader in SEO and data analytics, to demystify one of the most pressing topics in our field: the real impact of Core Web Vitals on AI-driven search. In our conversation, Anastasia will challenge the common assumption that faster is always better, revealing insights from her analysis of over 107,000 pages. We’ll explore why looking at the distribution of performance data is far more insightful than relying on simple averages, what the subtle negative correlation between speed and AI visibility truly means for SEO strategy, and why we should start thinking of Core Web Vitals as a critical “gatekeeper” rather than a direct growth lever. She’ll also provide a practical framework for prioritizing technical fixes to protect a site’s most valuable content in this new AI-mediated landscape.

Many performance dashboards summarize thousands of URLs into a single average score. Given that performance often has a long tail of extreme outliers, how can teams effectively analyze performance distributions to develop a more precise and impactful AI optimization strategy? Please share some practical steps.

That’s the heart of the issue, isn’t it? We’ve become conditioned to look at a single number on a dashboard and feel either relief or panic. But when I visualized the data from over 107,000 pages, the truth was immediately obvious in the shape of the graph. We saw a heavy right skew, meaning a long tail of horrendously slow pages was inflating the site-wide average toward much worse scores than most users ever experience. The median Largest Contentful Paint was actually acceptable, but the mean suggested a crisis that didn’t reflect reality for most visitors. The first practical step is to move away from averages. Teams need to visualize their performance data as a distribution to see that tail. This allows you to stop thinking about a site-wide problem and start identifying the specific pages or templates that are true outliers, because those are the individual documents an AI system actually evaluates.
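To make that first step concrete, here is a minimal sketch in Python, assuming a per-URL field-data export with hypothetical columns "url" and "lcp_ms"; the 95th-percentile cutoff is an illustrative choice, not a standard, and this is not the analysis from the study itself:

```python
# A minimal sketch of distribution-first analysis, assuming a CSV export
# with hypothetical columns "url" and "lcp_ms" (LCP in milliseconds).
import pandas as pd

df = pd.read_csv("cwv_export.csv")  # hypothetical per-URL export

lcp = df["lcp_ms"]
summary = {
    "mean": lcp.mean(),         # inflated by the slow tail on a right-skewed site
    "median": lcp.median(),     # closer to what most users actually experience
    "p75": lcp.quantile(0.75),  # the percentile Google uses to assess CWV
    "p95": lcp.quantile(0.95),  # where the extreme tail starts to show
}
print(summary)

# Visualize the shape if matplotlib is available: lcp.plot.hist(bins=50)

# Flag the extreme tail instead of reacting to a site-wide average.
tail_cutoff = lcp.quantile(0.95)
tail_pages = df[df["lcp_ms"] > tail_cutoff].sort_values("lcp_ms", ascending=False)
print(tail_pages[["url", "lcp_ms"]].head(20))
```

Comparing the mean, median, and upper percentiles side by side is usually enough to reveal whether a “site-wide problem” is really a handful of broken templates.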

A small negative correlation was found between Largest Contentful Paint and AI visibility (-0.12 to -0.18). What does this subtle relationship suggest about the “punishment” for poor performance versus the “reward” for good performance? Can you elaborate on the practical implications for technical SEOs?

This is a really subtle but crucial point. That small negative correlation tells us something very specific: the relationship isn’t about a reward for good performance, but a penalty for severe failure. When we looked at the data, pages with great Core Web Vitals didn’t reliably outperform their peers in AI results. There was no consistent upside. However, pages sitting in that extreme tail—the really slow ones—were far less likely to do well. The practical implication for SEOs is to reframe their thinking. You aren’t chasing a reward by making a fast page even faster. Instead, you’re avoiding a penalty by fixing a page that is so slow it creates a terrible user experience, leading to higher abandonment and weaker behavioral signals that AI systems learn from.
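As an illustration of how such a relationship might be measured, the sketch below uses hypothetical columns "lcp_ms" and "ai_visibility" (for example, a per-URL count or share of AI citations); it is not the author’s actual methodology, and the quartile comparison simply checks whether any drop is confined to the slowest bucket:

```python
# A sketch of measuring a speed/visibility relationship on a joined dataset.
# Column names and the dataset itself are illustrative assumptions.
import pandas as pd

df = pd.read_csv("cwv_and_visibility.csv")  # hypothetical joined export

# Spearman is a reasonable choice for skewed, non-normal performance data.
corr = df["lcp_ms"].corr(df["ai_visibility"], method="spearman")
print(f"Spearman correlation: {corr:.2f}")

# Bin by LCP quartile to test the asymmetry the data described: is visibility
# roughly flat across faster buckets and lower only in the slowest one?
df["lcp_bucket"] = pd.qcut(df["lcp_ms"], 4, labels=["fast", "ok", "slow", "extreme"])
print(df.groupby("lcp_bucket", observed=True)["ai_visibility"].median())
```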

The concept of Core Web Vitals acting as a “gatekeeper” rather than a growth lever is compelling. Could you break down this distinction for us? Once a page meets basic performance thresholds, what other quality signals might an AI system prioritize when selecting what content to feature?

Absolutely. Think of it like this: passing Core Web Vitals gets your content into the venue, but it doesn’t get you on stage. The vast majority of pages in my dataset already met the recommended thresholds. When everyone has cleared the bar, clearing it doesn’t make you special; it just keeps you in the game. Once that basic technical gate is passed, the AI system’s selection logic shifts entirely. It stops caring whether your page loaded in 1.8 seconds versus 2.3 seconds. Instead, it prioritizes signals of actual value: Does this page explain the concept with clarity? Does it align with established, trustworthy sources? And most importantly, does it truly satisfy the user’s intent? Performance just ensures the experience doesn’t actively undermine those deeper qualities.
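The “gatekeeper” framing can be expressed as a simple pass/fail check against Google’s published “good” thresholds (LCP ≤ 2.5 s, INP ≤ 200 ms, CLS ≤ 0.1); the field names and sample pages below are illustrative:

```python
# A minimal sketch of a performance "gate": pass/fail, not a score to maximize.
from dataclasses import dataclass

@dataclass
class PageVitals:
    url: str
    lcp_ms: float
    inp_ms: float
    cls: float

def passes_gate(p: PageVitals) -> bool:
    """True if the page clears the basic gate; beyond this point, further
    speed gains are treated as noise rather than a visibility lever."""
    return p.lcp_ms <= 2500 and p.inp_ms <= 200 and p.cls <= 0.1

pages = [
    PageVitals("/guide/core-topic", lcp_ms=1800, inp_ms=120, cls=0.05),
    PageVitals("/archive/old-landing", lcp_ms=7400, inp_ms=650, cls=0.31),
]
for p in pages:
    status = "in the venue" if passes_gate(p) else "fix before anything else"
    print(p.url, "->", status)
```

Once a page passes, the function deliberately returns no gradation: a 1.8-second load and a 2.3-second load are treated identically, which mirrors the point about selection shifting to content quality.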

If fixing the worst-performing pages is more impactful than making good pages incrementally faster, how should teams adjust their workflow? Could you outline a process for identifying and prioritizing these “extreme tail” pages to protect a site’s most important content from technical debt?

The workflow needs to shift from a mindset of “optimization” to one of “risk management.” The current approach of chasing incremental gains across already acceptable pages is a waste of engineering resources in this context. The first step in a new process is to map your performance data as a distribution, not an average. Isolate that long tail of outliers. Next, cross-reference those poorly performing URLs with your most strategically important content—your cornerstone articles, your key product pages, the content you absolutely want AI systems to trust and cite. This creates your priority list. The goal is no longer to make everything perfect; it’s to strategically eliminate the catastrophic failures that compromise the content you depend on most. This protects your assets from being unfairly judged due to avoidable technical debt.
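A sketch of that prioritization step, assuming two hypothetical inputs (a per-URL performance export and a hand-curated list of strategic URLs) and an illustrative 95th-percentile cutoff:

```python
# Isolate the extreme tail, then cross-reference it with strategic content.
import pandas as pd

perf = pd.read_csv("cwv_export.csv")  # hypothetical columns: url, lcp_ms
strategic = set(pd.read_csv("cornerstone_urls.csv")["url"])  # curated priority URLs

# 1. Isolate the long tail of outliers (the cutoff is a judgment call).
cutoff = perf["lcp_ms"].quantile(0.95)
tail = perf[perf["lcp_ms"] > cutoff]

# 2. Cross-reference with strategic content: these are the catastrophic
#    failures worth engineering time, ordered by how badly they miss the mark.
priority = tail[tail["url"].isin(strategic)].sort_values("lcp_ms", ascending=False)
print(priority[["url", "lcp_ms"]].to_string(index=False))
```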

Poor user experience signals like high abandonment rates can indirectly harm AI visibility. How should content creators and technical SEOs collaborate to address this? What key metrics, beyond Core Web Vitals, should they monitor together to ensure their content is perceived as trustworthy by AI systems?

This is where breaking down silos becomes critical. A technical SEO might see a high LCP score, while a content creator sees high bounce rates on a new article, and they might not realize they’re looking at two sides of the same coin. The collaboration has to be built around shared outcomes. A page that is technically slow generates negative behavioral signals—like high abandonment—that an AI system can interpret as a sign of low-quality content, regardless of how well-written it is. They need a shared dashboard that goes beyond CWV. They should be looking at engagement metrics like dwell time and abandonment rates on a per-template or per-article basis. By monitoring these together, they can spot correlations where a technical failure is directly causing a behavioral problem that undermines the perceived trustworthiness of their content.
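One way to build that shared view is to join the two exports on URL and flag pages where a technical failure and a behavioral problem coincide; the column names and thresholds below are illustrative assumptions, not recommended values:

```python
# A sketch of a shared dashboard: performance and engagement on the same rows.
import pandas as pd

perf = pd.read_csv("cwv_export.csv")           # hypothetical: url, lcp_ms
engage = pd.read_csv("engagement_export.csv")  # hypothetical: url, abandonment_rate, dwell_time_s

dash = perf.merge(engage, on="url", how="inner")

# Flag pages where slow load and high abandonment coincide: the pattern most
# likely to erode the behavioral signals AI systems learn from.
flagged = dash[(dash["lcp_ms"] > 4000) & (dash["abandonment_rate"] > 0.6)]
print(flagged[["url", "lcp_ms", "abandonment_rate", "dwell_time_s"]])
```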

What is your forecast for the relationship between technical performance and AI-driven search?

My forecast is that technical performance will become even more solidified as “table stakes.” It’s the cost of entry, not the winning move. The obsession with chasing perfect scores on pages that are already good will fade as teams realize it doesn’t move the needle for AI visibility. Instead, the focus will sharpen on a more disciplined, defensive strategy: eliminating the extreme failures. The relationship will be less about competitive differentiation and more about foundational stability. Brands will need to ensure that their most valuable content is never at risk of being ignored or devalued by an AI simply because of a preventable technical failure. The real competitive ground will then shift back to where it belongs: the actual quality, clarity, and authority of the content itself.
