A global leader in SEO, content marketing, and data analytics, Anastasia Braitsik has spent her career at the forefront of digital discovery. As artificial intelligence fundamentally reshapes how we find information, she has become a leading voice in navigating the shift from traditional search to what she terms Generative Engine Optimization. We sat down with her to unpack the implications of Microsoft’s new AI Performance dashboard and explore the new rules for content visibility in an AI-driven world. Our discussion covers the profound shift away from click-based metrics, the art of crafting “citation-worthy” content for AI consumption, and the technical signals that now determine whether a brand’s expertise is seen or silenced.
Microsoft recently introduced an AI Performance dashboard. How do its new metrics, like ‘grounding queries’ and ‘total citations’, fundamentally differ from traditional search analytics, and what new optimization strategies do they enable for publishers? Please share some specific examples.
It’s a complete paradigm shift, moving us from measuring traffic to measuring influence. For years, our world revolved around impressions, click-through rates, and average position. Those metrics told us, “How many people saw our link, and how many clicked it?” They were all about driving a user from the search results page to our website. The new metrics, especially ‘grounding queries’ and ‘total citations,’ answer a totally different question: “How often is our expertise used to build the answer itself?” You could have a page that gets cited hundreds of times, becoming a foundational source for AI, yet generates very little direct traffic. This is a form of brand visibility we’ve never been able to measure before. For example, a publisher might discover that a deep, technical guide on their site is frequently cited for a complex grounding query, even if it ranks on page three of traditional results. The new strategy isn’t just to push that page to rank higher; it’s to deepen its expertise and build a cluster of related, highly structured content around it to “own” that entire topic within the AI’s knowledge base.
AI-generated answers can often satisfy users without a click to a source website. Given this reality, how should publishers redefine content ROI? What metrics, beyond traffic, can prove a page’s value and influence in this new generative search era?
This is the question keeping marketers up at night, but it’s a necessary evolution. We have to stop defining ROI solely by the volume of eyeballs on our own domain. The new ROI is about authoritativeness and high-intent conversions. The reported data shows that AI search visitors can demonstrate 4.4 times higher value when measured by conversion rates. That’s a staggering number. It means the few people who do click through are incredibly qualified, because the AI has already done the heavy lifting of answering their initial questions. So, the new proof of value comes from a blend of metrics. ‘Total citations’ becomes a brand awareness metric. The complexity of the ‘grounding queries’ you are cited for becomes a measure of thought leadership. And finally, you track the conversion rates of the traffic that does come through, which is likely to be much higher. A page’s value is no longer “did it get a click?” but “did it shape the answer and influence a highly qualified user’s journey?”
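To make that blend of metrics concrete, here is a minimal sketch of how a team might roll citations, query complexity, and conversions into a single page-value score. The weights and field names are purely illustrative assumptions, not part of Microsoft’s dashboard or any standard.

```python
# Illustrative sketch: blending influence metrics with conversion value.
# The weights below are hypothetical and would be tuned per business.

def page_value(citations, avg_query_complexity, visits, conversion_rate,
               w_citation=0.5, w_complexity=0.2, w_conversion=0.3):
    """Combine influence signals (citations, grounding-query complexity)
    with the conversion value of the traffic that does click through."""
    influence = w_citation * citations + w_complexity * avg_query_complexity
    conversions = visits * conversion_rate
    return influence + w_conversion * conversions

# A page cited often for complex queries can outscore a high-traffic page:
cited_page = page_value(citations=300, avg_query_complexity=8.0,
                        visits=120, conversion_rate=0.09)
traffic_page = page_value(citations=10, avg_query_complexity=2.0,
                          visits=2000, conversion_rate=0.02)
```

Under these assumed weights, a heavily cited page with modest but highly qualified traffic scores well above a high-traffic page that is rarely cited, which is exactly the reframing of value described above.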
Generative AI systems reportedly evaluate content in ‘chunks’ rather than as whole pages. What does this mean for content creation? Could you provide a step-by-step example of how a writer might structure a single paragraph to make it more ‘citation-worthy’ for an AI?
This is probably the most tactical and immediate change for every content creator. You have to start thinking of every paragraph, every section under a heading, as a potential self-contained answer. It’s like creating a page of Lego bricks instead of a single sculpture. An AI needs to be able to pull one brick out, know exactly what it is, and trust that it’s a complete, factual piece of information. For example, let’s say we’re writing about the IndexNow protocol. A poorly structured paragraph might say, “IndexNow is a really important tool that many websites use to get their content indexed faster by search engines.” It’s vague and lacks substance. A ‘citation-worthy’ paragraph would be structured with atomic precision. It might start with a declarative sentence: “The IndexNow protocol is a real-time notification system that allows websites to alert participating search engines of content changes, such as additions, updates, or deletions.” Then, it would immediately support that with a verifiable claim or data point: “As of October 2023, the protocol sees adoption from 60 million websites daily, submitting 1.4 billion URLs for processing.” This structure of claim, evidence, and context makes the chunk verifiable, trustworthy, and incredibly easy for an AI to lift and cite accurately.
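The claim-evidence-context pattern described above can even be rough-checked programmatically. The sketch below is an illustrative heuristic only; the vague-phrase list and scoring rules are assumptions for demonstration, not how any real AI system evaluates chunks.

```python
import re

# Illustrative heuristic: does a chunk open with a declarative claim
# and contain a verifiable data point (a number or date)?

def is_citation_worthy(chunk: str) -> bool:
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", chunk.strip())
                 if s.strip()]
    if not sentences:
        return False
    # Claim: the first sentence should not lean on vague filler phrases.
    vague = ("really important", "many websites", "a lot of")
    opens_with_claim = not any(p in sentences[0].lower() for p in vague)
    # Evidence: at least one sentence carries a number or date.
    has_data_point = any(re.search(r"\d", s) for s in sentences)
    return opens_with_claim and has_data_point

weak = ("IndexNow is a really important tool that many websites use "
        "to get their content indexed faster by search engines.")
strong = ("The IndexNow protocol is a real-time notification system for "
          "content changes. As of October 2023, 60 million websites use "
          "it daily, submitting 1.4 billion URLs.")
```

Running the two example paragraphs from above through this check flags the vague version and passes the atomic one, mirroring the editorial judgment a human reviewer would make.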
A new distinction is emerging between traditional search keywords and AI ‘grounding queries.’ How should a digital marketer’s research process change to identify and target these grounding queries, and what tools or methods would you recommend for this new kind of optimization?
Our research process has to get much more sophisticated. For two decades, we’ve focused on user intent—what words do people type into the search box? Now, we have to add a layer of machine comprehension—what phrases does an AI use to retrieve and validate information? The new AI Performance dashboard is our first real tool for this, as it explicitly shows us a sample of these grounding queries. The process begins there. A marketer should export their grounding queries and map them to their top-cited pages. You’ll likely see patterns where the AI is using more descriptive, technical, or entity-based phrases than your target keywords. The next step is to use this insight to build content that directly addresses the logic of the AI. This means structuring content with clear FAQ sections, using definitional sentences, and incorporating data and evidence that can be easily parsed. It’s less about matching a user’s casual language and more about providing the unambiguous, factual building blocks an AI needs to construct a reliable answer.
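The export-and-map step described above might look something like the following sketch, assuming a hypothetical export of (grounding query, cited URL) rows; the dashboard’s actual export format and column names may differ.

```python
from collections import defaultdict

# Hypothetical sample of exported (grounding_query, cited_url) rows.
rows = [
    ("indexnow protocol real-time url submission", "/guides/indexnow"),
    ("indexnow api key implementation", "/guides/indexnow"),
    ("xml sitemap lastmod freshness signal", "/guides/sitemaps"),
]

def map_queries_to_pages(rows):
    """Group grounding queries by the page they cite."""
    pages = defaultdict(list)
    for query, url in rows:
        pages[url].append(query)
    return dict(pages)

def more_descriptive_than_keywords(rows, target_keywords):
    """Flag grounding queries that are wordier (more descriptive) than
    any of the short keywords currently being targeted."""
    max_kw_len = max(len(k.split()) for k in target_keywords)
    return [q for q, _ in rows if len(q.split()) > max_kw_len]
```

Even on this toy sample, grouping by cited page and comparing query length against short target keywords surfaces the pattern described: the AI retrieves with longer, more technical, entity-based phrases than the keywords a marketer typically bids on.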
Research suggests that AI citations can frequently come from content ranking far below the top positions in traditional search results. What does this imply about the relationship between conventional SEO and Generative Engine Optimization, and where should publishers now focus their efforts?
It implies that the two are becoming decoupled, which is a massive realization for our industry. The data showing that ChatGPT citations reference content ranking 21st or lower about 90 percent of the time should be a wake-up call for everyone. It tells us that the signals an AI values for citation—like a very specific, verifiable fact in a well-structured paragraph—are not the same as the holistic authority signals Google uses to rank a whole page. Conventional SEO is still vital for driving discovery and traffic, but it’s no longer the only game in town. Publishers need to adopt a dual strategy. Continue your traditional SEO efforts for your primary commercial pages. But for your informational and expertise-driven content, the focus must shift to Generative Engine Optimization. This means obsessing over clarity, factual accuracy, structured data, and making every section of your content a potential, standalone answer. The goal is no longer just to be on page one; it’s to be in the answer.
The IndexNow protocol and fresh sitemaps are highlighted as critical for AI citation. For a business that hasn’t focused on these, what are the most immediate, practical steps to implement them, and what kind of impact on AI visibility can they realistically expect?
For any business that’s been sleeping on this, the wake-up call is now. The most immediate step is to check if your content management system has a native IndexNow integration or a plugin; many popular platforms do, making it a simple toggle-on. If not, a developer can implement the API key fairly quickly following the guidance at indexnow.org. It’s a low-effort, high-impact action. The second step is to audit your XML sitemap. It’s not just a list of URLs anymore; it’s a strategic document. Ensure it’s clean, automatically updated, and accurately reflects the lastmod (last modified) date. Microsoft was explicit that AI relies more heavily on these structured signals than traditional crawling. The realistic impact isn’t an overnight explosion in traffic, but a significant improvement in the speed and accuracy with which AI systems reflect your latest content. It means when you publish a critical update, the AI is far more likely to cite your fresh, correct information instead of an outdated source, which is absolutely crucial for maintaining relevance and trust.
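For teams implementing the API route directly, a batch submission per the guidance at indexnow.org can be sketched as follows. The host, key, and URLs here are placeholders; note that your key must also be hosted at https://&lt;host&gt;/&lt;key&gt;.txt so participating search engines can verify ownership.

```python
import json
import urllib.request

# Minimal sketch of an IndexNow batch submission (see indexnow.org).
# All host/key/URL values below are placeholders.

def build_indexnow_payload(host, key, urls):
    return {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": urls,
    }

def submit(payload, endpoint="https://api.indexnow.org/indexnow"):
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=utf-8"},
    )
    # A 200/202 response indicates the submission was accepted.
    return urllib.request.urlopen(req).status

payload = build_indexnow_payload(
    "www.example.com", "abc123",
    ["https://www.example.com/new-guide"],
)
```

Wiring a call like this into the publish hook of a CMS is what turns IndexNow into the real-time freshness signal described above, rather than waiting for the next crawl.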
What is your forecast for Generative Engine Optimization?
My forecast is that within the next 24 months, Generative Engine Optimization (GEO) will become a distinct and required discipline within every serious marketing team, sitting right alongside SEO. We will see the rise of “Citation Analysts” whose entire job is to analyze grounding queries and optimize content “chunks” for AI inclusion. The tooling, which is nascent today with Microsoft’s dashboard, will explode in sophistication, offering predictive analytics on citation probability and competitive analysis of who is winning the “answer war” for key topics. The very nature of content creation will bifurcate: we will have content designed for human engagement and persuasion, and a separate class of highly structured, fact-based content designed explicitly for machine consumption and citation. Ultimately, brands that master GEO won’t just be found; they will become the foundational layer of the AI-powered internet, establishing a level of authority and influence that a simple blue link could never achieve.
