Anastasia Braitsik is a prominent figure in the evolving world of digital media, specializing in the intersection of search engine optimization, data analytics, and performance marketing. A global leader in the field, she has spent her career helping brands navigate the web’s successive eras, focusing on how information is structured to satisfy both human curiosity and machine logic. This discussion explores the shift underway within the Microsoft Advertising ecosystem: the transition from a click-based search environment to the “agentic web.” Braitsik offers insights into how the move from simple rankings to AI selection is redefining visibility, the role of structured product data in autonomous commerce, and the strategic adjustments necessary to thrive in a landscape where AI agents increasingly act as the primary intermediaries for consumer decisions.
AI Max for Search expands query matching and personalizes ad delivery across surfaces like Copilot and Bing. How does this shift affect traditional keyword strategies, and what specific steps should brands take to ensure their ads remain relevant without losing control over their messaging?
The introduction of AI Max marks a significant departure from the era of manual keyword bidding and moves us into a phase where intent and context are the primary drivers of visibility. In this new framework, traditional keyword strategies must become more fluid, as the system expands query matching to reach users across diverse AI-driven surfaces like Copilot and Bing. To ensure relevance, brands must focus on providing high-quality, comprehensive data inputs that the AI can use to personalize ad delivery without drifting away from the brand’s core message. I recommend that advertisers double down on their negative keyword lists and use clear, descriptive landing page content to provide the AI with the necessary guardrails. By doing so, brands can leverage the efficiency of automated matching while maintaining a firm grip on the narrative and ensuring their message appears in the right context.
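To make the guardrail idea concrete, here is a minimal sketch in Python: a campaign pairs AI-expanded matching with an explicit negative keyword list that blocks off-brand queries. The field names and the `allows_query` helper are illustrative, not the actual Microsoft Advertising API.

```python
# A minimal sketch of the "guardrails" pattern: broad, AI-expanded matching
# constrained by an explicit negative keyword list. Field names are
# illustrative, not the real Microsoft Advertising API.

campaign = {
    "name": "Spring Footwear - AI Max",
    "ai_max_enabled": True,
    "negative_keywords": ["free", "used", "repair", "wholesale"],
    # Descriptive landing page the AI reads for context:
    "landing_page": "https://example.com/running-shoes",
}

def allows_query(query: str, negatives: list[str]) -> bool:
    """Return False if any negative keyword appears in the expanded query."""
    terms = query.lower().split()
    return not any(neg in terms for neg in negatives)

for q in ["buy running shoes", "free running shoes", "running shoe repair"]:
    verdict = "serve" if allows_query(q, campaign["negative_keywords"]) else "blocked"
    print(q, "->", verdict)
```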
New formats like Offer Highlights now surface key selling points directly within AI conversations. Beyond perks like free shipping, what data structures are most effective for these conversational placements, and how do you measure the ROI of a chat-based interaction compared to a traditional search click?
Offer Highlights are designed to bring high-value incentives, such as free shipping, directly into the flow of a natural conversation, making the advertisement feel like a helpful suggestion rather than an interruption. The most effective data structures for these placements are those that are highly modular and formatted within the Microsoft Merchant Center to be easily parsed by conversational models. We are moving toward a measurement model where we track the “selection” of a brand within a chat, which requires a more nuanced approach to attribution than a simple click-through rate. To measure ROI effectively, we must analyze the progression from a chat-based mention to a completed sale, often looking at how these highlights reduce friction and shorten the path to purchase. It is about valuing the quality of the interaction and the influence of the AI’s recommendation rather than just the initial entry point.
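A minimal sketch of the modular structure these placements favor, with each selling point stored as a discrete, machine-readable attribute, plus a chat-native analogue of click-through rate. The field names are illustrative rather than the actual Microsoft Merchant Center schema, and the numbers are hypothetical.

```python
# Each selling point is a discrete attribute a conversational model can parse,
# rather than a sentence buried in ad copy. Illustrative fields and values.

offer = {
    "id": "SKU-1042",
    "title": "Trail Running Shoe",
    "price": {"value": 89.99, "currency": "USD"},
    "highlights": [
        {"type": "shipping", "value": "free_2_day"},
        {"type": "returns", "value": "90_day_free"},
        {"type": "promo", "value": "10_pct_first_order"},
    ],
}

def selection_rate(chat_mentions: int, selections: int) -> float:
    """Share of AI mentions that led the user to pick this brand in-chat,
    the conversational analogue of click-through rate."""
    return selections / chat_mentions if chat_mentions else 0.0

# Hypothetical figures: surfaced in 1,200 conversations, chosen 180 times.
print(f"selection rate: {selection_rate(1200, 180):.1%}")  # 15.0%
```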
Analytical tools now show exactly how brands are cited in AI-generated answers and where competitors might be outperforming them. How do you interpret these citation metrics to improve brand authority, and what are the trade-offs when optimizing for AI “mention share” versus traditional SEO?
The ability to see specific AI Visibility metrics in Microsoft Clarity is a game-changer because it reveals exactly which pieces of our content are being used to form AI-generated answers. When a competitor is cited more frequently, it serves as a clear signal that our own data may be lacking the structure or clarity that the AI needs to feel “confident” in citing us. This creates a trade-off where we may need to prioritize concise, factual data points—optimizing for “mention share”—which can sometimes conflict with the longer, engagement-focused content usually preferred for traditional SEO. To improve brand authority in this space, you must ensure that your most important brand claims are supported by structured data that an AI can easily verify. It is a shift from writing for a human reader who might browse to writing for an AI agent that needs to extract a specific answer.
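For instance, schema.org Product markup in JSON-LD is the kind of structured data an answer engine can extract and verify. The sketch below builds such a snippet in Python; all values are illustrative.

```python
import json

# Machine-verifiable brand claims expressed as schema.org Product markup
# (JSON-LD). Values are illustrative.

product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Trail Running Shoe",
    "brand": {"@type": "Brand", "name": "ExampleCo"},
    "offers": {
        "@type": "Offer",
        "price": "89.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.7",
        "reviewCount": "2314",
    },
}

# Embed the output in the page head as <script type="application/ld+json">.
print(json.dumps(product_jsonld, indent=2))
```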
The Universal Commerce Protocol allows AI agents to discover and transact on product data more easily. What technical adjustments are required to make a product catalog “agent-ready,” and how does this shift toward autonomous transactions change the traditional customer journey from discovery to checkout?
Making a product catalog “agent-ready” requires a move toward the Universal Commerce Protocol, which ensures that every attribute—from price to specific shipping terms—is structured in a way that an AI can use to facilitate a transaction. This technical shift fundamentally alters the customer journey by allowing the AI to handle the “consideration” phase in the background, effectively collapsing the funnel from discovery to checkout into a single interaction. For the consumer, the experience is seamless and frictionless, as the agent does the work of comparing options and verifying details. For the brand, this means the technical health of the product feed is now just as important as the creative elements of the ad. If your data isn’t structured to these protocol standards, your products simply won’t be “seen” or “chosen” by the agents that are increasingly making these purchasing decisions.
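As a rough illustration, an “agent-ready” check might verify that every catalog item carries the attributes an agent needs to transact unassisted. The required-field list below is a stand-in for this idea, not the actual Universal Commerce Protocol schema.

```python
# Pre-publication check: does each item carry everything an agent needs to
# complete a purchase on its own? The field list is a hypothetical stand-in;
# consult the actual Universal Commerce Protocol spec for the real schema.

REQUIRED_FIELDS = ["id", "title", "price", "currency", "availability",
                   "shipping_terms", "return_policy", "checkout_url"]

def agent_ready(item: dict) -> list[str]:
    """Return the list of missing attributes; an empty list means transactable."""
    return [f for f in REQUIRED_FIELDS if not item.get(f)]

item = {
    "id": "SKU-1042",
    "title": "Trail Running Shoe",
    "price": 89.99,
    "currency": "USD",
    "availability": "in_stock",
    "shipping_terms": "free_2_day",
    # return_policy and checkout_url missing: an agent cannot close the sale.
}

print("missing:", agent_ready(item))  # ['return_policy', 'checkout_url']
```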
Copilot Checkout now enables purchases directly within the AI interface to reduce friction. What are the practical implications for site traffic when the transaction happens off-platform, and how should businesses adjust their attribution models to account for these direct-to-agent sales?
The rise of Copilot Checkout means that a significant portion of the transaction process is moving off-platform, which will naturally lead to a decline in traditional website traffic metrics. This doesn’t mean the marketing is failing; rather, it means the site is serving as the data source while the transaction happens where the user is already engaged. Businesses must adjust their attribution models to prioritize “direct-to-agent” sales data and integrate their backend systems so that inventory and conversion records remain synchronized across these various surfaces. We have to move away from using site visits as a proxy for success and instead focus on the total volume of transactions facilitated by AI partners. It requires a more holistic view of the ecosystem where the brand exists across multiple interfaces simultaneously.
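A minimal sketch of that surface-agnostic view: conversions are totaled across every interface where a transaction can complete, so a sale closed inside Copilot Checkout counts the same as one closed on the site. Channel names and figures are illustrative.

```python
# Surface-agnostic attribution: total sales across all checkout surfaces
# rather than treating site visits as the proxy for success. Hypothetical data.

conversions = {
    "site_checkout": 420,
    "copilot_checkout": 310,   # direct-to-agent sale, no site visit recorded
    "marketplace": 95,
}
site_visits = 58_000

total_sales = sum(conversions.values())
agent_share = conversions["copilot_checkout"] / total_sales

print(f"total sales: {total_sales}")
print(f"direct-to-agent share: {agent_share:.1%}")
# A site-visit-based conversion rate now understates real performance:
print(f"naive site conversion rate: {conversions['site_checkout'] / site_visits:.2%}")
```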
Advertisers can now describe ideal customers in plain language to build targeting segments automatically. When moving away from manual demographic toggles, how do you validate the accuracy of these AI-generated segments, and what steps do you take to maintain brand safety?
Moving to a model where we describe audiences in plain language allows for a much more sophisticated level of targeting than traditional age or gender toggles ever could. To validate the accuracy of these AI-generated segments, we have to look at the post-campaign data and see if the users being reached actually align with the intent of our descriptions. Brand safety remains a top priority, so we continue to layer these automated segments with strict exclusion criteria and manual oversight to ensure the AI doesn’t interpret a description too broadly. It’s a process of iterative refinement where we use natural language to guide the machine, but we rely on hard performance data to tell us if the machine is truly understanding our ideal customer profile. This approach allows for much more creative targeting, but it demands a higher level of vigilance from the human strategist.
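One way to operationalize that validation loop, sketched under illustrative assumptions: sample the users actually reached, score each against the traits the plain-language description implied, and flag the segment if alignment falls below a chosen bar. All traits, users, and thresholds here are hypothetical.

```python
# Post-campaign validation for a plain-language segment: compare reached users
# to the described intent, enforcing brand-safety exclusions. Illustrative data.

description_traits = {"runs_weekly", "buys_premium", "urban"}
exclusions = {"under_18"}

reached_sample = [
    {"runs_weekly", "buys_premium", "urban"},
    {"runs_weekly", "urban"},
    {"buys_premium", "under_18"},          # trips a brand-safety exclusion
    {"runs_weekly", "buys_premium"},
]

def aligned(user_traits: set, min_overlap: int = 2) -> bool:
    """A user counts as on-target if they match enough described traits
    and trip no exclusion."""
    if user_traits & exclusions:
        return False
    return len(user_traits & description_traits) >= min_overlap

rate = sum(aligned(u) for u in reached_sample) / len(reached_sample)
print(f"segment alignment: {rate:.0%}")  # 75%; below an 80% bar, refine the description
```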
As the focus shifts from ranking in search results to being selected by AI systems, what fundamental changes are needed in creative asset development? How do you ensure your brand voice remains distinct when an AI agent is the one delivering the message to the user?
When the primary “consumer” of your data is an AI agent, your creative assets need to be much more modular, allowing the system to pick and choose the most relevant elements for any given conversation. To keep your brand voice distinct, you must provide the AI with very clear, high-quality inputs that reflect your unique value propositions and tone. We are moving away from the “one-size-fits-all” ad copy and toward a model where we provide a library of “brand ingredients” that the AI can assemble. This ensures that even when an AI agent is delivering the message, it is doing so using the specific language and offers that define your brand. The goal is to be the most trusted and easily understood option for the AI, so that it consistently selects your brand over a generic alternative.
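A minimal sketch of the “brand ingredients” pattern: a library of approved, tagged fragments that an agent can assemble to fit the conversation. The names, tags, and assembly logic are illustrative.

```python
# Instead of one fixed ad, the brand supplies approved, tagged fragments the
# agent can recombine per conversation. Illustrative names and copy.

ingredients = {
    "tone": "plainspoken, expert, no hype",
    "value_props": {
        "durability": "Built to outlast 500 trail miles.",
        "comfort": "Cushioned for all-day wear.",
        "price": "Premium build without the premium markup.",
    },
    "offers": {"shipping": "Free 2-day shipping.", "returns": "90-day free returns."},
}

def assemble(context_tags: list[str]) -> str:
    """Pick the on-brand fragments relevant to what the user asked about."""
    parts = [ingredients["value_props"][t] for t in context_tags
             if t in ingredients["value_props"]]
    parts.append(ingredients["offers"]["shipping"])
    return " ".join(parts)

# A user asking about long-distance durability gets durability-led copy:
print(assemble(["durability", "comfort"]))
```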
What is your forecast for the agentic web?
I forecast a future where AI-driven traffic and autonomous transactions grow at a rate that far outpaces traditional human search behavior, eventually becoming the dominant way people interact with the internet. In this “agentic web” era, the brands that win will be those that prioritize being “selected” by AI systems by having the most trustworthy, well-structured, and accessible data. We will see a massive shift in marketing budgets toward these agent-friendly protocols and AI-driven ad formats as the traditional search results page becomes just one of many touchpoints. Ultimately, the successful marketers of the future will be those who master the art of being understood by machines while still delivering value that resonates with the humans those machines serve. This transition represents a total reimagining of visibility, where being “found” is no longer enough—you must be “chosen.”
