What Is PMax Reporting Really Showing You?

Anastasia Braitsik is a global leader in SEO, content marketing, and data analytics who has navigated the ever-changing landscape of Google Ads for top ecommerce businesses. When Performance Max launched, many advertisers dismissed it as a “black box,” but Anastasia saw its potential. Today, she joins us to discuss the platform’s dramatic evolution over the past 18 months. We’ll explore how transparency has improved, dive into practical optimization strategies built on the new reporting features, and uncover the critical balance between granular control and the data needs of Google’s machine learning.

Performance Max initially faced criticism for being a “black box.” Based on the past 18 months of updates, how has transparency improved for ecommerce advertisers, and what specific controls, like negative keywords or placement exclusions, have proven most impactful for gaining real performance control?

It’s amazing to think back to the launch and the initial reaction. It really felt like a step backward, especially coming from the granular control we had in Standard Shopping. Smart Shopping was the true low point of black-box advertising; it stripped away almost every lever we relied on: search term reporting, placement visibility, even basic negative keywords. The past 18 months have been a complete reversal. Google has brought back most of that functionality, and the impact has been profound. For me, the return of fully functional negative keywords has been the biggest game-changer. We went from a token limit of 100, clearly meant only for brand safety, to full API support and shared lists. This lets us actively sculpt performance and protect our spend, a world away from the initial hands-off approach.

The new campaign-level search term view is a significant step forward. Could you walk us through how an advertiser should use this data for optimization, and what are the key limitations to keep in mind, especially when analyzing the blended search and shopping performance data?

This new view is the breakthrough we were all waiting for. Before it, we had “search term insights,” which grouped queries into categories but offered very thin metrics. There was no cost data, which meant no ROAS and no CPC; it was essentially useless for genuine performance evaluation. The new campaign-level view finally anchors the data properly and gives us the metrics we need to make smart decisions. The first thing an advertiser should do is look for cost sinks (queries that consume spend without converting) and opportunities (queries that convert efficiently and deserve reinforcement). However, the biggest limitation to remember is that this data is blended: a single search term reflects performance from traditional search ads and Shopping ads combined. You can’t see a clean view of how a query performed in one format versus the other, so analyze it with that context in mind and avoid drawing conclusions about a specific ad format from a blended number.

Let’s talk about a practical optimization strategy. Could you provide a step-by-step example of how to identify underperforming search terms and then explain how advertisers can move beyond manual review by using automation or AI for more efficient, large-scale analysis?

Absolutely. A simple but very effective method is to first establish a baseline: go into your campaign and calculate the average number of clicks it takes to get a single conversion. Let’s say it’s 20 clicks. Next, filter your search term report for all the terms with more than 20 clicks but zero conversions. These terms have had a fair shot to perform; they’ve spent your money but haven’t delivered. They become your primary candidates for a negative keyword list. But manually sifting through thousands of search terms isn’t a good use of anyone’s time. For high-volume accounts, using the API is a must. For others, scripts are fantastic. You can even layer in AI for a semantic review, using simple formulas in Google Sheets to flag terms that are contextually irrelevant to your products, not just matched on keywords. This lets you focus your time on the final approval, not the tedious discovery work.
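To make the baseline method concrete, here is a minimal sketch in Python with pandas, run against a search term report exported from the interface. The file name and the column names (“search_term”, “clicks”, “conversions”) are illustrative assumptions, not part of Anastasia’s method; rename them to match your actual export.

```python
"""
Minimal sketch of the clicks-per-conversion baseline method.
Assumes a search term report exported to CSV with hypothetical
column names "search_term", "clicks", and "conversions".
"""
import pandas as pd

df = pd.read_csv("pmax_search_terms.csv")

# Baseline: average clicks needed for one conversion across the campaign.
total_clicks = df["clicks"].sum()
total_conversions = df["conversions"].sum()
baseline = total_clicks / max(total_conversions, 1)  # guard a zero-conversion export

# Candidates: terms that exceeded the baseline with zero conversions.
candidates = df[(df["clicks"] > baseline) & (df["conversions"] == 0)]

# Save a shortlist for manual review before adding to a negative keyword list.
candidates.sort_values("clicks", ascending=False).to_csv(
    "negative_keyword_candidates.csv", index=False
)
```

The output is a shortlist to review, not a list to negate blindly; as described above, the final approval stays with you.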

While direct channel targeting in Performance Max is limited, advertisers can exert control through placement exclusions. Can you share your process for reviewing placement reports, particularly for YouTube, and what tools or methods you recommend for efficiently identifying and excluding irrelevant or brand-unsuitable placements?

This is where you can reclaim a lot of control, even if you can’t directly turn channels on or off like you can in a Demand Gen campaign. My process starts in the Report Editor, pulling a placement report. I’m specifically looking for spammy-looking domains and, on YouTube, content that’s completely irrelevant or brand-unsafe. The two big red flags are often political content and children’s content, which rarely drive meaningful performance for most ecommerce brands. If a placement just feels wrong, it probably is. To make this efficient, I use a few simple tools. If I see a long list of YouTube videos in a language I don’t speak, I’ll pull the titles into Google Sheets and use the =GOOGLETRANSLATE function for a quick translation. You can also use AI-powered formulas right in the sheet to do a semantic triage, flagging placements that don’t align with your brand’s core themes so you can review a much smaller, prioritized list.
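As a rough illustration of that semantic triage step, the sketch below runs against a placement report exported to CSV. The column names (“placement”, “display_name”, “cost”) and the red-flag keyword list are placeholders; build yours from your own export and your brand’s guidelines.

```python
"""
Rough sketch of a keyword-based placement triage. Column names and the
RED_FLAGS list are illustrative assumptions, not an official schema.
"""
import pandas as pd

RED_FLAGS = ["election", "politics", "kids", "nursery rhyme", "cartoon"]

df = pd.read_csv("pmax_placements.csv")

def is_red_flag(title: str) -> bool:
    """Flag titles containing any brand-unsuitable keyword."""
    title = str(title).lower()  # str() also handles missing titles
    return any(flag in title for flag in RED_FLAGS)

df["flagged"] = df["display_name"].apply(is_red_flag)

# A smaller, prioritized list to review manually before excluding.
review_list = df[df["flagged"]].sort_values("cost", ascending=False)
print(review_list[["placement", "display_name", "cost"]].to_string(index=False))
```

Simple substring matching is crude compared to an AI-powered semantic review, but it gets you the same benefit: a much smaller, cost-ranked list to eyeball.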

Splitting campaigns by device can be tempting but risks fragmenting crucial conversion data. What key metrics should an advertiser analyze before making this decision, and at what point does a campaign have enough data volume to safely support this kind of segmentation without harming algorithmic performance?

This is a decision that should never be made lightly. The temptation is strong when you see a major performance disparity, but the risk is very real. Performance Max thrives on data; the more conversion volume it has, the better and more consistently it hits its targets. Splitting a campaign in two also splits that data, and you could end up with two underperforming campaigns instead of one strong one. Before even considering a split, you need to dig deep into item-level performance in the Report Editor. Segment by item ID and device to see if the disparity holds true for your key products or if it’s just an average. Also, look at how your performance on desktop versus mobile compares to major competitors. The most important factor, though, is volume. There isn’t a magic number, but if a campaign is already struggling to meet its goals or has low monthly conversion volume, splitting it is almost guaranteed to make things worse. You should only split when you are confident that both new campaigns will have enough data to support the machine learning effectively.
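Here is one way to sketch that pre-split check in pandas, under stated assumptions: a one-month item-level export with “item_id”, “device”, and “conversions” columns, and an arbitrary illustrative floor of 30 conversions per month per resulting campaign. Neither the column names nor the threshold comes from Google; treat both as placeholders you tune for your account.

```python
"""
Pre-split sanity check: does the device disparity hold at the item level,
and would each half of a split keep enough conversion volume? The column
names and the 30-conversion floor are illustrative assumptions.
"""
import pandas as pd

MIN_MONTHLY_CONVERSIONS = 30  # assumed floor, not a Google-published figure

df = pd.read_csv("pmax_item_device.csv")  # one month of item-level data

# Does the desktop/mobile disparity hold for key products, or is it an average?
by_item_device = df.pivot_table(
    index="item_id", columns="device", values="conversions", aggfunc="sum"
).fillna(0)
print(by_item_device.head(10))

# Would each half of a device split retain enough conversions to learn on?
by_device = df.groupby("device")["conversions"].sum()
for device, conversions in by_device.items():
    verdict = "OK" if conversions >= MIN_MONTHLY_CONVERSIONS else "TOO THIN"
    print(f"{device}: {conversions:.0f} conversions/month -> {verdict}")
```

If either half comes back “TOO THIN,” that is the signal, per the advice above, that splitting is likely to leave you with two starved campaigns instead of one strong one.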

Do you have any advice for our readers?

My main advice is to treat Performance Max not as a “set it and forget it” solution, but as a powerful system that you can guide and refine. The days of it being an uncontrollable black box are over. Use the data that Google is now providing. Dive into the search term reports, be diligent with your placement exclusions, and use modern tools like AI and automation to make the process efficient. Don’t be afraid to apply controls based on what the performance insights are telling you. But always respect the algorithm’s need for data. Every decision, especially around segmentation like splitting by device, should be weighed against its potential impact on data volume. If you can master that balance between providing smart human oversight and giving the machine enough data to learn, you’ll be far ahead of the competition.
