Are You Using Google’s Search Terms Report All Wrong?

Max Tainer sits down with Anastasia Braitsik, a global leader in SEO, content marketing, and data analytics, to unpack the practical realities of using the Google Ads search terms report. She explains the working difference between “keywords” and “search terms,” digs into how search term match types reveal Google’s interpretation of intent, and describes when to broaden versus tighten targeting. Across Shopping, Performance Max, Search, DSA, and AI Max, Anastasia shares how she pivots data, chooses match types, manages negatives without blocking value, and reads “Other search terms” to steer strategy. Throughout, she grounds the conversation in workflow: building pivots and layering views, linking queries with landing pages, adjusting feeds, and measuring whether URL expansion or RSA text assets actually moved conversion performance.

When you explain “keyword” vs. “search term” to a client, how do you illustrate the difference in real searches, and what story or metric shows why that distinction matters in performance and optimization decisions?

I tell clients a keyword is the instruction you hand Google, while a search term is what a real person actually typed that made your ad show. In practice, I show side-by-side rows: the keyword you chose and the user’s search term that matched. The lightbulb moment is when they see a broad keyword surfacing a tight, high-intent search term—and the reverse, where a decent keyword drags in irrelevant queries. The metric proof comes when we pivot by search term to see that performance gaps live in the search terms, not the keywords; that’s where we find the uplift by promoting strong search terms into their own keywords and trimming away the drifters.

You mention every search term has a match type. Can you walk through how you build a pivot by search term match type, what metrics you compare, and a time you changed bids or structure based on those findings?

I export the search terms report, include columns for search term match type, keyword, cost, conversions, CTR, and ROAS/CPA, then pivot with match type as rows and metrics as values. I compare conversion rate and CPA across “exact match close variant,” “phrase,” and “broad” categorizations to see where efficiency clusters. In one audit, “exact match close variant” terms carried the best CPA, while broader matches drove spend without proportional conversions, so I shifted a chunk of budget to ad groups seeded with those exact-aligned terms and tightened bids on the broader segments. The structural change wasn’t dramatic—just re-allocating budget and isolating winners—but the lift came from letting the best-performing search term match types lead the bidding and segmentation.
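
As a rough illustration of that pivot, here is a minimal pandas sketch. The file name and column names ("search_term_match_type", "cost", "conversions", "clicks", "impressions") are assumptions; rename them to match your own export.

```python
import pandas as pd

# A minimal sketch of the match-type pivot; column names are assumptions.
df = pd.read_csv("search_terms_report.csv")

grouped = df.groupby("search_term_match_type").agg(
    cost=("cost", "sum"),
    conversions=("conversions", "sum"),
    clicks=("clicks", "sum"),
    impressions=("impressions", "sum"),
)

# Compute efficiency metrics on the aggregates rather than averaging row-level ratios.
grouped["cpa"] = grouped["cost"] / grouped["conversions"]
grouped["ctr"] = grouped["clicks"] / grouped["impressions"]
grouped["cvr"] = grouped["conversions"] / grouped["clicks"]

print(grouped.sort_values("cpa"))
```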

In audits, when a broad keyword generates “exact match close variant” search terms, how do you interpret that signal, and what step-by-step process do you use to refine match types or add new keywords?

That’s a reassuring signal that Google sees a very tight alignment between your intent and the user’s query, even though you used a broad keyword. My steps are: promote those high-performing search terms to their own keywords, clone the ad group with phrase/exact versions, and carry over any top ads and extensions. Then I cap the original broad keyword’s bid or move it to a supporting role so it can still discover new queries but not overshadow the exact winners. Finally, I monitor for cannibalization and let the new exact/phrase variants collect data before making further bid or budget shifts.

You flag a red line when 10%+ of search terms become negatives. How do you diagnose whether to tighten match types, pause AI Max, or reshape audiences, and can you share a before-and-after metric shift?

Once negatives hit 10% or more, I stop and zoom out. First I check the split of spend across search term match types and “Other search terms” to see if waste is concentrated in broad interpretations. If it is, I tighten match types or pause AI Max temporarily to narrow the inputs and stabilize quality. In a recent case, after hitting that 10% red line, we tightened targeting and reduced reliance on discovery while preserving proven terms; the result was a clear improvement in efficiency and far fewer negatives needed to maintain control.
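
A quick way to operationalize that check is a short script over the exported report and your negatives list. This is a sketch under assumptions: both file names and column names are placeholders, and the negatives file is assumed to hold one added negative per row.

```python
import pandas as pd

# Red-line check sketch; file and column names are assumptions.
terms = pd.read_csv("search_terms_report.csv")
negatives = set(pd.read_csv("negatives.csv")["negative_keyword"].str.lower())

# What share of distinct search terms have we ended up negating?
distinct = terms["search_term"].str.lower().unique()
negated_share = sum(t in negatives for t in distinct) / len(distinct)
print(f"Search terms turned into negatives: {negated_share:.1%}")  # 10%+ = red line

# Where is the waste concentrated? Spend split by search term match type.
spend = terms.groupby("search_term_match_type")["cost"].sum()
print(spend / spend.sum())
```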

Negative keywords have their own match types. How do you avoid conflicts with your “positive” keywords, and what checklist or workflow do you use to prevent blocking high-value queries by mistake?

I maintain precision by setting negatives at the narrowest match type that solves the problem and scoping them at the right level—account, campaign, or ad group—based on where the issue originates. My checklist is simple: confirm the negative’s match type, verify it won’t block exact or phrase winners, test the impact in a limited scope, and re-check search terms 48–72 hours later. I also tag “protected” high-value queries and keywords, so before I push a broad negative live, I cross-reference that tag list. Finally, I maintain a shared negative library with clear naming conventions so no one accidentally ships a global block that suppresses a profitable pocket of traffic.
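
That cross-reference against the protected list can be as simple as a pre-flight script. In the sketch below, the proposed negative, the protected queries, and the matching rule are all illustrative; the function only roughly approximates how a phrase-match negative behaves.

```python
# Simplified pre-flight check before shipping a phrase negative:
# flag any protected high-value query the proposed negative would block.

def phrase_blocks(negative: str, query: str) -> bool:
    """Rough approximation of phrase-match negative behaviour: the negative's
    words must appear in the query, in order, as a contiguous block."""
    neg, q = negative.lower().split(), query.lower().split()
    return any(q[i:i + len(neg)] == neg for i in range(len(q) - len(neg) + 1))

proposed_negative = "free trial"                     # hypothetical negative
protected_queries = [
    "enterprise crm free trial pricing",             # hypothetical protected winners
    "crm software for small teams",
]

conflicts = [q for q in protected_queries if phrase_blocks(proposed_negative, q)]
if conflicts:
    print("Hold the rollout - this negative would block:", conflicts)
else:
    print("No protected queries affected; safe to test in a limited scope.")
```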

For keywordless setups, how do you customize the DSA view to connect queries to landing pages, and can you describe a case where those pairings revealed a content gap you fixed for better conversion rates?

I switch the search terms report to the DSA view to see the paired landing page for each query. Then I group by URL to spot patterns: which pages attract which queries and how those pairs perform on CTR and CVR. In one instance, informational queries were mapped to a generic category page that lacked clear CTAs and product detail. We built a tailored landing page aligned to those queries—adding stronger messaging and intent-specific content—and saw conversion rates climb as the page finally matched the way people actually searched.
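
For readers who prefer to do that grouping offline, here is a small pandas sketch of the page-level view; the file and column names are assumptions to adapt to your DSA export.

```python
import pandas as pd

# Sketch of the DSA-view grouping; file and column names are assumptions.
dsa = pd.read_csv("dsa_search_terms.csv")

by_page = dsa.groupby("landing_page").agg(
    clicks=("clicks", "sum"),
    impressions=("impressions", "sum"),
    conversions=("conversions", "sum"),
)
by_page["ctr"] = by_page["clicks"] / by_page["impressions"]
by_page["cvr"] = by_page["conversions"] / by_page["clicks"]

# Pages with plenty of clicks but weak conversion are content-gap candidates;
# inspect their paired queries to see what the page fails to answer.
print(by_page.sort_values("cvr").head(10))
print(dsa[dsa["landing_page"] == by_page["cvr"].idxmin()]["search_term"].head(20))
```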

In AI Max view, you see landing pages and RSA headlines. How do you evaluate whether final URL expansion or text asset customization helped, and what specific headline or page tweak moved the needle?

I evaluate by isolating periods with URL expansion on vs. off and comparing conversion rate and CPA by search term cluster. In the AI Max view, I look at which RSA headlines fired for the converting queries and whether those headlines matched the landing page promise. A small but pivotal tweak was aligning a top headline with the exact phrasing found in high-performing search terms, while tightening the landing page hero to mirror that message; instantly, message match improved and so did post-click metrics. If expansion drives mismatched landings, I reduce it and let the best-performing fixed URLs and tailored headlines take the lead.
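
A bare-bones version of that on/off comparison might look like the sketch below; the date window stands in for your own change log, and the column names are assumptions.

```python
import pandas as pd

# Sketch of the URL-expansion on/off comparison; dates and columns are placeholders.
df = pd.read_csv("search_terms_report.csv", parse_dates=["date"])
df["expansion"] = df["date"].between("2024-03-01", "2024-03-31").map({True: "on", False: "off"})

summary = df.groupby("expansion").agg(
    cost=("cost", "sum"),
    conversions=("conversions", "sum"),
    clicks=("clicks", "sum"),
)
summary["cpa"] = summary["cost"] / summary["conversions"]
summary["cvr"] = summary["conversions"] / summary["clicks"]
print(summary)
```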

How do you use the “Other search terms” row when it outperforms visible queries, and can you detail the steps you took to broaden targeting—like shifting to broad match or audiences—and the resulting metrics?

When “Other search terms” outperforms the visible queries, it means there’s high-intent demand we’re not fully seeing yet. I start by relaxing constraints: introduce more broad match in tightly themed ad groups, loosen audience restrictions, or enable AI-driven discovery while preserving guardrails. I monitor the share of spend moving into the visible bucket and compare CPA and CVR; the goal is to pull winning queries out of the shadows and promote them. The net effect is more discoverable inventory and, when done with control, better efficiency because we lean into the hidden pool that was already producing.
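
To watch that share of spend shift between the hidden and visible buckets, a sketch like the following can help; column names are assumptions, and the exact "Other search terms" label should be checked against how your export renders that row.

```python
import pandas as pd

# Sketch: spend share and efficiency of the hidden "Other search terms" bucket
# versus visible queries. Column names and the row label are assumptions.
df = pd.read_csv("search_terms_report.csv")
df["bucket"] = df["search_term"].eq("Other search terms").map({True: "other", False: "visible"})

buckets = df.groupby("bucket").agg(
    cost=("cost", "sum"),
    conversions=("conversions", "sum"),
    clicks=("clicks", "sum"),
)
buckets["spend_share"] = buckets["cost"] / buckets["cost"].sum()
buckets["cpa"] = buckets["cost"] / buckets["conversions"]
buckets["cvr"] = buckets["conversions"] / buckets["clicks"]
print(buckets)
```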

When “Other search terms” spend heavily but underperform, how do you decide between exact match tightening, adding negatives, or switching to Target CPA/ROAS, and what timelines and thresholds guide you?

I look at spend concentration first: if “Other” is taking a big slice with weak returns, I tighten match types to exact or phrase so I can see and shape the traffic. If irrelevance is obvious, I add negatives with precise match types to avoid collateral damage. For bidding, I’ll test a more restrictive strategy like Target CPA or Target ROAS to force the system to prioritize better queries. I re-check within a few days to a week depending on volume, watching whether “Other” shrinks as quality rises; if not, I continue narrowing until efficiency stabilizes.

Adding the Keyword column in the report, how do you spot a single keyword that spawns irrelevant search terms, and what’s your playbook for pausing it versus restructuring ad groups or adding selective negatives?

With the Keyword column on, I sort by keyword and scan all the search terms tied to it. If one keyword repeatedly spawns irrelevant terms and requires a growing negative list, I pause it rather than play whack-a-mole. If it produces a mix—some gold, some junk—I break out a new ad group for the winners and add selective, tightly matched negatives to contain the spillover. That balance lets us preserve discovery without letting one unruly keyword pollute the ad group’s overall relevance.
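
The same triage can be scripted as a first pass before the manual scan. In this sketch the column names and the thresholds (50 distinct terms, zero conversions) are purely illustrative.

```python
import pandas as pd

# Sketch of the keyword triage; column names and thresholds are assumptions.
df = pd.read_csv("search_terms_report.csv")

per_keyword = df.groupby("keyword").agg(
    distinct_terms=("search_term", "nunique"),
    cost=("cost", "sum"),
    conversions=("conversions", "sum"),
)
per_keyword["cpa"] = per_keyword["cost"] / per_keyword["conversions"]

# Many distinct terms with no conversions -> pause candidate (whack-a-mole risk);
# a mix of winners and junk -> break the winners into a new ad group instead.
pause_candidates = per_keyword[(per_keyword["distinct_terms"] > 50) & (per_keyword["conversions"] == 0)]
print(pause_candidates.sort_values("cost", ascending=False))
```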

For Shopping or Performance Max, how do you adjust product feeds to elicit better matches, and can you share a concrete example—attributes tweaked, timelines, and ROAS or CPA impact?

I start at the feed: titles, descriptions, and categorical attributes are your “keywords” in disguise. I rewrite titles to reflect how people search, enrich descriptions with intent-aligned phrasing, and clean up product types so Google can map queries more accurately. After those adjustments, I monitor the search terms report for better-aligned queries and compare efficiency; the payoff comes as the system matches products to the right searches more often. The improvement shows up not only in ROAS/CPA but also in fewer negatives required to keep the campaign clean.

When would you turn AI Max off versus just narrowing inputs, and what step-by-step testing plan (budgets, timeframes, success metrics) helps you make that call confidently?

I turn AI Max off when it keeps pulling irrelevant traffic despite narrowed inputs and conservative bidding. My test plan is phased: week one, constrain landing pages and assets, add clear negatives, and reduce audience looseness; week two, review the search terms and “Other” mix to judge whether quality stabilized. If performance doesn’t improve, I pause AI Max and shift budget to exact and phrase campaigns that are already proving themselves. Success is measured by cleaner query quality, steadier CPA/ROAS, and a reduced need to add more than that 10% negatives threshold.

How do you structure negative keyword lists across account, campaign, and ad group levels, and can you share an instance where list granularity prevented wasted spend without hurting reach?

I keep evergreen brand safety and obvious exclusions at the account level, campaign-specific blockers in campaign lists, and surgical negatives at the ad group level. This hierarchy prevents a single heavy-handed negative from suppressing profitable segments. In one build, ad-group-level negatives stopped a niche term from sidetracking product-focused queries, while campaign and account lists stayed lean; results were cleaner search terms and sustained reach. Granularity gave us precision without the unintended consequence of throttling good traffic.

What metrics do you monitor first in the search terms report (CTR, CVR, CPA, ROAS, query volume), and how do you sequence actions—from query mining to bid strategy changes—to capture quick wins?

I start with conversion rate and CPA/ROAS by search term match type and keyword, then look at CTR as a signal of message match. Next, I mine winners to promote into exact or phrase and identify losers for precise negatives. Then I adjust ads and landing pages to mirror the language of the winning queries, and only after that do I change bid strategies, if needed, to amplify the good. The sequence is discover, isolate, align, then scale—maximizing quick wins while avoiding knee-jerk bid changes.

Can you walk through a full weekly workflow: exporting the search terms report, building pivots, reviewing DSA/AI Max views, auditing “Other search terms,” and implementing changes, with a real campaign outcome?

Every week I export the search terms report with columns for keyword, search term match type, landing page, and assets where available. I build pivots by search term match type and by keyword-to-term mapping to see which combinations drive efficient conversions. Then I switch to DSA and AI Max views to check landing page pairings and RSA headlines; I adjust pages or assets to mirror top-performing query language. I audit “Other search terms,” decide whether to broaden or tighten, and implement exact promotions and surgical negatives. The outcome is predictable: fewer irrelevant queries, clearer alignment between ad text and landing pages, and a steady rise in efficiency without crossing the 10% negatives red line.
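
For the keyword-to-term mapping piece of that weekly pivot, a minimal sketch might look like this; the file and column names are assumptions to adapt to your own export.

```python
import pandas as pd

# Weekly sketch: keyword-to-search-term mapping with efficiency metrics.
df = pd.read_csv("search_terms_report.csv")

mapping = df.pivot_table(
    index=["keyword", "search_term"],
    values=["cost", "conversions", "clicks"],
    aggfunc="sum",
)
mapping["cpa"] = mapping["cost"] / mapping["conversions"]
mapping["cvr"] = mapping["conversions"] / mapping["clicks"]

# Top rows are promotion candidates (their own exact/phrase keywords);
# high-cost, zero-conversion rows are candidates for surgical negatives.
print(mapping.sort_values("conversions", ascending=False).head(20))
```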

Do you have any advice for our readers?

Treat the search terms report like a living map of intent, not just a place to drop negatives. Promote proven search terms into their own keywords, use match types strategically, and customize DSA and AI Max views to connect queries with the right pages and assets. Keep a close eye on “Other search terms,” and let its performance tell you when to broaden or tighten. Most importantly, build a weekly rhythm—export, pivot, adjust, and measure—so your campaigns evolve with what real people are actually typing, not what you wish they would search.
