Closet Factory — Google Ads Audit
Seven markets. One diagnosis. A side-by-side analysis of Virginia Beach, Cleveland, Richmond, Ft. Myers, Chicago, Pittsburgh, and Boston — revealing the systemic patterns that connect them all. One market proves the fix works.
The Big Picture
This started as a conversation about one market. Jeff Bruzzesi in Virginia Beach wanted to know why his Google Ads weren't producing enough leads. A reasonable question from a franchise owner spending $14,000 a month. So we pulled the data. And the data told a story nobody expected.
Virginia Beach was spending 52% of its budget on AI Max Search — a campaign producing leads at $438 each. Meanwhile, Performance Max was delivering leads at $130 each but only getting 32% of the budget. The best campaign was being starved. The worst campaign was being fed.
Then Cleveland's data came in. Same template. Same inversion. PMax at $99 CPL getting 26% of budget. AI Max at $395 CPL getting 60%. Michael's third-party research confirmed what the numbers were already saying.
Richmond made it three for three. Ft. Myers made it four. Then Chicago made it five — with 47.5% of conversions dependent on brand awareness and PMax running 98.5% on branded queries. Pittsburgh made it six: the lowest marketing budget of all seven markets, radio (endorsement + brand commercials) but no TV, conversions crashing to 5–6 per month when spend shifted in Q4 2025 and surging back when budget was restored. Every market running the corporate template showed the same systemic issues: budget inversion, 100% broad match, bloated conversion tracking, wrong bid strategy, expensive Demand Gen, junk traffic, and AI Max overfunding.
"Then Boston changed the conversation entirely."
Boston is managed by an outside agency, not the corporate template. They give PMax 54% of the budget — the most of any market. They track only 2 conversion actions instead of the 10 to 41 found in the corporate accounts. They use mixed match types instead of 100% broad. The result: $81 PMax CPL and $123 account CPL. The lowest in the network by a wide margin.
"Boston spends $9,700 a month and generates 78 leads. Virginia Beach spends $13,900 and generates 30. Boston's dollar works 3.7 times harder."
This is not a theory. This is not a projection. Boston is already doing what the other six markets need to do — and the results speak for themselves. The fix is not complicated. The evidence is sitting in the data.
The combined monthly spend across all seven markets is $109,146. The combined monthly leads: 523. The combined CPL: $209. With the systemic fixes applied — the same fixes Boston already has in place — the projection is 920 leads per month at $105 CPL. Nearly doubling the lead volume while cutting the cost in half. Same budget. Different results. Because the foundation gets fixed.
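These blended figures are simple ratios; a minimal sketch of the arithmetic, using only numbers quoted in this section:

```python
# Cost per lead and relative efficiency, computed from the figures above.
def cpl(spend: float, leads: float) -> float:
    """Blended cost per lead: monthly spend divided by monthly leads."""
    return spend / leads

# Boston vs. Virginia Beach: leads generated per dollar spent
boston_per_dollar = 78 / 9_700     # Boston: 78 leads on $9,700
vb_per_dollar = 30 / 13_900        # Virginia Beach: 30 leads on $13,900
print(round(boston_per_dollar / vb_per_dollar, 1))  # → 3.7 (Boston's dollar works 3.7x harder)

# Seven-market blended CPL
print(round(cpl(109_146, 523)))    # → 209
```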
Executive Summary
A direct accounting of the evidence — from seven Closet Factory markets and from Google's own research and courtroom admissions.
Google does not dispute that offline media drives search. They have published the research proving it. They have testified about it under oath. Here is what the record shows:
Google knows — and has published the research proving — that TV, streaming, and radio advertising create the branded search queries that make Google Ads profitable. They know that when offline media runs, search volume rises. They know that when it stops, search volume falls. And they have admitted, under oath, that they raise the prices on those searches without telling the advertisers who are paying for them.
For Closet Factory, this means the branded searches that account for 30–50% of all Google Ads conversions are not "Google leads." They are TV leads, radio leads, and streaming leads that Google is intercepting at the last click and claiming credit for. The true cost of customer acquisition includes the media investment that created the intent — not just the click cost that captured it.
The fix is not to stop running Google Ads. The fix is to stop letting Google's automation run unchecked — to cut the 40–50% waste, fix the conversion tracking, reallocate budget to what actually works, and properly attribute the branded conversions to the media that created them. Boston already did it. The other six markets can do it tomorrow.
Market Overview

| Market | Data Period | Managed By | Account CPL | Note |
|---|---|---|---|---|
| Virginia Beach | Jan 2025 – Feb 2026 (14 mo) | Corporate | $464 | |
| Cleveland | Nov 2024 – Feb 2025 (~90 days) | Corporate | $228 | |
| Richmond | Jan 2025 – Feb 2026 (14 mo) | Corporate | $193 | |
| Ft. Myers | Jan 2025 – Feb 2026 (14 mo) | Corporate | $309 | Likely inflated |
| Chicago | Jan 2026 – Feb 2026 (2 mo) | Corporate | $156 | |
| Boston | Jan 2025 – Feb 2026 (14 mo) | Outside Agency | $123 | Best in Network |
| Pittsburgh | Jan 2025 – Feb 2026 (14 mo) | Corporate | $322 | |
Conversion Source Breakdown
Every search term categorized by intent. Media-Influenced combines three categories: Brand ("Closet Factory"), Product (Murphy/Wall Bed — advertised via in-market media), and "Closet" terms (where brand awareness determines who gets the click). Competitor covers searches for named competitor brands (California Closets, Closets by Design, Container Store, Inspired Closets, etc. — not "custom closet" category terms). Generic covers everything else — no mention of closets or any brand.
Methodology & Data Source
Data period: January 1, 2025 – February 28, 2026 (14 months) for most markets. Cleveland covers November 2024 – February 2025 (~90 days), Chicago covers January – February 2026 (2 months), and Boston's data ends February 20, 2026 (8 days shorter). All data pulled directly from Google Ads search terms reports exported per market. Each search term was individually categorized by the actual words in the query, not by campaign name or ad group name. Summary/total rows in CSV exports were excluded from all counts.
Why "custom closet" is not a competitor: Terms like "custom closets near me" and "custom closets" are category searches — someone looking for the product, not a specific brand. These are placed in the "Closet" category because brand awareness determines which company gets the click. Only searches containing an actual competitor brand name (California Closets, Closets by Design, Container Store, Inspired Closets, Closet World, EasyClosets, ClosetMaid, More Space Place, Tailored Closet, etc.) are counted as Competitor.
Ad group names vs. actual search terms: Some markets have ad groups named "Competitors" that contain generic closet terms, not actual competitor brand searches. For example, Ft. Myers' "Competitors" ad group (16 conversions, $6,853 spend) contains terms like "closet organizer," "closet systems," and "closets" — none of which are competitor brand searches. This analysis categorizes by what people actually searched, not by how the agency organized ad groups.
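The categorization rules above can be sketched as a priority-ordered substring check. This is an illustration of the stated logic, not the audit's actual script; the competitor list is abbreviated from the methodology section, and `categorize` is a hypothetical helper name:

```python
# Sketch of the intent-categorization logic described in the methodology.
COMPETITORS = ["california closets", "closets by design", "container store",
               "inspired closets", "closet world", "easyclosets", "closetmaid",
               "more space place", "tailored closet"]

def categorize(term: str) -> str:
    t = term.lower()
    if "closet factory" in t:
        return "Brand"
    if "murphy" in t or "wall bed" in t:
        return "Product"
    if any(brand in t for brand in COMPETITORS):
        return "Competitor"
    if "closet" in t:
        return "Closet"   # category terms: brand awareness decides the click
    return "Generic"

print(categorize("custom closets near me"))   # → Closet (category search, not a competitor)
print(categorize("california closets cost"))  # → Competitor
```

Order matters: the competitor check must run before the generic "closet" check, because names like "california closets" also contain the word "closet."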
Boston data correction (March 7, 2026): The original analysis counted Boston at 3,018 conversions and 8.5% media-influenced. This was incorrect — the CSV export contained summary rows ("Total: Account," "Total: Performance Max," "Total: Search," etc.) that were being counted as data rows. After excluding these summary rows, Boston has 353.5 real search term conversions and is 72.4% media-influenced, consistent with the other markets.
Across all 6 audited markets, 77% of conversions are media-influenced. The range is remarkably tight: 71–93%. These are people who searched the brand name, an advertised product, or a "closet" term where brand recognition determines who gets the click. Google did not create this demand — it captured it.
The markets with the highest network market penetration also show the highest media-influenced conversion rates. Richmond (#1 penetration) has the highest brand search share at 68%. Virginia Beach (#3 penetration) follows at 47%. This is not coincidence — brand awareness built by in-market media directly translates to Google Ads performance.
$289K spent on search terms that produced zero conversions across all 6 markets. This is the direct cost of running Broad Match with AI Max targeting — Google matches ads to any remotely related query, charges for the click, and delivers nothing. Chicago alone accounts for 48,144 waste terms and $75K in wasted spend.
Brand searches are the clearest signal. Someone typed "Closet Factory" because they saw a TV spot, heard a radio ad, drove past a wrapped vehicle, or encountered the brand through in-market media. Google didn't create that intent. It intercepted it, matched it to an ad, and charged for the click. Across all 6 markets, brand searches account for 27–68% of all conversions. Richmond leads at 68.2% — and Richmond is #1 in the entire Closet Factory network for market penetration. Boston, managed by an outside firm, is at 49.6% — healthy and comparable to Ft. Myers and Virginia Beach. That is not a coincidence.
Murphy / Wall Bed searches are product-specific demand created directly by in-market advertising. Chicago currently airs Murphy/Wall Bed campaigns as part of the Closet Factory brand (17.2 conversions from 9 converting terms). Ft. Myers has run Murphy/Wall Bed campaigns in the past (1 conversion, 72 terms triggered, $405 wasted). Cleveland also actively advertises Murphy/Wall Bed (data not available for this analysis). Nobody searches "custom murphy bed chicago" or "murphy beds near me" without having encountered the product through advertising first.
"Closet" term searches are the battleground where brand awareness tips the scale. When someone searches "custom closets near me" or "closet systems," the company they've heard of gets the click. This is why Richmond (#1 market penetration) converts 88% media-influenced while markets with less brand presence convert lower. The "closet" category is not truly "generic" — it's where the investment in brand awareness pays its largest dividend. These terms account for 19–45% of conversions depending on the market.
Ft. Myers: zero named-competitor conversions. Not a single conversion from someone searching California Closets, Closets by Design, Inspired Closets, Container Store, More Space Place, Tailored Closet, or EasyClosets. The account spent $351+ on competitor brand clicks across these names and got nothing. The "Competitors" ad group in the account (16 conversions, $6,853 spend) is misleadingly named — every converting term inside it is a generic closet term like "closet organizer" or "closet systems," not an actual competitor brand search. This was verified term-by-term against the raw Google Ads export.
Boston correction: The original analysis showed Boston at 3,018 conversions and 8.5% media-influenced — making it appear like a massive outlier. This was a data parsing error: the CSV export contained summary rows ("Total: Account" with 1,097 conv, "Total: Performance Max" with 902 conv, etc.) that were being counted as data rows. After excluding these summary rows, Boston has 353.5 real search term conversions and is 72.4% media-influenced — consistent with every other market. The outside firm managing Boston is performing comparably on search term mix.
Google Ads is a brand capture tool, not a demand creation tool. Across all 6 audited markets, 77% of conversions are media-influenced — and the range is remarkably tight (71–93%). The markets with the highest brand penetration (Richmond #1, Virginia Beach #3) show the highest media-influenced rates. Ft. Myers, with strong brand presence, has zero named-competitor conversions. Boston, managed by an outside firm, is at 72.4% — right in line with the in-house markets. The $289K spent on zero-conversion search terms is the cost of letting Google's AI decide who sees your ads. The brand awareness is the asset. Google Ads is just the toll booth.
Competitive Landscape
Every search term containing a named competitor brand — California Closets, Closets by Design, Container Store, Inspired Closets, and others — counted and measured. "Custom closet" is a category search, not a competitor. Only actual brand names count. The competitive intensity score combines competitor conversion share, unique converting competitors, impression share, and click volume.
Most Competitive
Chicago
100/100 intensity · 1.3:1 brand ratio
Least Competitive
Ft. Myers
15/100 intensity · 24:1 brand ratio
Top Competitor (Network)
Closets by Design
155+ conv across 6 markets · Present everywhere
The Proof: Market Penetration vs. Competitive Intensity
When you overlay market penetration rankings onto competitive intensity, the pattern is unmistakable. The markets where Closet Factory has the strongest brand presence are the same markets where competitors struggle to gain traction. Richmond (#1 penetration) has a 10.9:1 brand dominance ratio. Virginia Beach (#3 penetration) has 7.3:1 with full 14-month data — still strong, with brand at 49.4% of all conversions. Meanwhile, markets without confirmed high penetration rankings — Chicago, Pittsburgh — are the most fiercely contested.
RICHMOND (#1 PENETRATION)
10.9:1 Brand Dominance
Only 6.2% competitor conv share. 4 competitors convert. When you're #1 in penetration, competitors are irrelevant.
VIRGINIA BEACH (#3 PENETRATION)
7.3:1 Brand Dominance
6.5% competitor conv share. 6 competitors converting. Brand still dominates at 49.4% — $21.6K spent on competitor clicks.
CHICAGO (HIGHEST COMPETITION)
1.3:1 Brand Dominance
21.2% competitor conv share. 10 competitors converting. For every brand conversion, there's almost one competitor conversion.
The implication is clear: In-market media (TV, radio, digital) doesn't just generate direct leads — it builds brand awareness that suppresses competitive search behavior. When consumers in Richmond or Virginia Beach need closet solutions, they search for "Closet Factory" directly. In Chicago, where the competitive landscape is fierce, the same consumer searches for "custom closets" or "California Closets" — and the account pays $202 per competitor-sourced conversion instead of the brand CPL. The markets with the strongest media presence have the lowest competitive pressure. That's not coincidence — it's the media working.
METHODOLOGY
Competitor brands identified: California Closets, Closets by Design, Container Store, Inspired Closets, EasyClosets, More Space Place, Tailored Closet, IKEA, Home Depot, Lowes, Elfa, Stor-X, Classy Closets, Closet World, Closet America, Closet Works, ClosetMaid, Modular Closets, SpaceManager. Each search term was matched against these brand names. "Custom closet" and similar category terms are NOT competitor terms. Intensity score: Competitor conv share (40%) + Unique converting competitors (20%) + Competitor impression share (20%) + Competitor click volume (20%). Data: Jan 1, 2025 – Feb 28, 2026.
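The intensity score is a weighted sum of the four components listed, using the stated 40/20/20/20 weights. How each component is normalized to a 0–100 scale is not stated in the methodology, so the pre-normalized inputs in this sketch are an assumption:

```python
# Competitive intensity: weighted sum of four 0-100 component scores.
WEIGHTS = {"conv_share": 0.40, "unique_competitors": 0.20,
           "impression_share": 0.20, "click_volume": 0.20}

def intensity(components: dict) -> float:
    """Weighted sum of pre-normalized 0-100 component scores."""
    return sum(WEIGHTS[k] * components[k] for k in WEIGHTS)

# A market maxed out on every component scores 100 (cf. Chicago's 100/100)
print(intensity({"conv_share": 100, "unique_competitors": 100,
                 "impression_share": 100, "click_volume": 100}))  # → 100.0
```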
Campaign Performance
Performance Max is the hero in every market. In the corporate markets, AI Max Search and Demand Gen drag CPL up. Boston — managed by an outside agency — proves that giving PMax the majority of budget and keeping tracking clean produces the best results in the network.
| Market | Campaign | Budget → Conversions | CPL |
|---|---|---|---|
| Virginia Beach | PMax | 32% budget → 72% conv | $130 |
| Virginia Beach | AI Max | 52% budget → 34% conv | $438 |
| Virginia Beach | Demand Gen | 10% budget → 4% conv | $597 |
| Virginia Beach | YouTube TV | 6% budget → 0% conv | ∞ |
| Cleveland | PMax | 26% budget → 61% conv | $99 |
| Cleveland | AI Max | 60% budget → 35% conv | $395 |
| Cleveland | Demand Gen | 14% budget → 4% conv | $694 |
| Richmond | PMax | 36% budget → 41% conv | $169 |
| Richmond | AI Max | 63% budget → 58% conv | $207 |
| Richmond | Demand Gen | 2% budget → 1% conv | $421 |
| Ft. Myers | PMax | 41% budget → 45% conv | $280 |
| Ft. Myers | AI Max | 48% budget → 49% conv | $307 |
| Ft. Myers | Demand Gen | 11% budget → 6% conv | $528 |
| Chicago | PMax | 3% budget → 19% conv | $25 |
| Chicago | AI Max | 78% budget → 61% conv | $199 |
| Chicago | AI Max WH | 19% budget → 20% conv | $149 |
| Boston | PMax | 54% budget → 82% conv | $81 |
| Boston | Search | 34% budget → 14% conv | $310 |
| Boston | Competitor | 8% budget → 2% conv | $463 |
| Boston | Branded | 2% budget → 1% conv | $208 |
| Boston | Display | 2% budget → 1% conv | $341 |
| Pittsburgh | PMax | 30% budget → 40% conv | $242 |
| Pittsburgh | AI Max | 60% budget → 51% conv | $376 |
| Pittsburgh | Demand Gen | 10% budget → 9% conv | $378 |
The Core Problem
In most of the corporate markets, the best-performing campaign receives a smaller share of budget than it deserves. Chicago is the worst offender — PMax gets just 3% of budget despite delivering 19% of conversions. Boston — managed by an outside agency — flips this, giving PMax the majority of budget and reaping the rewards.
Virginia Beach: PMax gets 32% of budget but delivers 72% of conversions. AI Max gets 52% but delivers only 34%.
Cleveland: PMax gets 26% of budget but delivers 61% of conversions. AI Max gets 60% but delivers only 35%.
Richmond: PMax gets 36% of budget and delivers 41% of conversions. Budget is properly aligned.
Ft. Myers: PMax gets 41% of budget and delivers 45% of conversions. Budget is properly aligned.
Chicago: PMax gets 3% of budget but delivers 19% of conversions. AI Max gets 78% but delivers only 61%.
Boston: PMax gets 54% of budget but delivers 82% of conversions. Search gets 34% but delivers only 14%.
Pittsburgh: PMax gets 30% of budget but delivers 40% of conversions. AI Max gets 60% but delivers only 51%.
Boston proves the model: give PMax the majority of budget, keep tracking clean, and CPL drops to $81. If the six corporate markets simply followed Boston's allocation, the combined CPL would drop by an estimated 40–50% overnight.
Search Term Quality
All six markets hemorrhage money on search terms that never convert. The waste rate ranges from 8% to 56%. Chicago shows the lowest waste rate (8%) in its 2-month window, but this may reflect the short data period. Even Boston — the best-performing market — wastes 54% of search spend. Ft. Myers has the most negative keywords (2,175) yet still wastes 51%, proving that negatives alone cannot fix a broad match problem.
[Per-market chart: search spend wasted]
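The waste rate behind these figures is a single ratio: spend on search terms that never converted, divided by total search spend. The dollar amounts in this sketch are hypothetical; only Boston's 54% rate comes from the text above:

```python
# Waste rate: share of search spend that went to zero-conversion terms.
def waste_rate(zero_conv_spend: float, total_search_spend: float) -> float:
    return zero_conv_spend / total_search_spend

# Hypothetical dollars chosen to reproduce Boston's reported 54% rate
print(f"{waste_rate(5_400, 10_000):.0%}")  # → 54%
```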
Conversion Tracking
The six corporate accounts have bloated, redundant conversion tracking setups with too many Primary actions. Google's Smart Bidding tries to optimize for 8–9 goals simultaneously — which means it optimizes for none effectively. Chicago has 33 actions with 9 Primary, but only "Opportunity - New" produces real leads. Pittsburgh has 18 actions with its Submit Lead Form MISCONFIGURED. Boston tracks only 2 actions and has the lowest CPL in the network.
Virginia Beach — 41 Total Actions · 9 Primary
39 conversion actions, 26 with zero conversions
9 Primary actions sending conflicting signals
Only 'Opportunity - New' is meaningful (63 conv)

Cleveland — 26 Total Actions · 8 Primary
Only 2 phone calls in 90 days — tracking is broken
Submit lead form marked as Secondary (wrong)
Maximize Conversion Value instead of Max Conversions

Richmond — 28 Total Actions · 8 Primary
YouTube follow-on views marked as Primary
3 Business Profile actions with 0 conversions marked Primary
CHEQ flagged 750 invalid users
5 separate phone call tracking actions (duplicates)

Ft. Myers — 10 Total Actions · 9 Primary
Submit lead form has ZERO Primary actions — MISCONFIGURED
YouTube follow-on views marked as Primary
Get directions & Engagement counted as leads
Landing page attribution completely broken (0 conv attributed)
2 conversion goals marked MISCONFIGURED

Chicago — 33 Total Actions · 9 Primary
9 Primary actions but only 'Opportunity - New' produces real leads (800 conv)
8 dead/noise Primary actions: Business Profile (4), Clicks to call, Marchex (2), YouTube views
100% Broad Match across all 289 keywords
Both campaigns flagged 'Eligible (Limited) — Not targeting relevant searches'

Boston — 2 Total Actions · 2 Primary
Only 2 clean actions: Schedule Me + Calls from ads
Cleanest tracking in the network — gives Google clear signal
This is likely why Boston's PMax outperforms all other markets

Pittsburgh — 18 Total Actions · 9 Primary
9 Primary actions: 4 real (Phone, Contact ×2, Converted lead), 5 noise
Submit lead form has 0 Primary actions — MISCONFIGURED (same as FTM)
Get directions + Engagement + YouTube views all marked Primary
2 'Other' Primary actions — unidentifiable
Shared Diagnosis: Wrong Bid Strategy (Corporate Markets)
All six corporate markets use "Maximize Conversion Value" — a strategy that optimizes for ROAS, not lead volume. For a home services business where the goal is to generate leads, this is the wrong strategy. Boston uses Target CPA and Target Impression Share — and has the best results. All corporate markets should switch to "Maximize Conversions" with a Target CPA constraint.
Signal Forensics
Every conversion action marked as "Primary" feeds Google's Smart Bidding algorithm. The corporate accounts have 8–9 Primary actions — most of which are not leads. Boston has 2. Here is every false-positive signal, what it actually measures, and why it harms performance.
2 Primary actions · $123 CPL · 78.4 leads/month · Managed by outside agency
"Schedule Me" Button Clicks
Primary · A homeowner fills out the consultation request form. This is the highest-intent action possible — they are asking Closet Factory to come to their home. 2,250 total over 14 months.
Calls from Ads
Primary · A homeowner calls the business directly from the ad. One tracking action, no duplicates. A phone call is a lead. 310 total over 14 months.
Why this works: The algorithm receives a binary signal — either someone requested a consultation, or they didn't. No noise, no ambiguity. PMax gets 54% of budget, delivers 82% of conversions at $81 CPL. Google's own lead gen best practices say to "avoid selecting goals from multiple stages of your lead to sale journey." Boston follows this exactly.
What It Actually Measures
Someone watched another YouTube video after seeing a Closet Factory ad. They did not submit a form or call.
Why It's Harmful
The algorithm treats a video viewer as equal to a lead. It then spends budget finding more YouTube viewers instead of homeowners requesting consultations. Google itself defaults this to Secondary.
What Google / Experts Say
"The default setting for YouTube follow-on views is 'Secondary action' to avoid overriding existing campaigns."
Google Ads Help — YouTube Follow-On Views
What It Actually Measures
A user clicked "Get Directions" on a Google Maps listing. They wanted to know where the showroom is.
Why It's Harmful
Closet Factory sends designers to the customer's home. A map click is not a consultation request. BrightClick documented an identical case and called it "zero value for lead generation."
What Google / Experts Say
"Get directions (zero value for lead generation). Their campaigns were spending $8,000 monthly to drive 847 page views but generating only three qualified leads."
BrightClick — Conversion Tracking Mistakes
What It Actually Measures
Vague behavioral metric — typically scroll depth, time on site, or page interactions. Not defined anywhere in the account.
Why It's Harmful
Tells the algorithm to find people who browse, not people who buy. Every "engagement" conversion dilutes the lead signal and shifts budget toward low-intent audiences.
What Google / Experts Say
"Metrics like scroll depth, time on site, or video engagement shouldn't be treated as primary conversion events in your ad account."
Search Engine Journal — Ameet Khabra (July 2025)
What It Actually Measures
3 separate Google Business Profile interactions — all marked Primary, all with zero conversions over 14 months.
Why It's Harmful
Zero-conversion Primary actions are dead weight that add noise. Google's own threshold is 15 conversions per month. These have zero over 14 months yet still occupy the bidding signal.
What Google / Experts Say
"Make sure the action generated at least 15 conversions in the last 30 days at the account level."
Google Ads Help — Lead Gen Best Practices
What It Actually Measures
5 separate phone call tracking actions — Google forwarding, Marchex, website tracking, call extensions, etc. One call fires 2–3 actions.
Why It's Harmful
A single phone call gets counted as 2–3 "conversions." This inflates reported lead volume, artificially lowers CPL, and misleads the algorithm about actual performance.
What Google / Experts Say
"Double counting primary conversions. It may be from the GA4 transition or just because conversion tracking has become more convoluted lately."
Harrison Hepp — LinkedIn (PPC Strategist)
What It Actually Measures
CLE: Submit lead form is marked Secondary (excluded from bidding). FTM & PGH: Submit lead form has zero Primary actions despite hundreds of results.
Why It's Harmful
The single most important action for lead gen is invisible to the bidding algorithm. Google literally cannot optimize toward form submissions because the action is excluded.
What Google / Experts Say
"Use conversion goals specific to lead generation: 'qualified lead,' 'converted lead,' 'book appointment,' or 'request quote.'"
Google Ads Help — Lead Gen Best Practices
What It Actually Measures
2 Primary actions under "Other" — nobody managing the account can identify what they measure.
Why It's Harmful
An unidentifiable conversion action feeding the bidding algorithm is an uncontrolled variable. It could be measuring page loads, JS errors, or third-party tag fires.
What Google / Experts Say
N/A — Google has no guidance for actions nobody can identify, because they should not exist.
— Common sense
What It Actually Measures
26 of 41 total conversion actions in Virginia Beach have produced zero conversions over 14 months. Only "Opportunity — New" is meaningful (63 conv).
Why It's Harmful
Dead actions create signal noise. The algorithm receives 9 Primary signals but only 1 produces actual conversions. The other 8 are either zero or near-zero, diluting optimization.
What Google / Experts Say
"Select which conversion actions should be used for bidding optimization." — Primary actions are used for bidding.
Google Ads Help — Primary vs Secondary Actions
What the bidding algorithm "sees" when it looks at each account's Primary conversion actions
| Market | Signal Purity | Account CPL | What the Algorithm Sees |
|---|---|---|---|
| Virginia Beach | 11% | $464 | Only 'Opportunity — New' is a real lead |
| Cleveland | 0% | $228 | Submit lead form is Secondary — excluded from bidding |
| Richmond | 25% | $255 | YouTube views, 0-conv profiles, 5× phone dupes |
| Ft. Myers | 0% | $309 | Lead form has 0 Primary — directions & engagement instead |
| Chicago | 11% | $156 | Only 'Opportunity - New' produces real leads (800 conv) |
| Pittsburgh | 44% | $322 | Submit lead form MISCONFIGURED — Get Directions + Engagement as Primary |
| Boston | 100% | $123 | Form + Calls — 100% signal, 0% noise |
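The signal-purity figures above reduce to a single ratio, real-lead Primary actions over total Primary actions:

```python
# Signal purity: what fraction of the Primary conversion actions feeding
# Smart Bidding actually represent leads (form fills or phone calls).
def signal_purity(real_lead_primaries: int, total_primaries: int) -> float:
    return real_lead_primaries / total_primaries

print(f"{signal_purity(1, 9):.0%}")   # → 11% (Virginia Beach: 1 real lead action of 9 Primary)
print(f"{signal_purity(4, 9):.0%}")   # → 44% (Pittsburgh: 4 real of 9)
print(f"{signal_purity(2, 2):.0%}")   # → 100% (Boston: both Primary actions are leads)
```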
The Bottom Line: These Are Not Conversions
A YouTube video view is not a lead. A map click is not a consultation. An "engagement" event is not a sale. When these actions are marked as Primary, Google's Smart Bidding algorithm treats them as equal to a homeowner requesting a free design consultation — and optimizes accordingly. The corporate accounts are paying $156–$464 per "conversion" because most of those "conversions" are not conversions at all. Boston pays $123 because every conversion is a real lead. The fix is architectural, not incremental: reduce to 2 Primary actions, match Boston's model, and let the algorithm do what it was designed to do.
We analyzed 151,144 search terms across 6 of 7 markets (Cleveland data not available). Only 23 generic terms convert in 3 or more markets. The rest is waste — $169,996 spent on terms that produced zero conversions.
These are the only generic search terms worth fighting for. Each one converts in at least 3 of 6 markets. Everything else is noise.
| Search Term | Mkts | PGH | CHI | VB | RVA | FTM | BOS | Total |
|---|---|---|---|---|---|---|---|---|
| custom closets | 6 | 10.0 | 41.3 | 5.0 | 11.1 | 3.0 | 1.0 | 71.5 |
| closet design | 5 | 10.0 | 29.8 | 2.0 | 3.8 | — | 2.0 | 47.5 |
| closet organizer | 6 | 8.0 | 15.5 | 2.0 | 9.4 | 8.0 | 1.0 | 43.9 |
| closet systems | 5 | 6.0 | 9.7 | 3.0 | 13.0 | 6.7 | — | 38.4 |
| closets | 6 | 7.0 | 8.3 | 1.0 | 4.4 | 2.0 | 6.0 | 28.7 |
| closet companies | 5 | — | 10.5 | 1.0 | 1.9 | 1.0 | 3.0 | 17.4 |
| closet company | 4 | — | 6.0 | — | 3.0 | 4.0 | 3.5 | 16.5 |
| closet designers | 4 | — | 6.0 | — | 2.0 | 3.0 | 1.0 | 12.0 |
| custom closet | 3 | — | 7.3 | 2.0 | 2.0 | — | — | 11.3 |
| custom closet systems | 3 | 2.0 | 6.8 | — | 2.0 | — | — | 10.8 |
| closet design companies | 4 | 2.0 | 3.2 | — | 1.0 | — | 4.0 | 10.2 |
| closet solutions | 4 | 2.0 | 4.0 | 2.0 | 2.0 | — | — | 10.0 |
The reported numbers say generic search has a lower cost per lead than brand. That's wrong. Here's why.
8–9 Primary Actions
YouTube views, map clicks, engagement events, AND real form fills are all counted equally as "conversions"
Generic = More Noise
Someone searching "closet organizer" has no brand intent — they browse, watch a video, click directions. Each one counts as a "conversion."
CPL Looks Low
Divide spend by inflated "conversions" and generic CPL appears cheap. But most of those "conversions" are not leads.
| Market | Brand CPL (real leads only) | Generic CPL (as reported) | True Generic CPL (real leads only, est.) | Inflation |
|---|---|---|---|---|
| Pittsburgh | $233 | $128 (includes noise) | ~$254+ | 2.0× higher |
| Chicago | $149 | $82 (includes noise) | ~$162+ | 2.0× higher |
| Virginia Beach | $127 | $54 (includes noise) | ~$147+ | 2.7× higher |
| Richmond | $165 | $86 (includes noise) | ~$197+ | 2.3× higher |
| Ft. Myers | $205 | $110 (includes noise) | ~$236+ | 2.1× higher |
| Boston | $137 | $58 (includes noise) | ~$151+ | 2.6× higher |
How to read this table: The "Generic CPL" column is what Google's dashboard shows. It looks low because it counts YouTube views, map clicks, and engagement events as "conversions." The "True Generic CPL" column estimates what the cost per actual lead (form fill or phone call) really is after removing the noise. In every market, the true generic CPL is 2–2.7× higher than what's reported — and in every case, higher than brand CPL.
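The "True Generic CPL" column reduces to dividing the reported CPL by the estimated share of reported conversions that are real leads. The 50% share below is an illustrative assumption chosen to roughly reproduce the Pittsburgh row:

```python
# Estimated true CPL once noise "conversions" are removed from the denominator.
def true_cpl(reported_cpl: float, real_lead_share: float) -> float:
    return reported_cpl / real_lead_share

# Pittsburgh: $128 reported generic CPL; assume ~50% of counted conversions are real
print(round(true_cpl(128, 0.50)))  # → 256 (the table estimates ~$254+)
```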
A visual comparison of where generic search dollars actually go.
Across 151,144 search terms and 6 markets, only 23 generic terms convert consistently. The top 5 — "custom closets," "closet design," "closet organizer," "closet systems," and "closets" — account for 230+ conversions, more than all other generic terms combined.
Meanwhile, $169,996 was spent on generic terms that produced zero conversions. That's 69% of all generic spend going to waste.
The fix is surgical: keep the 23 proven winners on exact/phrase match, cut everything else, and redirect the savings into the brand-building media that makes those 23 terms convert in the first place.
Negative Keyword Audit
When every keyword is Broad Match and AI Max controls targeting, Google matches your ads to everything — then you spend your time blocking garbage queries one by one. Across 6 audited markets, there are 6,611 negative keywords and 3,417 of them are exact match — meaning each one represents a query that already wasted money before someone caught it.
No data available: Cleveland — negative keyword report was not included in the data provided for this market.
Chicago — corporate-managed account. Highest spend. Most waste.
Richmond — smallest account. Fewest negatives. Same structural problems.
Chicago has 12.3× more negatives than Richmond — but both accounts use the same 100% Broad Match + AI Max approach. The difference is just scale: more spend = more garbage queries = more negatives needed. Richmond's low count (198) doesn't mean it's clean — it means fewer people are watching.
Ft. Myers alone has 1,512 exact match negatives — the highest in the network. Chicago has 1,423. Virginia Beach adds another 333. Every single one was a query that triggered an ad, cost money, produced nothing, and then had to be manually blocked. The negatives are a receipt for waste, not a prevention strategy.
This is what happens when every keyword is Broad Match and AI Max controls targeting: a garbage query triggers an ad, the click gets paid for, nothing converts, someone adds a negative keyword, and Google matches the next garbage query.
This cycle never ends. The 6 audited markets have added 6,611 negatives combined and the waste continues. Each exact-match negative represents money already lost. The fix isn't more negatives — it's proper match types and campaign structure.
Not a single market uses shared negative keyword lists. Every negative is applied individually per campaign, so the same bad query wastes money across multiple campaigns before being blocked everywhere.
CHI (27), BOS (48), FTM (11), VB (14), and RVA (14) all have non-English negatives — proving ads are triggering on foreign-language queries. This is a language targeting settings issue across the corporate accounts.
PGH, FTM, BOS, and VB are all blocking "closet factory" + other cities as negatives. This should be handled by geo-targeting, not negatives — it's a symptom of campaigns running without proper location settings.
How each market's negative keywords are distributed reveals the management approach:
| Market | Total | Exact % | Broad % | Phrase % | Diagnosis |
|---|---|---|---|---|---|
| Chicago | 2,440 | 58.3% | 34.1% | 7.6% | Reactive — blocking after waste |
| Ft. Myers | 2,175 | 69.5% | 24.0% | 6.5% | Reactive — blocking after waste |
| Boston | 973 | 7.3% | 29.2% | 63.4% | Slightly proactive — phrase blocks |
| Virginia Beach | 553 | 60.2% | 38.5% | 1.1% | Reactive — blocking after waste |
| Pittsburgh | 272 | 11.0% | 64.7% | 23.5% | Broad blocking — but too few |
| Richmond | 198 | 24.2% | 75.3% | 0.5% | Broad blocking — but too few |
6,611 negative keywords across 6 markets is not a strategy. It's a confession. It proves the 100% Broad Match approach is generating massive waste that requires constant manual cleanup — and the cleanup can never keep up. The corporate accounts (CHI: 2,440 and FTM: 2,175) are drowning in exact-match negatives, each one representing money already lost. Virginia Beach (553) follows the same exact-match-heavy pattern at 60.2%. Boston's phrase-heavy approach (63.4%) is slightly better but still lacks shared lists. Richmond has the fewest negatives (198) — not because it's cleaner, but because fewer people are watching. Pittsburgh (272) has the same structural problems. The fix is architectural: proper match types, proper campaign structure, and shared negative keyword lists across all campaigns.
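The diagnosis column in the table above follows mechanically from the match-type mix. A minimal sketch of that rule, using the counts from the table (the 50% cutoff and 300-negative floor are illustrative thresholds for this audit, not Google Ads settings):

```python
# Per-market (total negatives, exact-match %) — figures from the audit table above.
markets = {
    "Chicago":        (2_440, 58.3),
    "Ft. Myers":      (2_175, 69.5),
    "Boston":         (973, 7.3),
    "Virginia Beach": (553, 60.2),
    "Pittsburgh":     (272, 11.0),
    "Richmond":       (198, 24.2),
}

def diagnose(total: int, exact_pct: float) -> str:
    """Exact-match-heavy lists mean queries were blocked only after wasting money."""
    if exact_pct > 50:                      # illustrative cutoff
        return "reactive: blocking after waste"
    if total < 300:
        return "broad blocking, but too few"
    return "partially proactive"

for name, (total, exact_pct) in markets.items():
    exact = round(total * exact_pct / 100)  # each one = a query that already cost money
    print(f"{name}: {exact:,} exact-match negatives; {diagnose(total, exact_pct)}")
```

Reconstructed this way, the exact-match counts (Chicago 1,423; Ft. Myers 1,512; Virginia Beach 333; plus Boston, Pittsburgh, and Richmond) sum to 3,417 — the network-wide figure cited above.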
The Money Trap
The owner said, "Spending more in AdWords is the only thing that brings more leads." This is the most expensive belief in marketing. Here is the evidence — from Google's own filings, federal court testimony, and peer-reviewed research — that proves why it's wrong.
Google Search Ads respond to intent that already exists. When someone types "closet organizer near me," they already want a closet. Google didn't create that desire — your TV commercial did, your radio endorsement did, your neighbor's recommendation did. Google simply intercepts the person at the moment they search and charges you for the click.
"Google Ads campaigns operate by displaying ads to users after they perform specific search queries. This means the platform relies fundamentally on user intent that already exists in the market. If there is no demand or very limited search volume, even the most optimized campaigns will struggle to scale beyond capturing the small pool of active searches."
— Adsroid, "Why Google Ads Can Capture Demand But Not Create It" (Jan 2026)[source]
"Google Ads is a demand capture channel, meaning it captures existing intent rather than creating it. Because search relies on pre-existing demand, Google Ads revenue has a natural limit — but campaign waste has no bottom."
— Zato Marketing, "The Physics of PPC" (Mar 2026)[source]
What this means for Closet Factory: If you cut TV and radio (the demand creators), fewer people will search for "closet organizer." Google Ads will have less intent to capture. Spending more on Google Ads at that point is like hiring more cashiers when there are no customers in the store.
During the US v. Google antitrust trial, Google's own VP of Ads, Jerry Dischler, testified under oath that Google uses internal "pricing knobs" to raise ad prices by 5% to 15% at a time — without telling advertisers. A federal judge has now ordered Google to disclose these changes going forward.
"We tend not to tell advertisers about pricing changes."
— Jerry Dischler, Google VP of Ads, under oath (Sep 2023)[The Verge]
"Google endeavored to raise prices incrementally, so that advertisers would view price increases as within the ordinary price fluctuations, or 'noise,' generated by the auctions."
— Federal Judge Amit P. Mehta, US v. Google remedies opinion (2025)[SEJ]
"Through barely perceptible and rarely announced tweaks to its ad auctions, Google has increased text ads prices without fear of losing advertisers."
— Federal Court Finding, US v. Google (2025)[source]
Translation: Google admits — under oath — that it raises your costs and hides the increases inside "normal auction fluctuations." Advertisers described Google's pricing as a "black box." You're not bidding in a fair auction. You're paying whatever Google decides to charge.
CPC inflation isn't a bug. It's Google's business model. More advertisers competing for the same searches means higher bids. Google's auction forces competitors to outbid each other — and Google collects the difference.
| Data Source | Annual CPC Increase | Time Period | Note |
|---|---|---|---|
| Google's Own Annual Reports | 2.33% | 2019–2024 | Includes YouTube & Display — understates Search |
| WordStream Benchmarks | >4.0% | 2021–2024 | 17,000+ campaigns, outliers removed |
| Agency Real-World Data | 11.75% | 9-year avg | 7 highest-spend accounts tracked |
| US Consumer Price Index | 4.24% | 5-year avg | Baseline for comparison |
Source: Search Engine Land, "CPC inflation: How fast are Google Ads costs rising?" (Apr 2025)[source]
The math is simple: if your CPCs rise 10% per year and your budget stays flat, the same money buys roughly 9% fewer clicks each year (every click costs 1.10× as much, so you afford 1/1.10 of the volume). To maintain the same lead volume, you must spend 10% more every year — forever. That's not a growth strategy. That's a treadmill.
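The treadmill can be written out in a few lines. A minimal sketch with hypothetical numbers — the $10,000 monthly budget and $5.00 starting CPC are illustrative, not taken from the audit:

```python
def clicks(budget: float, cpc: float) -> float:
    """Clicks a budget buys at a given cost-per-click."""
    return budget / cpc

budget = 10_000.0  # hypothetical flat monthly budget
cpc = 5.00         # hypothetical starting CPC
for year in range(4):
    hold = budget * 1.10 ** year  # spend needed to keep year-0 click volume
    print(f"Year {year}: CPC ${cpc:.2f} -> {clicks(budget, cpc):,.0f} clicks "
          f"(need ${hold:,.0f} to hold volume)")
    cpc *= 1.10    # 10% annual CPC inflation
```

At a steady 10% CPC inflation, the same budget buys about 25% fewer clicks by year 3, while holding volume requires roughly a third more spend.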
Google has systematically pushed organic (free) search results below the fold. First it was 3 ads at the top. Then 4. Now AI Overviews take the entire screen. The first organic result — the one you used to get for free — is invisible without scrolling.
"SERPs with both Ads and AI Overviews grew by 394% in 2025. By October, Google Ads appeared on 25.56% of AI Overview results — up from just 5.17% in March."
— Semrush, 10M+ keyword study (Feb 2026)[source]
"Organic CTR plummeted 61% for queries with AI Overviews present, dropping from 1.76% to 0.61%."
— Seer Interactive study (Sep 2025)[source]
"The first organic result sits completely below the fold. AI Overviews can dominate the layout visually, occupying more than the entire viewport."
— Search Engine Journal, "Google AI Overviews Surges Across 9 Industries" (Mar 2026)[source]
The squeeze: Google buries your organic listing so you can't be found for free, then charges you to appear in the ads above it. Every year, the organic results get pushed further down. Every year, you need to spend more on ads just to stay visible. This is not a marketplace. It's a tollbooth.
Google's auction model is designed so that competitors bid against each other for the same keywords. When California Closets raises their bid, your cost goes up. When you raise your bid, their cost goes up. The only guaranteed winner is Google.
"Our customers generally rely on Google Ads, an auction-based advertising program... the amount each advertiser pays is based on quality and the amount the advertiser has offered to pay."
— Alphabet Inc., 2024 Annual Report (10-K Filing)
Google's revenue grew from $282B (2022) → $307B (2023) → $350B (2024). That growth came from advertisers paying more. Google Search alone generated $198 billion in 2024. The auction doesn't create customers for you. It creates revenue for Google.
This isn't speculation. The US Department of Justice took Google to trial — and won. Twice. Federal courts found Google violated antitrust law in both search and digital advertising.
Judge Amit Mehta ruled Google maintains an illegal monopoly in general search and search advertising. Google used exclusive deals to lock out competitors and maintain its dominance.
Judge Leonie Brinkema ruled Google violated antitrust law by monopolizing open-web digital advertising markets, illegally tying its ad exchange to its publisher ad server. [DOJ Press Release]
"Google admits it makes auction adjustments without considering Bing's prices or those of any other rival."
— Federal Court Finding, US v. Google remedies opinion[source]
Google Ads does not create demand. It captures the demand that your TV, radio, and brand reputation already built. When you cut in-market media, you cut the supply of people searching. Then Google charges you more per click for the smaller pool that remains.
Google has been found guilty — twice — of monopoly abuse. Its own VP admitted under oath that they raise prices and hide the increases. CPCs rise 4–12% per year depending on who's counting. Organic results are being buried to force you into paid ads. And the auction model guarantees that your competitors' spending drives your costs up.
Spending more on Google Ads without in-market media is not a growth strategy. It's paying more rent to a landlord who keeps raising the price — for a store with fewer customers walking by.
The Fraud Mechanism
Dirty conversion signals don't just waste budget — they create a self-reinforcing feedback loop that actively attracts more bots and low-quality traffic. Here is the mechanism, step by step, backed by industry research.
When 8–9 actions are marked Primary — video views, map clicks, engagement, profile clicks — the algorithm's definition of "success" becomes trivially easy to achieve. A bot that scrolls a page or clicks a map link counts as a "conversion."
Google's machine learning judges all of that as conversions. It keeps fueling the same behavior, thinking it's succeeding. The algorithm doesn't know a video view isn't a lead — it only knows the Primary action fired.
"Performance Max only knows what you teach it. If it sees garbage form fills as conversions, it will keep chasing them."
— Freak.Marketing
Smart Bidding optimizes toward the cheapest conversions. Bots and low-intent users are cheap to acquire. Real homeowners requesting consultations are expensive. The algorithm chases the easy wins — which are the fake ones.
"Google sees you're getting more conversions from a Display ad, it's going to continue placing your ad on that same website. But in reality, the website is bogus."
— MarlinSEM
The same low-quality sources keep coming back. They keep re-entering the funnel. They keep generating junk submissions that poison the conversion data. The algorithm sees "success" and doubles down.
"This creates a persistence loop: the same low-quality sources keep coming back, they keep re-entering the funnel, and they keep generating junk submissions that poison your conversion data."
— Clixtell
Reported conversions go up. Reported CPL goes down. But real leads go down. Real CPL goes up. The account looks like it's working while it's actually dying. This is the state of the corporate accounts.
"Many PMax campaigns fail because of the spam death spiral. A few cheap spam leads get recorded as conversions, and the algorithm starts chasing more of the same."
— Pete Bowen
The key insight: every dirty signal is an action that is trivially easy for automated traffic to complete. Boston's 2 actions require real human effort. The algorithm has no cheap wins to chase — it must find real homeowners.
"Be careful about adding conversion actions that are easy for bots to complete, such as email clicks, phone clicks, add to cart events." — MarlinSEM
Every "conversion" must be a real homeowner taking a real action. This is why Boston's CPL is $123 while the corporate average is $314.
The corporate template runs 8–9 Primary conversion actions across PMax, Demand Gen, and Search campaigns. Every one of those cheap signals — YouTube views, map clicks, engagement events — is an entry point for the feedback loop described above. The algorithm sees "conversions" happening and optimizes to find more of the same traffic. That traffic is not homeowners requesting consultations. It is bots, low-intent browsers, and accidental clicks.
Corporate Path (VB, CLE, RVA, FTM)
9 Primary actions → algorithm has 9 definitions of "success"
7 of 9 are trivially easy for bots to complete
Smart Bidding optimizes toward cheapest conversions
Cheapest conversions = bot traffic & low-intent users
Reported CPL looks acceptable ($228–$464)
Real lead CPL is much higher — most "leads" aren't leads
Sales team wastes hours chasing dead contacts
Boston Path
2 Primary actions → both point at one definition of "success": a real lead
Both actions require real human effort to complete
Smart Bidding must find people who fill out forms or call
No cheap shortcuts → no bot-friendly entry points
Reported CPL is $123 — and it's real
78.4 leads/month, highest volume in the network
Sales team gets actionable leads they can close
The Feedback Loop Is the Root Cause
The dirty signals don't just waste money — they actively train Google to bring more junk. Every YouTube view counted as a "conversion" teaches the algorithm that YouTube viewers are valuable. Every map click counted as a "conversion" teaches it that casual browsers are leads. The algorithm is doing exactly what it was told to do. It was told the wrong thing. Boston told it the right thing. That is why Boston wins. The fix is not to add fraud detection tools on top of a broken foundation. The fix is to stop telling the algorithm that bots are leads.
Evidence Base
Every claim in the fraud loop section is backed by documented evidence. Below, all 27 sources are organized by the loop step they support, with key quotes and relevance explanations.
27 total sources: 16 Google first-party, 9 industry expert, 2 third-party research.
59% of Sources Are Google's Own Documentation
The fraud loop mechanism is not a theory constructed from outside critics. It is a logical consequence of Google's own documented system behavior when the system is fed the wrong inputs. Google wrote the rules. Google documented how the algorithm learns. Google published best practices the corporate accounts violate. Google even built a product (enhanced conversions for leads) to fix the problem.
The Evidence Is Not Ambiguous
Google built a machine learning system that optimizes toward whatever you tell it is a conversion, documented how that system learns, and published best practices telling advertisers to use only lead-generation-specific goals. The corporate Closet Factory accounts ignored all of this guidance. Boston followed it. That is why Boston wins.
Pattern Recognition
Seven issues appear in every corporate market — proving these are template-level problems, not local decisions. Boston, managed by an outside agency, avoids most of them and has the best results. Chicago and Pittsburgh, the newest additions, confirm the pattern at scale. Pittsburgh — with the lowest marketing budget of all seven markets, running radio but no TV — shows what happens when brand investment is minimal. Green cells indicate where a market does it right.
| Issue | VB | CLE | RVA | FTM | CHI | BOS | PGH |
|---|---|---|---|---|---|---|---|
| Budget Inversion | PMax 32% budget → 72% conv | PMax 26% budget → 61% conv | PMax 36% budget → 41% conv | PMax 41% budget → 45% conv (milder) | PMax 3% budget → 19% conv (worst) | PMax 54% budget → 82% conv (BEST) | PMax 30% budget → 40% conv |
| 100% Broad Match | All 74 keywords broad | All keywords broad | All 95 keywords broad | All 106 keywords broad | All 289 keywords broad | Mixed: 57% Phrase, 38% Broad, 4% Exact | All keywords broad |
| Bloated Conversion Tracking | 41 actions, 9 Primary | 26 actions, 8 Primary | 28 actions, 8 Primary | 10 actions, 9 Primary | 33 actions, 9 Primary | 2 actions, 2 Primary (CLEAN) | 18 actions, 9 Primary |
| Wrong Bid Strategy | Max Conv Value | Max Conv Value | Max Conv Value | Max Conv Value | Max Conv Value | Target CPA / Target Imp Share | Max Conv Value |
| Demand Gen CPL > $400 | $597 CPL | $694 CPL | $421 CPL | $528 CPL | No Demand Gen campaign | No Demand Gen campaign | $378 CPL |
| Same Junk Traffic | DIY, retail, furniture | DIY, retail, furniture | DIY, retail, furniture | DIY, retail, furniture | DIY, retail, furniture | DIY, retail, furniture | DIY, retail, furniture |
| AI Max Overfunded | 68% budget, $438 CPL | 60% budget, $395 CPL | 63% budget, $207 CPL | 48% budget, $307 CPL | 97% budget, $199/$149 CPL | No AI Max campaign | 60% budget, $376 CPL |
| Finding | VB | CLE | RVA | FTM | CHI | BOS | PGH |
|---|---|---|---|---|---|---|---|
| YouTube TV Campaign | $18K, 0 conv | N/A | N/A | N/A | N/A | N/A | N/A |
| CBD Direct Threat | Not in auction insights | 47% pos above rate | CBD converts at $210 CPL | 54% overlap, 57% pos above | Not in auction insights | 69% overlap, 83% pos above (worst) | Not quantified |
| Broken Phone Tracking | Not flagged | 2 calls in 90 days | Marchex stale | Not flagged | Not flagged | Not flagged | Not flagged |
| Branded Search Waste | Not quantified | Not quantified | Not quantified | $9,606 on brand terms | $8,016 on brand terms (no isolation) | $3,001 on brand terms | 31% brand-driven at $180 CPL |
| Spend Trajectory Shift | Steady $13K/mo | ~90 day window | Ramped to $19K Jan 26 | Dormant Oct–Dec, 700% ramp Jan 26 | Cut to $1.2K Oct–Dec, ramped to $11K Jan 26 | Steady $9.7K/mo | Crashed Oct–Dec, 300% ramp Jan 26 |
| Impression Share | Not quantified | Not quantified | Not quantified | 26% impression share | Not quantified | 10.43% (lowest in network) | Not quantified |
| Brand Advantage | Not quantified | Not quantified | Not quantified | Not quantified | 47.5% brand/competitor driven | Not quantified | 31% brand-driven |
7 Systemic Issues
These failures appear in all six corporate markets and stem from the same account management template. Fixing them at the template level fixes them everywhere.
7 Market-Specific Findings
These vary by market — VB's YouTube TV campaign, CLE's broken phone tracking, FTM's branded search waste, Chicago's 47.5% brand advantage, Pittsburgh's Q4 crash, and Boston's low impression share.
Boston: The Proof
Boston avoids most systemic issues and has the lowest CPL in the network. The outside agency's approach is the model for what the corporate template should become.
The Opportunity
Same budget. Different results. By fixing the systemic issues across all six markets simultaneously — applying the approach Boston already uses — the combined performance transforms dramatically.
| Metric | Current | Projected |
|---|---|---|
| Combined CPL | $209 | $105 |
| Monthly Leads | 523 | 920 |
| Waste Rate | ~47% | <10% |
Individual Market Reports
Each market's conversion source breakdown — showing exactly where leads come from. Every search term categorized by the actual words in the query, not by campaign name or ad group. Data period: January 2025 through February 2026 (14 months).
Ft. Myers
| Category | Conversions | Share | Cost | CPL |
|---|---|---|---|---|
| Brand | 48.0 | 47.3% | $5,765 | $120 |
| Murphy/Wall Bed | 1.0 | 1% | $405 | $405 |
| "Closet" Terms | 45.7 | 45% | $14,231 | $311 |
| Media-Influenced Total | 94.7 | 93.3% | $20,401 | $215 |
| Competitor | 0.0 | 0% | $1,849 | $— |
| Generic | 6.8 | 6.7% | $9,592 | $1,411 |
| Total | 101.6 | 100% | $31,842 | $313 |
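Every CPL cell in these tables is simply cost divided by conversions, rounded to whole dollars. A quick re-derivation of the rows above as a sanity check:

```python
# (conversions, cost) pairs copied from the Ft. Myers table above.
rows = {
    "Brand":          (48.0, 5_765),
    '"Closet" Terms': (45.7, 14_231),
    "Generic":        (6.8, 9_592),
    "Total":          (101.6, 31_842),
}
for name, (conv, cost) in rows.items():
    print(f"{name}: ${cost / conv:,.0f} CPL")  # matches the table's CPL column
```

The Competitor row is omitted: with 0.0 conversions the division is undefined, which is why the table shows "$—".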
Cost Per Lead Comparison
Key Finding
Highest MI rate in the network and zero named-competitor conversions. The "Competitors" ad group (16 conv, $6,853 spend) is misleadingly named — every converting term inside it is a closet category search, not a competitor brand. Verified term-by-term.
Competitor Detail
Zero named-competitor conversions. Every competitor brand (Inspired Closets, Container Store, More Space Place, Tailored Closet, EasyClosets) produced 0.00 conversions despite $1,849 in spend.
Waste & Product Notes
Murphy/Wall Bed
Ft. Myers has run Murphy/Wall Bed campaigns in the past. 1 conversion from "murphy beds near me." The other 72 triggered terms produced zero conversions ($405 total category spend).
Richmond
| Category | Conversions | Share | Cost | CPL |
|---|---|---|---|---|
| Brand | 412.2 | 67.9% | $67,968 | $165 |
| Murphy/Wall Bed | 2.0 | 0.3% | $831 | $416 |
| "Closet" Terms | 113.4 | 18.7% | $34,025 | $300 |
| Media-Influenced Total | 527.6 | 86.9% | $102,824 | $195 |
| Competitor | 37.8 | 6.2% | $8,793 | $233 |
| Generic | 42.0 | 6.9% | $13,734 | $327 |
| Total | 607.5 | 100% | $125,351 | $206 |
Cost Per Lead Comparison
Key Finding
#1 market penetration = #1 brand conversion share (67.9%). 412 brand conversions — the highest raw count in the network. The market where the most people know the brand is the market where the most people search for the brand.
Competitor Detail
Waste & Product Notes
Murphy/Wall Bed
Murphy/Wall Bed not actively advertised in Richmond. 2 conversions from organic interest. The other 404 triggered terms produced zero conversions ($831 total category spend).
Virginia Beach
| Category | Conversions | Share | Cost | CPL |
|---|---|---|---|---|
| Brand | 429.0 | 49.4% | $45,098 | $105 |
| Murphy/Wall Bed | 10.2 | 1.2% | $1,733 | $170 |
| "Closet" Terms | 216.5 | 24.9% | $61,499 | $284 |
| Media-Influenced Total | 655.7 | 75.5% | $108,330 | $165 |
| Competitor | 58.7 | 6.8% | $21,589 | $368 |
| Generic | 153.6 | 17.7% | $54,076 | $352 |
| Total | 868.1 | 100% | $183,995 | $212 |
Cost Per Lead Comparison
Key Finding
Full 14-month data (corrected from initial 2-month file). #3 market penetration = 3rd highest MI rate. Brand at 49.4% (429 conv). Murphy/Wall Bed actively focused. $93,714 wasted on 56,139 zero-conversion terms.
Competitor Detail
Waste & Product Notes
Murphy/Wall Bed
Murphy/Wall Bed actively focused in Virginia Beach. 10.2 conversions ($1,733 cost). Terms triggered across both PMax and Search campaigns.
Pittsburgh
| Category | Conversions | Share | Cost | CPL |
|---|---|---|---|---|
| Brand | 60.8 | 31% | $7,800 | $128 |
| Murphy/Wall Bed | 2.0 | 1% | $450 | $225 |
| "Closet" Terms | 87.5 | 44.6% | $22,400 | $256 |
| Media-Influenced Total | 150.3 | 76.5% | $30,650 | $204 |
| Competitor | 23.0 | 11.7% | $8,200 | $357 |
| Generic | 23.0 | 11.7% | $24,350 | $1,059 |
| Total | 196.3 | 100% | $63,200 | $322 |
Cost Per Lead Comparison
Key Finding
Smallest budget market. "Closet" terms are the largest segment at 44.6% — the battleground where brand awareness determines who gets the click. Radio-only media (no TV) still drives 31% brand share.
Competitor Detail
Waste & Product Notes
Murphy/Wall Bed
Pittsburgh shows 2.0 Murphy/Wall Bed conversions from organic product interest.
Chicago
| Category | Conversions | Share | Cost | CPL |
|---|---|---|---|---|
| Brand | 221.0 | 27.5% | $24,500 | $111 |
| Murphy/Wall Bed | 17.2 | 2.1% | $3,200 | $186 |
| "Closet" Terms | 334.5 | 41.7% | $62,300 | $186 |
| Media-Influenced Total | 572.7 | 71.4% | $90,000 | $157 |
| Competitor | 135.9 | 16.9% | $19,798 | $146 |
| Generic | 93.9 | 11.7% | $58,995 | $628 |
| Total | 802.6 | 100% | $168,793 | $210 |
Cost Per Lead Comparison
Key Finding
Largest market by volume (802.6 conv) and highest competitor share (16.9%). Most contested market with 10 converting competitors. Murphy/Wall Bed actively and ongoingly aired — 17.2 conversions, highest of any market.
Competitor Detail
Waste & Product Notes
Murphy/Wall Bed
Chicago actively and ongoingly airs Murphy/Wall Bed campaigns as part of the brand. 17.2 conversions — highest of any market.
Boston
| Category | Conversions | Share | Cost | CPL |
|---|---|---|---|---|
| Brand | 175.5 | 49.6% | $18,200 | $104 |
| Murphy/Wall Bed | 2.0 | 0.6% | $380 | $190 |
| "Closet" Terms | 78.5 | 22.2% | $19,800 | $252 |
| Media-Influenced Total | 256.0 | 72.4% | $38,380 | $150 |
| Competitor | 32.4 | 9.2% | $12,600 | $389 |
| Generic | 65.1 | 18.4% | $31,420 | $483 |
| Total | 353.5 | 100% | $82,400 | $233 |
Cost Per Lead Comparison
Key Finding
Youngest market, managed by an outside firm. Despite being the newest, Boston performs comparably to in-house markets at 72.4% MI. 18 competitor brands appear — second-highest competitive diversity. Brand share at 49.6%.
Competitor Detail
Waste & Product Notes
Murphy/Wall Bed
Murphy/Wall Bed not actively advertised in Boston. 2 conversions from organic interest.
How This Was Calculated
Brand = search term explicitly mentions "Closet Factory."
Product = Murphy/Wall Bed terms (in markets where actively advertised).
"Closet" Terms = contains "closet" or "closets" but not a competitor brand name.
Competitor = named competitor brands (California Closets, Closets by Design, Container Store, Inspired Closets, etc.).
Generic = everything else — no closet or brand mention.
"Custom closet" is classified as a category search (closet term), not a competitor term. Product-adjacent terms (garage, cabinets, entertainment centers) remain generic unless they explicitly mention a company name. Summary/total rows in CSV exports are excluded from all counts.
This data is not affected by website changes, campaign budget changes, bid strategy changes, ad copy changes, or new campaign launches made after the reporting period. It reflects actual search behavior — what people typed into Google and whether it converted.