Audience: marketing teams at mid-market B2B companies (growth and demand-gen leaders).
1. Background and context
Client: anonymized mid-market B2B SaaS vendor ("Client X"). Product: vertical SaaS for supply-chain analytics. Timeframe: 12 months. Situation: organic sessions dropped ~22% over 3 months while Google Search Console (GSC) rank metrics (average position, tracked keywords) looked flat. Competitors with lower technical SEO scores were converting higher-quality leads. Board/finance began scrutinizing the marketing budget and demanded clearer attribution and ROI.
Complicating factor: an emerging set of "AI Overviews"/AI Snapshot features (seen in Google experimental UIs and third-party tools) started surfacing competitors' brand names and content, but Client X did not appear. There was zero visibility into what generative models (ChatGPT, Claude, Perplexity) were using as sources for answers mentioning Client X or competitors.
Initial hypothesis options:
- GSC sampling or reporting lag: ranks are fine, but impressions/clicks fall because of SERP format changes (AI Overviews, People Also Ask, featured snippets).
- Competitors are gaining distribution inside AI-driven layers even if their raw rankings are lower.
- Tracking/attribution issues are under-reporting conversions, especially cross-device and offline leads.
2. The challenge faced
Specific problems to solve:
- Diagnose why organic sessions and clicks fell despite stable tracked rankings in GSC.
- Measure and recover brand visibility inside AI-driven SERP features and in responses returned by commercial LLMs/generative search tools.
- Prove marketing ROI to finance with reliable attribution and reduce cost per lead (CPL).
- Understand why competitors with lower "SEO scores" drove more qualified leads, and replicate the winning elements.
Constraints: limited dev bandwidth (2 sprints/month), strict privacy policies, and need for fast wins to avoid budget cuts.

3. Approach taken
High-level strategy: combine technical diagnostics, new visibility monitoring for AI layers, cross-channel attribution fixes, and content re-mapping to intent. We framed the approach as “visibility mapping + attribution hygiene + signal amplification.”
Core pillars:
- Comprehensive SERP surface audit — not just rankings: organic positions, impressions, clicks, and CTR, but also SERP features (AI Overview, snippet, PAA, images, video) per query.
- Model-sourced visibility checks — systematic querying of GPT/Claude/Perplexity, recording the outputs and sources to detect which properties/models cite which domains.
- Attribution and tracking overhaul — fix gaps (server-side GA4, enhanced conversions, call tracking, CRM UTM hygiene) and reconcile leads to sessions.
- Content-to-intent remap — prioritize topics that map to purchase intent and to surfaces likely used by AI Overviews (concise canonical answers, structured data).
- Competitive signal scraping — collect competitor content that shows up in AI Overviews and replicate its signal attributes (format, schema, snippet-optimized answers).

Analogy: We treated the search ecosystem like a lake where fish (leads) are moving below the surface. Traditional SEO tracked fish near the surface (SERP rankings), but AI Overviews are new currents under the surface. We needed sonar (model queries plus SERP-feature tracking) and better nets (attribution plus structured content) to catch fish again.
4. Implementation process
Timeline: a 26-week effort broken into four phases. Each phase had measurable checkpoints.
Phase A — Audit & discovery (Weeks 1–3)
- GSC deep dive: exported query-level data, compared impressions vs. historical patterns, and looked for sampling flags.
- SERP feature mapping: using a SERP API (BrightEdge/SERPstack/SerpApi), we recorded feature types for the priority query set (top 500 queries). A screenshot checklist was added for each query.
- Log-file analysis: 30 days of logs to detect crawl patterns, indexation delays, and emergent duplication affecting snippet generation.
- Small-scale model probing: automated queries to ChatGPT, Claude, and Perplexity for 200 high-intent queries; recorded whether answers cited Client X, competitors, or neither. (See prompt templates below.)
Prompt template example used (automated): "For the query '{query}', provide a short answer and list up to 3 sources (URLs) you used. If none, state 'no documents cited'." We logged model responses and citations into a dataset, along the lines of the probing sketch below.
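A minimal probing sketch in Python, assuming the OpenAI Python SDK; the model name, file names, and client domain are illustrative placeholders, and Claude/Perplexity probes would follow the same pattern with their own clients.

```python
# probe_models.py -- log whether high-intent queries cite our domain (sketch).
# Assumes the OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY set;
# the model name, file names, and domain below are illustrative placeholders.
import csv
import json
from datetime import datetime, timezone

from openai import OpenAI

CLIENT_DOMAIN = "clientx.example.com"  # hypothetical anonymized domain
PROMPT = (
    "For the query '{query}', provide a short answer and list up to 3 "
    'sources (URLs) you used. Return JSON with keys "answer" and "sources". '
    "If no documents were cited, return an empty sources list."
)

client = OpenAI()

def probe(query: str, model: str = "gpt-4o") -> dict:
    """Send one probe; flag whether any cited source mentions our domain.
    Note: model-cited URLs can be unverified; treat them as signals."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(query=query)}],
        response_format={"type": "json_object"},
    )
    payload = json.loads(resp.choices[0].message.content)
    sources = payload.get("sources") or []
    return {
        "query": query,
        "model": model,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "answer": payload.get("answer", ""),
        "sources": "|".join(sources),
        "cites_us": any(CLIENT_DOMAIN in s for s in sources),
    }

with open("high_intent_queries.csv") as f, \
     open("probe_log.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=[
        "query", "model", "timestamp", "answer", "sources", "cites_us"])
    writer.writeheader()
    for (query,) in csv.reader(f):
        writer.writerow(probe(query))
```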
Phase B — Fix attribution and measurement (Weeks 4–8)
- Server-side tagging: deployed server-side GA4 tagging to reduce signal loss across browsers and ad blockers.
- Enhanced conversions & CRM stitching: implemented enhanced conversions for leads and configured GA4-to-CRM lead reconciliation using hashed emails and offline import (see the sketch after this list).
- UTM/phone tracking hygiene: standardized UTMs, added dynamic call-tracking numbers, and stored source/medium in a first-touch CRM field.
- Event plan: mapped business events (MQL, SQL, demo booked) to GA4 and CRM with clear conversion definitions.
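As a rough illustration of the stitching step, here is a sketch that normalizes and hashes a lead email (the common join key for hashed matching and CRM reconciliation) and posts a server-side lead event to the GA4 Measurement Protocol; the measurement ID, API secret, event name, and parameter names are assumptions to adapt to your own event plan.

```python
# lead_stitch.py -- hash a lead email and send a server-side GA4 event (sketch).
# Assumes `pip install requests`; GA4_MEASUREMENT_ID and GA4_API_SECRET are
# placeholders, as are the event and parameter names.
import hashlib
import requests

GA4_MEASUREMENT_ID = "G-XXXXXXX"    # placeholder
GA4_API_SECRET = "your_api_secret"  # placeholder

def normalized_email_hash(email: str) -> str:
    """Lowercase/trim, then SHA-256: the usual normalization before hashed matching."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

def send_lead_event(client_id: str, email: str, source: str, medium: str) -> int:
    """Post a 'generate_lead' event via the GA4 Measurement Protocol."""
    payload = {
        "client_id": client_id,  # GA4 client_id captured at form submit
        "events": [{
            "name": "generate_lead",
            "params": {
                "lead_email_sha256": normalized_email_hash(email),  # CRM join key
                "source": source,
                "medium": medium,
            },
        }],
    }
    resp = requests.post(
        "https://www.google-analytics.com/mp/collect",
        params={"measurement_id": GA4_MEASUREMENT_ID, "api_secret": GA4_API_SECRET},
        json=payload,
        timeout=10,
    )
    return resp.status_code  # 2xx means the hit was accepted

# Example: stitch a demo-request lead back to its first-touch source/medium.
# send_lead_event("123.456", "jane@prospect.com", "google", "organic")
```

Storing the same hash on the CRM lead record lets the monthly reconciliation join CRM leads to GA4 conversions without moving raw PII, which matters under the strict privacy constraints noted above.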
Quick win — within 2 weeks of these fixes, we reconciled 18% more leads to organic that had previously gone unattributed.
Phase C — Content & SERP feature optimization (Weeks 9–18)
- Topical/content hub strategy: reorganized top-performing content into hubs that answer queries concisely and include canonical single-paragraph "summary answers" optimized for snippet capture and AI ingestion.
- Schema & answer markup: added FAQ schema, QAPage, and WebPage schema with answerText fields for prioritized queries, and updated metadata with concise descriptions (<= 160 characters with clear facts). A markup sketch follows after this list.
- Passage-friendly formatting: used short paragraphs, bulleted lists, and explicit definitions so models and snippet algorithms can find clean extracts.
- Competitor mimicry: for competitor sources frequently cited by models, we audited the structure and created equivalent or better canonical answers with sources and data points.

Analogy: If SERP features are islands, we built reliable bridges (structured answers and schema) from our content to those islands.
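For illustration, a sketch of the kind of markup this step produces: a small Python helper emitting FAQPage JSON-LD, the standard schema.org question/answer pattern, for a page's canonical summary answer. The question and answer strings are hypothetical.

```python
# faq_schema.py -- emit FAQPage JSON-LD for a page's canonical answers (sketch).
import json

def faq_jsonld(qa_pairs: list[tuple[str, str]]) -> str:
    """Build schema.org FAQPage markup ready to drop into a <script> tag."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }
    return json.dumps(data, indent=2)

# Hypothetical high-intent Q&A pair with a concise, extract-friendly answer.
print(faq_jsonld([(
    "What is supply-chain analytics software?",
    "Supply-chain analytics software aggregates procurement, inventory, and "
    "logistics data to forecast demand and flag disruptions.",
)]))
```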
Phase D — Model visibility program & amplification (Weeks 19–26)
- Model pinging schedule: weekly automated queries to the same query set, storing outputs to track whether Client X appears over time (a citation-rate rollup sketch follows below).
- Source seeding and PR: published short, data-driven one-pagers and press posts that aggregated domain-level facts (these are easy for models to cite), amplified via syndication partners and authoritative sites to increase the chance models index and use them.
- Backlink focus: targeted a small set of high-signal citations (industry associations, research sites) that models often reference.
- Ongoing telemetry: daily SERP-feature checks, weekly reconciliation of organic-attributed CRM leads, and monthly model-citation reports.
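A sketch of the weekly telemetry rollup, assuming the probe log produced in Phase A (the probe_log.csv file and its columns are the hypothetical ones from the earlier sketch): it computes the share of priority queries where the client domain is cited, per model, per week.

```python
# citation_rate.py -- weekly % of priority queries citing our domain (sketch).
# Assumes `pip install pandas` and probe_log.csv from the Phase A probing sketch.
import pandas as pd

log = pd.read_csv("probe_log.csv", parse_dates=["timestamp"])

# One row per week/model/query: was our domain cited at least once that week?
weekly = (
    log.assign(week=log["timestamp"].dt.to_period("W"))
       .groupby(["week", "model", "query"])["cites_us"].any()
       .reset_index()
)

# Citation rate = share of priority queries with at least one citation.
rate = weekly.groupby(["week", "model"])["cites_us"].mean().mul(100).round(1)
print(rate.rename("pct_queries_cited"))
```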
5. Results and outcomes
- Sales reported higher lead quality for leads coming from content hubs (sales cycles shorter by ~9 days).
- Finance approved a 12-month marketing budget continuation, conditional on quarterly reporting tied to the new event definitions and CRM reconciliation.
6. Lessons learned
- GSC alone is insufficient: rank and average position hide SERP-format changes. You need SERP-feature telemetry and model-citation checks to understand real visibility.
- Measurement fixes reveal previously invisible demand: server-side tagging plus CRM stitching often recovers 10–25% of apparent drops.
- AI-driven surfaces favor concise, authoritative, well-structured answers with clear sourcing; schema and canonical short answers materially increase capture rates.
- Competitor advantage often comes from distribution and citation networks, not purely SEO scores. Models and AI Overviews leverage external authority signals, so invest in targeted PR and syndication for model-citable sources.
- Small, regular model probes act like diagnostic scans: they flag shifts faster than waiting for GSC updates.
7. Replication checklist and templates
Step 1 — Fix measurement first:
- Deploy server-side GA4 tagging and enable enhanced conversions for leads.
- Standardize UTMs and ensure the first-touch UTM is recorded in the CRM for each lead.
- Add call tracking with persistent session storage for phone leads.
Step 2 — Audit visibility:
- Export the top 500 priority queries from GSC.
- Use a SERP API to map feature types for those queries and identify where AI Overviews/snippets appear (a mapping sketch follows after this list).
- Run automated model queries (OpenAI/Anthropic/Perplexity) for 200 high-intent queries and log whether your domain is cited.
- Capture screenshots for representative queries and store them in a shared folder. Example labels: GSC_Perf_Q1.png, Model_ChatGPT_Q1.png.
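A sketch of the SERP-feature mapping step using SerpApi's Python client (one of the APIs named earlier); the feature keys checked here follow SerpApi's response structure, but treat the exact key set as an assumption to verify against current docs, and the file names and API key are placeholders.

```python
# serp_features.py -- map which SERP features appear per priority query (sketch).
# Assumes `pip install google-search-results`; SERPAPI_KEY is a placeholder.
import csv

from serpapi import GoogleSearch

# Feature keys to check; verify against current SerpApi docs (assumption).
FEATURE_KEYS = ["ai_overview", "answer_box", "related_questions", "inline_videos"]

def features_for(query: str, api_key: str) -> dict:
    """Fetch one SERP and flag which tracked features are present."""
    results = GoogleSearch({"q": query, "api_key": api_key}).get_dict()
    return {"query": query, **{k: k in results for k in FEATURE_KEYS}}

with open("priority_queries.csv") as f, \
     open("serp_features.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=["query"] + FEATURE_KEYS)
    writer.writeheader()
    for (query,) in csv.reader(f):
        writer.writerow(features_for(query, api_key="SERPAPI_KEY"))
```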
Step 3 — Restructure content:
- Create canonical one-paragraph answers for every high-intent page and place them near the top of the page.
- Add FAQ/QAPage schema and other applicable schema types; validate with the Rich Results Test.
- Republish and syndicate short, data-led posts intended for model citation (press-ready one-pagers).
Step 4 — Amplify and monitor:
- Outreach to 6–8 high-authority domains for citation opportunities (industry reports, association pages).
- Run weekly model probes to detect shifts; build a dashboard showing the % of priority queries where your domain is cited.
- Reconcile CRM leads to GA4 conversions monthly; report the uplift as "recovered attribution".
- Prompt: "For the query '', provide a concise answer (<=60 words) and list up to 3 URLs you used to build the answer. Return JSON: answer, sources: [...]." Store output to a table: query | model | timestamp | answer | sources[]. </ul> Quick ROI proof template to present to finance:
Quick ROI proof template to present to finance:
- Report month-over-month organic MQLs and CPL with two lines: (a) raw analytics, (b) reconciled analytics post-measurement-fixes.
- Include a table showing recovered leads (previously unattributed) and their conversion rates to revenue.
- Forecast modeled revenue lift from incremental MQLs at conservative conversion rates (e.g., 10% SQL-to-deal rate, deal size = average deal size). A worked sketch of the arithmetic follows below.
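To make the forecast concrete, a small sketch of the arithmetic with hypothetical inputs; the MQL count, conversion rates, and deal size are placeholders to replace with your own CRM actuals.

```python
# roi_forecast.py -- conservative revenue-lift forecast from recovered MQLs (sketch).
# All inputs are hypothetical placeholders; substitute your CRM's actuals.
incremental_mqls = 40     # recovered + net-new organic MQLs per month
mql_to_sql_rate = 0.30    # conservative MQL -> SQL conversion
sql_to_deal_rate = 0.10   # conservative SQL -> closed deal (from the template)
avg_deal_size = 25_000    # average contract value in dollars

deals = incremental_mqls * mql_to_sql_rate * sql_to_deal_rate
modeled_lift = deals * avg_deal_size

# 40 MQLs * 0.30 * 0.10 = 1.2 deals/month; 1.2 * $25,000 = $30,000/month.
print(f"Modeled monthly revenue lift: ${modeled_lift:,.0f} ({deals:.1f} deals)")
```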