Cross-Platform Ad Reporting: One Dashboard Instead of Four
Every agency media buyer knows the morning ritual. Open Google Ads, pull yesterday’s numbers. Open Meta Business Suite, pull those numbers. Open TikTok Ads Manager. Open LinkedIn Campaign Manager. Paste everything into a spreadsheet. Spend 45 minutes trying to reconcile why the total spend across platforms doesn’t match what finance says went out the door.
Then a client asks: “What’s our overall ROAS?” And you realise the answer depends on which platform’s attribution you trust, which conversion window you’re using, and whether you’re counting view-through conversions or pretending they don’t exist.
This is the cross-platform reporting problem, and it’s costing agencies far more than the hour of spreadsheet work every morning. (If you’re trying to figure out where to allocate budget between Meta and Google, fragmented reporting makes that decision nearly impossible.) It’s costing you accuracy, client trust, and the ability to make allocation decisions based on real data instead of platform-flavoured fiction.
The Real Cost of Fragmented Reporting
The obvious cost is time. If your media buyer spends 45 minutes per client per day pulling and reconciling reports, that’s nearly 4 hours a day for a 5-client book. That’s half a salary spent on data entry.
But the less obvious costs are worse:
Attribution double-counting. A user clicks a Google ad on Monday, sees a Meta retargeting ad on Wednesday, and converts on Thursday. Google Ads claims the conversion. Meta claims the conversion. Your spreadsheet now shows two conversions where one actually happened. Multiply this across hundreds of conversions and your reported ROAS is fiction.
Inconsistent metrics. Google Ads calls it “Cost.” Meta calls it “Amount Spent.” TikTok calls it “Total Cost.” LinkedIn calls it “Total Spent.” They all mean roughly the same thing, but the calculation differs at the edges — some include VAT, some don’t, some include credits differently. When you’re comparing CPAs across platforms, these small differences compound into bad decisions.
Conversion window mismatches. Google Ads defaults to a 30-day click-through window. Meta and TikTok both default to 7-day click, 1-day view. LinkedIn uses 30-day click, 7-day view. If you’re comparing conversion rates across platforms without normalising the windows, you’re comparing apples to somewhere between oranges and watermelons.
Delayed data. Each platform updates at different cadences. Google Ads data is near-real-time. Meta can lag by several hours. TikTok’s reporting API occasionally takes a full day to stabilise. If you pull reports at 8am, you might be comparing today’s Google data against yesterday’s Meta data.
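To make the double-counting concrete, here is a toy Python example. Every number is invented; the point is only the mechanics, which are exactly what happens in the spreadsheet:

```python
# Toy illustration of attribution double-counting. All figures are
# invented: one real conversion, claimed once by each platform.
revenue_per_conversion = 120.0
spend = {"google": 60.0, "meta": 60.0}

platform_claimed = {"google": 1, "meta": 1}  # the same conversion, twice
real_conversions = 1                         # what actually happened

reported_roas = (sum(platform_claimed.values()) * revenue_per_conversion
                 / sum(spend.values()))
actual_roas = real_conversions * revenue_per_conversion / sum(spend.values())
# reported_roas is 2.0; actual_roas is 1.0. The spreadsheet doubles reality.
```

One conversion, two claims, and your reported ROAS is exactly twice the honest figure.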
What Unified Reporting Actually Means
Unified reporting doesn’t mean dumping four CSVs into a single spreadsheet. It means three things:
1. Metric Normalisation
Every metric needs a single definition that applies across platforms. Here’s the mapping that actually works:
| Unified Metric | Google Ads | Meta Ads | TikTok Ads | LinkedIn Ads |
|---|---|---|---|---|
| Spend | Cost | Amount Spent | Total Cost | Total Spent |
| Impressions | Impressions | Impressions | Impressions | Impressions |
| Clicks | Clicks | Link Clicks | Clicks | Clicks |
| CTR | CTR | Link CTR | CTR | CTR |
| Conversions | Conversions | Results | Conversions | Conversions |
| CPA | Cost/conv | Cost per Result | CPA | Cost per Conversion |
Notice I’m using Meta’s “Link Clicks” rather than “All Clicks.” That’s deliberate — “All Clicks” includes likes, comments, and shares, which inflates click metrics to the point of uselessness when comparing against Google’s click metric.
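In code, the mapping might look like the sketch below. The field names mirror the table above, but real API responses use versioned internal names, so treat these keys as placeholders for whatever your connector returns:

```python
# Hypothetical field-name mapping based on the table above. Treat the
# source keys as placeholders; each platform's API uses its own
# internal, versioned names for these fields.
FIELD_MAP = {
    "google":   {"Cost": "spend", "Impressions": "impressions",
                 "Clicks": "clicks", "Conversions": "conversions"},
    "meta":     {"Amount Spent": "spend", "Impressions": "impressions",
                 "Link Clicks": "clicks", "Results": "conversions"},
    "tiktok":   {"Total Cost": "spend", "Impressions": "impressions",
                 "Clicks": "clicks", "Conversions": "conversions"},
    "linkedin": {"Total Spent": "spend", "Impressions": "impressions",
                 "Clicks": "clicks", "Conversions": "conversions"},
}

def normalise(platform: str, row: dict) -> dict:
    """Map one raw platform report row onto the unified schema."""
    unified = {"platform": platform}
    for source_field, unified_field in FIELD_MAP[platform].items():
        unified[unified_field] = row.get(source_field, 0)
    # Derived metrics are computed once, from unified fields, so every
    # platform uses the same formula rather than its own flavour.
    unified["ctr"] = (unified["clicks"] / unified["impressions"]
                      if unified["impressions"] else None)
    unified["cpa"] = (unified["spend"] / unified["conversions"]
                      if unified["conversions"] else None)
    return unified
```

A Meta row with an “Amount Spent” of 100, 1,000 impressions, 50 link clicks, and 4 results comes out as a spend of 100, a CTR of 5%, and a CPA of 25, directly comparable to the same row from Google.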
2. Attribution Alignment
You have two options here, and you need to pick one:
Option A: Normalise to a common window. Set all platforms to the same conversion window (e.g., 7-day click, 1-day view) and compare on that basis. This is the cleanest approach but requires you to actually change platform settings, and some clients resist that.
Option B: Accept platform attribution but deduplicate. Let each platform claim conversions using its default window, but use a server-side source of truth (your CRM, Shopify, or GA4 with proper UTM tracking) to count actual unique conversions. Then calculate true CPA as total spend divided by deduplicated conversions.
Option B is more work but gives you the most honest numbers. It’s also what your clients will eventually ask for when they notice the platforms are collectively claiming more conversions than actually happened.
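A minimal sketch of Option B’s arithmetic, assuming you can export conversion identifiers (order IDs, lead IDs) from your server-side source of truth and match each platform’s claimed conversions against them. The `true_cpa` helper and its inputs are hypothetical; in practice the join key is an order ID, a hashed email, or a gclid/fbclid:

```python
# Sketch of Option B: platforms keep their default attribution, but the
# CRM/Shopify/GA4 export is the source of truth for unique conversions.
def true_cpa(total_spend: float, crm_conversion_ids: set,
             platform_claimed_ids: dict) -> dict:
    claimed = sum(len(ids) for ids in platform_claimed_ids.values())
    actual = len(crm_conversion_ids)  # each real conversion counted once
    return {
        "claimed": claimed,
        "actual": actual,
        "overcount": claimed - actual,
        "true_cpa": total_spend / actual if actual else None,
    }
```

If Google claims conversions {A, B, C} and Meta claims {B, C, D}, the platforms collectively report six conversions against four real ones, and true CPA on £400 of spend is £100, not the £67 the platform numbers imply.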
3. Single Source of Truth
The reports need to come from one system, assembled automatically, and presented in a format where the client can see cross-platform performance without needing to understand the underlying complexity. This means either a BI tool (Looker Studio, Tableau), a custom dashboard, or an AI agent that pulls and normalises the data on demand.
The Manual Approach (And Why It Breaks)
Most agencies start with Google Sheets or Looker Studio. It works initially. You set up data connectors, build some formulas, create a decent-looking dashboard. Then reality intrudes:
Connector maintenance. Supermetrics, Funnel.io, or whatever connector you’re using will break. API changes, auth token expiration, rate limits. Someone needs to babysit these connections, and that someone is usually the media buyer who should be optimising campaigns instead.
Schema drift. Platforms add and deprecate metrics regularly. Meta renamed half its API fields when it rebranded from Facebook. Google Ads has changed its reporting API three times in four years. Every change means updating your connectors, your transformations, and your dashboards.
Client customisation. Client A wants ROAS front and centre. Client B cares about lead volume. Client C wants to see frequency and reach. You end up maintaining a separate dashboard per client, each with its own quirks and breakages.
Historical data. When a connector breaks and you fix it two weeks later, you’ve lost two weeks of data. Backfilling is tedious, sometimes impossible, and never quite right.
The manual approach works for one client on two platforms. It collapses somewhere around three clients on three-plus platforms. That’s the point where you either hire a dedicated reporting person, invest in proper infrastructure, or accept that your reports are perpetually slightly wrong.
Building It Properly
If you’re going to invest in unified reporting, here’s the architecture that scales:
Data Layer
Pull raw data from each platform’s API daily (or more frequently for high-spend accounts). Store it in a warehouse — BigQuery is the obvious choice if you’re already in the Google ecosystem, but Snowflake or even PostgreSQL works for smaller volumes.
The key is storing raw platform data, not just the metrics you think you need today. Store impressions, clicks, conversions, spend, and every dimension you can get (campaign, ad group, ad, device, geo, placement). You’ll want dimensions you didn’t think of six months from now.
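A sketch of what that daily pull can look like. `fetch_report` stands in for the real API client (Google Ads API, Meta Marketing API, and so on), which isn’t shown here; the sketch writes JSON to local storage, but the same shape applies to loading a warehouse table:

```python
# Sketch of a daily raw-data pull. fetch_report is a placeholder for
# the platform's API client and is assumed to return a list of dicts
# exactly as the platform sends them.
import datetime
import json
import pathlib

def pull_daily(platform: str, fetch_report, date: datetime.date,
               out_dir: str = "raw") -> pathlib.Path:
    """Store the complete raw report, partitioned by platform and date."""
    rows = fetch_report(date)
    path = pathlib.Path(out_dir) / platform / f"{date.isoformat()}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    # Keep every field the API returns, not just today's metrics; the
    # dimensions you didn't think of are the ones you'll want later.
    path.write_text(json.dumps(rows))
    return path
```

Partitioning by platform and date makes backfills trivial: re-run the pull for the missing dates and nothing else is touched.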
Transformation Layer
This is where normalisation happens. Map platform-specific field names to your unified schema. Apply conversion window adjustments. Deduplicate cross-platform conversions using your server-side source of truth.
dbt is the standard tool here, but honestly, for most agency setups, a well-structured set of SQL views gets you 80% of the way there.
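The window adjustment is the step people usually skip, so here it is in isolation. This sketch assumes you have conversion-level rows with click and conversion timestamps; most platform APIs only expose aggregates, so in practice this depends on server-side tracking or conversion-level exports:

```python
# Sketch of a conversion-window adjustment. Assumes conversion-level
# data with click and conversion timestamps, which typically comes from
# server-side tracking rather than the platforms' aggregate reports.
import datetime

def within_window(click_time: datetime.datetime,
                  conversion_time: datetime.datetime,
                  window_days: int = 7) -> bool:
    """Count a conversion only if it lands inside the common window."""
    lag = conversion_time - click_time
    return datetime.timedelta(0) <= lag <= datetime.timedelta(days=window_days)
```

Filter every platform’s conversions through the same predicate and the 30-day-versus-7-day comparison problem disappears.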
Presentation Layer
Looker Studio for Google-native shops. Tableau or Power BI for enterprise clients. A custom dashboard if you’re building a product around it.
The presentation layer should answer exactly three questions for each client:
- How much did we spend, and what did we get for it?
- Which platform is performing best, and should we shift budget?
- What changed since last week, and why?
If your dashboard can’t answer those three questions in under 30 seconds, it’s too complex.
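The “what changed since last week” question is also the easiest to automate. A sketch, assuming both arguments are dicts of already-normalised weekly totals:

```python
# Sketch of week-over-week deltas over unified metrics. Assumes the
# inputs are already-normalised weekly totals for one client.
def week_over_week(this_week: dict, last_week: dict) -> dict:
    """Percentage change per metric; None when last week had no data."""
    return {
        metric: ((value - last_week.get(metric, 0)) / last_week[metric] * 100
                 if last_week.get(metric) else None)
        for metric, value in this_week.items()
    }
```

Spend up 10% while conversions fall 20% is exactly the kind of divergence that should surface automatically rather than wait for someone to notice it in a spreadsheet.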
Where AI Fits In
The AI agent approach — where Alethia sits — is fundamentally different from the dashboard approach. Instead of building and maintaining a static dashboard, you ask a question in plain English and get an answer assembled from live platform data.
“What’s Client X’s cross-platform ROAS this month?” The agent pulls data from all connected platforms, normalises metrics, deduplicates where possible, and gives you a number with context. No connector maintenance, no schema drift, no dashboard that’s showing last week’s data because a token expired.
This doesn’t replace proper data warehousing for clients who need historical analysis and custom reporting. But for the daily operational question of “how are things going and what should I change?” — it’s faster and more reliable than any dashboard.
The practical advantage for agencies is time-to-value. A dashboard takes days or weeks to set up per client. An AI agent connection takes minutes: OAuth into each platform, and you’re pulling unified data immediately.
Practical Recommendations
If you’re building cross-platform reporting today, here’s what I’d actually recommend:
For 1-3 clients: A well-structured Google Sheet with Supermetrics or manual API pulls. It’s ugly but it works and the setup cost is near zero.
For 3-10 clients: Invest in a proper data pipeline. BigQuery + dbt + Looker Studio, or an equivalent stack. The upfront cost is 2-3 days of setup, but it saves hours per week once running.
For 10+ clients or scaling fast: You need automation. Either a dedicated reporting tool (Funnel.io, Whatagraph), an AI-powered platform that handles normalisation natively, or a custom build. The manual approach is no longer viable at this scale — you’ll spend more time maintaining reports than optimising campaigns.
Regardless of scale, the principle is the same: define your unified metrics once, automate the data pull, and spend your time on the analysis rather than the assembly. The agencies that figure this out first are the ones that can scale to 20 clients without proportionally scaling headcount. The ones that don’t are stuck in spreadsheet purgatory, hiring junior analysts to do work that a machine should handle.