Feb 18, 2026

How to measure TV & CTV ads without guesswork

Pranav Piyush, Co-founder, CEO

You might be…

…preparing to launch TV or CTV for the first time, or returning to it after years focused on digital. The creative is solid, the plan is sound, and expectations are high.

You launch a national $45K ad run. Six weeks later, the question comes back from leadership:

“Did TV actually work?”

For many marketing teams, the answer is murky. When conversions don’t show up clearly, they’re forced into uncomfortable decisions. They either pull their ads altogether, boost the frequency in hopes of driving more sales, or lean on household-level attribution that feels precise but still doesn’t deliver clarity.

None of these methods get teams closer to understanding TV’s true, causal impact.

For a $50M+ global telehealth platform that Paramark worked with, the answer depended entirely on how the results were measured. Early tests showed clear lift, while a later test with more spend, a new channel and a new SKU did not.

Without incrementality, those outcomes might have been blended together and treated as a single conclusion about TV. 

With incrementality, the team gained the confidence to scale what worked and the discipline to stop what didn’t.

Why TV & CTV are often misread, even with “better” CTV data

TV is one of the hardest channels to evaluate using modern attribution.

People do not click TV ads. They watch them, then search later, buy on Amazon, convert weeks after exposure, or purchase offline.

As a result, TV often looks invisible or inefficient on click-based dashboards.

It’s important to note that CTV data hasn't solved this problem either.

Many teams now rely on household IP or device matching to “measure” TV. If a conversion comes from a household exposed to a TV ad, TV gets the credit. If it doesn’t, the next digital touch does.

This feels accurate. But it still doesn’t answer the real question: What was the causal impact of your TV spend?

Exposure and conversion happening in the same household does not prove the TV ad caused the outcome. Proving cause and effect requires a controlled incrementality test.

Measuring TV with attribution leads teams into one of two traps. Some assume TV is working simply because it’s TV. Others pull spend because nothing shows up in attribution.

Neither approach holds up when budgeting and planning time arrives.

The 3 most common myths about TV measurement

Three beliefs consistently push teams off course.

Myth #1: TV should drive clicks or immediate traffic.

TV is primarily an indirect channel. Its impact shows up as branded search, downstream conversions, retail velocity or trial starts.

Myth #2: TV is too top-funnel to measure.

This is only true if you rely on touch-based attribution. Marketing mix modeling and geo-based experiments were built to measure offline channels like TV.

Myth #3: More frequency equals more conversions.

In practice, TV ads almost always show diminishing returns past a certain point. Additional gross ratings points (GRPs) usually contribute very little incremental lift.
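One common way to reason about this pattern is a saturation curve: lift climbs quickly at low GRP levels, then flattens. The sketch below uses a Hill-style response function with illustrative parameters (the ceiling, half-saturation point, and shape are assumptions for demonstration, not fitted to any real campaign data).

```python
# Sketch: diminishing returns on TV reach/frequency, modeled with a
# Hill-style saturation curve. All parameters are illustrative.

def saturated_response(grps: float, max_lift: float = 10.0,
                       half_sat: float = 300.0, shape: float = 1.5) -> float:
    """Incremental lift (%) as a function of GRPs delivered.

    max_lift: ceiling on incremental lift (%)
    half_sat: GRPs at which half the ceiling is reached
    shape:    steepness of the curve
    """
    return max_lift * grps**shape / (half_sat**shape + grps**shape)

if __name__ == "__main__":
    for grps in (100, 300, 600, 1200):
        lift = saturated_response(grps)
        # How much lift the *next* 100 GRPs would buy at this level
        marginal = saturated_response(grps + 100) - lift
        print(f"{grps:>5} GRPs -> {lift:4.1f}% lift, "
              f"next 100 GRPs add {marginal:4.2f} pts")
```

Running this shows the core of the myth: the same 100 incremental GRPs buy far less lift at high delivery levels than at low ones, which is why piling on frequency rarely rescues a flat campaign.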

A real example: When TV worked, and when it didn’t

A $50M+ global telehealth platform worked with Paramark to run three TV tests in one year.

Test #1 (January/February 2025)

The brand first ran geo-based TV tests across three DMAs. Spend was in the tens of thousands of dollars. Two SKUs were tested, and the team tracked both upper-funnel signals, such as ‘Add to cart,’ and completed purchases.

The result was a 5–10% lift in core business metrics during the test window. It gave the team confidence to keep TV evergreen for the same SKUs and audiences.

Test #2 (August 2025)

Later in the year, the brand ran another test on a different TV channel and with a new SKU. This time, spend was $100K+ over roughly one month.

This test showed no measurable lift.

Rather than treating the test as a failure, they treated it as feedback. They learned which SKUs responded to TV, which channels did not deliver and where audience fit likely wasn’t right.

TV was neither universally good nor universally bad. But its success was specific.

What the data revealed

Here’s an example of a simplified TV readout from the telehealth brand:

| Test     | Spend | Channel Type            | Metric Tracked | Incremental Lift |
|----------|-------|-------------------------|----------------|------------------|
| Jan Test | $45K  | Linear TV               | Add to Cart    | +8%              |
| Jan Test | $45K  | Linear TV               | Purchases      | +6%              |
| Aug Test | $120K | Linear TV (New channel) | Purchases      | 0%               |

Ultimately, the team unlocked clarity. They gained confidence to scale TV where they had verified lift, plus the discipline to pause where it did not. It also sparked ideas about what to test next.

They’re a great example of how TV becomes a controllable growth lever rather than a black box.

How leading teams approach TV measurement

For experienced teams, the goal is not perfect attribution. It is confident decision-making.

A practical approach looks like this:

  1. Measure incremental outcomes rather than clicks

  2. Use geo experiments or short pulse tests to create clean comparisons

  3. Track multiple metrics, not just sales, including add to cart, trials, or signups

  4. Reinforce learnings over time with marketing mix modeling as data accrues

Even a modest 4–6 week test can provide meaningful insight or direction.
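To make step 2 concrete, a geo test readout boils down to comparing outcomes in exposed DMAs against matched holdout DMAs. The sketch below uses made-up daily conversion counts and a rough Welch t-statistic as a significance check; real geo experiments use more careful matching and inference, so treat this as a minimal illustration only.

```python
# Sketch: reading out a geo incrementality test.
# Daily conversion counts below are illustrative placeholders.
from math import sqrt
from statistics import mean, stdev

test_geo    = [130, 142, 128, 151, 139, 147, 135]  # DMAs with TV on
control_geo = [121, 125, 118, 130, 124, 127, 120]  # matched holdout DMAs

# Incremental lift: relative difference in mean daily conversions
lift = mean(test_geo) / mean(control_geo) - 1

# Welch t-statistic as a rough check that the lift isn't noise
n1, n2 = len(test_geo), len(control_geo)
se = sqrt(stdev(test_geo)**2 / n1 + stdev(control_geo)**2 / n2)
t_stat = (mean(test_geo) - mean(control_geo)) / se

print(f"Incremental lift: {lift:+.1%}, t ≈ {t_stat:.2f}")
```

The key design choice is that the control geos never see the ads, so any sustained gap between the two groups can be read as causal lift rather than correlation, which is exactly what click-based attribution cannot provide for TV.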

One rule of thumb before you launch

TV is its own discipline.

Reusing Meta, TikTok or Instagram video creative rarely works at scale. Creative is the primary driver of success in both linear TV and CTV.

The teams that perform best invest time in understanding their audience and matching ad length to intent. 30–60 second ads are better for education and awareness. 15–30 second ads tend to work better when awareness already exists and action is the goal.

Importantly, strong TV results also depend on working with media buyers who deeply understand TV inventory, pricing dynamics and placement strategy.

The takeaway

TV and CTV reward leaders who are willing to learn with intention.

When TV is measured properly, teams do not just learn whether it worked. They learn where it worked, why it worked and what to do next.

That clarity is what allows marketing leaders to move from debating TV to confidently leading growth with it.
