You don't have a data problem - you have a timing problem :)

Josh Grant, VP of Growth, Webflow


The culture that taught me to question everything

At Affirm, incrementality was religion.

You didn't ship something because it sounded smart in a meeting. You shipped it because you could prove it changed behavior. Every campaign wanted a holdout. Every channel wanted a counterfactual. If you couldn't measure it, it didn't count.

Operating in that culture long enough rewires how you think. You develop a deep skepticism toward anything that only works inside an attribution model, because you've seen up close how badly the numbers can lie when they're not tested properly. One question starts surfacing whenever a new initiative appears:

If we stopped doing this tomorrow, what would actually change?

A surprising amount of "performance" doesn't survive that question.

What the data revealed that nobody was looking for

The most useful lesson of my career came from a channel almost everyone believed in: lifecycle. Email, push notifications, reminder sequences. The programs every growth team points to when they need to show consistent revenue impact.

On paper, the numbers looked strong. Open rates healthy. Click rates healthy. Revenue attributed to the channel looked significant.

The problem was that word. Attributed.

So we tested it properly. Held out a segment, stopped touching them, and watched. The harder part wasn't the test. It was resisting the instinct to reframe the results when they didn't say what we expected.
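The mechanics of that comparison are simple. A minimal sketch, with made-up counts standing in for real data, of how a holdout turns "attributed" revenue into a measured lift:

```python
# Hypothetical holdout analysis. The function and all numbers below are
# illustrative, not Affirm's actual methodology or figures.
from math import sqrt

def incremental_lift(treated_conv, treated_n, holdout_conv, holdout_n):
    """Compare conversion rates between a messaged group and an untouched holdout.

    Returns (absolute lift, two-proportion z-score). The holdout rate is the
    counterfactual that an attribution model never sees.
    """
    p_t = treated_conv / treated_n
    p_h = holdout_conv / holdout_n
    p_pool = (treated_conv + holdout_conv) / (treated_n + holdout_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / treated_n + 1 / holdout_n))
    return p_t - p_h, (p_t - p_h) / se

# Attribution might credit the channel with the messaged group's full 12%
# repeat-purchase rate. But if the untouched holdout converts at 10%, the
# channel is only driving the 2-point difference.
lift, z = incremental_lift(1200, 10_000, 1000, 10_000)
```

The z-score answers whether the gap is real; the gap itself answers how much of the "attributed" number survives the counterfactual.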

We weren't driving nearly as many repeat purchases as the attribution model suggested. A meaningful share of those customers were already going to transact. The lifecycle channel was taking credit for behavior that was already in motion.

The problem wasn't bad tracking. It was that the customers most likely to receive and engage with lifecycle messages were already the most likely to convert. The channel looked powerful because it was fishing in the most crowded part of the pond.
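A toy simulation makes that mechanism concrete. Every number here is an invented assumption, but the shape of the result holds whenever intent drives both message engagement and purchase: last-touch attribution credits the channel for conversions that were already going to happen.

```python
# Toy model of attribution inflation from selection bias. High-intent
# customers are both more likely to be messaged and more likely to buy
# anyway; all parameters are illustrative, not real figures.
import random

random.seed(0)
attributed = incremental = 0

for _ in range(100_000):
    intent = random.random()             # pre-existing purchase intent
    messaged = random.random() < intent  # engaged customers get targeted more
    u = random.random()
    would_buy_anyway = u < intent * 0.3  # conversion with no message at all
    converted = u < intent * 0.3 + (0.02 if messaged else 0.0)
    if messaged and converted:
        attributed += 1                  # last-touch credits the channel
        if not would_buy_anyway:
            incremental += 1             # the channel's true causal effect

# attributed comes out several times larger than incremental: the channel
# looks powerful because it fishes in the most crowded part of the pond.
```

Even with a real 2-point causal lift built in, the attributed total dwarfs the incremental one, which is exactly the gap the holdout exposed.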

But something else showed up in the data. Something we weren't looking for.

Customers exposed to the lifecycle experience were significantly more likely to try a new Affirm product. They moved into higher loan amounts. They engaged with merchant categories they had never touched before. The channel wasn't converting the already-converted.

It was expanding the relationship.

That single realization changed everything. The goals, the sequencing, the creative, the success metrics. The budget didn't shrink. It just moved toward what was actually working.

This is what I mean when I say data doesn't always tell you what you want to hear. But if you're willing to sit with it long enough, it often reveals something far more valuable than what you were originally looking for.

That experience made me rigorous. Skeptical of gut feel dressed up as strategy. Hard to fool with a good-looking dashboard.

It also made me slower than I should have been.

The environment that taught me to trust the signal

Then I joined Webflow, and everything shifted.

Nobody was waiting for the perfect proof point. There was an appetite for conviction I hadn't encountered before. Bets were made because the direction felt right, not because the model blessed it. Speed mattered. Boldness was genuinely rewarded.

It made me uncomfortable at first. I kept asking for holdouts that didn't exist. I wanted to run the test before committing. I was searching for certainty in an environment that had learned to move on signal.

Then we made a call that changed how I think about leadership entirely.

The insight was simple, but we saw it early. The way buyers discover products was beginning to shift underneath everyone's feet. AI was changing how search behaved. The old model of ranking for keywords, capturing clicks, and winning was giving way to something different. Buyers were getting synthesized answers. Cited sources. Recommendations embedded directly inside AI responses. Discovery was becoming less about who ranked highest and more about who got included in the answer at all.

We didn't coin a trend or plant a flag. What we did was start asking a harder question: if this is how buyers are going to find things, what does that mean for the tools they use to build and manage their web presence?

The answer pointed somewhere most people weren't looking yet. AEO wasn't just changing marketing strategy. It was changing the requirements for the CMS and website infrastructure underneath it. The content had to be structured differently. The sites had to be built differently. The way teams published, updated, and organized information had to change to be legible not just to humans browsing pages, but to AI systems synthesizing answers.

That was the bet. Not that we had invented a new category, but that we could see where the infrastructure requirements were heading before the market fully caught up.

We had no study. No holdout group could model a counterfactual future. What we had were signals stacking up from multiple directions and a clear-eyed sense that the traditional playbook wasn't going to be enough on its own.

There was no clean data to greenlight the decision. At some point it stopped being a measurement problem and became a judgment call.

So we moved. Not because the data said go, but because the signals were strong, the timing felt right, and we understood the cost of waiting.

That taught me something no holdout group ever could.

In fast-moving markets, waiting for proof becomes its own kind of risk. The window doesn't stay open. By the time the evidence is overwhelming and everyone agrees, the advantage is already gone.

The question most leaders are asking wrong

Leadership conversations tend to frame all of this as a personality question. Are you data-driven or instinct-driven? Do you move fast or do you move carefully?

I've come to think those are the wrong questions entirely.

The real skill isn't having a style. It's recognizing which type of problem is actually in front of you.

Some problems are optimization problems. The system exists and you're trying to improve it. These deserve rigorous testing, because the risk of misreading performance is real, it compounds quietly, and fixing it later costs more than getting it right the first time.

Other problems are orientation problems. The question isn't how to improve the system. The question is whether the system itself still makes sense. These require judgment, because no amount of testing will tell you whether you're optimizing something that's already becoming irrelevant.

Confusing the two is where I've seen the most talented teams go wrong. The over-testers build immaculate dashboards while the strategy slowly fossilizes. The over-believers move fast and repeat the same mistakes with increasing confidence. Neither approach compounds. Both feel productive right up until they don't.

When to run the test

- When the channel is mature and the real question is whether it's actually working.
- When attribution error could be quietly distorting how you spend, hire, and plan.
- When you're refining something that already exists, like messaging, sequencing, or conversion mechanics, and incremental improvement is the goal.
- When the market is stable enough that waiting a few weeks for a cleaner signal won't cost you anything meaningful.

When to make the bet

- When the ground is shifting and historical data has become an anchor instead of a guide.
- When the cost of being late is higher than the cost of being wrong.
- When signals are stacking from multiple directions and the pattern is hard to ignore, even though no single data point is conclusive.
- When moving will teach you something faster than any model ever could.

The only thing that actually compounds

Two environments shaped how I think about all of this.

One taught me to respect signal. To pressure-test assumptions. To ask what's actually incremental and be willing to hear an uncomfortable answer.

The other taught me to respect timing. To recognize when the window is open and act before the moment passes.

I needed both lessons. I didn't know that when I was learning either one.

Instinct without measurement eventually becomes self-flattery.

Measurement without courage eventually becomes maintenance.

The leaders who compound aren't the ones who always test or always bet. They're the ones who know which moment they're standing in.

Know which moment you're in.
