The Brandformance Podcast

How a former neuroscientist built a smarter way to think about email targeting ft. Daniel Brady from Orita

Or listen where you get your podcasts

Episode Highlights

Behind the expert

Daniel “DB” Brady, co-founder at Orita (and an actual neuroscientist), spent years studying the brain before switching into machine learning. Now he’s building tools that help brands get more out of their first-party channels like email, SMS, and direct mail.

In this conversation, DB breaks down why experimentation is still rare in lifecycle marketing, how to make tests trustworthy instead of black-boxy, and why the biggest unlock is knowing who’s ready to listen before you decide what to say.

The gist

  • Most brands have the tools for testing, but running tests well is still messy work.

  • “Who’s ready to listen” matters more than “what do we want to say.”

  • Email isn’t “cheap” anymore. It has a gatekeeper and you can wreck deliverability fast.

  • Personalization helps most in the middle of your list, not at the top or bottom.

DB’s view: Stop guessing and start with who’s ready to listen

DB’s path into marketing wasn’t planned. He got pulled from neuroscience into machine learning, drawn by how fast tech moves. He described the shift in a way that’ll make any former academic nod: he made a discovery early in his PhD, then spent years repeating work to support it. In tech, you ship, learn, and change the business quickly.

That mindset shows up in how he talks about lifecycle marketing. Brands are good at writing messages. They’re less good at asking a basic question first: who’s ready to hear it. DB framed that as the missing piece. If no one’s going to listen, it doesn’t matter how polished the email is.

Orita focuses on first-party channels once someone’s on your list (customer or prospect). The goal is not only “who should I message more,” but also “who should I message less,” so you don’t burn trust, tank deliverability, or annoy your way into getting ignored.

Why testing is still rare even when the tools exist

Pranav asked the obvious question: Klaviyo, Braze, Iterable, and others have testing features. Some even have holdouts baked in. Why don’t most marketers use them?

DB’s answer: even when the feature exists, it’s still hard to implement across the whole program. Testing one campaign is doable. Testing your whole lifecycle program (flows plus campaigns, with changing segments and content) gets complicated fast. You need discipline, math, and a way to keep track of what’s running and what the results mean.

That’s the gap Orita is trying to cover. They keep the marketer experience simple while doing the “under the hood” work: holdouts, test frameworks, and the stats that make results usable.
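
To make the holdout piece concrete, here’s a minimal sketch of one common way to run a program-level holdout (not Orita’s implementation; the salt, percentage, and function names are illustrative): hash a stable identifier so every flow and campaign puts the same person in the same group, with no extra bookkeeping.

```python
import hashlib

def assign_group(email: str, holdout_pct: float = 0.10,
                 salt: str = "lifecycle-holdout-v1") -> str:
    """Deterministically bucket a profile into 'holdout' or 'treatment'.

    Hashing a stable identifier (plus a per-experiment salt) means the same
    person lands in the same group across every flow and campaign, with no
    extra state to store in the sending platform.
    """
    digest = hashlib.sha256(f"{salt}:{email.strip().lower()}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash onto [0, 1]
    return "holdout" if bucket < holdout_pct else "treatment"

# Suppress sends to the holdout group in every campaign and flow
profiles = ["alex@example.com", "sam@example.com", "jordan@example.com"]
to_send = [p for p in profiles if assign_group(p) == "treatment"]
print(to_send)
```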

Trust is built with examples first, math second

Pranav pushed on a real problem: ML can feel like a black box. DB’s answer was practical.

First, align on goals and timing. He gave a simple example: you don’t do incrementality testing on Black Friday. You don’t hold out 10% of your list on the biggest day of the year just to learn that Black Friday emails work. 

Second, show your work. DB said they’ve done everything from exposing who’s in the holdout groups, to packaging up the underlying data, to sharing test definitions and even code snippets. Most people won’t run the code, but being open builds trust.

Third, start with human-level examples. DB said the stats should come last. You show a marketer a specific profile and explain why the model thinks this person is worth contacting now (or not). Once the story makes sense at the person level, the lift number feels believable.
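
As an illustration of that person-first framing, here’s a hypothetical, rule-of-thumb version of what a per-profile explanation could look like (the fields and thresholds are made up, and a real model would learn them rather than hard-code them): a few engagement signals turned into a plain-English reason to contact or skip someone.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    email: str
    days_since_last_purchase: int
    opens_last_30d: int

def explain(p: Profile) -> str:
    """Return a human-readable reason to contact (or skip) a profile.

    Thresholds are illustrative only; the point is the person-level story,
    not the specific cutoffs.
    """
    if p.opens_last_30d == 0 and p.days_since_last_purchase > 180:
        return f"Skip {p.email}: no opens in 30 days and no purchase in 6 months."
    if p.days_since_last_purchase <= 30:
        return f"Contact {p.email}: purchased in the last month, likely receptive."
    if p.opens_last_30d >= 3:
        return f"Contact {p.email}: opened {p.opens_last_30d} emails this month."
    return f"Hold {p.email}: lukewarm engagement, wait for a stronger reason to send."

print(explain(Profile("alex@example.com", days_since_last_purchase=12, opens_last_30d=5)))
```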

A real example: Send 40% fewer emails and get more clicks

One of the early tests that kickstarted Orita was blunt: their model suggested a brand could email about 40% fewer people and make the same money. The brand tried it for a month, and the A/B test matched what Orita predicted.

Then something surprising happened: the brand got more unique clicks than the month before, even though they emailed fewer people. DB said they had stumbled into a deliverability win. By emailing fewer people (and likely cutting out low-engagement sends), inbox placement improved. More of the remaining list actually saw the emails.

The lesson isn’t “always email less.” It’s that email has a gatekeeper. You can win by being more selective, even if it feels wrong at first.

Quote snacks

  • “It doesn't matter what you say if no one's going to listen.”

  • “It's not the time for incrementality testing on Black Friday.”

  • “Marketing is still such a critical thing.”

  • “Our brains are very good at change detection.”

  • “You now don't have a guarantee that the people who would be the most responsive to your emails are going to see it.”

Why it matters

If you’re treating email like a free printing press, you’re going to pay for it later.

Inbox providers have gotten stricter. Brands that “blast everyone” because acquisition slowed down can hurt deliverability quickly. Then you’re stuck climbing back out while your best customers stop seeing your messages.

DB’s other point is the one most teams skip: testing isn’t only about finding wins. It’s also about finding where your effort doesn’t matter. If you can email fewer people and not lose revenue, you’ve just freed up room to be more thoughtful, less annoying, and more effective across the channels that cost real money.

Practical next steps

  • Start by agreeing on timing. Don’t set up holdouts on the days you can’t afford them (DB’s example was Black Friday).

  • Pick one clear problem to solve first: deliverability slipping, list fatigue, low clicks, or “we email too many people because we’re scared not to.”

  • Run one controlled test that reduces sends for a slice of your list. Keep a similar slice as a holdout. Look at results over a week, two weeks, and 30 days (see the sketch after this list).

  • Build trust with examples before dashboards. Show a few real profiles and explain why they should be contacted now (or not).

  • Treat email as a high-risk channel, not a cheap channel. The cost shows up later as spam placement and lost visibility.

  • If you’re going to personalize, focus effort on the middle of the list. Top people just need a touchpoint. Bottom people won’t engage no matter what you write.
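
For the controlled-test step above, here’s a minimal sketch of how the readout could work, assuming you can export revenue per profile for each group from your sending platform (the group names and toy numbers are illustrative): compare revenue per profile in the reduced-send group against the holdout with a bootstrap interval, then repeat the same comparison at 7, 14, and 30 days.

```python
import random
from statistics import mean

def lift_summary(treatment_revenue: list[float], holdout_revenue: list[float],
                 n_boot: int = 2000, seed: int = 7) -> dict:
    """Compare revenue per profile between two groups over the same window.

    Each list holds one number per profile (0.0 if they didn't buy),
    accumulated over the window being checked (7, 14, or 30 days).
    """
    rng = random.Random(seed)
    observed = mean(treatment_revenue) - mean(holdout_revenue)
    diffs = []
    for _ in range(n_boot):
        t = [rng.choice(treatment_revenue) for _ in treatment_revenue]
        h = [rng.choice(holdout_revenue) for _ in holdout_revenue]
        diffs.append(mean(t) - mean(h))
    diffs.sort()
    lo, hi = diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]
    return {"revenue_per_profile_diff": round(observed, 2),
            "ci95": (round(lo, 2), round(hi, 2))}

# Toy 30-day data: reduced-send slice vs. business-as-usual holdout slice
treatment = [0.0, 0.0, 42.0, 0.0, 18.5, 0.0, 0.0, 63.0]
holdout = [0.0, 27.0, 0.0, 0.0, 0.0, 55.0, 0.0, 0.0]
print(lift_summary(treatment, holdout))
```

If the interval comfortably contains zero at every window, you’ve cut sends without losing measurable revenue; an interval that sits clearly below zero suggests the cut went too deep.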
