
Beyond open rates: why optimizing for the wrong metric costs you sales

Justin T. Huang

Every email marketing dashboard puts open rate front and center. It updates fast, it moves predictably with subject-line changes, and it feels like a real measure of campaign performance. The trouble is that open rate is not the metric you actually care about. Revenue is, and the two are correlated only loosely.

The clickbait trap

Take two subject lines for the same promotional email.

Subject A: "You won't believe what we just launched."

Subject B: "New running shoes: $40 off this week."

A will almost certainly win on open rate. It exploits a curiosity gap, and readers are reliably unable to leave a curiosity gap unresolved. B states what is inside, which lets readers who are not in the market for shoes (or who do not care about $40 off) correctly decline to open.

Now ask which subject line sells more shoes. B, almost always. The people who opened A opened it because they were curious, not because they wanted shoes, and many of them feel mildly conned when the email turns out to be a promo, which is not a feeling that drives conversion. B selected for the people who were plausibly going to buy, which was the entire point of sending the email.

The open-rate metric rewarded A. The business wanted B. This is not a failure of the team. It is a misspecified objective function.

Engagement is not a proxy for value

The general form of the problem appears throughout digital marketing. Engagement metrics (opens, clicks, views, time-on-page) are easy to collect because the platform owns the instrumentation. Downstream outcomes (purchase, retention, customer lifetime value, complaint rate, unsubscribe) are harder to collect and often require joining data across systems.

Teams therefore optimize for what is easy to measure and hope the rest follows. It does not always follow, and quite often it actively diverges. A subject-line strategy that lifts opens by 4% and unsubscribes by 0.2% is almost certainly destroying value once you compound the effect across a year. Any honest measurement framework has to include the costs, not just the clicks.
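To make that concrete, here is a back-of-the-envelope sketch in Python. Every number in it (list size, revenue per incremental open, remaining subscriber value) is an illustrative assumption, not a benchmark, and the lifts are read as absolute percentage-point changes; the only point is that the loss side has to appear in the same arithmetic as the gain side.

```python
# Back-of-the-envelope: does a +4% open lift pay for a +0.2% unsubscribe lift?
# All numbers are illustrative assumptions; lifts are treated as absolute
# (percentage-point) changes per send.

list_size = 100_000
revenue_per_open = 0.05   # assumed expected revenue per incremental open
subscriber_ltv = 2.00     # assumed remaining lifetime value of one subscriber

open_lift = 0.04          # +4 points of open rate
unsub_lift = 0.002        # +0.2 points of unsubscribe rate

gain = list_size * open_lift * revenue_per_open
loss = list_size * unsub_lift * subscriber_ltv

print(f"gain from extra opens:      ${gain:,.0f}")
print(f"loss from lost subscribers: ${loss:,.0f}")
print(f"net per campaign:           ${gain - loss:,.0f}")
```

Under these made-up numbers the campaign that "won" on opens loses money, and the loss recurs on every future send because the unsubscribed readers are gone for good.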

What measuring the right thing requires

The fix is not technically exotic, but it does require a few commitments most teams have not made.

You have to define the business outcome before the campaign goes out, in concrete terms. Not "engagement." A specific, attributable downstream event: first purchase, repeat purchase, ninety-day retention, margin-weighted revenue. If you cannot say it in one sentence, you will not optimize for it, regardless of how sophisticated the rest of the stack is.
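As a rough illustration of what "say it in one sentence" can look like once it reaches code, here is a sketch of a margin-weighted revenue definition with a fixed attribution window. The DataFrame input and every column name are placeholders for whatever the order system actually exposes, not a real schema.

```python
import pandas as pd

def margin_weighted_revenue(orders: pd.DataFrame,
                            send_time: pd.Timestamp,
                            window_days: int = 90) -> float:
    """Margin-weighted revenue attributed within a fixed window after a send.

    `orders` is assumed to hold one row per order placed by a campaign
    recipient, with 'order_time', 'revenue', and 'margin_rate' columns.
    All names here are hypothetical stand-ins for the real order data.
    """
    in_window = orders["order_time"].between(
        send_time, send_time + pd.Timedelta(days=window_days)
    )
    attributed = orders.loc[in_window]
    return float((attributed["revenue"] * attributed["margin_rate"]).sum())
```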

You have to accept noisier and slower signals than the dashboard offers. Conversion data is noisier than open data because there is less of it and it arrives later. That is a feature of the world, not a flaw in the data pipeline. Experimentation designs that pool information across campaigns, use hierarchical models, and report probabilistic statements ("80% probability that B outperforms A on 30-day conversion") are the right response. Demanding frequentist significance on every campaign-level conversion comparison will leave you permanently underpowered, which produces the worst kind of result: a system that runs experiments and learns nothing.
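A minimal sketch of that probabilistic framing, for a single two-variant comparison rather than the full pooled hierarchical model: with a Beta posterior on each conversion rate, the probability that B beats A falls out of a few lines of simulation. The conversion counts below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 30-day conversion counts; real numbers come from the
# attribution pipeline and are far sparser than open counts.
conv_a, sent_a = 48, 10_000   # variant A
conv_b, sent_b = 63, 10_000   # variant B

# Beta(1, 1) prior on each rate; the posterior is Beta(1 + conversions,
# 1 + non-conversions). Draw from both and compare draw by draw.
draws_a = rng.beta(1 + conv_a, 1 + sent_a - conv_a, size=100_000)
draws_b = rng.beta(1 + conv_b, 1 + sent_b - conv_b, size=100_000)

p_b_beats_a = (draws_b > draws_a).mean()
print(f"P(B outperforms A on 30-day conversion) ~= {p_b_beats_a:.2f}")
```

The Beta posterior is just the convenient conjugate choice for a yes/no outcome; the pooling across campaigns that the hierarchical version adds is what keeps individual low-volume comparisons from swinging wildly.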

You also have to instrument for the long arc. Unsubscribe rate, complaint rate, and list fatigue are slow-moving negative externalities of aggressive engagement optimization. They do not show up in this week's dashboard. They show up six months later as a list that no longer responds, and at that point the loss is hard to attribute to the optimization that caused it.
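A toy simulation makes the timescale visible. The unsubscribe rates and send cadence below are assumptions chosen only to show how a small per-send difference compounds into a meaningfully smaller list by the time anyone looks.

```python
# Illustrative only: how a small per-send unsubscribe lift compounds.
subscribers = 100_000
baseline_unsub = 0.001     # assumed 0.1% unsubscribe per send at baseline
aggressive_unsub = 0.003   # assumed 0.3% per send under aggressive subject lines
sends = 26                 # assumed weekly cadence over six months

baseline_list = subscribers * (1 - baseline_unsub) ** sends
aggressive_list = subscribers * (1 - aggressive_unsub) ** sends

print(f"list after six months, baseline:   {baseline_list:,.0f}")
print(f"list after six months, aggressive: {aggressive_list:,.0f}")
print(f"extra subscribers lost:            {baseline_list - aggressive_list:,.0f}")
```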

The metric that pays

Optimizing for open rate is a different objective from optimizing for revenue. The two are correlated just enough to look reasonable on a dashboard and uncorrelated just enough to quietly burn money for years before anyone catches it. The work of a serious measurement program is to identify the outcome that actually matters to the business, build the instrumentation to estimate it with whatever noise and lag it comes with, and then let the optimization run against that target. Everything upstream (the clever subject line, the bandit, the segmentation) is in service of that estimation problem, not a substitute for it.