6 Comments

Thanks for walking through all of this! Really cool to see the progression in detail.

I'm curious how you realized that connectors were not only the best determinant of value, but also that willingness to pay differed among them. Was it based on intuition from talking to users and sales calls? Did you uncover it in a MaxDiff survey?

And when figuring out each of these different versions, did you launch each iteration of pricing as an A/B test?

Author · Jun 19 (edited Jun 20)

Thanks, Luciano!

Great questions.

Re: Connectors – a lot was learned qualitatively through direct conversations with leads. We could also see in usage data that customers with more than one connector were typically more engaged and better customers for us. On the willingness-to-pay front, a lot is predetermined by how much the customer is already paying for the tools they're connecting to. For example, Salesforce is usually used by larger companies and is expensive. We also incur higher costs to run specific connectors, which informs how much we charge for them too.

Re: A/B testing – I've personally never A/B tested pricing. In past roles, we either did a lookback analysis (aware of the caveats and trade-offs), i.e., comparing earlier cohorts on old pricing to newer cohorts on new pricing, or tested in specific geographies before committing to a global rollout, which has its own caveats.
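
Concretely, a lookback can be as simple as bucketing customers by whether they signed up before or after the pricing change and comparing cohort metrics. Here's a minimal sketch in Python/pandas, with a hypothetical customers table and cutover date (the file and column names are illustrative, not an actual schema):

```python
import pandas as pd

# Hypothetical input: one row per customer, with signup date and first-year ACV.
customers = pd.read_csv("customers.csv", parse_dates=["signup_date"])

# Illustrative cutover date for when the new pricing shipped.
PRICING_CHANGE = pd.Timestamp("2024-01-15")

# Bucket customers into old- vs. new-pricing cohorts by signup date.
customers["cohort"] = customers["signup_date"].map(
    lambda d: "new_pricing" if d >= PRICING_CHANGE else "old_pricing"
)

# Compare headline metrics across the two cohorts.
print(customers.groupby("cohort")["first_year_acv"].agg(["count", "mean", "median"]))
```

The caveat is exactly the one flagged above: the cohorts differ in more than just pricing (seasonality, market conditions, product changes shipped in between), so any difference is directional rather than causal.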

At Equals, we've simply changed pricing for all new customers with each new version, knowing that at some point we'll want to migrate customers on legacy plans to current pricing, ensuring they're treated fairly with grandfathering, etc.

The primary reason for never opting for an A/B test is the need to learn and act as quickly as possible. Split tests simply take longer, especially in B2B when you're at an earlier stage and your sample size is naturally smaller. Typically, you also need to wait at least a few months to see how cohorts on newer pricing age to understand the impact of your changes on expansion, contraction, and churn over time. For example, ACVs on new pricing might look great, but those customers might not expand as well and might churn faster.
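
That "wait to see how cohorts age" point is the standard cohort retention view. A rough sketch of how you might compute it, assuming a hypothetical mrr table with one row per customer per month (again, names are illustrative):

```python
import pandas as pd

# Hypothetical input: monthly recurring revenue per customer,
# along with the month the customer signed up.
mrr = pd.read_csv("mrr.csv", parse_dates=["signup_month", "month"])

# Age of each revenue row: months elapsed since the customer's signup month.
mrr["age"] = (
    (mrr["month"].dt.year - mrr["signup_month"].dt.year) * 12
    + (mrr["month"].dt.month - mrr["signup_month"].dt.month)
)

# Total revenue retained by each signup cohort at each age...
cohort_rev = mrr.pivot_table(
    index="signup_month", columns="age", values="mrr", aggfunc="sum"
)

# ...normalized by month-0 revenue. Values above 1.0 mean net expansion;
# values below 1.0 mean contraction/churn is outpacing expansion.
retention = cohort_rev.div(cohort_rev[0], axis=0)
print(retention.round(2))
```

Reading across a row shows how a single cohort expands or contracts over time; comparing rows before and after a pricing change is what takes those extra months.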

Hope that helps!

Thanks for taking the time to give such a detailed response. That is helpful! I really like the point about judging willingness to pay for connectors based on the price of the tools they connect to. Super smart and not something I had thought of.

Re: testing – At my company today we're more "prosumer" and so operate closer to a consumer model. And while we have decent sample sizes, we still typically have to wait months to fully understand the impact of our split tests on pricing. We also have the added complexity of a gradual user journey, where adoption takes time based on real-world behavior and constraints. So I 100% understand that challenge being greater in true B2B.

Really interesting perspective. Thanks again for taking the time to respond :)

Author

No worries. Glad to help.

I guess I should have mentioned that I have A/B tested pricing page design in the past, but only when the underlying pricing model for each variant was the same.

Great post, Matt – how did you decide on the shift from a self-serve startup focus to sales-led (SMB/mid-market focus)? Did you always have them running side-by-side, or did you start with a sales-first motion, with self-serve as an experiment that didn't work out or build enough conviction?

Author

We started gated when the product first launched. We then opened up self-serve and about 3 months ago went sales-only. Self-serve wasn’t an experiment so much as our product, positioning, and pricing were not optimized to make it a successful motion for us at this stage. We’re hopeful we can make it work in the future though.
