A tale of two studies

Conjoints and consequences series: Part 2

In part one of the Conjoints and consequences series, we explored the power of Conjoint analysis and its potential to transform product development. Today, we'll examine two specific studies, highlighting what worked well and where a more complete understanding of Conjoint could have led to more informed decisions.

The first study: Uncovering customer values

The first study, "Uncovering customer values," set out to identify the most important outcomes for customers, with the goal of setting a new direction for the business.

In the Conjoint exercise, participants were presented with various combinations of companies, prices, and outcomes and asked to choose their next lead generation provider.

The aim was to use these insights to redefine which outcomes to monetize and to establish a new long-term vision for the business, aligning it more closely with customer needs.

Initial indicators

Let's explore the early indicators from this study that played a crucial role in the organization's decision to move forward with the new outcome-based pricing model.

Real-world example: The first study

In analyzing the study, it's clear that brand played a pivotal role in influencing customer choices and the organization's decision to move forward.

It was about three times more important than outcome and price, which were valued similarly.

This strong brand preference suggested an opportunity for the company to experiment with innovative strategies. Leveraging their solid brand recognition, they could take calculated risks, confident that their reputation would buffer early-stage bumps in the road.

Statistical rigor pays dividends

The results inspired high confidence in the data, thanks to a rigorous design that captured real customer preferences rather than low-quality noise. The exciting projections sweetened the deal further.

Produce high-quality data

To improve data quality and confidence in your survey results, it's crucial to filter out responses that do not demonstrate thoughtful engagement. Applying criteria like the following ensures that only quality responses remain, leading to more reliable and meaningful results.

Speeders: remove those who complete the survey in less than one-third of the median completion time.

Straight-liners: remove those with a standard deviation below 0.5 across Likert-scale or matrix questions.

Attention checks: remove those who fail attention checks.

Open-ends: remove those with gibberish, random characters, or nonsensical answers.
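The four filters above are straightforward to apply programmatically. Here's a minimal sketch in Python with pandas, assuming hypothetical column names (`duration_sec`, `likert_*`, `passed_attention`, `open_end`) that you would adapt to your own survey export; the open-end screen shown is deliberately crude and is no substitute for a manual review pass.

```python
import pandas as pd

def filter_low_quality(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the four survey quality filters. Column names are
    illustrative assumptions; adjust to your survey platform's export."""
    # Speeders: completed in under one-third of the median duration
    median_time = df["duration_sec"].median()
    keep = df["duration_sec"] >= median_time / 3

    # Straight-liners: near-zero variance across Likert/matrix items
    likert_cols = [c for c in df.columns if c.startswith("likert_")]
    keep &= df[likert_cols].std(axis=1) >= 0.5

    # Attention checks: must have passed
    keep &= df["passed_attention"]

    # Open-ends: crude gibberish screen; require a few alphabetic words
    def looks_meaningful(text: str) -> bool:
        words = str(text).split()
        return sum(w.isalpha() for w in words) >= 3

    keep &= df["open_end"].apply(looks_meaningful)
    return df[keep]
```

Order matters less than coverage: run every filter on every respondent, then review how many each rule removed, since an unusually high removal rate on any one rule often signals a problem with the survey itself rather than the panel.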


Inspiring too much confidence

The results of the first study instilled high confidence in stakeholders and decision-makers alike.

The impressive statistical rigor ensured everyone took the findings seriously. However, this focus on “the math” led to other important factors being overlooked, especially when the exciting projections came back. The collective excitement somewhat skewed the objectivity of the overall findings, as many were eager to do right by their customers if it also promised the accompanying increase in market share.

Moreover, the strong influence of brand as a driver of choice created an abundance of confidence. This made it easy to fall back on their top ranking and further overlook other critical elements, which we’ll examine in part 3.

The second study: Understanding willingness to pay

Fast forward to the second study, where I was in the thick of it as a recent joiner. Having joined after data collection was complete, I took charge of the modeling, analysis, and reporting. This research was crucial for refining the pricing strategy and accounting for customer nuance as we rolled out the new pricing model.

By analyzing customers' preferences and behaviors, the study aimed to:

Determine the proportion of customers willing to pay for various product configurations

Understand price elasticity to develop an optimal pricing strategy

In theory, these guideposts would help the organization align the new pricing model with customer expectations and maximize market potential.


Unleashing the power of share of choice modeling

The study wasn't just about understanding preferences; it was also about predicting real-world behavior. We used share of choice, which, as you may remember, estimates market behavior by simulating customer choices as product concepts, prices, and the competitive landscape change. Here's how to get the most out of it, and where it can mislead you.

The do’s and don’ts of share of choice

What to do

Compare product performance in competitive scenarios: See how your products stack up against competitors for strategic planning and analysis.

Generate hypotheses for experiments: Use the data to identify promising areas for testing, validate assumptions, and refine offerings.

Understand price elasticity and optimal pricing: Analyze price sensitivity to determine the best pricing strategy, ensuring it aligns with customer expectations and maximizes revenue.

Generalize with care: Confidently generalize market preferences when verticals show similar trends, and identify unique opportunities by considering nuances in utility value.

What not to do

Extrapolate simulated market share to revenue or CVR: Simulated share shows potential preferences, not actual buying behavior. Converting it directly to revenue or using it as conversion rates (CVR) can lead to misleading projections and overly optimistic forecasts.

Stop at the first iteration of feature bundle experiments: One iteration isn’t enough for solid conclusions. Continuous testing refines hypotheses and validates assumptions. Stopping early can miss critical nuances.

Leapfrog ahead with Conjoint

Our second study showcased the transformative power of conjoint analysis and how choice simulators can remove barriers to getting ahead. A team member aptly compared it to getting a near-BMW experience from a tricked-out Toyota, a metaphor that perfectly captures our findings.

Work smarter, not harder

By leveraging the choice simulator, we simulated different scenarios and tested various bundling strategies.

This allowed us to see beyond how we defined our current capabilities and envision how we could deliver significant value by repurposing what we already had.

While we weren't ready to fully monetize every outcome down the funnel, the simulations showed us that bundling key features could drive substantial value right now. This would ideally help us to maximize our current capabilities to achieve impactful market results, even as we continued to build towards our long-term goals.

Sparking a frenzy

The simulated projections for willingness to pay were promising (read: hefty). Customers were excited about this higher-valued outcome, and they were willing to pay more for it. The results provided a clear roadmap for pricing strategies. Most of the team was all in, sparking a Conjoint craze: everyone was asking for Conjoint, regardless of whether it was the right fit (which we'll get to later).
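For readers wondering where willingness-to-pay figures come from, one common approach is to divide a feature's utility gain by the utility cost of a dollar, derived from the model's price part-worths. A sketch with purely illustrative numbers (and the usual caveat that real price utilities are often non-linear, so a single per-dollar rate is a simplification):

```python
def willingness_to_pay(utility_gain: float,
                       price_utility_drop: float,
                       price_step: float) -> float:
    """Convert a feature's utility gain into dollars.

    price_utility_drop: utility lost when price rises by `price_step`
    dollars, taken from the estimated price part-worths. Assumes an
    approximately linear price utility over the range of interest."""
    utility_per_dollar = price_utility_drop / price_step
    return utility_gain / utility_per_dollar

# Illustrative: a bundle adds 0.6 utility, and utility drops 0.9
# for every $30 of price increase, implying a WTP of roughly $20.
wtp = willingness_to_pay(0.6, 0.9, 30)
```

Numbers like these are a useful starting point for pricing conversations, but as the next section shows, they still need validation before anyone treats them as revenue forecasts.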

A leg up over conventional techniques

Since I was only weeks into the job, with no established relationships or demonstrated wins, it was easy for some to dismiss this unfamiliar and “weird” survey method. Despite the powerful results we were seeing, a few key stakeholders doubted the findings.

They dismissively referred to it as "conjoint shmonjoint market research hand wavey stuff."

Their skepticism led them to insist on launching over 80 live A/B experiments to understand the revenue and conversion implications of various switches to the new pricing model. If they had incorporated our findings, they could have gotten there with fewer, more intentional experiments.

In the end, we reached the same conclusions with Conjoint analysis, saving valuable time and resources. While A/B testing has its uses, it often requires extensive time, resources, and customer exposure to multiple changes, which can lead to customer fatigue and inconsistent results.

Conjoint offered a robust alternative:

Efficiency: Provides signal much faster and with fewer resources than traditional A/B tests.

Reduced risk: Mitigates the risks associated with frequent changes in live testing environments.

Holistic understanding: Delivers comprehensive understanding of customer preferences and behaviors, offering deeper insights than isolated A/B tests.

Next time: The “Conjoints and consequences” retro

The initial studies provided valuable signals but didn’t paint the full picture.

The organization charged forward without thoroughly testing their assumptions about the new outcome-based pricing model. They failed to confirm whether the proposed offerings aligned with customer needs and whether they could deliver the promised outcomes. These decisions led to significant challenges down the road, highlighting the importance of comprehensive research and validation in product development.

In the final post of the “Conjoints and consequences” series, we'll conduct a full retrospective with recommendations on how to integrate Conjoint at the right time, combine it with the right elements, and design it in the right way to leverage the full power of the technique.

‘Til next time, I’m Bianca

You can find the BriteNote here and watch the SpotLite here
