The retro & future focus

Conjoints and consequences series: Part 3

In part one of the Conjoints and Consequences series, we explored the power of conjoint analysis and its potential to transform product development. Last time, we examined two studies, highlighting what worked well and where a deeper understanding of conjoint methodology could have led to more informed decisions.

Today, we’re conducting a full retrospective: what we got right, where things went sideways, and, most importantly, how to do better next time.

TL;DR: Future-proofing your decisions with Conjoint studies

If you take nothing else away from this post, let it be this: Conjoint analysis is only as strong as its inputs, and even the best signals can be undermined by poor execution.

  1. Ground your experimentation in customer realities.

  2. Validate your projections before making expensive bets.

  3. Use follow-up A/B testing for refinement—not as a crutch for uncertainty.

A great conjoint doesn’t just help us predict—it prevents costly mistakes. But only if you trust the process. In these two studies, we saw how skipping key validation steps—whether by ignoring how customers define value or by charging ahead with unchecked projections—can turn a promising direction into an expensive lesson.

Both examples remind us that even the best data can lead us astray if we don’t pressure-test it against reality.

Study 1 learning: Speak like your customers do, or pay for it later

Situation: The team, brimming with confidence, believed they understood their product’s value and their customers well enough to skip qualitative research. Instead of validating assumptions, they designed the conjoint study from an internal perspective.

At its core, the study aimed to measure outcome-based pricing, shifting from a high-volume, transactional pricing structure (e.g., paying per engagement) to a model that charged based on delivering a successful outcome. The problem? The study failed to define "success" the way customers actually experienced and valued it.

The cost: Over time, a fundamental disconnect emerged. The business defined value as the quantity of outcomes delivered and optimized for more, while customers wanted “goodness of fit” from the start. This disconnect created runaway costs, mistrust in the model, and an erosion of confidence.

Reflect: Thorough qualitative research before conjoint analysis would’ve revealed customers’ actual priorities, and how they describe them in their own words. By failing to anchor the conjoint study in real customer language and priorities, we measured the wrong things—and set pricing expectations that clashed with what customers were willing to accept.

By aligning product value with customer priorities from the outset, we build strong market fit, enhance satisfaction, and avoid costly post-launch corrections.

Kiki with Curtis: A solid foundation for Conjoint design

Let’s talk about how to level up Study 1 with Curtis Combo, our resident conjoint expert.

In Study 1, we saw how bad inputs lead to bad outputs, resulting in misinformed decisions. A well-structured conjoint study isolates customer trade-offs, ensuring attributes and levels reflect real-world decision-making.

One common mistake? Using vague, abstract categories that don’t align with how customers actually evaluate choices. Instead of internal product tiers or marketing language, customers weigh clear, tangible trade-offs that impact their experience.

To get reliable results, attributes and levels must be specific, measurable, and relevant—otherwise, you risk misguided decisions, wasted resources, and product decisions that miss the mark.

Curtis Combo’s critical considerations for crafting a credible Conjoint

1. Start with qualitative research

Customers don’t think in feature lists; they weigh trade-offs based on what matters to them. Use qualitative research to uncover how they actually make decisions—not how we assume they do.

Before designing your conjoint, conduct in-depth interviews or observational research to understand:

  • How customers define success and value, in their own words.

  • Which trade-offs they actually weigh when choosing between options.

  • The language they naturally use to describe those choices.

2. Define precise, measurable levels and attributes using customer language

A conjoint study is only as good as the attributes and levels it tests—garbage in, garbage out. If respondents can’t clearly distinguish between options, the results won’t provide meaningful signal.

Conjoint design best practices:

  • Attributes: 4-7 (more than this increases cognitive load and reduces data quality).

  • Levels per Attribute: 2-4 (too many levels make it difficult for respondents to process trade-offs effectively).

Attributes are the key features that drive customer decisions—distinct aspects of the product that matter in real-world trade-offs.

Levels define the specific variations within each attribute, providing the comparison points that reveal customer preferences. Each level should be precise, measurable, and reflect how customers naturally evaluate choices.

If respondents have to decode jargon, the data gets noisy, leading to unreliable signal. Instead, use the language they actually use to make decisions—clear, familiar, and intuitive.

When crafting attributes and levels, avoid vague, ambiguous levels like Basic, Premium, Enterprise. These categories force respondents to guess what each level includes, leading to inconsistent interpretations and unreliable data.

Instead, define quantifiable, distinct levels that focus on a single construct—ensuring that customers evaluate one clear trade-off at a time rather than interpreting broad, bundled differences.
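To make these guardrails concrete, here’s a minimal sketch of a design spec with a sanity check for the 4–7 attribute and 2–4 level rules above. The attribute names and levels are hypothetical illustrations, not drawn from the original studies:

```python
# Sketch of a conjoint design spec. Attribute names and levels are
# hypothetical examples of quantifiable, customer-language levels
# (contrast with vague tiers like Basic / Premium / Enterprise).

DESIGN = {
    "Response time": ["Within 1 hour", "Within 4 hours", "Within 24 hours"],
    "Price per successful outcome": ["$25", "$50", "$100"],
    "Match quality guarantee": ["No guarantee", "Free re-match", "Full refund"],
    "Contract length": ["Month-to-month", "12 months"],
}

def validate_design(design):
    """Flag violations of the 4-7 attribute / 2-4 level guardrails."""
    errors = []
    if not 4 <= len(design) <= 7:
        errors.append(f"{len(design)} attributes (want 4-7)")
    for attr, levels in design.items():
        if not 2 <= len(levels) <= 4:
            errors.append(f"'{attr}' has {len(levels)} levels (want 2-4)")
    return errors

print(validate_design(DESIGN))  # [] means the design passes the guardrails
```

Notice that every level is a single, measurable construct a respondent can evaluate without guessing what it bundles.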

3. Validate assumptions early

A well-structured conjoint should feel intuitive—respondents should be able to evaluate trade-offs without hesitation. If participants struggle to understand the attributes and levels, your study is at risk of collecting noisy, unreliable data that leads to misinterpretation and poor decision-making. Testing before launch ensures that choices reflect real-world decision-making and that respondents engage meaningfully.

How to test & refine your Conjoint design

Cognitive testing: Conduct think-aloud interviews where respondents verbalize their thought process while completing the survey. If they ask clarifying questions or hesitate, the wording or levels need to be refined.

Highlight testing: Present attributes and levels in a paragraph format and ask respondents to highlight any words or phrases they find unclear. If patterns emerge in what’s flagged, those elements likely need refinement for better comprehension.
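Highlight-testing results are easy to tally: count how often each phrase gets flagged across respondents, and the most-flagged phrases become your rewording candidates. A minimal sketch, with hypothetical phrases and responses:

```python
# Tally highlight-testing flags across respondents. The phrases and
# responses below are hypothetical illustrations.
from collections import Counter

responses = [
    ["outcome-based pricing", "engagement"],
    ["outcome-based pricing"],
    ["outcome-based pricing", "successful outcome"],
]

flags = Counter(phrase for resp in responses for phrase in resp)
print(flags.most_common(1))  # [('outcome-based pricing', 3)]
```

A phrase flagged by most respondents is a strong signal that the attribute or level wording needs to move closer to customer language before the study launches.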

Bringing it all together: Lessons from Study 1

The first study revealed a critical misstep: assuming we understood customer preferences without validating them. By skipping deep qualitative research, the team designed a conjoint that reflected internal assumptions rather than real customer trade-offs. The result? A misalignment between what the business optimized for and what customers actually valued—leading to costly pivots, wasted resources, and a disconnect between product positioning and market expectations.

Had the study followed the principles outlined—grounding attributes in customer motivations, structuring precise and measurable levels, prioritizing customer language, and validating assumptions early—the team could have uncovered customer priorities upfront, saving time and effort spent course-correcting later.

This underscores a simple truth: conjoint is only as strong as the inputs it’s built on. Get those wrong, and the output will be just as flawed.

Study 2: Validate feasibility before scaling, and don’t let A/B testing replace informed decision-making

Situation: The conjoint study provided strong evidence that customers were willing to pay for a high-value outcome. The potential demand looked massive, and leadership saw an opportunity to scale quickly. However, the projections assumed perfect execution—without verifying if the business could consistently deliver the promised outcome at scale.

Instead of stress-testing these assumptions, the team took the revenue projections as a given and charged ahead, leading to a costly disconnect between expectation and reality.

The cost: Once the pricing model rolled out, cracks formed quickly. Customers expected one thing but received another, leading to pushback. Trust in the pricing model deteriorated, slowing adoption. The business assumed strong demand but failed to account for execution risks—leading to unexpected churn and lower-than-expected uptake.

This wasn’t a failure of conjoint—it was a failure to validate feasibility before treating projections as reality. And here’s where things took another turn: rather than re-evaluating the execution risks, the team opted to run 80+ live A/B tests to "find the right price" in-market.

Their intent made sense: they were looking for certainty. But the execution—constantly adjusting live pricing to see what stuck—was costly.

Constant price changes frustrated customers, eroding trust in the pricing model. Engineering resources were drained implementing, maintaining, and analyzing dozens of pricing experiments. The experiments produced noise, not signal.

A/B testing isn’t the problem—using it to “figure things out” instead of refining a validated hypothesis is.

Reflect: A great conjoint study simulates customer preference—but preference alone isn’t enough. Before scaling a pricing model, businesses must stress-test whether they can reliably deliver the outcome at scale. Strong demand on paper doesn’t guarantee success in practice. That’s where rigorous validation steps come in:

1. Bayesian modeling – Integrate historical data to temper projections with real-world uncertainty.

2. Operational feasibility testing – Run small-scale prototypes to identify potential pitfalls before a full rollout.

3. Scenario planning – Use simulations to explore best, worst, and likely outcomes.
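The scenario-planning step can be as simple as a Monte Carlo simulation: instead of one point forecast, you sample uncertain inputs and read off the worst, likely, and best cases. A minimal sketch with hypothetical adoption and delivery-rate ranges:

```python
# Scenario-planning sketch: simulate revenue under uncertain adoption
# and delivery rates. All ranges and prices are hypothetical.
import random

random.seed(42)  # deterministic for illustration

def simulate_revenue(n_trials=10_000, customers=1_000, price=50.0):
    outcomes = []
    for _ in range(n_trials):
        adoption = random.triangular(0.20, 0.60, 0.35)  # low, high, mode
        delivery = random.triangular(0.70, 0.99, 0.90)  # outcomes delivered
        outcomes.append(customers * adoption * delivery * price)
    outcomes.sort()
    return {
        "worst (p5)": outcomes[int(0.05 * n_trials)],
        "likely (p50)": outcomes[int(0.50 * n_trials)],
        "best (p95)": outcomes[int(0.95 * n_trials)],
    }

print(simulate_revenue())
```

The spread between the p5 and p95 scenarios is the conversation to have with leadership before treating a single projection as the plan.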

Conjoint helps us see where customers find value—but only when paired with operational validation can we ensure that value translates into sustainable business impact.

How to avoid Data Gremlins: Reality checks before you leap

Introducing the Data Gremlin: Your new Gobo Guide to avoiding data errors

You’ve met Curtis Combo, our resident expert in building ideal products with conjoint analysis—but today, it’s time to introduce the Data Gremlin. If Curtis Combo and the other Survey Monsters help us ask the right questions, the Data Gremlin makes sure we don’t fool ourselves with bad answers.

Born from a late-night coding error and an ill-timed joke, the Data Gremlin represents every misstep, assumption, and overconfident projection that sneaks into our data when we’re moving too fast.

They’re not just a troublemaker—they’re a survivor of bad data decisions, a sage of statistical facepalms, and a champion of rigorous, grounded analysis. They’ve made every mistake in the book and learned the hard way.

Just as Curtis Combo ensures we build conjoint studies that future-proof our decision-making, the Data Gremlin keeps us from misinterpreting our results, over-indexing on wishful thinking, or setting fire to our decisions with faulty assumptions.

Avoiding common conjoint pitfalls: The Data Gremlin’s checklist

How to spot Data Gremlins in your Conjoint

When it comes to conjoint analysis, the excitement of promising market simulations can often lead teams into a classic Data Gremlin trap: taking projections at face value without rigorous validation. Conjoint is a powerful tool for modeling customer preferences, but its accuracy depends on disciplined design, well-structured inputs, and, most importantly, stress-testing against real-world constraints.

Even the most well-designed conjoint study can go off the rails if not properly validated. This misstep often results in inflated expectations, flawed pricing strategies, and product-market mismatches that are expensive to correct.

Multi-source validation: Don’t let a single data stream guide your decisions

Conjoint results should never be analyzed in a vacuum. Instead, they must be triangulated with qualitative research, market intelligence, and behavioral data to ensure the alignment of customer intent with actual purchasing behavior.
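A simple way to operationalize triangulation is a divergence check: compare the conjoint estimate against baselines from your other sources and flag any that disagree beyond a tolerance. A minimal sketch, with hypothetical share numbers:

```python
# Triangulation sketch: flag conjoint estimates that diverge sharply
# from behavioral and market baselines. All numbers are hypothetical.

def triangulate(conjoint_share, behavioral_share, market_share, tol=0.15):
    """Return the names of baseline sources that disagree with the
    conjoint estimate by more than `tol`."""
    sources = {"behavioral": behavioral_share, "market": market_share}
    return [name for name, share in sources.items()
            if abs(conjoint_share - share) > tol]

# Conjoint says 60% share; behavioral data says 32%; market intel says 41%.
print(triangulate(0.60, 0.32, 0.41))  # ['behavioral', 'market']
```

A non-empty list isn’t a verdict—it’s a prompt to investigate the gap (with qualitative research, for instance) before the conjoint number drives a decision.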

Bayesian modeling: Turning projections into predictions

One way to do this is Bayesian modeling, which integrates historical data to bridge the ideal world of survey responses and the messy realities of market behavior.

The Data Gremlin loves when teams get overconfident with a single data stream—it's a sure way to set the stage for a strategic facepalm. The solution? Always validate conjoint insights through multiple lenses to ensure you’re not building a strategy on wishful thinking.

Imagine you’re preparing to launch a new feature, and your conjoint study shows that 60% of customers would choose this premium option. That’s exciting—but should you believe it? The Data Gremlin would warn you: without grounding that prediction in reality, you might be setting yourself up for a fall.


One of the most effective ways to reality-check conjoint simulations is through Bayesian modeling. Instead of taking raw survey data at face value, Bayesian methods incorporate historical data as priors, creating a bridge between the controlled world of survey responses and the unpredictable realities of market behavior.


How Bayesian inference works:


  • Prior expectation: Your initial assumption based on past data.

  • Likelihood function: The fresh data from your conjoint study, collected under idealized survey conditions.

  • Prediction error: The difference between your expectation and the conjoint prediction. This gap is a critical signal of how much your projections may be misaligned with reality.

  • Posterior distribution: Combines these signals, giving you a more tempered prediction.

By blending fresh signals with historical benchmarks, Bayesian modeling prepares you for a spectrum of potential outcomes—not just the best case. It’s like having a built-in reality check that ensures your pricing strategies are grounded in what customers will actually do, not just what they say they’ll do in a survey.
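For a conversion-style question like “what share will choose the premium option?”, the simplest version of this update is Beta-Binomial: the historical prior and the conjoint responses combine into a tempered posterior. A minimal sketch, with all counts hypothetical:

```python
# Beta-Binomial sketch: temper a conjoint share estimate with a
# historical prior. All counts below are hypothetical illustrations.

def posterior_share(prior_yes, prior_no, survey_yes, survey_n):
    """Posterior mean of choice share: Beta(prior_yes, prior_no) prior
    updated with Binomial evidence from the conjoint."""
    a = prior_yes + survey_yes                 # successes
    b = prior_no + (survey_n - survey_yes)     # failures
    return a / (a + b)

# Prior: historically ~30% adopted comparable premium features
# (60 adopters out of 200 comparable launches).
# Conjoint: 60% of 300 respondents chose the premium option.
blended = posterior_share(60, 140, 180, 300)
print(round(blended, 2))  # 0.48 -- pulled down from the survey's 0.60
```

The posterior (48% here) lands between the rosy survey signal and the sober historical base rate, which is exactly the tempering the bullet list above describes.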


The second pitfall: A/B testing as a replacement for decision-making

What happened: Searching for certainty in the wrong place

When projections fell short of expectations, the team needed answers—fast. Instead of revisiting the conjoint study or grounding their strategy with robust validation methods, they turned to 80+ live A/B tests to "find" the right price. It wasn't a structured, hypothesis-driven approach but a costly scramble for clarity.

This wasn't an experimentation problem—it was a validation problem. Instead of using A/B testing as a refinement tool, it became a high-risk method for discovery, leading to mixed signals and wasted efforts.

When A/B testing works—and when it doesn't

A/B testing is a powerful tool when used with surgical precision. It’s perfect for fine-tuning specific elements—like messaging or UI design—or for validating targeted hypotheses. But when you wield A/B testing like a sledgehammer to find foundational strategies, you risk smashing through the nuances and missing the bigger picture. Instead, use it as a scalpel to refine well-formed ideas, not as a blunt instrument to carve out your entire strategy.
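There’s also plain multiple-comparisons arithmetic behind the “80+ tests produce noise” problem: at a 5% false-positive rate per test, the chance of at least one spurious “winner” climbs fast. A quick back-of-the-envelope sketch:

```python
# Multiple-comparisons arithmetic: probability of at least one false
# positive across n independent tests at significance level alpha.

def p_any_false_positive(n_tests, alpha=0.05):
    return 1 - (1 - alpha) ** n_tests

for n in (1, 10, 80):
    print(n, round(p_any_false_positive(n), 3))
```

By 80 tests, a spurious winner is nearly guaranteed (over 98% likely under these assumptions), so without correction or a validated hypothesis to refine, the “signal” those experiments find is mostly noise.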

Bringing it all together: Lessons from Study 2

Lesson 1: Validate feasibility before committing to revenue projections

The second study highlighted a critical gap—not in identifying the right product strategy, but in testing whether that strategy could be executed at scale. While the conjoint study accurately gauged customer willingness to pay, the team didn't verify whether the business could consistently deliver the promised outcome.

This gap led to a cycle of over-promising and under-delivering, where customers expecting high-quality results instead faced inconsistent experiences.

Trust eroded, and adoption stalled.

Had the study incorporated robust validation techniques like Bayesian modeling, the team could have simulated real-world scenarios and adjusted expectations before launching at scale. Methods like this one serve as a reality check, ensuring forecasts aren't just optimistic projections but are grounded in historical data and market conditions.

This reinforces a simple truth: projections aren’t promises. They need rigorous stress-testing before they're treated as a strategic foundation.

Lesson 2: Don’t let A/B testing replace informed decision-making

A/B testing is a powerful refinement tool, but in Study 2, it became a reactionary crutch. Rather than using it to validate a well-grounded hypothesis, the team ran 80+ live A/B tests to "find the right price"—an approach that introduced confusion, drained engineering resources, and left customers feeling like guinea pigs in a pricing experiment.

The mistake wasn’t in testing but in abandoning the simulations from the conjoint study. The team turned to A/B testing, not as a scalpel to fine-tune pricing, but as a sledgehammer to bludgeon out an answer.

To avoid this misstep, A/B testing should:

  • Refine a validated hypothesis, not search for one.

  • Be limited to a small number of well-defined variants with clear success criteria.

  • Target specific elements—like messaging, UI, or how a price is presented—rather than foundational strategy.

This underscores a critical truth: A/B testing is a tool for refinement, not a substitute for decision-making. The real problem wasn’t just a pricing misstep—it was a failure to validate before launching and an over-reliance on A/B testing to fill knowledge gaps.

A strong foundation in conjoint analysis, paired with rigorous validation techniques, can prevent this cycle of uncertainty and correction. By aligning customer expectations with business realities, we avoid reactionary experiments and instead make informed, strategic decisions that build trust and drive sustainable growth.


Smarter, better, faster: The path forward

These studies were a step in the right direction, but they also underscored a hard truth—moving forward without validating key assumptions leads to costly missteps. The key takeaways?

Principles → Practice

  1. Integrate qualitative insights—don’t assume you already know what matters to customers.

  2. Speak your customers' language—ground conjoint attributes in how they think and make trade-offs.

  3. Balance revenue forecasts with execution realities—pressure-test feasibility before scaling.

  4. Use structured, phased rollouts—prevent friction by aligning pricing with real-world constraints.

A great pricing model isn’t just about what customers say they’ll pay—it’s about ensuring they get the value they expect, at a price that works for both them and the business.

By pairing rigorous research with disciplined validation, we can build models that don’t just look good in a spreadsheet, but actually work in the real world.

Coming soon: The ultimate guide to Conjoint analysis

To close out our Conjoints & Consequences series, I’m publishing a comprehensive conjoint whitepaper—a one-stop guide to designing, programming, and analyzing conjoint studies.

This will be the go-to resource for product teams, researchers, and data scientists looking to leverage conjoint analysis for high-impact decisions.

‘Til next time, I’m Bianca
