PROVEN FRAMEWORKS

Measure what matters, decide what's next, and govern systems responsibly.

I built decision systems for Indeed, Splunk, and Sylvan Labs to measure end-to-end product experience and performance, guide strategic direction and investment, and evaluate outcomes across deterministic and probabilistic product ecosystems.

Each connects user experience to product performance and business outcomes through modeled signals, prioritized actions, and system-level accountability. I align what users need with what the business values so decisions are grounded, defensible, and tied to ROI, development speed, and decision quality.

01 / Measure

ASK'EM

Connect experience & performance signals that move outcomes.

$2.5B+
growth
attributed to GM-level key metric improvements and roadmap decisions.

Deterministic  ·  Built for Indeed  ·  Featured in TechTarget

What it's for

Understand which parts of the experience actually move product performance and business outcomes.

How to use it

Instrument key moments, model drivers, rank actions by expected impact.
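The ranking step can be sketched in a few lines: score each candidate action by the modeled weight of the driver it moves, the expected improvement, and the share of users reached. The driver weights, actions, and figures below are hypothetical placeholders, not ASK'EM's actual model.

```python
# Minimal sketch: rank candidate actions by expected impact.
# All weights, actions, and reach figures are hypothetical.

driver_weights = {          # modeled lift in the key metric per unit improvement
    "search_relevance": 0.40,
    "apply_flow_speed": 0.25,
    "result_freshness": 0.15,
}

candidate_actions = [       # (action, driver it moves, expected improvement, users reached)
    ("rerank stale results", "result_freshness", 0.30, 0.80),
    ("cut apply steps",      "apply_flow_speed", 0.20, 0.60),
    ("tune query parser",    "search_relevance", 0.12, 0.90),
]

def expected_impact(action):
    _, driver, improvement, reach = action
    return driver_weights[driver] * improvement * reach

ranked = sorted(candidate_actions, key=expected_impact, reverse=True)
for name, *_ in ranked:
    print(name)
```

The point of the sketch is the ordering logic: once drivers are modeled, prioritization becomes a sort, not a debate.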

Why it matters

Replaces anecdotal decisions with quantified, defensible prioritization grounded in real user behavior and impact.

My rationale

I created ASK'EM at Indeed when business outcomes were changing in ways our success metrics couldn't diagnose.

We needed a better leading indicator of customer experience and risk.

My impact
Sr. Director of UXR, Indeed

"Bianca was the mastermind behind delivering our CSAT measurement system, developing it into a truly groundbreaking analytic framework that allowed us to identify specific opportunities to better serve our customers."


02 / Decide

AAIM UP

Set goals, attribute impact, forecast outcomes, and decide where to act.

$210M+
retained
in at-risk revenue and a ~10% retention lift YoY through model-driven intervention.

Deterministic  ·  Built for Splunk

What it's for

Decide what to fix across products and silos.

How to use it

Combine sentiment, telemetry, and performance into a shared health model, assign ownership, and trigger interventions based on expected impact.
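The core mechanic can be sketched as a weighted blend of normalized signals with a threshold that routes low scores to an owner. The weights, threshold, product names, and owners below are illustrative assumptions, not the actual AAIM UP model.

```python
# Minimal sketch of a shared health model: three normalized signals (0-1)
# are blended into one score per product, and any product below a
# threshold is routed to its owner for intervention.
# Weights, threshold, and data are hypothetical.

WEIGHTS = {"sentiment": 0.4, "telemetry": 0.3, "performance": 0.3}
THRESHOLD = 0.6  # assumed intervention trigger

products = {
    "alpha": {"sentiment": 0.8, "telemetry": 0.7, "performance": 0.9, "owner": "team-a"},
    "beta":  {"sentiment": 0.4, "telemetry": 0.5, "performance": 0.5, "owner": "team-b"},
}

def health(signals):
    # Weighted average across the shared signal set
    return sum(WEIGHTS[k] * signals[k] for k in WEIGHTS)

interventions = [
    (name, p["owner"], round(health(p), 2))
    for name, p in products.items()
    if health(p) < THRESHOLD
]
print(interventions)
```

A single score per product is what makes ownership assignable: every low number comes with a name attached.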

Why it matters

Aligns product, design, and GTM teams on a single definition of success and a clear path to action.

My rationale

I built AAIM UP with U3 at Splunk when we needed end-to-end visibility, accountability, and intervention across a suite of products.

The job was going from reactive to proactive. We needed to connect experience, performance, and business outcomes across a product suite, and make it clear what to fix, who owned it, and why it mattered.

My impact
CS Director, Splunk

"This work gave the team a clear, defensible path to reduce support load at scale."


03 / Govern

SHIP

Operationalize reliability, drift monitoring, and evaluation in AI-native experiences.

300+
reached
practitioners at UXDX in March 2026, plus 2x faster model convergence at Sylvan Labs.

Probabilistic  ·  Built for Sylvan Labs  ·  Featured in UXDX

What it's for

Manage systems where outputs are not fixed and behavior changes over time.

How to use it

Define acceptable behavior, instrument outputs, monitor drift and failure modes.
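The monitoring step can be sketched as a comparison between a baseline window of output scores and a recent production window, with an alert when the mean shifts past a tolerance. The windows and tolerance below are illustrative, not SHIP's actual thresholds.

```python
# Minimal sketch of drift monitoring: compare a recent window of model
# output scores against a baseline window and flag drift when the mean
# shifts beyond a tolerance. All values are hypothetical.

from statistics import mean

BASELINE = [0.81, 0.79, 0.80, 0.82, 0.78]   # scores from evaluation at launch
RECENT   = [0.70, 0.68, 0.72, 0.69, 0.71]   # scores observed in production
TOLERANCE = 0.05                             # assumed acceptable mean shift

def drift_alert(baseline, recent, tolerance):
    # Returns (alert?, size of the shift)
    shift = abs(mean(recent) - mean(baseline))
    return shift > tolerance, round(shift, 3)

alerted, shift = drift_alert(BASELINE, RECENT, TOLERANCE)
print(alerted, shift)
```

Real systems would segment by failure mode and use distributional tests rather than a single mean, but the principle is the same: acceptable behavior is defined up front, and deviation is detected rather than discovered.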

Why it matters

Prevents silent degradation and ensures AI systems remain reliable, interpretable, and aligned as they scale.

My rationale

I developed SHIP for Sylvan Labs because traditional performance and evaluation metrics for deterministic systems don't apply to probabilistic ones.

SHIP can be used to design, develop, and evaluate generative, agentic, and predictive workflows.

My impact
CTO, Sylvan Labs

"She elevated our thinking from standard predictive models to a structured and adaptive system for understanding revenue drivers.

The result was a robust subsystem we could plug directly into our product and deploy across customers."


The Decision System

Together, they form the decision system I use to connect user experience, product performance, and business outcomes.

Teams act earlier, reduce guesswork, and tie decisions directly to measurable performance indicators and revenue impact.

USE CASES + APPLICATION

These frameworks show up at different points in the product and customer lifecycle. Together, they define what to measure, how to decide what to do, and how to evaluate and improve over time. Click on each framework to explore how it works in practice.

Discovery
Build
Launch
Iterate
Retain & expand
Scale
ASK'EM
AAIM UP with U3
SHIP
Discovery

Define which customer moments to instrument before building.

Build

Instrument measurement points tied to experience-critical workflows.

Launch

Establish key metric baselines tied to specific product interactions.

Retain

Monitor leading indicators of dissatisfaction, churn risk, and other business outcomes.

Launch

Establish baseline product health & performance across the portfolio.

Iterate

Route ROI-ranked interventions to teams and attribute intervention impact.

Retain & expand

Identify experience breakdowns and increase adoption across customer segments.

Build

Define acceptable behavioral model ranges before launching a generative or agentic feature.

Launch

Instrument model outputs and human interaction patterns for performance evaluation.

Iterate

Monitor drift, reliability, trust, and failure modes over time.

Scale

Govern multi-model ecosystems with shared evaluation and intervention layers.

Developed Use Cases
  • We didn't trust our current metrics to explain what was happening
  • We were about to build or scale and needed to define what success looked like
  • We needed to tie specific user moments to business outcomes
My process
  • I started with the interactions that shape experience and outcomes
  • I instrumented those moments directly in product, for different customers and use cases
  • I used signals to model drivers, forecast risk, and prioritize what to fix
Why it matters
  • It gave us leading indicators that drive business and performance outcomes
  • It replaced unhelpful metrics with decision intelligence teams can act on and defend
  • It set the foundation. If this layer is wrong, everything built after it is too
Developed Use Cases
  • We already had data, but it was fragmented across teams and tools
  • We were struggling to decide what to fix across products, not just within one
  • We needed alignment between product, design, and business on what "good" looks like
My process
  • I brought sentiment, telemetry, and performance into a single success model
  • I defined success through the lens of usefulness, usability, and ubiquity
  • I routed interventions based on expected impact and influence
Why it matters
  • We assessed the impact of product changes on key metrics
  • We created shared definitions of success across teams
  • We changed teams from reacting to problems to systematically prioritizing what to do next
Developed Use Cases
  • We realized probabilistic systems don't behave the same way over time
  • We were working with generative, agentic, or predictive systems
  • We couldn't rely on traditional metrics to tell us about system performance
My process
  • I defined what acceptable behavior actually looks like before launch
  • I instrumented outputs and interactions, not just inputs
  • I monitored drift, surfaced failure modes, and adjusted evaluation as the system evolved
Why it matters
  • We prevented invisible model degradation as systems change over time
  • We made probabilistic systems observable and governable
  • We kept system behavior consistently aligned with human expectations