PROVEN FRAMEWORKS
Measure what matters, decide what's next, and govern systems responsibly.
I built decision systems for Indeed, Splunk, and Sylvan Labs to measure end-to-end product experience and performance, guide strategic direction and investment, and evaluate outcomes across deterministic and probabilistic product ecosystems.
Each connects user experience to product performance and business outcomes through modeled signals, prioritized actions, and system-level accountability. I align what users need with what the business values so decisions are grounded, defensible, and tied to ROI, development speed, and decision quality.
ASK'EM
Connect experience & performance signals that move outcomes.
Attributed growth through GM-level key metric improvements and roadmap decisions.
Understand which parts of the experience actually move product performance and business outcomes.
Instrument key moments, model drivers, rank actions by expected impact.
Replaces anecdotal decisions with quantified, defensible prioritization grounded in real user behavior and impact.
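To make the ranking step concrete, here is a minimal Python sketch, assuming key moments are already instrumented into session-level rows. The column names, headroom values, and the plain linear model are illustrative assumptions, not the production system.

# Illustrative sketch: regress a business outcome on instrumented
# experience signals, then rank candidate actions by expected impact.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical instrumented data: one row per session.
sessions = pd.DataFrame({
    "search_success": [1, 0, 1, 1, 0, 1, 0, 1],      # key-moment signals
    "apply_friction": [0, 1, 0, 1, 1, 0, 1, 0],
    "latency_s":      [1.2, 3.4, 0.9, 2.1, 4.0, 1.0, 3.8, 1.5],
    "converted":      [1, 0, 1, 0, 1, 1, 0, 1],      # business outcome
})

signals = ["search_success", "apply_friction", "latency_s"]
model = LinearRegression().fit(sessions[signals], sessions["converted"])

# Expected impact = modeled effect per unit x the headroom a roadmap item
# could realistically move that signal (headroom values are assumed).
headroom = {"search_success": 0.10, "apply_friction": -0.15, "latency_s": -1.0}
impact = {s: c * headroom[s] for s, c in zip(signals, model.coef_)}

for signal, lift in sorted(impact.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{signal}: expected conversion lift {lift:+.3f}")

The shape of the output is the point: a ranked, defensible list of actions rather than anecdotes, with modeling rigor (causal checks, validation) layered on as the data allows.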
I created ASK'EM at Indeed when business outcomes were changing in ways our success metrics couldn't diagnose.
We needed a better leading indicator of customer experience and risk.
"Bianca was the mastermind behind delivering our CSAT measurement system, developing it into a truly groundbreaking analytic framework that allowed us to identify specific opportunities to better serve our customers."
Sr. Director of UXR
Indeed
AAIM UP
Set goals, attribute impact, forecast outcomes, and decide where to act.
At-risk revenue retained and ~10% retention lift YoY through model-driven intervention.
Decide what to fix across products and silos.
Combine sentiment, telemetry, and performance into a shared health model, assign ownership, and trigger interventions based on expected impact.
Aligns product, design, and GTM teams on a single definition of success and a clear path to action.
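A minimal sketch of the shared health model, assuming sentiment, telemetry, and performance have already been normalized to a 0-1 scale; the weights, threshold, and product names below are illustrative assumptions.

# Illustrative sketch: blend three signal families into one health score
# per product, with a named owner and an intervention trigger.
from dataclasses import dataclass

@dataclass
class ProductHealth:
    product: str
    owner: str            # accountable team
    sentiment: float      # e.g., scaled CSAT, 0-1
    telemetry: float      # e.g., task completion rate, 0-1
    performance: float    # e.g., latency SLO attainment, 0-1

    def score(self) -> float:
        # Weighted blend; weights would be fit or negotiated, not fixed.
        return 0.4 * self.sentiment + 0.35 * self.telemetry + 0.25 * self.performance

INTERVENTION_THRESHOLD = 0.6  # assumed trigger level

portfolio = [
    ProductHealth("search", "Core UX", sentiment=0.72, telemetry=0.81, performance=0.90),
    ProductHealth("alerts", "Platform", sentiment=0.41, telemetry=0.55, performance=0.62),
]

for p in portfolio:
    if p.score() < INTERVENTION_THRESHOLD:
        print(f"intervene: {p.product} (owner: {p.owner}, health={p.score():.2f})")

Keeping the owner next to the score is deliberate: when health drops below the threshold, the intervention already has a name attached.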
I built AAIM UP with U3 at Splunk when we needed end-to-end visibility, accountability, and intervention across a suite of products.
The job was going from reactive to proactive. We needed to connect experience, performance, and business outcomes across a product suite, and make it clear what to fix, who owned it, and why it mattered.
"This work gave the team a clear, defensible path to reduce support load at scale."
CS Director
Splunk
SHIP
Operationalize reliability, drift, and evaluation in AI-native experiences.
Practitioners reached at UXDX in March 2026 and 2x faster model convergence at Sylvan Labs.
Manage systems where outputs are not fixed and behavior changes over time.
Define acceptable behavior, instrument outputs, monitor drift and failure modes.
Prevents silent degradation and ensures AI systems remain reliable, interpretable, and aligned as they scale.
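One way to sketch the drift-monitoring step: compare a live window of model outputs against the launch baseline using the Population Stability Index. The 0.2 alert threshold is a common industry convention rather than a SHIP-specific rule, and the distributions here are synthetic.

# Illustrative sketch: PSI between baseline and live output distributions.
import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """PSI between two samples; > 0.2 is a common 'significant drift' flag."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf          # catch out-of-range live values
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    l_pct = np.histogram(live, bins=edges)[0] / len(live)
    b_pct, l_pct = np.clip(b_pct, 1e-6, None), np.clip(l_pct, 1e-6, None)
    return float(np.sum((l_pct - b_pct) * np.log(l_pct / b_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)   # output metric at launch
live = rng.normal(0.5, 1.3, 5_000)       # same metric this week: shifted

score = psi(baseline, live)
status = "drift: investigate failure modes" if score > 0.2 else "stable"
print(f"PSI={score:.3f} ({status})")

For generative or agentic outputs, the monitored quantity would be a scored property of the output (refusal rate, format validity, groundedness) rather than a raw scalar, but the comparison logic is the same.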
I developed SHIP for Sylvan Labs because traditional performance and evaluation metrics for deterministic systems don't apply to probabilistic ones.
SHIP can be used to design, develop, and evaluate generative, agentic, and predictive workflows.
"She elevated our thinking from standard predictive models to a structured and adaptive system for understanding revenue drivers.
The result was a robust subsystem we could plug directly into our product and deploy across customers."
CTO
Sylvan Labs
The Decision System
Together, they form the decision system I use to connect user experience, product performance, and business outcomes.
Teams act earlier, reduce guesswork, and tie decisions directly to measurable performance indicators and revenue impact.
USE CASES + APPLICATION
These frameworks show up at different points in the product and customer lifecycle. Together, they define what to measure, how to decide what to do, and how to evaluate and improve over time. Click on each framework to explore how it works in practice.
Define which customer moments to instrument before building.
Instrument measurement points tied to experience-critical workflows (sketched after this list).
Establish key metric baselines tied to specific product interactions.
Monitor leading indicators of dissatisfaction, churn risk, and other business outcomes.
Establish baseline product health & performance across portfolio.
Route ROI-ranked interventions to teams and attribute intervention impact.
Identify experience breakdowns and increase adoption across customer segments.
Define acceptable behavioral model ranges before launching a generative or agentic feature.
Instrument model outputs and human interaction patterns for performance evaluation.
Monitor drift, reliability, trust, and failure modes over time.
Govern multi-model ecosystems with shared evaluation and intervention layers.
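The instrumentation step is the cheapest place to start. A minimal sketch, assuming a structured-event sink; the event name, fields, and emit() helper are hypothetical.

# Illustrative sketch: emit a structured event at an experience-critical
# moment so the measurement layer exists before anything is modeled.
import json, time

def emit(event: str, **fields) -> None:
    # Stand-in for an analytics pipeline; here it just writes to stdout.
    print(json.dumps({"event": event, "ts": time.time(), **fields}))

def submit_application(user_id: str, job_id: str, ok: bool, latency_ms: int) -> None:
    # The instrumented moment: one line at the workflow's critical point.
    emit("apply.submitted", user_id=user_id, job_id=job_id,
         success=ok, latency_ms=latency_ms)

submit_application("u-123", "j-456", ok=True, latency_ms=840)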
- We didn't trust our current metrics to explain what was happening
- We were about to build or scale and needed to define what success looked like
- We needed to tie specific user moments to business outcomes
- I started with the interactions that shape experience and outcomes
- I instrumented those moments directly in product, for different customers and use cases
- I used signals to model drivers, forecast risk, and prioritize what to fix
- It gave us leading indicators that drive business and performance outcomes
- It replaced unhelpful metrics with decision intelligence that teams can act on and defend
- It set the foundation. If this layer is wrong, everything built after it is too
- We already had data, but it was fragmented across teams and tools
- We were struggling to decide what to fix across products, not just within one
- We needed alignment between product, design, and business on what "good" looks like
- I brought sentiment, telemetry, and performance into a single success model
- I defined success through the lens of usefulness, usability, and ubiquity (see the sketch after this list)
- I routed interventions based on expected impact and influence
- We assessed the impact of product changes on key metrics
- We created shared definitions of success across teams
- We moved teams from reacting to problems to systematically prioritizing what to do next
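The usefulness/usability/ubiquity lens reduces to a small scoring function. Equal weights and 0-1 inputs are assumptions for illustration; a real deployment would tune both.

# Illustrative sketch: blend the three U's and surface the weakest one.
def u3(usefulness: float, usability: float, ubiquity: float) -> tuple[float, str]:
    scores = {"usefulness": usefulness, "usability": usability, "ubiquity": ubiquity}
    weakest = min(scores, key=scores.get)
    return sum(scores.values()) / 3, weakest

score, gap = u3(usefulness=0.8, usability=0.9, ubiquity=0.3)
print(f"success={score:.2f}, biggest gap: {gap}")  # points at where to intervene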
- We realized probabilistic systems don't behave the same way over time
- We were working with generative, agentic, or predictive systems
- We couldn't rely on traditional metrics to tell us about system performance
- I defined what acceptable behavior actually looks like before launch (see the sketch after this list)
- I instrumented outputs and interactions, not just inputs
- I monitored drift, surfaced failure modes, and adjusted evaluation as the system evolved
- We prevented invisible model degradation as systems change over time
- We made probabilistic systems observable and governable
- We kept system behavior consistently aligned with human expectations
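The pre-launch step can be expressed as an explicit acceptance gate: ranges for output behavior written down in code before the feature ships. The metric names and bounds below are assumptions.

# Illustrative sketch: gate launch on a sample of model outputs staying
# inside pre-agreed behavioral ranges.
ACCEPTABLE = {
    "refusal_rate":   (0.00, 0.05),   # share of prompts the model declines
    "avg_out_tokens": (50, 400),      # response length band
    "format_errors":  (0.00, 0.01),   # share failing schema validation
}

observed = {"refusal_rate": 0.03, "avg_out_tokens": 210, "format_errors": 0.02}

violations = [m for m, (lo, hi) in ACCEPTABLE.items()
              if not lo <= observed[m] <= hi]
if violations:
    print("launch blocked, out of range:", ", ".join(violations))
else:
    print("all behaviors within acceptable ranges")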