
Are there any PMs or devs out there who A/B test many/most bug fixes or performance improvements?

In principle, a minor bug fix and a feature aren't all that different. There's a control experience and a new experience, and an A/B test can compare their performance.

In practice, I find many features are launched with stat-sig analysis of their impact, whereas UX bugs are either not fixed (deemed not important enough) or fixed without measuring their impact.
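To be concrete about what "measuring their impact" would mean: randomize users into control (bug present) and treatment (bug fixed), then compare conversion rates like any other experiment. A minimal sketch of a two-proportion z-test, with made-up counts, might look like this:

  // Two-proportion z-test: did the bug-fix arm convert better than control?
  // All numbers below are hypothetical, purely for illustration.
  function twoProportionZTest(convA: number, totalA: number,
                              convB: number, totalB: number): number {
    const pA = convA / totalA;
    const pB = convB / totalB;
    const pPooled = (convA + convB) / (totalA + totalB);
    const se = Math.sqrt(pPooled * (1 - pPooled) * (1 / totalA + 1 / totalB));
    return (pB - pA) / se; // z-score; |z| > 1.96 ~ significant at p < 0.05
  }

  // Control: bug present. Treatment: bug fixed. (Hypothetical counts.)
  const z = twoProportionZTest(480, 10000, 560, 10000);
  console.log(`z = ${z.toFixed(2)}, significant: ${Math.abs(z) > 1.96}`);

The point isn't the stats machinery, it's that a bug fix can be evaluated with exactly the same tooling a feature launch already uses.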

Of course, fixes for experience-breaking bugs should be rolled out at 100%. But what about all those UX bugs in a flow that's already shipped and kinda working?

Wouldn't it be interesting to see that a UX "fix" actually performs worse? Or, wouldn't it be nice to know when a UX fix produces a measurable performance increase?

My anecdote: a scrolling bug on a mobile web sign-in page. It's a one-field form: email address. On mobile web, you can't scroll to see the email field when the on-screen keyboard is open. This page is the top of the funnel for a promotional offer driven by ads. It's obviously broken, but it's already shipped and (surprisingly?) actually kinda working. Would you fix it at 100% rollout, fix it via an experiment, or not fix it at all? Why?
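For what it's worth, if I did run it as an experiment, the fix itself could be tiny and the flag check would decide who gets it. A rough sketch, where the bucketing function and the '#email' selector are assumptions (real assignment would come from the experimentation system and be stable per user, not re-rolled per page load):

  // Sketch: gate the scroll fix behind an A/B flag so its impact can be measured.
  function isInTreatment(_experimentKey: string): boolean {
    return Math.random() < 0.5; // placeholder bucketing, not production assignment
  }

  const emailField = document.querySelector<HTMLInputElement>('#email');

  if (emailField && isInTreatment('signin_scroll_fix')) {
    emailField.addEventListener('focus', () => {
      // Treatment arm: keep the field visible above the on-screen keyboard.
      emailField.scrollIntoView({ block: 'center', behavior: 'smooth' });
    });
  }

Control keeps the broken scroll behavior, treatment gets the fix, and the funnel metric decides whether the "obvious" fix actually moved anything.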

Curious if anybody has other experiences to share about this idea :-)


