> This gets back to the idea of preventing frustration. I determined that it would be more frustrating to have autocorrection “guess wrong” and erroneously fix broken typing.
I work on automation a lot, and this is so hard to communicate to the endless stream of non-tech types who just want to “tweak” the algorithms to be a little bit smarter. Trust is an asymmetric proposition: it requires lots of positive inputs to slowly accrete, and only a few negative inputs to rapidly dissolve. It is better to be reliably/predictably helpful some of the time than to be usually helpful but sometimes unhelpful all of the time.
Or as Thumper put it (kinda): “If you don’t have nuthin nice to add, don’t add anything at all.”
I agree, and see Thumper's Law as a combination of the Principle of Least Surprise and the concept of Loss Aversion, in the sense that a pleasant surprise, like the phone correctly guessing what you were going to type, is greatly outweighed by the pain of an unpleasant surprise, like when the phone changes "its" to "it's" even though I typed and wanted the former.
I think the key insight here is that corrections you can trust are great: you can just let the autocorrect do its thing and stay in flow state. Corrections that aren't reliable enough are worse than useless, because they force you to check every single one, completely breaking flow.
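The "only correct when you can be trusted" policy above can be sketched as a confidence gate: below a high threshold, leave the user's input alone. The model, names, and threshold here are all hypothetical illustrations, not any phone's actual autocorrect API.

```python
# Hypothetical sketch: apply a correction only when highly confident.
# A missed fix costs little; a wrong "fix" destroys trust.

def suggest_correction(word: str) -> tuple[str, float]:
    # Stand-in for a real correction model: returns (candidate, confidence).
    known_fixes = {"teh": ("the", 0.98), "adn": ("and", 0.97), "its": ("it's", 0.55)}
    return known_fixes.get(word, (word, 0.0))

def autocorrect(word: str, threshold: float = 0.95) -> str:
    """Return the corrected word only if confidence clears the threshold."""
    candidate, confidence = suggest_correction(word)
    return candidate if confidence >= threshold else word
```

With a threshold of 0.95, "teh" becomes "the", but the ambiguous "its" is left exactly as typed.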
The right amount of flow state is hard to achieve. I'll type two words wrong and the autocorrect actually gets them right, but by the time I've realized that, I've already hit delete several times or am on the third word, have lost the autocorrection, and lost my focus and flow. My brain's just not timed the right way for the autocorrect's default settings.
In terms of automation, I'm a fan of the 80% solution. 100% automation tends to be fragile in the non-perfect case, so it's often useful to automate most of the work and let a human make the important decisions.
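One common shape for that 80% solution is to let the automation decide the clear-cut cases and route everything ambiguous to a human review queue. This is a minimal sketch under assumed names (`classify`, `process`, the score thresholds), not a reference to any particular system.

```python
# Hypothetical sketch of an 80% solution: automate the clear cases,
# escalate the ambiguous ones to a human.

def classify(record: dict) -> str:
    # Stand-in automated decision driven by a confidence score.
    score = record.get("score", 0.0)
    if score >= 0.9:
        return "approve"
    if score <= 0.1:
        return "reject"
    return "needs_human"

def process(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split records into automated decisions and a human review queue."""
    automated, review_queue = [], []
    for r in records:
        decision = classify(r)
        if decision == "needs_human":
            review_queue.append(r)
        else:
            automated.append({**r, "decision": decision})
    return automated, review_queue
```

The point of the split is that the automated path never guesses: anything it isn't sure about lands in the queue instead of becoming a silent wrong decision.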
Yup. Heuristics over algorithms is The Correct Answer™ (for UX).
I once wrote a bespoke query parser for an in-house full text search app. Spent most of my time doing fit and finish for the domain specific stuff. Looked at logs of actual user searches, before and after. Even did usability testing. (Who does that any more?)
End result was "invisible", because it just worked. Totally unattainable if I'd relied solely on stock tool stack.
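The original parser's actual rules aren't described, but the "just works" flavor of a bespoke query parser can be hinted at with a tiny sketch: keep quoted phrases together, and degrade gracefully on malformed input instead of erroring. Everything here (the function name, the fallback behavior) is an assumption for illustration.

```python
# Hypothetical minimal sketch of a forgiving search-query parser.
import shlex

def parse_query(query: str) -> list[str]:
    """Turn a raw user query into a list of search terms.

    shlex keeps quoted phrases intact, so 'error "file not found"'
    yields ['error', 'file not found'].
    """
    try:
        return [t for t in shlex.split(query) if t]
    except ValueError:
        # Unbalanced quotes: fall back to whitespace splitting rather
        # than surfacing a parse error to the user.
        return query.split()
```

Silently tolerating an unbalanced quote instead of rejecting the search is exactly the kind of fit-and-finish detail that makes a parser feel "invisible".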