
I built my early career entirely around CRO / testing and moved over time into product / UX / app optimization.

Huge, crazy, insane amounts of time are WASTED by humans dickering around with interfaces that they don't understand and that are not personally optimized.

One of the things I don't hear many people talk about, but I am particularly interested in, is the coming and continued improvement of adaptive & personal interface design.

A challenge for any single interface is that it's difficult to set up and qualify a test at a small cohort level (men over 70 years old who wear glasses, own homes, drink wine, and live in California is an actual target class we can easily devise from current ad tech, for instance).

It's challenging because - NOT ENOUGH DATA - e.g. it's very hard to run experiments and achieve statistical significance, let alone bifurcate your already limited resources to drive to that level of granularity.
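
To put a rough number on that, here's a back-of-the-envelope sketch in Python of how many users per arm a simple two-arm test needs before a cohort that narrow could reach significance. The conversion rates (4% vs. 5%) are made up purely for illustration.

    # Approximate n per arm for a two-sided two-proportion z-test.
    # The baseline (4%) and target (5%) conversion rates are illustrative only.
    from scipy.stats import norm

    def sample_size_per_arm(p1, p2, alpha=0.05, power=0.80):
        z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the significance level
        z_beta = norm.ppf(power)            # critical value for the desired power
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

    n = sample_size_per_arm(0.04, 0.05)
    print(f"~{n:,.0f} users per arm")  # several thousand users per arm for a 1-point lift

Several thousand users per arm, per variant, in a cohort like "wine-drinking Californian homeowners over 70 who wear glasses" - that's the data wall a single interface hits.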

But imagine an adaptive UX or set of UX preferences.

E.g. take the same inputs -> eye tracking / natural language feedback (speech!) / interface observation / time to goal / etc. <- and then let a well-resourced ML / AI system come up with a set of experiments and a pathway.
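
One plausible shape for that loop, sketched in Python (the variant names and the reward signal are hypothetical placeholders, not anyone's shipped system): a Thompson-sampling bandit that picks which interface variant to show next based on whether the user reached their goal.

    # Minimal Thompson-sampling sketch: pick the next UI variant to show
    # by sampling from a Beta posterior per variant and taking the best draw.
    import random

    variants = ["dense_menu", "wizard_flow", "search_first"]  # hypothetical variants
    stats = {v: {"success": 0, "failure": 0} for v in variants}

    def choose_variant():
        # Sample a plausible success rate for each variant; show the best draw.
        draws = {
            v: random.betavariate(s["success"] + 1, s["failure"] + 1)
            for v, s in stats.items()
        }
        return max(draws, key=draws.get)

    def record_outcome(variant, reached_goal):
        # Feed back an observed signal, e.g. "time to goal under some threshold".
        key = "success" if reached_goal else "failure"
        stats[variant][key] += 1

The point isn't the bandit itself - it's that the reward signal can be any of those inputs (gaze, speech, time to goal), and the system explores on its own instead of waiting for a hand-designed A/B test.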

Key to not completely confusing users and blowing them off path will be some kind of throttling mechanism - adaptations that settle you into the UX like your body settling into the couch cushions.
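
That throttling could be as simple as a confidence bar plus a cooldown. A minimal sketch, with arbitrary placeholder values:

    # Only surface an interface change when the model is fairly sure it helps
    # AND enough time has passed since the last change. Values are placeholders.
    import time

    class AdaptationThrottle:
        def __init__(self, min_confidence=0.9, cooldown_seconds=7 * 24 * 3600):
            self.min_confidence = min_confidence
            self.cooldown_seconds = cooldown_seconds
            self.last_change = 0.0

        def may_adapt(self, confidence):
            # Gate the change behind a confidence threshold and a cooldown window.
            if confidence < self.min_confidence:
                return False
            if time.time() - self.last_change < self.cooldown_seconds:
                return False
            self.last_change = time.time()
            return True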




I disagree there. Thinking back to the days when Office 'customized' its UI to how you used the product (constantly moving menu items, shudder!), I'd rather have a consistent UX that didn't exactly fit my patterns than an adaptive one.

The problem with many interfaces, especially on consumer products, is that they're not discoverable, and they oftentimes hide things behind inane layers of menus. Interface isn't a competitive advantage (although it should be!), so manufacturers don't invest in it.


Yup, it's also important to be able to walk over to your friend's machine and help them do something.


I spend a good amount of time doing support. I still play around with plugins, tools, and interfaces, but I try really hard to stick with defaults. One thing I often do is remap Caps Lock to Ctrl, and it surprises me how often this catches people (and drives me nuts when I'm using their computer).

I have a Logitech Harmony 700 (a very mainstream universal remote). I don't care for it, but it's the best I could find because I use a receiver and an Apple TV. Whenever I have guests, it's always a mystery to them how to use it.


This alone is why I wouldn't consider Dvorak. If we have to live in a top-down driven world, I'd at least be open to standards being driven that way.


> But imagine an adaptive UX or set of UX preferences.

I can only imagine how difficult support from friends and family would become.

"Click on the widget" "I don't see the widget" "I'm on the same page and I see the widget" "Oh, I have to click 'Show all" to see the widget"


> E.g. take the same inputs -> eye tracking / natural language feedback (speech!) / interface observation / time to goal / etc. <- and then let a well-resourced ML / AI system come up with a set of experiments and a pathway.

I like that... but it's going to take a lot of work to keep it from becoming the contemporary equivalent of "Microsoft Clippy", but from Hell.

Has anyone pursued or published about such an approach yet?


[flagged]


That's a completely orthogonal issue. I'm curious how you even came up with that post considering the thread context.



