Defeating Feature Fatigue (2006) (hbr.org)
71 points by kosei on March 31, 2019 | 39 comments



13 years later and it seems we've gone too far in the opposite direction. Dumbed-down, unconfigurable interfaces with no "depth" seem to be the norm for new apps now.



I don't agree here. We have a renaissance of systems that provide depth for creators:

- Airtable

- Notion

- Coda

- ...


Looking at their respective tours, they don't feel very deep either - they smell like a few features layered in fat, and put inside a web app.


Dig deeper.


One has to keep in mind the Microsoft Word paradox: any single feature is useless to 99% of your users, but when you have 600 of them, removing any one means that every single one of your users will miss something and will go looking for a product that does it.

That of course doesn't mean both things can't be true: it's perfectly fine to have a niche product that perfectly solves a problem few people have, and some features really aren't used by anybody. It's just that things are not always simple.


I've heard this many times, and it does ring at least somewhat true in my experience, but does anyone have a link to actual studies relating to this?

I get the feeling that the 'people using feature x' curve follows a power law with rapid dropoff rather than a more linear shape, but that's just intuition, and I'd like to calibrate against known data.


I hit this with the G Suite Gmail redesign. They used to have a button to set up catch-all emails. They removed it, and the alternative is a UI with tons of options but nothing simple for that use case. So, a solution is to keep easy stuff easy by running the advanced config behind the scenes, but also make the advanced config available for power users behind an "Advanced" screen.


Having a plug-in system like Atom's is much better than trying to account for every possible use case.


Well, mostly. But it completely fails when there is any social element to the software (even exchanging save-files).


It's basically a "long tail" of features. It's why people shop at Amazon, and not at their local bookstore.


I think that is also Netflix vs Blockbuster. Netflix had no problem showing you some odd French film from the 1950s.

I do think the rise of Amazon + big-box stores + offshoring resulted in the disappearance of the power of distribution buyers. Manufacturers typically sell into a distribution chain. Old school, the buyers in that chain would enforce some standard of quality. I.e., there were a couple of buyers at the various distributors who knew every model of toilet brush on the market, and if a model sucked they wouldn't order any, no matter how cheap it was.


Funny, this article comes just before the iPhone revolution, which put everything in your pocket and swallowed the GPS, radio, camera, iPod, phone, and PDA into one single device.


People forget just how much of a usability revolution the iPhone was compared to the not-so-smartphones that preceded it.


I think the iPhone gets more credit than it deserves. I owned a smartphone with GPS, wifi, a full touch screen, apps, Windows, etc. years before the iPhone. It was a growing market.


It also had fewer features than other phones when it was released and people complained endlessly about it.


So I guess the relevant question there is, has the smartphone managed to overcome "feature fatigue" for all the formerly-separate devices it combines? I would say "Yes, partially," because a software interface is more flexible than a hardware one, and you can at least partly rely on a common language in your UI to guide people along. And if the user wants the clock but not the radio, they can just uninstall the radio and forget about it, instead of having to look past it every time to find the clock.


The iPhone is really a platform


But it is a device that does everything. In the end it doesn't matter much which software module delivers the functionality. It is still a device in your pocket that does it. And even a stock iPhone today is more capable than the feature phones of those days. People manage to navigate that complexity rather well, all things considered.


The complexity is modular, tactile, solution-oriented, and opt-in.

The iPhone would be unusable if it came installed with every possible app. Making apps opt-in gives users the power to decide which features they do/don't want. And apps are sold as limited tools for specific tasks, not as a rat's nest of do-everything features - like many desktop apps.

Apple could have split apps into different hardware categories, so instead of apps that used the camera you had Camera Apps as a separate category to Microphone apps. But that would have been exactly the wrong approach.

The Apple approach makes user benefits obvious and keeps the technology subservient to them, which is as it should be.

Although having said that, it's hard to imagine today's Apple making that choice and getting it right in the same ways.


The "Apple approach", once they enabled 3rd-party apps, was in this respect pretty similar to that of the incumbent smartphones and PDAs. The App Store as a one-stop center, and forbidding sideloading, was arguably ahead.


Solution: show only the main features and hide additional features under an "advanced features" or "settings" button.


Then if you need those features on a regular basis, you have to make a lot more clicks to access that hidden menu. I personally really like the Microsoft menu paradigm where I can either click on the File, Edit, ... menus or press Alt and then a memorized sequence of characters. It allows for discoverability as well as efficiency.

The Emacs equivalent also works nicely: press a memorized key binding for something that you do frequently, or press M-x to type into an autocompleted (and fuzzy-searched, with the right packages) list of commands for some particular functionality.


Or, better yet, a UI builder accessible through that "advanced features" menu that lets the user decide what is accessible and easy.


Or, in case of Emacs: just bind the M-x feature you're using to a key.

The overall principle is sound, IMO: present the "simpler" UI up-front to not overwhelm newcomers, but have advanced features available and a means to bring them up to front of the UI, for repeat users.


Like Firefox! I agree, that’s a very nice feature.


I also love that I can put things in Word's Quick Access toolbar and then Alt+1 will do the first thing, etc.


I think the article, and the usual discussions, are missing the insight that whether or not the features should be bundled together depends on whether the features are completely orthogonal to each other, or whether they (excuse the word) synergize well.

Why was the mouse pad with clock, calculator, and FM radio a dumb idea? Because each feature was orthogonal, and putting them in a mouse pad actually reduced their utility. For optimal use, I want the clock in visual range, the FM radio in audio range and within arm's reach, and the calculator somewhere I can reposition or take with me when I get up, while the mouse pad forces a particular location on my desk. It's the "mouse pad" ingredient that breaks this; as modern smartphones show, FM radio + clock + calculator together is a good idea[0].

Compare that with complex and feature-full applications like Word, Excel, Photoshop, or Blender. There, you may not use 90% of the features (everyone uses different 10%, though), but they all work on the same "work piece", and interact well with each other. As long as they're mostly out of the way when you don't need them, they're fine in the same application - and splitting them out would degrade each of them[1].

Compare that with Emacs - in particular its utility as mini IDE + TODO manager + dayplanner + better Jupyter + e-mail client + a bunch more of stuff, at the same time. You could say all of these things should be their own applications, and you'd be right. They sort of are, if you see Emacs as a Lisp runtime with a text editor app bundled by default. The reason some people choose this combo is because for text-UI applications, Emacs offers the level of integration that's much superior to what regular operating systems give you. A bunch of completely orthogonal features end up reinforcing each other - improvements to IDE carry over to editing your e-mail, you can quickly glue together e-mail with your TODO list, etc.

My point being, bundling features is bad when they interfere with each other; it's OK if they complement each other; it's very much desirable if they reinforce each other.

--

[0] - For frequent use, a hardware calculator with real buttons is better, though.

[1] - Sort of. Power users appreciate tools like imagemagick to quickly do some of the things you'd do in a bitmap editor, without having to start up a larger environment. Or, more importantly, the ability to run operations in batch mode. But just because a power user might use imagemagick to batch-generate thumbnails doesn't mean Photoshop should lose the "resize image" function.
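For what it's worth, the batch-thumbnail case is a one-liner. A minimal sketch, assuming ImageMagick's `convert` is installed and the filenames are illustrative:

```shell
# Make 128px-wide thumbnails of every JPEG in the current
# directory, writing each one as thumb_<name>.jpg.
for f in *.jpg; do
    convert "$f" -resize 128x "thumb_$f"
done
```

That's the kind of thing you'd never script against a GUI editor's resize dialog.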


Love this line:

> Put simply, what looks attractive in prospect does not necessarily look good in practice. Consumers often become frustrated and dissatisfied with the very cornucopia of features they originally desired and chose. This explains a recent nationwide survey that found that after buying a high-tech product, 56% of consumers feel overwhelmed by its complexity.


I was just talking to my friend about his dad, whose VHS/DVD player combo broke and who wanted to get a new VHS/DVD combo. I asked why he wouldn't just get two separate players; you can get better, cheaper separate players. His dad just really wants the combo!


If I cared about VHS, I'd probably go for the combo too - separate players means double the number of power outlets, video input cables, and remotes.

Bundling a VHS/DVD player together feels more sensible than bundling a TV into a fridge, or a calculator into a mousepad, but I don't know if I could come up with a principle behind why.


Someone discovered the Unix philosophy: (1) do one thing and do it well; (2) promote composition (write primitives, let people script). To add a pessimistic note, though: bloat is beneficial to the vendor, because the more things you pack in, the sooner one will break, leading the consumer to buy your upgrade. Also, mega-apps favor vendor lock-in (and the reverse is also true). OK, I just realized that second piece of the Unix philosophy is deeply anti-monopolistic; too bad the capitalist game (especially the latest XaaS platform-capitalism trend) incentivizes complete market domination. I'm amused to see how well these guidelines can be framed in an anti-capitalist ideology when the people at Bell Labs who wrote them (or the journalists from HBR) probably didn't think like that at all.
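A toy illustration of that composition principle, using only standard tools (the input words are made up):

```shell
# Each stage does exactly one job; the pipe composes them
# into a word-frequency counter, most frequent first.
printf 'apple\nbanana\napple\n' | sort | uniq -c | sort -rn
```

None of `sort` or `uniq` knows anything about the others; the "feature" (frequency counting) exists only in the composition.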


So you recommend carrying a phone, camera, GPS device, music player, laptop, barometer, torch, etc. instead of one phone, as long as all the output is in text?


(1) I never talked about text mode. (2) I never talked about smartphones, which specifically come up in other threads. It's easy to demolish straw men. Yet I don't want a car with a screen, a fridge with a clock, a watch with a microphone. I could go on for several pages. Btw, are you serious with that barometer?


My phone is a worse music player than my 2007 iPod was. Less storage, much lower battery life, slower UI. It doesn't have streaming but the Euler diagram of 'times when I want to use a music player' and 'times when I don't have LTE' is almost circular.


Unix does nothing well - including composition, which totally fails with most non-text data types.

Unix was bodged together with nails and a glue gun using offcuts and left-overs from more sophisticated operating systems. It's the proverbial jack-of-all-trades OS.


But it's a steep hill to climb for new users. My first year of CS involved an intricate shell assignment that took a considerable time investment. Ultimately I also leveraged a lot of Bash (one feature-rich tool) for large parts.

The days of dropping kids in front of a text prompt and expecting them to figure it out are over.


I don't think bash is feature-rich: arithmetic traditionally meant calling an external program! It's BASIC with pipes. If you really want to script in a Unix-traditional way, you should go for Perl, I guess.

Also: I didn't mean to imply going text-mode. Scripting is probably going to be keyboard-driven, but you can have graphical composition too. Tiling window managers usually get this "graphical shell" right by letting you choose/replace most components (status bar, window decorations, keybindings, launcher menu, notification viewer).

And about the learning curve: I think we should really acknowledge that with any tool, either you go fast or you go far. I'm not sure what's so unusual about struggling with a programming assignment in first year, when for most other things in life it usually takes from a couple of years to a decade to be considered "good enough to stop learning" (or maybe start advanced training): writing, basic math, music, sport, craft.
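To be fair on the arithmetic point: forking an external program is the classic Bourne-shell way, but POSIX shells, bash included, have arithmetic expansion built in. A quick side-by-side:

```shell
x=$(expr 2 + 3)   # classic: fork the external expr(1) program
y=$((5 * 8))      # POSIX arithmetic expansion: built-in, no fork
echo "$x $y"      # prints "5 40"
```

The built-in form is integer-only, which is arguably still "BASIC with pipes"; floating point does require an external tool like `bc` or `awk`.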


Maybe if we got them started earlier than college CS, it would be easier.

I learned how to use the MSDOS prompt growing up as a teenager. Most of my programming was done in an IDE, though -- including the first few years of my career. I had also worked a little bit with Linux command line, so I wasn't completely unfamiliar.

At age 26, I joined FAANG and was forced to adapt to developing in Linux 100% of the time. The transition wasn't difficult. I felt comfortable with the Linux command line after a few months, with some help from coworkers and Stack Overflow searches when necessary.



