
UI was pretty much solved in the 90s, including the problem of instant feedback. Then it was abandoned for shiny stuff.

Personally I set up i3 to open most windows asynchronously, so my flow isn't interrupted. It's great, but it takes a bit of getting used to windows not randomly stealing focus. It's not for everyone though.

Can you be more specific about what was “solved” in the 90s? The platforms apps run on, the technologies behind them, and the expected capabilities of those apps have all changed drastically. Not all UI development has changed just to be shiny.

I see little reason why building a UI today cannot be a superset of what was solved in the 90s, so I’m curious to know what that solved subset looks like to you.


Just start with the fact that all desktop programs had a menu bar where you could find every feature, and, at least on Windows, also a shortcut for that feature.

This was broken with the Ribbon and the hamburger menus that every application seems to have switched to, for no other reason, it seems, than to copy Chrome.

To be fair, the Ribbon is somewhat usable again, but in the first version I have no idea how people were supposed to find the open and save functions :-)

Other problems:

Tooltips are gone. Yes, I can see they are hard to get right on mobile, but why remove them on desktop at the same time as the help files and the menus?

The result is even power users like me have to hunt through internet forums to figure out how to use simple features.

Back in the nineties I could also insert a hyphen-if-needed (I have no idea what it is called, but the idea is that in languages like Norwegian and German, where we create new words by smashing other words together, it makes sense to put in invisible hyphens that activate whenever the word processor needs to break the word, and disappear when the word is at the start or in the middle of a line and doesn't have to be split).
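For what it's worth, I believe the character being described is the soft hyphen (U+00AD, written as &shy; in HTML). A tiny Python illustration, with a made-up Norwegian compound:

    # The soft hyphen (U+00AD) is a real character in the string, but it
    # only renders as "-" if the layout engine breaks the word right there.
    word = "sommer\u00adferie"   # made-up compound, purely for illustration
    print(len(word))             # 12 -- one more than the visible letters
    print(word)                  # typically shows as plain "sommerferie"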

Back in the nineties I was a kid on a farm. Today I am a 40+ year-old consultant who knows all these things used to be possible, but the old shortcuts are gone and I cannot even figure out if these features exist anymore, as documentation is gone, tooltips are gone, and what documentation exists is autotranslated into something so ridiculously bad that I can hardly believe it. (In one recent example I found Microsoft had consistently translated the word for "sharing" (sharing a link) with the word for "stock" (the ones you trade).)


IMO ribbon menus, when implemented correctly, are actually better than a menu bar. They give icons and the benefit of a GUI to old text menus.

Hamburger menus I disagree with but sort of understand the logic of: they're basically like making 'fullscreen mode' the default, with the hamburger menu button temporarily toggling it off. It makes perfect sense on mobile (I don't think that's what you're talking about, though), and on the desktop it can make sense in a web browser, where you have essentially three sets of chrome: the desktop window, the browser's chrome, and then the website's chrome, all before you get to the website's content.


A related detail: even icon design has gone in a strange direction. In the interest of simplicity, icons have gone from recognizable to rather amorphous blobs. A print button has gone from a clearly recognizable image of a printer, detailed enough that you could probably guess the model number, to a rounded square with another rounded square sticking out of the middle top. Many of these newer icons are just too abstract and too similar to one another to be recognizable, IMO, and I think the user experience suffers.


And since saving to a floppy is not a thing anymore...


What’s a printer? Does it make cheap iPads?


You list a bunch of unrelated things that have absolutely nothing to do with the topic: UIs back in the day had less latency by not caring about accessibility, internationalization, etc. But I’m quite sure they were way worse off in terms of properly handling blocking operations: you surely know the Solitaire effect.


At the very least, you knew that when the UI locked up, it was actually doing something. These days, previously-blocking operations may be run async, but the result is that, every now and then, the UI will miss the signal that the async job completed (or failed). A UI that regularly desyncs from reality is a relatively new problem.
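A minimal sketch of that failure mode in Python's asyncio (the names and the flag are hypothetical): only the success path clears the flag, nothing observes the failure, so the spinner spins forever.

    import asyncio

    loading = True  # hypothetical UI state: a spinner is showing

    async def fetch():
        raise IOError("network error")   # the async job fails

    async def load():
        global loading
        await fetch()                    # raises, so the next line never runs
        loading = False                  # only the success path clears the spinner

    async def main():
        asyncio.create_task(load())      # fire-and-forget: nobody awaits it or handles errors
        await asyncio.sleep(0.1)
        print("loading =", loading)      # still True: the UI has desynced from reality

    asyncio.run(main())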


> At the very least

I don’t care if the food is bad because it is too salty or because it is overcooked; I still won’t eat it.


You'll reconsider when your only alternative is food that's cooked perfectly, but also bacteriologically contaminated.


There were all kinds of standards and guidelines that let people recognize what a program was doing and how to operate it. Nowadays, UIs are mostly defective, trying their best to hinder effective usage and hide or strip functionality. There has of course been progress too, but something was lost that makes applications today much less intuitive and much harder to use.


I don't understand what specifically is of concern here. Do you have exact examples?

Some platforms publish a list of recommended guidelines which are effectively a standard. For example here's one from Apple about when and how to use charts in an application: https://developer.apple.com/design/human-interface-guideline...


Some of it may be found in older HIG sources, sure: https://en.m.wikipedia.org/wiki/Human_interface_guidelines

Also, they were called GUI or UI standards at the time.

The modern equivalent, called UX, doesn't reflect the same conglomeration of standards and conventions though. So I'm not talking about the newer stuff.

I'm no expert on it, and it required specialized expertise. It's been abandoned for mobile interfaces and the modern UX stuff, which often optimizes for design over functionality.

If you've never used old software, it's hard to explain. But old Apple or Microsoft GUI standards covered the basics, and you'd also need to study the applications and how they presented their GUI.


While I broadly agree with the UX/HIG/design-guideline issues that are common in modern software... literally none of them has anything to do with the technicalities of how quickly the UI reacts to actions and renders the next state. You can have a responsive UI under any of those guidelines. And all the modern ones also say "jank is bad, don't do that".


Back in the days™ there were widget libraries as part of the OS, which followed the guidelines set by the OS vendor. This gave a foundation for somewhat similar behavior and standardization across applications, which gives usability.

Nowadays many applications are web apps, built without such frameworks and with less UI research; even where frameworks are used, they are often built with a somewhat mobile-first approach.


One made up example that attempts to embody the complaint:

If I visited a site dedicated to hamburgers today, I would not be surprised if the "Log Out" button was presented as an image of a hot dog. It would be a mystery to me what that hot dog did until I clicked on it.

Compare this to 90's UI, where it would pretty much be unheard of to do something like that. It would have been a joke, or a novelty. These days that sort of ambiguous interface isn't presented as a joke or a novelty - it's the real product.


For example, Apple's human interface guidelines mandated that you give the user instant feedback (I think they even specified how many milliseconds of delay are tolerable). A correctly programmed application on OS 9 would give instant feedback on each menu item selected, each button pressed, and so on.

They later gave this up, along with almost everything else in their very reasonable, research-based guidelines, when they switched to OS X in a hurry and multithreaded everything. The early Mail application was a disaster, for example. Generally, people started complaining a lot about the spinning beach ball of death.

In contrast, modern UX guidelines are mostly about web design and how to make web pages look fancy. They also recommend instant feedback, but many libraries and application designs don't really support it.
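As a sketch of what "instant feedback" means in practice (Tkinter chosen purely as an illustration, not anything from Apple's guidelines): acknowledge the input synchronously, in the same event, and never do the slow work there.

    import tkinter as tk

    root = tk.Tk()

    def on_click():
        # Feedback happens immediately, inside the event handler itself...
        button.config(text="Working...", state="disabled")
        # ...while the slow work is only simulated by a timer here, so the
        # event loop is never blocked and the UI repaints right away.
        root.after(2000, lambda: button.config(text="Do it", state="normal"))

    button = tk.Button(root, text="Do it", command=on_click)
    button.pack(padx=40, pady=40)
    root.mainloop()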


You are replying out of context: if your file is on a network drive, it doesn't matter whether you have a shiny UI or a text-based terminal; you are going to wait for the round trip with your UI unresponsive.


That doesn't follow. Your UI can be responsive while the round trip happens. The application won't literally stop working 100%.


Depending on the application. Take a single-window editor (think Notepad), for example. If the "load" command is blocking, what can you do?

You cannot allow editing - the existing buffer will be replaced once the load completes. You can show the menu, but most options should be disabled. And it will be pretty confusing for the user to see the existing file remain in read-only mode after the "open" command.

The most common solution, if you expect loads to be slow, is a modal status box which blocks the entire UI but shows progress and a cancel button, something like the sketch below. This definitely helps, but it is also a lot of extra code, which may not be warranted if usual loads are very fast.
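A rough sketch of that modal-progress-plus-cancel pattern, in Tkinter with a chunked read (the function name and chunk size are my own assumptions, not from any particular editor):

    import queue
    import threading
    import tkinter as tk

    def load_with_progress(root, path):
        q = queue.Queue()
        cancelled = threading.Event()

        def worker():
            chunks = []
            with open(path, "rb") as f:
                while not cancelled.is_set():
                    chunk = f.read(64 * 1024)
                    if not chunk:
                        break
                    chunks.append(chunk)
                    q.put(("progress", sum(map(len, chunks))))
            q.put(("done", b"".join(chunks)))

        dialog = tk.Toplevel(root)
        dialog.grab_set()                  # modal: the rest of the UI is blocked
        label = tk.Label(dialog, text="Loading...")
        label.pack(padx=20, pady=10)
        tk.Button(dialog, text="Cancel", command=cancelled.set).pack(pady=10)

        def poll():
            try:
                kind, value = q.get_nowait()
            except queue.Empty:
                root.after(50, poll)       # keep polling; the event loop stays alive
                return
            if kind == "progress":
                label.config(text=f"Loaded {value} bytes")
                root.after(50, poll)
            else:
                dialog.destroy()           # done or cancelled: hand the bytes to the editor

        threading.Thread(target=worker, daemon=True).start()
        poll()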


Well, Notepad is a bad example because it has tabs now. If the load command is blocking, show an indicator of that inside the tab. The user can switch to another tab and have a full editing experience - that buffer obviously will not be replaced.

If you pick another single-window app, my response is: that is a decision they chose to make. They could also go multi-window or tabbed, just like Notepad did.


At the very least: allow the user to close the window or cancel the in-progress load.


You still don’t block the render thread.


Yep. Non-blocking threads were a thing in the 90s too. BeOS was probably the poster child of that.


Only if the programmer wasn't on fast local storage when they tested their code.


> solved in the 90s

Ya, you know, unless you didn't speak English, needed accessibility, had a nonstandard screen size, had a touch screen, etc


My biggest issue with i3 (which I generally love and have used for 5+ years) is that if I switch to a workspace, launch an application with dmenu, and switch to a different workspace before the application loads, the application will load on the current workspace instead of the workspace I originally launched the application from. Anyone have a solution?


If you use the same workspace for the same apps, I believe you can configure i3 to force apps onto a fixed workspace.

Not my cup of tea, but I have the same problem.

Going to try this: https://faq.i3wm.org/question/2828/open-application-and-fix-...
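If it helps, the usual form of that configuration is a one-liner in the i3 config (the class name here is just an example; click a window with xprop to find the real WM_CLASS value):

    # in ~/.config/i3/config: pin windows of a given class to a workspace
    assign [class="Firefox"] 2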


Same with Sway.


Anything stealing focus should be a crime. It's the reason I use i3 as well. When I occasionally use Mac OS X, things just popping to the front to offer updates or whatever make me want to drop the laptop in the trash can. Who comes up with crap like that?


Sounds great, how did you do that? What's the config option?


I believe it may depend on the i3 version, but you can make it work with the no_focus directive, matching on e.g. a window title, all, etc.: https://askubuntu.com/questions/1379653/is-it-possible-to-st...
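Concretely, a minimal version of that (assuming a reasonably recent i3 that supports the all criterion) would be:

    # in ~/.config/i3/config: keep new windows from stealing focus.
    # Note that, if I remember right, no_focus is still ignored for the
    # first window on an otherwise empty workspace.
    no_focus [all]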



