Almost everything on computers is perceptually slower than it was in 1983 (2017) (twitter.com/gravislizard)
461 points by fao_ on Dec 19, 2019 | 405 comments


Computers are perceptually slower because we've replaced CPU/memory slowness with remote latency. Every click, often every keypress (for autocomplete), sometimes even every mouse hover, generates requests to who knows how many services in how many different places around the world. No wonder it's slow. Speed of light. I see this even using my development tools at work, where actions I didn't really want or ask for are being taken "on my behalf" and slowing me down. I notice because I'm on a VPN from home. The developers don't notice - and don't care - because they're sitting right on a very high-speed network in the office. It's the modern equivalent of developers using much faster machines and much bigger monitors than most users are likely to have. Just as they need to think about users on tiny phone screens and virtual keyboards, developers need to think about users with poor network connectivity (or just low patience for frills and fluff).


When dealing with web applications at least, people don't realize how many non-essential requests are being made peripheral to the action the user actually wants to accomplish. For instance, install NoScript and load a page from cnn.com. There are 20+ external requests being made to all kinds of tracking, advertising, and analytics domains which have fuck-all to do with the user's request to see content hosted by cnn.com. The page loads almost instantly when all these non-essential requests are filtered. It's a hilariously bad side effect of the web becoming as commercialized as it is.


Inserting network and server latency into the local UI feedback loop is a terrible anti-pattern.
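One mitigation for the per-keystroke round-trip problem is to debounce input, so only the last keystroke in a burst triggers a request. A minimal sketch of the idea (names and timings are illustrative, not from any particular framework; timestamps are passed in explicitly so the behavior is easy to follow):

```python
class Debouncer:
    """Coalesce rapid events: fire the action only after a quiet period.

    Instead of one request per keystroke, only the last keystroke in a
    burst triggers the (hypothetical) network call.
    """
    def __init__(self, action, quiet_period=0.3):
        self.action = action
        self.quiet_period = quiet_period
        self._last_event = None
        self._pending = None

    def event(self, value, now):
        # Record the latest input; nothing is sent yet.
        self._pending = value
        self._last_event = now

    def tick(self, now):
        # Called periodically (e.g. from the UI loop). Fires the action
        # once the input has been quiet long enough.
        if self._pending is not None and now - self._last_event >= self.quiet_period:
            self.action(self._pending)
            self._pending = None

# Simulated timeline: five keystrokes 50 ms apart produce ONE request.
sent = []
d = Debouncer(sent.append, quiet_period=0.3)
for t, text in [(0.0, "q"), (0.05, "qu"), (0.1, "que"), (0.15, "quer"), (0.2, "query")]:
    d.event(text, now=t)
    d.tick(now=t)
d.tick(now=0.6)   # quiet period has elapsed
print(sent)       # → ['query']
```

Five keystrokes became one request instead of five; the user still perceives the UI as instant because local echo never waited on the network.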


Windows Explorer is seriously slow on Windows 10. Things like the right-click menu and creating a new folder are far too slow for the work actually being done. The New item menu is very slow, perhaps due to having Office 365 installed? Creating a new folder sometimes doesn't update the display of the containing folder at all.


The built-in image viewer application in Windows 10 takes so long to start up that it's outright unusable for its purpose.


It also has a large permanent gutter, can't successfully fullscreen an image, can't switch images while zoomed (really common lately but still dumb), resizes to some third size if you double click on it twice(??), and loses its zoom state if you switch desktops.

Seriously the worst image viewer I've ever seen.


Someone from Microsoft, I know you are reading this, please explain to us how this makes it all the way to launch like this.


Their priority seems to be turning their "programs" into "apps" so PCs are indistinguishable from cell phones. It may make business sense to them, as so many of these companies are aiming to be everything to everyone, but it doesn't make user sense, because if I wanted my PC to work like a phone, I would have just saved the money and stuck with my phone. They're also trying to make their apps social magnets to sell news stories, movies, music . . . it's very tiresome.


Same as everywhere else: some managers needed to justify a promotion.

P.S. not from MS


And all the more irritating because they had a perfectly usable image viewer which they could have used.


Data point of 1, but mine is as fast as anything. On my 7-year-old desktop, upgraded from 7 to 10, it's near instant. But given that almost anything can hook into that menu (e.g. I have WinRAR, VS Code, TreeSize, etc.), it probably comes down to what you have running.


A second data point for the OP: I have a brand-new laptop with 16 GB of RAM and an SSD. When I mouse over "New", it takes 3-4 seconds for "Folder" to appear. This is first thing in the morning with only Opera running. I do have a bunch of tabs open, but it does this without my browser running as well.


How's your start menu?


I concur. I handle video files on an SSD, and it can take 2-3 seconds just to view a folder with one file inside. All I want is the file name and an icon, but apparently in 2019 this is a Herculean task.


I really wish we could turn off what appears to be a deep scan of all the files every time one opens a directory in Windows 10. I just need to see the filenames to do what I want 99% of the time.
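The "99% case" of just wanting names really is cheap when nothing extra is done per file. A sketch of a names-only listing (Python used here for brevity; `os.scandir` reads directory entries without a per-file `stat()` call on most platforms, which is exactly the difference between "instant" and "deep scan"):

```python
import os
import tempfile

def list_names(path):
    """Return just the file names in `path`, touching no per-file
    metadata: no thumbnails, no media probing, no content indexing."""
    with os.scandir(path) as it:
        return sorted(entry.name for entry in it)

# Demo on a throwaway directory with two files in it.
d = tempfile.mkdtemp()
for name in ("clip1.wav", "clip2.wav"):
    open(os.path.join(d, name), "w").close()
print(list_names(d))  # → ['clip1.wav', 'clip2.wav']
```

Everything beyond this (previews, media metadata, icons resolved per file type) is optional work the file manager chooses to do before showing you anything.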


"This folder contains a single .wav file. I'll automatically switch over to an artist / album / etc view, and hide all the details you normally see"


It all depends on what hooks into the context menu. The vanilla menu is fast. But once you accrue some unpackers, file sharing, VCS tools, and garbage like Intel graphics drivers, the context menu gets unusably slow.


One part of me says Microsoft still needs to take responsibility for managing the experience.

"git-torrent is slowing your system down, do you want to disable it?"

Another says careful what you wish for: you don't want everything to be locked down like Apple, do you?
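The hypothetical "do you want to disable it?" prompt above only requires the shell to time each extension's menu contribution against a latency budget. A toy sketch of that idea (all names invented; real shell extensions are COM objects, not Python callables):

```python
import time

def build_menu(handlers, budget=0.2):
    """Run each context-menu extension's item provider, timing it.
    Extensions that blow the latency budget get flagged, so the OS
    could offer to disable them instead of letting the menu hang."""
    items, slow = [], []
    for name, provider in handlers.items():
        start = time.perf_counter()
        items.extend(provider())
        elapsed = time.perf_counter() - start
        if elapsed > budget:
            slow.append(name)
    return items, slow

def fast_provider():
    return ["Open", "Copy"]

def slow_provider():
    time.sleep(0.3)            # simulates a handler doing I/O at menu time
    return ["Extract here"]

items, slow = build_menu({"shell": fast_provider, "unpacker": slow_provider})
print(slow)  # → ['unpacker']
```

This is roughly the policy Outlook applies to add-ins, as a comment below notes; the design question is what to do with the flag, not how to measure it.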


Microsoft does this with Outlook add-ins. Many people accidentally disable them, or wonder where their Skype add-in went to when Outlook killed it because it took too long to load.


I'm using a CalDAV add-in to Outlook and it gets stopped every now and then. It seems pretty lightweight, and my guess is that Outlook is so borked that it causes the problem itself and then mistakenly attributes it to the add-in.


I drop to a command prompt quite often to do a dir /S thatfile.exe

Finding that file via Explorer search takes 10 minutes. Via dir, it somehow takes 10 seconds or less.
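For comparison, the `dir /S` approach is just a plain recursive scan, with no indexing, thumbnailing, or content search. A Python equivalent (illustrative only) of `dir /S thatfile.exe`:

```python
import os
import tempfile

def find_file(root, filename):
    """Walk the tree under `root` and return paths whose basename
    matches exactly: roughly what `dir /S thatfile.exe` does."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        if filename in files:
            hits.append(os.path.join(dirpath, filename))
    return hits

# Demo: a small nested tree with one matching file.
root = tempfile.mkdtemp()
nested = os.path.join(root, "a", "b")
os.makedirs(nested)
open(os.path.join(nested, "thatfile.exe"), "w").close()
print(find_file(root, "thatfile.exe"))  # one hit, under a/b
```

Explorer search is slower partly because it does far more per file (content filters, property handlers) whether you asked for that or not.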


I use Everything from Void Tools.


I use Total Commander on Windows, Midnight Commander and DoubleCmd on Linux. All nice and instant, with a plethora of operations. Not sure why the Explorer-type UI dominates.


Off-topic story time. So, midnight commander has a nice feature where you hit a hot key to drop into a full-screen shell of the tab which is currently selected. Back in the early 2000’s it had some sort of bug where it would sometimes drop you into a shell where PWD was from the _opposite_ tab.

And of course, the one time this bit me was when I issued ‘rm -rf *’, suddenly realized the command was taking waaaay too long, ctrl-c’ed it, and felt the blood drain from my face as I realized I had just lost 1/4 of my mp3’s.

Not a bad first text editor though. Cut my programming teeth with it.


The weird thing is that these are solved problems.

The most impressive, simple piece of software I've tried is a search tool called Everything.

I thought search was just hard and slow. Everything indexes every drive in seconds and searches instantly. I imagine it must be used by law enforcement to deal with security by obscurity.
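Everything's trick is to build a complete filename index up front (it reads the NTFS MFT directly, which is why indexing takes seconds) and then answer queries from RAM. A toy approximation using a plain tree walk instead of the MFT:

```python
import os
import tempfile

class NameIndex:
    """Toy Everything-style index: walk the tree ONCE, keep every
    path in memory, answer substring queries instantly from RAM."""
    def __init__(self, root):
        self.paths = []
        for dirpath, dirs, files in os.walk(root):
            for name in dirs + files:
                self.paths.append(os.path.join(dirpath, name))

    def search(self, substring):
        s = substring.lower()
        return [p for p in self.paths if s in os.path.basename(p).lower()]

# Demo on a throwaway tree.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "music"))
open(os.path.join(root, "music", "song.mp3"), "w").close()
idx = NameIndex(root)
print(idx.search("song"))  # the one matching path, served from memory
```

The insight is that "search is slow" usually means "search re-reads the disk every time"; paying the scan cost once up front makes every later query a memory lookup.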


I agree, I think there are a few things in the Windows file explorer that conspire against its good performance (file preview is a big factor, but recycle bin content seems to affect it too), and it does seem to get worse over time. I think there's a market now for a third party 'back to basics' explorer.


Classic Shell, but Windows 10 updates sort of broke it


7z file explorer works well for me. Now I am mostly using Linux though.


The productivity boosters for me since 1983 aren't so much speed; they're:

1. A large hi-res screen so I can see lots of context

2. Lots of disk space

3. Online documentation available

4. Protected mode operating system

5. Github

6. Collaboration with people all over the world

The productivity destroyers:

1. social media


can't comment about 4 as it was before my time doing useful work on a computer, but everything else sounds right.

> The productivity destroyers:

> 1. social media

stares at HN page


> can't comment about 4

Having a real mode operating system (DOS) means that an errant pointer in your program could (and often did) crash the operating system requiring a reboot. Worse, it would scramble your hard disk.

My usual practice was to immediately reboot on a system crash. This, of course, made for slow development.

With the advent of protected mode (DOS extenders, OS/2), this ceased being a problem. It greatly sped up development, and protected mode found many more bugs more quickly than overt crashes did - and with a protected-mode debugger it even told you where the bug was!

I shifted all development to protected mode systems, and only as the last step would port the program to DOS.


> The productivity destroyers:

> 1. social media

2. Project Managers


Also modern programming languages, right Walter? ;-P


7. The Internet itself


Apparently I use my computer differently than a lot of commenters. Because when I dust off my 1983 Apple IIe it gets REALLY slow when I try to have 50 open browser tabs, edit video, and run a few virtual machines.


Yet if you check how fast it renders a character to the screen, it will almost certainly be faster.

We've made trade-offs in the computer space: input latency and screen rendering (also in terms of latency) have suffered strongly at the hands of throughput and protocol agnosticism (USB et al.).


The latency issues are dealt with, but you have to accept the RGB LEDs that come with gaming things.


Not really. Here's what a 70's/80's PC and OS had to do to print a single character in response to user input (simplified):

1. Poll the keyboard matrix for a key press.
2. Convert the key press coordinate to ASCII.
3. Read the location of the cursor.
4. Write one byte to RAM.
5. Results will be visible next screen refresh.

A modern PC and OS would do something more like this:

1. The keyboard's microcontroller polls the keyboard matrix for a key press.
2. Convert the key press location to an event code.
3. Signal to the host USB controller that you have data to send.
4. Wait for the host to accept.
5. Transfer the data to the USB host.
6. Have the USB controller validate that the data was correctly received.
7. Begin DMA from the USB controller to a RAM buffer.
8. Wait for the RAM to be ready.
9. Transfer the data to RAM.
10. Raise an interrupt to the CPU.
11. Wait for the CPU to receive the interrupt.
12. Task-switch to the interrupt handler.
13. Decode the USB packet.
14. Pass it to the USB keyboard driver.
15. Convert the USB keyboard event to an OS event.
16. Determine which processes get to see the key press event.
17. Add the event to the process's event queue.
18. Task-switch to the process.
19. Read the new event.
20. Filter the key press through all the libraries wrapping the OS's native event system.
21. Read the location of the cursor.
22. Ask the toolkit library to draw a character.
23. Tell the windowing system to draw a character.
24. Figure out what font glyph corresponds to that character.
25. See if it's been cached; rasterize the glyph if not.
26. Draw the character to the window texture.
27. Signal to the compositor that a region of the screen needs to be redrawn.
28. Create a GPU command list.
29. Have the GPU execute the command list.
30. Page flip.
31. Results will be visible next screen refresh.

I could drag this out longer and go into more detail, but I don't really feel like it.

I'm sure people who actually work on implementing these things can find inaccuracies with this, but it should give an idea how much more work and handshaking between components is being done now than in the 70's/80's. Switching to gaming hardware isn't enough to get down to ye olde latencies.


We have 1000x-faster machines to handle that. Notepad is perfectly responsive, but most apps do crazy side gunk that interferes with typing.


Except we don't, and this really is faster on several older machines: https://danluu.com/input-lag/


That's not true: polling of USB keyboards, multi-process scheduling, and rendering through the various translation layers add latency, quite a lot actually. You really notice it if you type on a C64 today. The machine feels instant. Obviously it's too anemic to do real work on, and that's kind of my point. We traded a lot of latency for a lot of throughput in other areas.


I know, there's input lag, USB driver fiddling, the kernel queue, font processing, kerning, glyph harfbuzzing or whatever, blitting, compositing, rendering, FreeSync/G-Sync, and then waiting for the liquid crystals to deform, and so on.

Yet there are 1000 Hz USB polling rates, and optimizations for all of the above, so end-to-end lag might soon become a solved problem, and then if we're lucky it'll percolate down to consumer stuff eventually.


I think it comes down to the fact that GUIs _sell_. GUIs have visibility and appeal, they are something users can actually see, and have opinions about (right or wrong). GUIs are the ultimate bikeshed, and for many users, the lipstick IS the pig.

----

Anecdote: I can't count the number of times I have seen a team change a color, update a logo, or move an image a few pixels, resulting in happy clients/customers and managers sending a congratulatory company-wide email. Meanwhile teams solving difficult engineering problems may have garnered a quiet pat on the back, if they were lucky.


> I think it comes down to the fact that GUIs _sell_.

IMO not just that, but also that the sale happens very early - before people get a chance to discover the UI is garbage. What's worse, in work context, a lot - probably most - software is bought by people other than the end users. Which means the UI can be (and often is) a total dumpster fire, but it'll win on the market as long as it appeals to the sensibilities of the people doing the purchase.


I just remember doing CAD in the 1980's. If you want to talk slow: as in getting coffee while the computer redraws the screen slow. Some time around 1993-94, suddenly the hardware was fast enough.

I think at this point we're trading performance for a bunch of ultimately worthless bling bling. No one's added a damn thing that improves user productivity.


> changing a color, updating a logo, or moving an image a few pixels, resulted in happy clients/customers, and managers sending a congratulatory company wide email. While teams solving difficult engineering problems may have garnered a quiet pat on the back, if they were lucky.

That sounds like an extremely unhealthy business environment. It'll also leave you with just the worst engineers who cannot find a better job. A company that doesn't do this should be able to run circles around one that does.


I despise the web.

HTML was designed for static documents; it boggles my mind that things like Node.js were created. It's not a secret.

HTML techs can't even run efficiently on a cheap smartphone, which is the reason apps are needed for smartphones to be usable.

Every time I'm talking to someone for job offers, I state that I want to avoid web techs. No js, no web frameworks. I prefer industrial computing, to build things that are useful. I don't want to make another interface that will get thrown away for whatever reason.

Today the computing industry has completely migrated towards making user interfaces, UX things, fancy shiny smoothy scrolly whatnots, just to employ people who can't write SQL. Companies only want to sell attention. This is exactly what the economy of attention is about.

All I dream about is some OS, desktop or mobile, that lets the user write scripts directly. It's time you encourage users to write code. It's not that hard.


>It's time you encourage users to write code. It's not that hard.

It is. Try teaching coding to someone non-technical, especially someone who doesn't want to learn, and by the time you get them to understand what a variable is, you will fully understand that coding is not for everyone.


This.

What he suggested isn't viable in terms of productivity either. One may be a programmer but not want to spend time administering the insignificant parts of the system.

I never understood the hobbyist crowd's culture of elitism around system micro-administration.


> HTML techs can't even run efficiently on a cheap smartphone

That's largely the fault of ads. Some well placed JS stuff is lightning fast, even on mobile.


Fortunately we now have webasm, which will allow developers to write customized web browsers that will recreate the same HTML/Javascript/DOM web runtime environment that we have now, only somewhat less efficiently.


I agree with your general point, but:

> I despise the web.

The web used to be great. I think you're despising something else.

> HTML was designed for static documents, it boggles my mind that things like nodejs were created. It's not a secret.

Someone already told me we were heading this way in 2005, using JS to write apps inside web pages. It boggled my mind then, but it hasn't really boggled since. My main worry (and sadness) was that JS was such an utterly shitty programming language back then. It was something you loved to hate, writing JS functionality for web pages was almost like a boast; look at the trick I can make this document do by abusing this weird little scripting language.

But that has changed, oh boy has it changed. Almost all the warts in JS can be easily avoided today. With the addition of the `class` keyword (standardizing the already possible but hacky class-like constructs), the arrow functions, and the extreme performance increase in current engines, it's actually become one of my favourite languages to code for. But don't worry I don't use it to write bloated web apps :)

> HTML techs can't even run efficiently on a cheap smartphone, which is the reason apps are needed for smartphones to be usable.

That's not the reason why apps are "needed". It's simply because apps allow for more spying on the user. A website can only do it while it's loaded; an app can do it all the time, periodically, on boot, or whenever. They get a neat device-global tracking ID (and more than enough fingerprinting info, just in case), which makes tracking super easy and the advertisers happy. They don't have to do anything about that EU cookie-permission-banner law, because, well, apparently the EC didn't realize that apps are being used to do everything the advertisers want but websites can't. Cookies are child's play compared to the trackers they can insert at elbow depth.

And the few apps which are actually native apps because web-app performance doesn't suffice tend to genuinely be about performance, and aren't so bad in the bigger picture.

You see the same thing on the web though, all the bloat and slowness and shit is caused by ads and tracking. Normally we fight against industries that are a net negative on society. Except that ads happen to be equivalent to propaganda, and tracking happens to be equivalent to surveillance, so somehow there's not a lot of push from the powers to get rid of these things, because of how convenient it is this industry just builds the infrastructure for them. They especially like it when the tracking is sent over unsecured HTTP.

That said, there should be sufficient work for a qualified engineer writing code for industrial applications, no? Many web devs can only write PHP/JS framework code. If you know how to program industrial controllers, or have similar qualities and experience with various industrial systems, I doubt you're going to have to explain to anyone that you're not a web dev ...


And he's not even talking about software bloat. The word processor I have on my early-90's PowerBook is more responsive, and generally faster to use, than my current one running on a Core 2 Duo processor. Oh, and by the way, I was once complaining about this to a friend who's in IT, and he told me that the speed at which software runs doesn't mean anything regarding its quality. What I mean is, I was telling him how bad some new software was because it was quite a bit slower than a 10-year-older program which did the same thing, and he told me that, in software engineering, speed is never a measure of a program's quality. Is this universally accepted? Are speed and responsiveness not taken into account? I always meant to ask other people in this field, but always forgot.


> Oh, and by the way, I was once complaining about this to a friend who's in IT, and he told me how the speed in which a software runs doesn't mean anything regarding its quality.

Your friend is wrong. It's an imperfect proxy, but looking at programs that do work, speed is a good proxy for quality, because speed means someone gives a damn. There are good programs that are slow, but bad programs all tend to be bloated.

Of course "speed" is something to be evaluated in context. In a group of e.g. 3D editors, a more responsive UI suggests a better editor. A more responsive UI in general suggests a better program in general.

> this (speed) is never a measure of a program's quality. Is this universally accepted?

Universally? No. It all depends on who you ask. Companies tend to say speed isn't, but the truth is, a lot of companies today don't care about quality at all - it's not what sells software. If you ask users, you'll get mixed answers, depending on whether the software they use often is slow enough to anger them regularly.


To me (on the internet since '92), speed is 100% a measure of a program's quality. Intensive tasks get a pass (especially if they are pushed to a background queue), but UI responsiveness is definitely a measure of quality for me. Jason Fried has also written about optimizing extensively for UI speed in Basecamp (a quick Google shows an article from 2008). Speaking of which: a lot has been written about Amazon's discovery that every 100ms of latency cost them 1% of sales, from people simply walking away from the "slow" site.

Especially when you’re doing the same task “template” on a day to day basis, even 1 second per input adds up quickly.
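The "adds up quickly" claim is easy to put numbers on. A back-of-the-envelope calculation (the daily interaction count and workday figure are illustrative assumptions, not data from the comment):

```python
def time_lost_per_year(extra_seconds, inputs_per_day, workdays=230):
    """Hours per year lost to a fixed per-interaction delay."""
    return extra_seconds * inputs_per_day * workdays / 3600

# One extra second on each of an assumed 200 daily interactions:
print(round(time_lost_per_year(1.0, 200), 1))  # → 12.8 hours/year
```

A day and a half of working time per year, per person, from a single second of added latency per interaction.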


More like speed is a quality, whether you care about it or not is a different story.

In many cases I'm happy with simple and slow, as long as it's fast enough.


One reason why I'm happy not to be in IT is said bullshit. Maybe that means I am part of the problem, because if every programmer who has a problem with that leaves, you're left with just the programmers like your friend who don't see anything wrong with this.


>how many times have i typed in a search box, seen what i wanted pop up as i was typing, go to click on it, then have it disappear

Regardless of anything else, this is 100% happening to me on a regular basis. And the ironic thing is that I think it is caused by the attempt to speed up getting some results onscreen. But it’s always 500ms behind, so it “catches up” while I’m trying to move the mouse to click on something.


Firefox is notoriously bad at this: type a bookmark name into the URL bar, move the mouse onto the bookmark, search results come in, and you click something you didn't want. Gets me all the time.
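One common fix for this class of bug is to tag each query with a sequence number and discard any response that isn't for the latest query, so late-arriving results can never reshuffle the list under the pointer. A minimal sketch (all names hypothetical):

```python
class SearchBox:
    """Stale-response suppression: every request carries a sequence
    number, and only the response matching the LATEST query may
    repaint the result list."""
    def __init__(self):
        self.seq = 0
        self.displayed = None

    def type(self, query):
        self.seq += 1
        return (self.seq, query)        # token the async fetch carries

    def on_response(self, token, results):
        seq, _query = token
        if seq != self.seq:             # stale: a newer query exists
            return
        self.displayed = results

box = SearchBox()
t1 = box.type("book")
t2 = box.type("bookmarks")
box.on_response(t2, ["my bookmark"])    # newest response lands first
box.on_response(t1, ["book results"])   # stale response is ignored
print(box.displayed)  # → ['my bookmark']
```

Without the sequence check, the slow response for "book" would overwrite the results for "bookmarks" right as the user clicks.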


I think we can actually blame React and similar frameworks for the issues we see in many modern apps, including the ones mentioned in the article.

Part of the issue stems from the "strong data coupling" that's all the rage. Everything on the page should correlate at any given point in time. Add a character to a search box and the search results should be updated. The practical effect of this is that any single modification could (and often does) rewrite the contents of the entire page.

The other thing the article brings up is that developers and designers often disregard input flow. This may be partly driven by not having sufficiently dynamic tooling (Illustrator can hardly be used to design out flow patterns, for example.)

These two issues have a unifying quality: Websites must be "instagrammable", which is to say look good in single snapshots of time, and the dynamics take a serious back seat.


> any single modification could (and often does) rewrite the contents of the entire page.

I thought the entire point of react was that it _doesn’t_ rewrite the entire page (DOM diffing)?


I said that it rewrites the contents of the entire page, by which I mean ~100% of the things visible to the user may change.
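To make the distinction concrete: diffing keeps repaints cheap only when most of the tree is unchanged; if the data coupling makes every visible value change, the "minimal" patch is the whole page anyway. A toy diff over dict trees (illustrative only, not React's actual algorithm):

```python
def diff(old, new, path="root"):
    """Toy virtual-DOM diff: compare two dict trees and list the
    paths whose values changed."""
    changes = []
    for key in new:
        if isinstance(new[key], dict) and isinstance(old.get(key), dict):
            changes += diff(old[key], new[key], f"{path}.{key}")
        elif old.get(key) != new[key]:
            changes.append(f"{path}.{key}")
    return changes

# One keystroke in the search box changes every visible leaf:
before = {"header": {"query": "ca"}, "results": {"r1": "cat", "r2": "car"}}
after_ = {"header": {"query": "cat"}, "results": {"r1": "cat pics", "r2": "caterpillar"}}
print(diff(before, after_))
# → ['root.header.query', 'root.results.r1', 'root.results.r2']
```

Every leaf changed, so the diff is the entire page: the diffing machinery is working as designed, but the coupling defeats it.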


If the devs use SSR and code splitting and whatnot, load time isn't so bad. But yeah, these frameworks get abused, because everyone wants to stay competitive and make a decent salary.


Yeah, but React's popularity exists because there is demand for, and an expectation of, interactive UIs with tons of features that are taken for granted. Users expect a basic amount of interactivity and usability, which adds a ton of complexity, and React offers a good model for that.


I feel that, for a discussion on a site with the minimalistic design of HN, the demands of that sort of user are sufficiently foreign that you need a citation.

Not because I particularly doubt that expectations on the wider web are different from HN's. Just that the crowd here isn't going to be able to easily understand the people out there. Anyone who expects interaction was weeded out long ago.


I think the core point missed by some people on HN and in dev circles in general is that non-tech people don't expect shit. They accept what they're given, mostly unquestioningly, because they don't know any better and couldn't know any better. It's not their domain.

To get meaningful answers about what kind of UI people prefer, you'd have to sit a lot of them in front of several different interfaces, show them around, let them use those interfaces for a prolonged amount of time, and then - only then - ask which one they prefer. But this almost never happens in the wild, so the market is completely detached from what people want.


Uhh, no. React's popularity is because of lazy devs who only know JavaScript and want to use it everywhere. Large companies want one "unified" codebase that runs on all "platforms". It has nothing to do with interactivity or usability, because if it did, developers would write native apps with native UIs, with much better interactivity, usability, and performance.


React leverages Functional Reactive Programming concepts, with lifecycle methods and other conventions that make it easy to reason about events and their effects. Every click/scroll will have an effect on the app state and/or a network call, and I actually think any sufficiently complex app would end up with React-like patterns, or with the alternative, which is much worse: a ton of repeated code and logic.


Uhh, no. React is popular because devs HAVE to use JS & the DOM to do web development in the browser, and they want speed, and to use a library endorsed by a big company. And some of those devs (like me) prefer the unidirectional / functional paradigm.


> and they want speed

They won't get it with React. And I'm referring to both how fast the webapp runs and development speed. It gets too complicated too quickly, even for relatively small sites.


Whether or not it seems complicated depends on whether you have nailed down the mental model. It's a bit like Git in that respect.


Complexity is a very real and objective thing. When you can't 100% guarantee your program won't go into an infinite loop when you change a single line, it's too complex.


I don't know of any infinite loop problems in React. I would think they are less likely than using MVC / JQuery style patterns.


Have you done a lot of work with Hooks and useEffect? React changed the entire component lifecycle with the introduction of hooks. All sorts of weird things trigger re-renders now, including a dependent function changing identity. You have to wrap all of those in useCallback.

I should do a thorough writeup of the infinite loop issue in React.
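The loop in question isn't specific to React; it's the general pitfall of an effect that produces a "new" value it also depends on. A language-neutral simulation in Python (a render cap stands in for the browser hanging; names are invented, and this is a sketch of the pattern, not React's implementation):

```python
def render_loop(state, effect, max_renders=10):
    """Simulates the hooks pitfall: an effect that writes state it
    also depends on re-triggers itself on every render. A real app
    loops forever; here a render cap stands in for the hang."""
    renders = 0
    while renders < max_renders:
        renders += 1
        new_state = effect(state)
        if new_state == state:      # effect settled: no re-render needed
            break
        state = new_state
    return state, renders

# Bad effect: always produces a fresh object, like creating a new
# callback each render without useCallback/useMemo.
bad = lambda s: dict(s, handler=object())
_, n_bad = render_loop({"count": 0}, bad)
print(n_bad)   # → 10 (hits the cap: it would loop forever)

# Fixed effect: the output is stable, so rendering settles at once.
good = lambda s: s
_, n_good = render_loop({"count": 0}, good)
print(n_good)  # → 1
```

The fix in either world is the same: make the dependency's identity stable across renders so the settle-check can succeed.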


I've not used hooks yet, but it's on my list to learn. Thanks for the warning though!


You seem to think React is mostly (only?) for building mobile apps.

Most usage of React is for web apps. For most of those, a native UI makes no sense and has massive distribution implications.


If you spent what people paid for a PC in 1983 (literally, without any inflation) you probably wouldn't notice anything being perceptually slower.

Like the first Mac retailed for $2500 US. Go spend $2500 on a PC today, you'll have a great time.

Granted, economies of scale make this kind of a dumb argument. But it has a bit of truth to it. People are just less willing to spend as much on their machines, as well as push much more limited platforms like mobile to their limits. We should definitely deal with that as developers, don't get me wrong - but not having to deal with the optimizations they dealt with 40 years ago doesn't make me unhappy.


Not true.

I have a top-of-the-line Intel processor that's less than 2 years old (launched, not bought). A 970 Evo Pro, one of the fastest drives around. 32 GB of RAM (don't remember the speed, but it was and is supposed to be super fast).

Explorer takes a second or two to launch. The ducking start menu takes a moment and sometimes causes the entire OS to lock up for a second.

The twitter rant is spot on.

There's so much supposed value-add BS that the core usage scenarios go to shit.

And this is coming from a Product Manager. :-)

Anyway, the referencing problem is painful. I feel it often. Google Maps or Apple Maps: try to plan a vacation and mark interesting places on the map to identify the best location to stay. Yup, gotta use your own memory. Well, isn't one of the rules of UX design "don't make me think"?

Regarding OSes: storage has gotten so much faster, and CPUs haven't, that storage drivers and file systems are now the bottleneck. We need fewer layers of abstraction to compensate. The old model of "I/O is super slow" is no longer accurate.


Honestly it sounds like your problem is Windows.

I'm writing this on an AMD Phenom II, running Debian and StumpWM, that's over 10 years old. I've upgraded the hard drive to an SSD, and the memory from 8 Gb to 16 Gb (4 Gb DIMMs were very expensive when I first built it) and it's as fast as can be.

My work computer is much newer, has twice as much memory and a newer Intel processor, and I really can't tell the difference except for CPU bound tasks that run for a long time, like compiling large projects.


Have to voice my agreement. Linux is an expensive investment but so very much worth it. Each time my colleagues complain about their computers it is because of Windows. I count myself lucky to have Linux as my only desktop and the skill to maintain it. I run an ancient i5 2500k with 8GB RAM and SSD. All the games I play work fine on Steam Proton. I still have to figure out how in the world Reddit on Firefox manages to completely lock the system up, with looping audio and frozen cursor. Nothing else causes that fault.


> I still have to figure out how in the world Reddit on Firefox manages to completely lock the system up, with looping audio and frozen cursor. Nothing else causes that fault.

Fellow X220 user here... a solution for this exact problem, where the system runs out of memory and then you sit there staring and waiting while it churns until it can do stuff again, is to run earlyoom[0].

It will kill off the firefox process (or whichever is the main memory hog) early, which is also annoying but less so than having to wait minutes until you can use your computer again.

[0] https://github.com/rfjakob/earlyoom
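earlyoom's core decision is essentially "has MemAvailable dropped below a threshold?". A sketch of that check, parsing /proc/meminfo-style text (the sample figures are made up; earlyoom's actual default threshold is around 10%):

```python
def available_percent(meminfo_text):
    """Parse /proc/meminfo-style content and return MemAvailable as
    a percentage of MemTotal: the figure an early-OOM daemon watches
    so it can kill the biggest process before the system thrashes."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if rest:
            fields[key.strip()] = int(rest.split()[0])  # value in kB
    return 100.0 * fields["MemAvailable"] / fields["MemTotal"]

sample = """MemTotal:        8000000 kB
MemFree:          200000 kB
MemAvailable:     400000 kB"""
print(available_percent(sample))  # → 5.0
```

Acting at 5% available, instead of waiting for the kernel OOM killer at ~0%, is exactly the difference between losing one Firefox process and losing minutes to a frozen machine.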


My previous two laptops came with Windows installed (XP and Win7), which I happily used for a year or two until, despite my best efforts, the whole system got crudded up and slow, and at some point got a teensy bit crashy, or maybe I got a virus, and that's when I put Linux on the thing. That easily gave the device a few more years of useful life. (My current laptop also came with Windows, but I just backed up the factory image, wiped it, and installed Linux right away.)

Anyway, while every version[0] of Windows I have used has become inescapably crudded up and slower over time, on Linux, even the old laptop, the only thing that got slower over time was the web browser. Which has mostly to do with webpages becoming heavier.

[0] Actually win95 cause I can't remember if this also happened on win 3.11 and the like.


The new Reddit just does that on every device. Luckily there is still old.reddit.com and i.reddit.com


I don't feel like Linux is an expensive investment for everyone.

I am a first-year CS student. When I got my first laptop recently, I went a bit crazy and installed Debian (I had some prior experience with the command line), and it didn't work very well on the laptop. All DEs except Enlightenment (yeah, I even tried it) had lots of display-related glitches due to the cheap hardware.

Then I moved on and installed Fedora. Nothing to tweak from the CLI; just changed a few settings from the GUI, and peace of mind even on relatively obscure hardware.

Linux has been vastly simplified, and it's worth it for anyone in IT/CS-related fields.


> memory from 8 Gb to 16 Gb

2 GiB total is not a lot


Very satisfying to read about the new start menu. Every year or two I'll wonder how things are in Windows land, install a fresh copy of Windows 10, open the start menu and wait. Oh, there it is!

This is one of the things that made me ditch Windows when it came out, but I was pretty sure they would have fixed it now. Now I'm convinced Windows 10 is part of an authoritarian experiment in getting populations to gradually submit to a worse quality of life.


I have been told I have to update my Windows 7 system at work starting next year and I'm already dreading it. Too much software is Windows-only to switch to something else.

Privately it was so easy to ditch. (I still have it on dual-boot; I rarely use it, so every second or third boot it needs to update for long minutes while I wait. Meanwhile, Mint updates the kernel during operation while I barely notice at all.)


I heard you can get security updates until 2023 with some registry tinkering, but for your work computer it might not be worth the social consequences.


Actually, a computer from 1983 is still the reigning latency champion. https://danluu.com/input-lag/


That comparison's a bit ridiculous, considering how much more a modern system is doing and making possible. I think all it shows is that latencies under 200ms are widely regarded as acceptable. What latencies are observed if you run an OS of comparable simplicity to the Apple 2e's on a modern machine?


If you ran an OS that was exactly as simple as the Apple 2e's, the Apple 2e would still win.

Modern hardware introduces a significant amount of latency. It's important to differentiate throughput from latency: a modern computer would crush a 2e in throughput a million times over, maybe more, but that doesn't mean its pipeline is shorter.
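The throughput-versus-latency distinction can be made concrete with a toy pipeline model. This is pure arithmetic with made-up numbers, not a claim about any real CPU: the "modern" machine has 10x the throughput, yet each individual item takes longer end-to-end.

```python
# Toy model: a pipeline with `stages` stages, each taking `per_stage_s` seconds.
# Latency is the time for ONE item to come out the far end; throughput is how
# many items complete per second once the pipeline is full.
def pipeline(stages, per_stage_s):
    latency = stages * per_stage_s
    throughput = 1.0 / per_stage_s
    return latency, throughput

# "Old" machine: a short, slow pipeline.
lat_old, thr_old = pipeline(stages=2, per_stage_s=0.010)
# "Modern" machine: each stage is 10x faster, but the pipeline is much deeper.
lat_new, thr_new = pipeline(stages=30, per_stage_s=0.001)

print(f"old:    latency {lat_old * 1000:.0f} ms, throughput {thr_old:.0f}/s")
print(f"modern: latency {lat_new * 1000:.0f} ms, throughput {thr_new:.0f}/s")
# The modern machine wins big on throughput yet loses on end-to-end latency.
```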


Why? Can nothing be done about it short of literally going back to 90s era hardware?

I see latency as a silent killer, of sorts. For instance, if you introduce a tiny bit of mouse latency, users won't notice the additional latency, but they will sense that their mouse doesn't feel quite as good. Give them a side by side comparison, and I bet most will be able to tell you the mouse with slightly less latency feels better.

This extends to everything. Video games with lower latency appear to have better, smoother controls. Calls with less latency result in smoother, more natural conversations. Touch screens with less latency feel more natural and responsive.

(I only have anecdotal evidence of this, but I am absolutely convinced of it.)


Definitely dig this theory and feel like it might be spot-on, but do we have any linkable evidence?


Evidence? Of how computers work?


Evidence that “If you ran an OS that was exactly as simple as the Apple 2e, the Apple 2e would still win.”...


> I think all it shows is that latencies under 200ms are widely regarded as acceptable.

They are literally not. At all. You're way off. For anyone who cares about latency, you've got to be sub-50ms at least. For anyone doing generic, non-latency-sensitive work, maybe you can get away with 100ms, but that's stretching it.

200-250ms is the (purposefully built-in) latency with which an autocomplete may appear while typing. Not the latency for a single character or mouse click!

Where do you get 200ms latency anyway? That's a lot


> ... you probably wouldn't notice anything being perceptually slower

I disagree. I have such a PC (64 GB of RAM, Quadro GPU, SSD, etc.) and I absolutely do notice things being slow, even things like Word, Excel, and VS code, let alone resource-intensive professional software.


Personal machine or work machine with antivirus and enterprise spyware?


Work machine. But it's a small company, no dedicated IT staff, no anti-virus, and everyone has admin privileges to their computer.


A more expensive PC does very little to address the latency issues at play here; the problems are very much not lack of processor speed, GPU speed, or even SSD speed (most of the time).

I know from experience, the most godlike PC you can possibly build does virtually nothing to make common applications less laggy.


Modern day tools such as Slack, VS Code and other Electron & browser based apps do bring a fair amount of lag into day to day work.

The common denominator there is browser tech & I think that will improve with time. And network-delivered services like Google Maps & Wikipedia are best compared to CD and DVD-ROM based services like MapPoint and Encarta, which had their own latency and capacity challenges.

In the meantime, you can still use tools like vim for low-latency typing. And it's kind of interesting to see a Java GUI (IDEA) perform as well as it has: https://pavelfatin.com/typing-with-pleasure/


I get your point, but I don't agree on the anecdotal front. I haven't used fewer than 4c/8t and 16GB of high-speed RAM in probably 5 years, and the only "common" applications that I notice going slow on me (unless I happen to be booting older/slower hardware) are things like IDEs and absurdly large spreadsheets. Even Electron apps are snappy for me, and I haven't had issues with them (GitKraken, VS Code, and Slack are daily drivers for me).

Browser-based apps are a shitshow though, but I figure that's mostly out of anyone's control. I chalk that up to the browser being fundamentally a poor place for most applications, even ones that are tightly coupled to a server backend.


Problem is, it's perceptual. Go back and use Windows XP: it's a complete nightmare lagfest under any appreciable CPU load compared to Windows 7+ (all UI rendering was done directly by the CPU, with no GPU acceleration except for a few minor things).

I bet at the time you barely noticed though.


I will try to explain - going back to XP may not take you back far enough.

I recently read a history of early NT development, and then installed NT4 in a VM to play with, choosing a FAT disk. It is /extremely/ responsive. Much more so than the host OS, Windows 10.

The NT4 and 95 shells were tight code. They were replaced a few years later by the more flexible "Active Desktop". This was less responsive.

In later releases, Windows started to incorporate background features, such as automatic file indexing. File indexing is IO intensive and hammers your CPU cache.

When I was regularly using NT4 (years ago), I had an impression that there was some overhead caused by registry searches. If this was ever a thing, improvements in raw computing power have conquered it.

If anyone else wants to try, NT4 and VC++ cost me next to nothing on amazon. For a good editor, get microemacs. Python2.3 works. (Don't let it near an open network.)


I do recall being a luddite in upgrading from Office 2003 to 2010 (rip, '07 on Vista) and rued the day that it became permanent. It did get better though.


You'd think so, but yet here we are. Even on a modern $3,000 machine it takes 5+ seconds to open Photoshop.


That's more a statement on Adobe's quality of engineering than computing. CC is awful software and I hope their engineers are embarrassed by its deployment for the fantastic platform that is their creative tooling.


Right, that's the point. Why does Photoshop take so long to start up and be usable, even on incredibly powerful hardware?

It's hard to find an excuse, considering:

- Adobe has vast resources

- Photoshop is a mature piece of software

- It's image editing, not a complex video game (look at what something like Red Dead Redemption 2 can accomplish with every frame, @ 60FPS)


I would argue that it’s because of vast resources.

Both Adobe’s, and their customers.

At a certain level, when a graphic designer complains that Photoshop is too slow, they don’t push back against Adobe for optimizing poorly, they just buy a new computer.


The whole point of that rant, and of the supporting comments here, is that almost all software today is of such "quality of engineering".


Most user facing software is more like adobe's quality than unlike adobe's quality.


Most software surprises me with how much time it needs to start. Games are particularly slow, I guess because they need to load the entire engine and all assets before they can even show you a game menu, but even very simple applications can be slow.

On the laptop I'm typing this on, Windows Explorer often takes several seconds to open.


Games tend to be optimized for framerates so it makes sense to sacrifice load times for that. Of course there are plenty of games that are just badly optimized and could improve both framerates and load times.


Games on old machines also used to have loading screens, remember? They sometimes took quite a bit longer than, say, starting PS on a modern machine. I don't think games are a very useful comparison in this context; it's more about utility and application software.


> "People are just less willing to spend as much on their machines"

And why should they? Today's smartphones are much more powerful than the most powerful supercomputer of 1983. Computers have been powerful enough for most practical purposes for years, which means most people select on price rather than power. And then a new OS or website comes along and decides you've got plenty of power to waste on unnecessary nonsense.


The first Mac was 3.5-inch-disk based, IIRC. I remember test-driving it and was kind of shocked at the price, since it felt slower than my Commodore 64 with a hard drive (the tape drive was so slow, but cheap!) or my next computer, an Atari ST with a hard drive. Of course, disk access/read/write speed was the dominating speed factor.


I seriously doubt there is a huge difference in how fast I can access files, scan memory, or iterate through a loop, which is what has a huge impact on perceptual latency.


NVMe drives will drastically improve file access times over SATA drives. You will find them on newer, pricier machines, but if your motherboard has a slot, use it, because the drives are fairly cheap.

Going from low clocked memory to high clocked memory can cost a bit of money (last I looked, it was like a 30-50% premium going from 2666 to 3200 to 3600MHz). As well, if you're comfortable, tightening the CAS timings on your memory can see noticeable improvement in memory bound applications. I personally have measured a 25% performance increase once my memory profile for 3200 was set correctly (mostly a Ryzen thing) and just upgraded to 3600 and haven't tested, but in my larger projects with tons of in-memory code I'm noticing improvements.

Iterating over a loop can be a world of difference depending on what is happening in the loop and what vector instructions your CPU supports, and how well it is supported. As well as your CPU's clock, L1/L2 cache sizes... basically everything.
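As a small illustration of how much the loop body matters, here is the same reduction written two ways in Python (my own sketch, not from the comment): `sum()` pushing the loop into CPython's C internals stands in, loosely, for what vector instructions do for native code. Absolute numbers will vary per machine.

```python
# Same reduction two ways; which one wins depends on where the per-element
# work happens, not on the iteration count itself.
import timeit

data = list(range(100_000))

def naive_loop():
    total = 0
    for x in data:
        total += x * x
    return total

def pushed_down():
    # The loop runs inside CPython's C implementation of sum().
    return sum(x * x for x in data)

if __name__ == "__main__":
    assert naive_loop() == pushed_down()
    for fn in (naive_loop, pushed_down):
        print(fn.__name__, f"{timeit.timeit(fn, number=20):.3f}s")
```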


I have used a computer with NVMe, higher clock speed, better caches, more RAM…the works (but the computers I’ve personally owned only have had some of those ;)). They’re faster, yes, but fractionally for short latencies. Typing a character on one and waiting for it to show up is not significantly better on the other.


> People are just less willing to spend as much on their machines,

Please stop blaming the consumers, they have very little freedom of choice.

> as well as push much more limited platforms like mobile to their limits.

I don't think anyone has really pushed any recent smartphone to their limits. I haven't checked if any demoparty maybe had a smartphone compo, but if they didn't, then yeah nobody has really tried.

The C64, Amiga and early x86 PCs have been pushed to their limits though, squeezing out every drop of performance. And there still exist C64 scene weirdos that work to make these machines perform the unimaginable.

Smartphones haven't been around long enough, and have been continuously replaced by slightly better versions, so nobody has really had time to find out what those machines are capable of.

> but not having to deal with the optimizations they dealt with 40 years ago doesn't make me unhappy.

I used to have to deal with such optimizations and I totally get that. It's freeing and I occasionally have to remind myself what it means that I don't have to worry about using a megabyte more memory because machines have gigabytes. Except that a megabyte is pretty huge if you know how to use it.

But not having to deal with the optimizations also means that new developers never learn them, and they will be forgotten. And that's bad, because there's still a place for these optimizations: maybe 95% of the code doesn't matter, but for that 5% of performance-critical stuff, ... if you've only learned the framework, then you're stuck and your app's gonna suck.

It's kinda weird to optimize code nowadays though, at least if you're writing JS. It's not like optimizing C or machine code at all. If you're not measuring performance, it's 99% certain you'll waste time optimizing the wrong thing. Sometimes it feels like I'm blindly trying variations on my inner loop, because sometimes there's little rhyme or reason to what performs better (through the JIT). Tip for anyone in this situation: disable the anti-fingerprinting setting in your browser, which fuzzes the timing functions. It makes a huge difference to the accuracy and repeatability of your performance measurements. Install Chromium and use it only for that, if you worry about the security.
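The "measure, don't guess" advice translates outside the browser too. A sketch with Python's timeit as the stand-in for un-fuzzed, high-resolution timers; the two string-building variants are just illustrative candidates, not anything from the comment.

```python
# Time two candidate implementations before deciding which to keep.
import timeit

parts = ["x"] * 10_000

def concat_loop():
    out = ""
    for p in parts:
        out += p  # repeated concatenation; cost depends on interpreter details
    return out

def join_builtin():
    return "".join(parts)  # the usual recommended idiom in CPython

if __name__ == "__main__":
    # Check the variants agree before comparing their timings.
    assert concat_loop() == join_builtin()
    for fn in (concat_loop, join_builtin):
        print(fn.__name__, f"{timeit.timeit(fn, number=100):.4f}s")
```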


There are two problems with interfaces like google maps - and one exacerbates the other.

- It's not bloody obvious how they work: you randomly click on meaningless icons, trying to uncover functionality.
- Then, just as you get used to it, they change it!

My biggest feature request would be a key stroke to hide all the floating crud that is obscuring my view of the map!


"one of the things that makes me steaming mad is how the entire field of web apps ignores 100% of learned lessons from desktop apps"

But dude, DESIGN. The design. Look at those rounded corners.


Financials are to blame.

Selling a software release is a one-time payment. Selling a support subscription is recurring revenue. And if you make your software horrible enough to use without the support subscription, it is automatically immune to piracy.

As a practical example, I don't know anyone who uses the free open source WildFly release. Instead, everyone purchases JBoss with support. It's widely known that you just need the paid support if you want your company to be online more than half of the day. And as if they knew what pain they would be causing, their microservice deployment approach was named "Thorntail".


Anecdotally, we used WildFly with great success on a project or two.


Most of the arguments mentioned in the article are just bias [https://en.wikipedia.org/wiki/Rosy_retrospection] and personal preference.

Remember when software was stored on floppies? It took a while to load. And every application came with different behavior and key bindings.


No this has actually been measured by people.

https://danluu.com/input-lag/

The computers are faster, can do more stuff, and monitors have higher frame rates. But for many applications that aren't games, latency and non-responsive UIs are a growing problem.


I remember being able to type faster than the machine could keep up on an old Mac (maybe a Mac Plus?).

I couldn't type up handwritten notes reliably, because half a page in, I would fill up the buffer and characters would get dropped.


If you want to relive that experience, just use voice.google.com.


Or the desktop Outlook client.


<cough> Android Studio. I'm pretty sure code completion used to be way faster and less intrusive (pop up with suggestions comes up when you're finished typing the keyboard, decides to pick a random thing, erases your keyword).


Ouch. "finished typing the keyword" not "keyboard".

I don't think I could have typoed this, there must be a spell checker somewhere that I haven't disabled...


That input lag is true, but that is not the argument the article is presenting.

The article argues that keyboard is a better interaction hardware than mouse. Google Maps doesn't work exactly as he wanted. Popups everywhere, etc.


Last night I downloaded an app update on my handheld computing device (a phone). It took around 30 seconds to download and install the 100 MB update, on an internet connection I can use pretty much anywhere in Europe for £10/mo.

15 years ago I would have been waiting 20 minutes for a single song to download on a hard wired PC.


I've been trying to explain for years that for the past four decades the hardware guys have been surfing Moore's law while the software guys have been pissing it away...

Well, Moore's law is falling by the wayside. If they want to start doing more with less, the software guys are going to have to stop using interpreted languages, GC, passing data as JSON rather than as binary, and all that overhead that's de rigueur but that doesn't directly go toward getting the job done.


This is widely joked about, there's even a Wikipedia article: https://en.wikipedia.org/wiki/Andy_and_Bill%27s_law

"What Andy giveth, Bill taketh away"


I mostly disagree with the conclusion. GC can be very fast these days. Serialization to JSON does not really take much time. Granted, some scripting languages are terribly slow.

But the main problem seems to be a lack of a clear architecture in many systems. These systems have often accumulated so much technical debt that nobody understands why they are slow. Profiling and optimization might remove the worst offenders but usually don't improve the architecture.

Basically, in the software industry, we use the hardware gains to cover up our organizational deficits.


I don't think interpreted language or gc are inherently the issue you think they are.

You can write seven layers of lagging crap in c if you like.


Is json really that difficult to deal with?


JSON provides a lot of flexibility: it's human-readable, it has explicitly named/optional/variable-length fields, and it supports nesting. All of this comes with a cost:

- extra branching in the parsing code (the parser can no longer predict what the next field will be; fields could come in any order)

- extra memory allocations and decreased memory locality (due to variable-length/optional fields, and also the tree-like structure).

So if your data consists of a single object composed of a fixed set of little-endian integer fields, you're comparing the above costs to the cost of a single fread call* with no memory allocations.

Many other data formats provide similar flexibility, both text-based ones (XML) and binary ones (IFF, protobuf, ISOBMFF, etc.).

* Don't actually write it as a raw fread, though; you must write the endianness-decoding code (which the optimizer should discard anyway on little-endian architectures, e.g. LLVM does).
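A sketch of the comparison in Python, with the struct module standing in for the fread-into-a-fixed-struct case; the field names and values here are made up for illustration.

```python
import json
import struct

record = {"x": 1, "y": 2, "z": 3}

# JSON: named, reorderable, optional fields -- but the parser must branch on
# every byte and allocate a dict plus boxed ints.
json_bytes = json.dumps(record).encode()
decoded = json.loads(json_bytes)

# Fixed binary layout: three little-endian 32-bit ints, fixed order and
# width -- decoding is a single unpack of 12 bytes, no per-field branching.
bin_bytes = struct.pack("<iii", record["x"], record["y"], record["z"])
x, y, z = struct.unpack("<iii", bin_bytes)

print(len(json_bytes), len(bin_bytes))  # the binary form is also smaller
```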



Unfortunately, Twitter threads force the author to dumb down their main points into single, compressed sentences. It's a shame, since I like to read well-thought-out articles.

Twitter takes that away because it offers a UX that makes publishing random ideas too easy. People with low self-control will create threads like this. With hundreds of likes comes self-validation, so they keep doing it.


In 1983 virtually everything was text-based. Since moving to graphical user interfaces, a great deal of effort has gone into more visually stimulating UI, such as animations and better fonts. Not all of this should be counted as progress/innovation. We have wasted much of the HW performance we achieved over the years on baubles and trinkets :)


That's what you get for catering to people who don't care for your work one bit.

The same people who are telling me that their computers are slow are the same people who need a flashy animated button for every single action and the same people who refuse to understand that passwords are not just a formality.

To each his own.


Reminds me of this article from 12 years ago: https://hubpages.com/technology/_86_Mac_Plus_Vs_07_AMD_DualC...

Computers have gotten much faster in terms of raw speed and throughput, yet that hasn't translated into much of an improvement in basic UI interactions and general functioning.


That keyboard-centric design for GUIs (I clearly have never taken a design class) is what makes Reddit Enhancement Suite such an effective product, in my opinion. HN's interface is possibly just as effective in that it discourages me from taking too many actions; I can vote on basically every reddit comment I read but using a mouse to do it on HN represents such a massive barrier compared to keyboard navigation.


It is NOT just Web apps

I'm here trying to write a several-page document with minimal formatting in LibreOffice, using a high-spec CAD laptop/workstation -- and every damn keystroke is laggy!

My muscle-memory arrow-keys & quick moving around to edit portions of sentences, merge/split lines -- all rendered useless - because I need to wait for the cursor to catch up.

A part of my mind keeps wandering off to whether I should go setup another old DOS box and load XYWrite - which was feature-rich, always lightning-fast and never laggy and worked great. Of course, the lack of printer drivers...

In every area, the software developers just squander more of the processing power than the incredible continuing hardware advances provide.

Anyone have any advice on software that at least attempts to work closer to the metal and lets us see the performance that we should see from modern hardware (for all values of modern)?

I got out of the software industry because of this trend to building on multiple layers of squishy software, instead of requiring efficiency - this framework compiles to that pcode, talking to the other API, which gets thunked down to the other ..... where the h*!? is the hardware that does some actual computing?

It seems like it happened for the same reason that high-fire-rate rifles took over military use -- because they figured out that most troops couldn't actually shoot straight, so it worked better to just let them spray bullets in the general direction than to require & teach real skills.

Similarly, this whole morass seems designed to make it easy for mediocre programmers, and programmers that learn something more serious like Haskell are considered exceptional, while the bulk of stuff is written for


Discussed at the time (2017 that is—not 1983): https://news.ycombinator.com/item?id=15643663


When we plugged in our Acorn Electron (it had no power switch), it would be on and ready to use immediately (unless it hung). Nowadays it takes a minute for my work laptop to boot. When I'm lucky.

I also totally agree with his complaint about things disappearing when you try to click on them. Most of his rant is just about how crap Google Maps is, though.


I can say that I have yet to see ANY web app, that is fast and snappy. ANY. At some point, they all have problems.

Even many frameworks for iPhone and Android that are essentially web apps are terrible and make every app slow and miss clicks. On the latest and most powerful iPhone, no less.

If you are creating a product as a web app only, you are telling me you do not care about UI enough.

Programs have never been free of bugs or issues, but we never had the situation where every app, even though it technically works, is either sluggish, breaks somehow, or requires the user to learn intricate timings to use a simple UI.

Developers took the easy way out and used these frameworks because its simple and, they feel, good enough.

And here we are.


Sadly it's not the "easy" way. It's the "cheap" way. So we're most likely stuck with the javascript garbage.


Casey Muratori and Jonathan Blow have been bitching about this for many years, now, and they're absolutely right. But it's not just UI latency - it's everything. Modern software is, technically and morally, a five alarm tire fire.

The profession needs to actively fight Moore's law in order to keep our jobs relevant, and find more "work" to do - most of which is not only poorly engineered, but culturally destructive.

If you care about this, there's a tiny community of developers that actually care about reversing it: https://handmade.network/


That's also how I remember the XT (~8086) I got in the 90s. When I typed or clicked, the response was instant. But that was about it; sometimes the HDD didn't work for weeks because I forgot to properly "park" it before turning the computer off. So I had to boot from a floppy disk, and that was slooooow... Probably that's one core difference from today's computing: batteries are usually included, and systems can run for months or even years without maintenance and crash much more rarely. In the past, people often had to wait for the computer to work again...


danluu has some articles on this topic

Computer latency: 1977-2017 (https://danluu.com/input-lag/)

https://news.ycombinator.com/item?id=16001407


I blame two things.

1. The "release early, iterate often" culture - as it encourages half-assed software to flood the marketplace.

2. Poor or non-existent incentives for proper code maintenance.


> The "release early, iterate often" culture - as it encourages half-assed software to flood the marketplace.

That is blaming the wrong aspect of agile development. Software should fail early. It is wasteful to go through a lengthy process only to find out the final product either is incompatible with the market or someone else made a tool that already dominates the niche market.

The problem most of the time is not realising you are actually selling a prototype and what should be a proof of concept ends up in production.


Have some empathy! I've seen a trend where rants against the modern web often get upvoted lots on Hacker News. Consider this quote from the article:

> I make no secret of hating the mouse. I think it's a crime. I think it's stifling humanitys progress, a gimmick we can't get over.

Does the world's typical computer user today hate the mouse, and prefer a keyboard-only interface (CLI)? No -- in fact, command-line interfaces are less discoverable and harder to use starting out. Even as a programmer, I struggle to remember the flags to many common command-line utilities.

Sure, the author's example of a cashier's checkout console might be great as a text-only interface -- cashiers use it day-in, day-out, and can learn all the keyboard shortcuts in a day. But what about the self-checkout machines that shoppers use maybe once a week? Would you rather have every person have to learn a list of keyboard commands while navigating a two-color interface?

Does the modern web poorly serve the author, who's good enough with technology to master any UI? Sure!

But the modern web works better for the billions who otherwise would not have started using it in the first place


> But the modern web works better for the billions who otherwise would not have started using it in the first place

We need to start talking about expected utility.

For software that's used briefly and once in a blue moon, it's perhaps not worth the effort to make the UI particularly ergonomic. Most web pages fall into this category - the random e-commerce shop or pizza delivery service you're using today. It would be nice if the UI wasn't actively user-hostile, but it's not critical.

The problem is with software used regularly, for extended periods of time. Like, during a work day. A very large part of the world's population interacts with software at work. A lot of them sit in front of a small set of programs 8+ hours a day, day in, day out. For example, a word processor + e-mail program + IM + e-commerce platform manager + inventory manager. That software needs to be as ergonomic as possible, otherwise it's literally wasting people's lives (and their employers' money). Such software needs to be keyboard-operable, otherwise it's just making people suffer.

A lot of software falls into this category. If you're doing a startup that is meant to, or even conceivably can be used in a business, you probably have some full-time users. You probably want those full-time users. If so, then for the love of $deity make it more like that old DOS POS than the hip mobile-on-desktop web garbage. Otherwise you're wasting people's health, money and sanity.


Keyboard-only input is not antithetical to discoverability. You can have a menu system (including top-menu bar) with clear hotkeys displayed.


> command-line interfaces are less discoverable and harder to use, starting out. Even as a programmer, I struggle to remember the flags to many common command line utilities

Not to disagree with the general point you're making, but autocompletion of commands and flags using the Tab key is how CLIs get discoverability, and it's kind of cool.

Whenever I "don't know" I just type 'cmd -<tab><tab>' and suddenly I am presented with a list of options that I can filter by continuing to type the option I suspect I need, or tab to the one I see on screen. Then, if that requires an argument, <tab><tab> lets me select, for example, the file that is needed as the argument.


> Whenever I "don't know" I just 'cmd -<tab><tab>' and suddenly I am presented with a list of options

You assume you already know which `cmd` to type. Most users don't.


I've never seen a self checkout machine with a mouse.


mouse or touchscreen, the point still stands


If we are talking about how intuitive an interface is, I strongly disagree. I remember in the 90s seeing people learn to use a mouse that had never used a mouse before. There's a definite learning curve. A touchscreen with large enough targets and low enough parallax is a different story.


(2017)

Google Maps has improved the primary complaint here. You can now search along your route.


Not on the desktop...


It is true. Much stupid program design, and other stuff, results in slowness.

They say: type two words and push F3. Well, you can implement a telnet (or SSH) service which provides such a program.

Or, better maybe, what I thought of as an "SQL Remote Virtual Table Protocol". You could access remote data using local SQL, allowing you to cross-reference data, both within the same data source and across different ones.
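The cross-referencing part of this idea can already be approximated locally. A sketch using SQLite's ATTACH (not the commenter's hypothetical protocol, just an analogue, with two separate database files standing in for the two "data sources"):

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
users_db = os.path.join(tmp, "users.db")
orders_db = os.path.join(tmp, "orders.db")

# Two independent "data sources", each its own database file.
con = sqlite3.connect(users_db)
con.execute("CREATE TABLE users (id INTEGER, name TEXT)")
con.execute("INSERT INTO users VALUES (1, 'ada')")
con.commit()
con.close()

con = sqlite3.connect(orders_db)
con.execute("CREATE TABLE orders (user_id INTEGER, item TEXT)")
con.execute("INSERT INTO orders VALUES (1, 'keyboard')")
con.commit()
con.close()

# Local SQL cross-referencing both sources in one query.
con = sqlite3.connect(users_db)
con.execute("ATTACH DATABASE ? AS other", (orders_db,))
row = con.execute(
    "SELECT u.name, o.item"
    " FROM users u JOIN other.orders o ON u.id = o.user_id"
).fetchone()
print(row)  # -> ('ada', 'keyboard')
con.close()
```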

Of course, there is still going to be network latency regardless of what you do. But many local programs are still slow (as many comments mention), also due to doing too many things, I think. (Network latency might make it a bit slow even if you used telnet to implement the old interface now, but not as slow as HTML, which is just bad for this kind of thing.)

Modern user interfaces, I think, are also bad, and they make things slow.

I hate touch screens, and hate the mouse slightly less. Command buttons and toolbar icons are bad and the keyboard is better, I think. There are some uses for the mouse, but it is way overused.


The guy's rant is a pain to read, but he's mostly right.

Don't get all defensive: take this as a boatload of opportunities to make things better.

I woke up the other day to find my mouse broken, and believe me, on macos, it's very hard to do anything without the mouse. I had to look up all sort of crap from my phone just to find out how to reboot the thing.


> it's very hard to do anything without the mouse

Is it? I tried your example on Windows and I could shut off the computer easily (alt-F4), then I opened a browser (Windows key, typed "chrome"), navigated to your post, wrote this message and logged in without touching the mouse. I've found that you can navigate most websites without a mouse, as you can just move to links using the browser's search and then click them with ctrl-enter.

Edit: I even managed to go back and edit this message without touching the mouse.


Yes, Windows is a lot more keyboard-friendly than Apple. The Windows key / Start menu is definitely missing on the Mac.


Command-spacebar is probably what you’re looking for

And there are many ways to restart the Mac using shortcuts: https://support.apple.com/en-us/HT201236


Then why do people like mac computers? I never liked them and I grew up with them.


Consistency, better design, fashion, take your pick.


> twitter.com

I found it funny that this appeared on Twitter, a website which always slows down my browser, especially in VMs.


Some things that are drastically faster: email, copying files, booting a disk OS and/or waking instantly from sleep (though some 1983 laptops were instant-on), printing, GUIs (try using a Lisa from 1983), any kind of complex computer graphics, software downloads and installation.


There is going to be a major C64 revival when the average Joe realizes we have hit the peak in the last human invention:

The most sold computer ever in history is still alive:

New cases: https://shop.pixelwizard.eu/en/commodore-c64/cases/90/c64c-c...

New keycaps: https://www.indiegogo.com/projects/keycaps-for-your-commodor...

New software: https://csdb.dk/

It's happening boys, back to the future!


Including reading essays, now written half-paragraph by half-paragraph in twitter threads?


Exactly right. This person wants me to read his thoughts on UX but could not make it less work for me, the user, to read them?


Boot times are way faster.


Maybe boot times are faster than 1996, but not 1983.


I don’t understand why you are being downvoted. Flip the switch and the BASIC prompt was essentially instantaneous.


Boot times are faster than they were in 1986. I feel comfortable assuming they're also faster than they were in 1983.

SSDs are fast.


> "SSDs are fast."

ROM is faster. That's what microcomputers booted off in 1983.


A powered-off Apple 2 or C64 could be operational in seconds. We're nowhere close to that now.


A powered-off Dell laptop running Windows 10 is operational in seconds. I know this because that's what I use.

Whereas an IBM PC booting into DOS in 1986 took, sure, seconds, but a lot more seconds. You could read a lot of the messages as they scrolled by during boot.

To get to a BIOS configuration screen now, you need to independently research the key that will bring it up and memorize it. Then you have to frantically mash it during the whole very brief boot process, because there's only a split second during which it will actually work. It used to just be a boot message. When you saw the message, you had time to hit F12 or whatever.


If it's operational in seconds it was hibernated or suspended.

Windows now has "fast startup" enabled by default, which effectively logs the user out, kills their apps, and hibernates.

Beware if you dual boot and want to access the Windows files, or if your machine does not handle hibernation well.

Actual startup probably takes more like 20-40 seconds.


> If it's operational in seconds it was hibernated or suspended.

This is not true. Are you still using a platter hard drive? (If an SSD, have you looked up benchmarks for it?)

My ~5 year old laptop used to cold boot Windows 7 in less than 10 seconds (once I'd disabled most autostarting programs, at least). It currently cold boots Ubuntu in ~5 or so; most of that time is spent displaying the UEFI and Grub splash screens. This is made possible _almost entirely_ by a Samsung Evo; I'm looking forward to getting an M.2 drive when I replace the computer.


Are you pressing the button to start and externally timing the process of arriving at a usable desktop and have you explicitly disabled fast boot? This isn't a new feature in windows 10.

Internally my computer tells me the process takes about 5 seconds from OS start to graphical environment, but in reality there are several steps. For example, this doesn't account for the time between hitting the power button and the OS itself starting to run, entering the full-disk-encryption password, or unlocking the volume.

I would be surprised if a full restart actually took so little time. Maybe not loading a menu or unlocking a volume is sufficient to explain the difference?


I often hear hibernation is one of the great macOS features. For my part, I decided years ago that cold booting Windows is much faster than hibernating it.

I will not start to count the seconds and fight over which OS boots faster, but it is certainly much faster than it was in the nineties. Boot times are certainly one thing where modern computers have significantly improved. Everyone who compares an instant-on 8-bit machine is oversimplifying things. Try booting to, e.g., GEOS on one of those.


That's Windows. On Linux, I get this from a cold boot (on a ThinkPad T495):

    > systemd-analyze 
    Startup finished in 8.878s (firmware) + 1.666s (loader) + 1.592s (kernel) + 3.265s (userspace) = 15.403s 
    graphical.target reached after 3.176s in userspace
It's crazy that the slowest part of the whole process is the firmware stuff.


Is that wall clock time? I get 13s but actual wall clock time is more like 26 from power on to desktop. Around 30 seconds to opening a browser.


I haven't checked wall clock time; I would have to enable auto-login to eliminate pauses from user interaction.


> I would have to enable auto-login to eliminate pauses from user interaction.

Why? By the time you're logging in, booting is already finished.


To check at what point the system is responsive enough to launch a browser. So, not just calculating the boot times (we have that already), but from power on to launching a browser.


While that's not an entirely unreasonable idea, I note that launching a browser has its own startup time. Power on to opening a document in evince is faster than power on to launching a browser, and that's support for the original thesis in a way, but it also feels like a little much to count it against "boot time".


Just tested mine from totally off to login screen, took slightly over 12 seconds.


A powered off Dell laptop is not nearly as fast as a TRS-80 Model 4.


I used to use an Apple 2. A powered-on Apple 2 is practically useless - it doesn't do anything useful until much later.

My Chromebook boots in seconds, with full GUI and everything, and is usable. My Windows desktop boots in seconds and is usable. I'd say anyone claiming the Apple 2 was faster is comparing apples (ahem) to oranges. In no way did the Apple 2 provide a faster user experience for anything compared to modern machines.


That's because it doesn't do anything. We expect many orders of magnitude more functionality from a modern computer than we expected back then.


Remember when OS's had startup bleeps/bloops to entertain us while they booted? Those were the days. Now I'm completely irked waiting for an old 5400 RPM drive to boot Fedora on the occasion I need to do it.


You felt like you really got your money’s worth when that counter slowly ground its way up to “8192KB RAM OK”


The POST alone for my AT clone took over a minute. I remember watching it count up to 512k.


1992ish?


No, I definitely had upgraded to 1MB by 1992. The AT clone was an old CAD computer my dad's work had gotten rid of in the late 80s.


You need really good firmware and storage to actually see this. Otherwise it’s 25s in firmware and 35s to load the OS. If you have a LOT of RAM, that’ll slow things down as a single thread inits all the pages.


I definitely recall back in the late 90's boot times that lasted minutes were the norm, not the exception.


Sorry, but it's just a very dumb statement meant to be a hot take. So he has a select few examples that he's choosing as "everything on computers". I have to admit, I didn't start working on computers until 1985 when it took forever to load the program via floppy . . . as long as the drive wasn't drifting out of alignment; then it would either take even longer or it wouldn't load at all . . . so maybe his golden age of computers was just a year.


I think us software developers are so used to doing inherently slow things ("find all" for some function name in a huge codebase, installing packages, compiling assets, starting the iOS simulator, making API calls, waiting for an unnecessary meeting to be over so I can get back to work), that we need a good UX team to tell us when to make things faster. We just don't notice slow things anymore.


The thread started out being about how library computers were faster in the 80's and then derailed into angry Google Maps feature requests


Not quite the same topic, but I remember someone measuring keyboard to screen input latency, and that has also increased over the years.


Consider the precursor to current MacOS, NextStep v3.3.

It ran very well on a 33MHz 68040 with 32 to 64 megabytes of RAM.


A few days back, I installed the last version of Microsoft Encarta (2009) on an old PC (E5200 CPU, 4GB RAM). To my surprise, all the millennials around me were astonished at how fast and smoothly that software ran and searched for information on a typically (very) slow machine.


I feel like my 2017 mbp takes longer to log in than my 2009 mbp did. The old one broke, so I can't compare, but I remember it as if waking up would be done by the time I opened the lid. On my new one I can sit and wait for the screen to turn on.


So this is mostly about UI. I would agree that many operations take very long, e.g. accessing our web-based info system or waiting until Outlook opens or closes that window. It's strange that these operations don't happen instantly nowadays.


Searching a map of the entire world online is slower now than it was in 1983?


Lots of great points - but feels slightly ironic that he's decided to divide this into segments of 160 characters, and spread the words between the nomenclature of the Twitter UI.


This is a comparison of native applications doing simple things in 1983 vs. web apps in 2017. Not all the world’s a web app, although that does seem to get forgotten here.


Half the people in this thread keep confusing latency and speed. The other half keep explaining it to them. It's a little repetitive here :)


Code running in real mode has less overhead than long mode, but who wants to use an OS that runs in real mode besides that TempleOS guy?


The performance of Microsoft Word seems to be a universal constant across all known hardware.


I thought about reading this article, but the webpage was taking too long to load


This guy has obviously never had to wait for GI Joe to load on a Commodore 64.


But my computer is at least 1,000 times more valuable in what it can do for me, than it was in 1983. Probably even more.

Seems like a pretty good tradeoff to me.

I'll take a slow autocomplete box of all the world's knowledge over a lightning-fast lookup of my local files in a single directory any day.


I wonder whether your computer was more valuable in 1983, running the first version of Lotus 1-2-3, compared to 36 years earlier in 1947, or is more valuable now, compared to 1983, with everything it can do. Not sure how to quantify that, but it might be similar.


Define everything. A lot of things were way slower in 1983.


This guy never copied floppy discs


Except the internet it seems.


Speaking of slow and crappy UIs, what the fuck is up with the trend of "click to read more."

Youtube does this in video description and comments. If I'm scrolled down there to read comments, maybe I want to actually just read them and not click to read them?

Reddit does this. I don't know what reason I have for reading a thread of comments other than reading comments, so why do I have to keep clicking read more?

Twitter evidently does this. If I'm reading a thread, why do I need to click to read more? And after a couple dozen posts, click again. In this case, it also seems to expand the unread posts above the point where you're currently scrolled to, so you have to scroll back up and manually figure out where exactly the last post you read is and where the new stuff begins.

Many shops do this, by cutting product descriptions at a few lines so you can't read what the product is all about without clicking read more.

And they do the same thing with reviews.

I'm really tired of clicking read more over and over again in places where reading is the whole point!


There are three possibilities, although I’ll warn you now that none are great.

1. In an attempt to improve perceived performance on initial load a decision was made not to load all content in at once. In the case of a Twitter thread containing potentially hundreds of items that’s reasonable, for product descriptions less so.

2. The widget being used to display a product description on the product page itself is also used elsewhere on the site, but in a context where space is constrained to fit a grid. They got round that with a read more link.

3. Sadly the most likely for product descriptions, in an attempt to determine customer interest an arbitrary cut off was chosen for how much of a description is shown. Metrics are then tracked on which descriptions are expanded, and taken as a proxy for customer interest in those products.


> 1. In an attempt to improve perceived performance on initial load a decision was made not to load all content in at once. In the case of a Twitter thread containing potentially hundreds of items that’s reasonable, for product descriptions less so.

In an age where web pages are several MB large, bandwidths surpass 100Mbps, and GPUs alone have 11GB of RAM, we can't render more text on a screen.

This page right now contains around ~8kB of pure text, and everything is already expanded, as opposed to most other comment sites. I'm aware that formatting, layout, data modelling, messaging etc. increase that amount, and that's fine; I'm just baffled that it's possible to have a slower experience with perceivably the same amount of brain-data as 20 years ago, but with hardware that is magnitudes better.

We shouldn't lose this much to UI fluff.


The bottleneck in this case would be within the YouTube servers that have to fetch the comments. Nothing to do with UI. Since comments work at scale, it possibly takes quite a bit of resources to load them instantly with the video.


It's not directly related, sure. But it's certainly a second order effect of UI choices that has brought us here. They manage to compute (batched and not real time) personal ads tailored to you, load media for this, push a whole bunch of data from your clicks to further this tailoring, push metrics, grab libs from 40 different locations, etc, etc, etc.

If we wanted to do some work around getting more text to users and improve the reading experience of sites with at least a portion of them designed for that purpose, we certainly could.


HN doesn't show every comment if there are too many comments, FYI.


How could it be #1?

Loading text, or anything static, is faster than loading many megabytes of JavaScript frameworks.

I think web devs have collectively broken their brains if they think this.


I suspect that for a surprising number of sites it's secretly #3: they track the clicks to keep track of what you're reading on the page.


Try going to a reddit page from google nowadays... it shows, like, one comment and then “click to read more”. Oops, you were on the amp page, so now it’s loaded the exact same page but non-amp. Click to read more again. Now the long comments are still not shown fully so you have to click each one to read the whole comment. Now the thread replies are still collapsed so click those to read the replies. This is absolutely infuriating if you’re in a place with patchy internet. With HN I can load a comment page and read it top to bottom for ten minutes, with reddit I’m clicking non-stop just to see text on a text-based website!


First rule of reading reddit: edit the url and replace the leading "www" with "old".
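The rewrite described above is a literal substitution on the hostname; a minimal sketch (the example URL is made up for illustration):

```python
# Swap the "www" subdomain for "old" to get the legacy reddit interface.
url = "https://www.reddit.com/r/programming/comments/abc123/"
old = url.replace("https://www.", "https://old.", 1)  # only the leading host
print(old)  # https://old.reddit.com/r/programming/comments/abc123/
```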


I keep an account and remain logged in for the single purpose of using the "Preferences" setting that I always want the old version. So I can now just use the regular www-URLs and still get the old interface.

EDIT: Checking my preferences, there is an option on the bottom that actually is disabled and says the opposite: "Use new Reddit as my default experience". So I guess that's on by default and you have to disable it.


I have that set too but still use https://addons.mozilla.org/de/firefox/addon/old-reddit-redir... to make sure I don't accidentally share a link to that awful new interface.


What a fantastic user experience!


I also love it when there is an ellipsis menu … and once you click it there is just one option in it.

Edit: another of my favorites is a cogwheel icon for the contextual settings, another cogwheel icon on the other side of the screen for the account settings, then a hamburger menu for navigation, then an icon to display all apps (like the numpad icon), and then another menu when I click my profile picture.

Every time I'm looking for a preference it's like it's Easter! :)


Performance and money. You don't want to make the DB, cache, and network operations to fetch the comments for every user, only for those who really want to read them, which is not most of them. At the size of Google, you save millions of dollars with such a trick.


At the cost of a crappy and sluggish user experience, of course


Everything is a matter of compromise. What's important is to make them consciously, which I believe Google did in that case.


Yet they insist on auto-playing the next video when most users will not be interested in that. Serving all comments is much less data than even the initial buffer of the video. The real reason I think is that a user reading comments on youtube isn't watching ads during that time.


Auto-play means more views, which means more ads, which means more money, yes. But I don't think it's the same for comments.

Comments don't have ads. They are a feature that costs money but doesn't bring in any.


This is what mobile-first design gives you. On mobile it might be welcome. On desktop, or any big-screen environment, it is just silly.


There have been very few times that I have enjoyed clicking on one of those, even on mobile.


I think it's actually close to what I'd like without getting there. The "click to read more" essentially gives me an overview of the comments. When you click on an individual comment, it expands to give you a more detailed view.

I think what I really want is something more akin to org mode, where it's expanded/contracted by default (configurable) and when I hit a single key, it expands portions to reveal more detail. I often find when I'm in the middle of a thread, I start to think that I'm wasting my time and want to collapse that thread in some way so I can start searching for where I want to reinsert myself. I rarely want to read every single comment on one of these services. Basically, I think the intention is on the right track, but the execution is poor. Getting it right would be tricky, but I hope someone tries and sets a better bar than the one we have.


Gmail even does this on long emails! It’s insanely annoying.


I can't explain Reddit, but for Twitter it's a side-effect of how they decided to create the "simple" linear view.

Tweets are actually structured as a tree similar to Reddit comments, but while Reddit essentially displays a depth-first traversal, Twitter opted for a breadth-first traversal so it can show all the immediate replies. Almost every "show more" is for going one level deeper in that tree.

(With some caveats/custom rules, at least - if a subtree only has 1 reply it'll often be shown in-line, then there's tweet chains like this one)
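The two orderings can be sketched with a toy reply tree (the tree shape and labels here are invented for illustration, not Twitter's or Reddit's actual data model):

```python
from collections import deque

# Each node is (tweet, [replies]).
tree = ("root",
        [("A", [("A1", []), ("A2", [])]),
         ("B", [("B1", [])])])

def depth_first(node):
    """Reddit-style: follow each reply chain to its end before moving on."""
    tweet, replies = node
    order = [tweet]
    for child in replies:
        order.extend(depth_first(child))
    return order

def breadth_first(root):
    """Twitter-style: show all immediate replies first; deeper levels of
    the tree sit behind a 'show more' click."""
    queue, order = deque([root]), []
    while queue:
        tweet, replies = queue.popleft()
        order.append(tweet)
        queue.extend(replies)
    return order

depth_first(tree)    # ['root', 'A', 'A1', 'A2', 'B', 'B1']
breadth_first(tree)  # ['root', 'A', 'B', 'A1', 'A2', 'B1']
```

In the breadth-first order, A1, A2, and B1 only appear after every top-level reply, which is exactly what the repeated "show more" clicks are paging through.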


Another possibility: this destroys the "reader view" that some browsers have. Since the reader view usually skips ads, it is considered undesirable.


Is it deliberate satire that this was written as a very long series of tweets rather than as a well-structured article, like it might have been in 1983?


It reminds me of people who complain that the streets are dirty and then throw their coffee cup on the street even as they are talking.

I mean, the dude is using the worst medium possible to display and transfer data. It is also slow and barely followable/readable.

Interesting world we are living in.


> I mean the dude is using the worst medium possible to display and transfer data.

Whoa, hold your horses. Are you sure about the worst part? Imagine if this were an Instagram story or a series of Snapchat snaps. Would it be a better medium? I beg to differ.


Fully agree it is ironic but the fact that it is structured this way is the key reason we saw any of the content.

The ideal format for this is probably a text file or simple html page on a personal website. Linking that from Twitter is one tweet. Sadly it’s not going to get the same visibility and traction as a Twitter thread where each paragraph can be linked, retweeted, liked. Look at all those RTs!

Maybe the simple version would have ended up on Hacker News but more likely not — a fraction of the people would have seen it to begin with.


You might be shocked to discover that some people use Twitter as a main means of communication and don't want to host a blog or anything.


It's a thread of tweets turned into an article by a bot.


our brightest cs minds wasted making this happen...


Better this than peddling advertising, like most of them are doing.


I couldn't look past the irony of that either.


Started with what could have been an interesting premise for discussion, and then lost me when he declared mice to be intrinsically bad in a writeup about user interfaces.

It always confounds me when people assert that the number-one desirable feature in all user experience is velocity of input. As if there's nothing to be thought about, much less discovered. Just data to be entered. That may be true in certain narrow segments of interfaces, like POS software and development environments, but to pretend that all human/computer interaction fits that mold is to be willfully ignorant of the reality of personal computing.

Edit: It's been pointed out that towards the end he does acknowledge some valid use-cases for mice, but he still drastically oversimplifies the space of GUI use-cases and ignores some of the key benefits of mouse-based interaction.


> but he still drastically oversimplifies the space of GUI use-cases and ignores some of the key benefits of mouse-based interaction.

I think you've still missed the point. He's not saying that keyboards should be used most of the time, he's saying that keyboards are better for structured interfaces while mice are better for unstructured interfaces and a lot of what we consider to be unstructured interfaces are actually structured (e.g. twitter).

While I don't think he's 100% correct, I do think he raises a lot of interesting points if you can look past his rather unhelpful delivery.


> While I don't think he's 100% correct, I do think he raises a lot of interesting points

One of the things I got from this was that maybe we should do more to design structured interfaces instead of counting the number of clicks needed to transition to the page you're looking for.

Many enterprise applications (especially web based ones) are horrible to navigate, and some people spend most of their working days in them. It's terrible.


Years ago I had occasion to visit my mother at work. In her department most of the worker's time was spent in a custom Windows app that was developed specifically for them... as in exclusively for that department of that company.

The app was modal and took up the entire screen. Except it was clear that the developers had larger displays than the workers, because in order to view and use the entire page the workers had to scroll around in both X and Y using the scroll bars. Well, that and the fact that tabbing didn't move through the fields in a logical order.

I walked out and returned with a slightly larger LCD display that I bought in a local big box store and nearly started a riot. They had all been using that crappy app for years like that, and the worst pain point was due to the fact that the devs had assumed that everyone in the company had the same size displays, while their employer had refused to buy those larger displays for the 45 people that had to use it.


The Danish government had a program developed in the 90s for the job centers (the job centers are supposed to help unemployed people return to the job market; they're just not very good at it). Somewhere in the specs there was a limit on how long the loading time for a window could be.

To get around this load-time restriction, the developer simply put less information and fewer features into each window. So rather than spending 10 seconds loading one large form and filling it out, users would have to click through X number of windows, filling out part of the form in each.

I'm not sure where I'm going with this, but your comment reminded me of this case.


Different budgets probably - I've seen people kicking up an almighty fuss on a ~£40 million project about buying ~200 large monitors for one screen in the application (the application screen really did need to be huge due to various constraints).

One of our meetings probably cost ten times the cost of 200 monitors!


Those are some huge (or long) meetings.

Back of the envelope:

$400 * 200 = $80000 for the monitors.

You need to have $80000 / ($100000/12/21/8) = 1600 person-hours of meeting time to equal the cost. 100 people can meet for two days!

I feel like something is off with that calculation, but I cannot find it...
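For what it's worth, the envelope math above checks out, reusing the commenter's own assumed figures ($400 monitors, a $100k salary, 12 months, 21 working days, 8-hour days):

```python
monitors = 400 * 200            # $80,000 for 200 monitors at ~$400 each
hourly = 100_000 / 12 / 21 / 8  # ~$49.60/hour on a $100k/year salary
person_hours = monitors / hourly
print(round(person_hours))           # ~1613 person-hours of meeting time
print(round(person_hours / 100 / 8, 1))  # ≈ 2.0 eight-hour days for 100 people
```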


It was a big project - so easily 30-40 people at meetings (usually in multiple concurrent sessions). People flying in from all over the world and staying in decent hotels...

I remember seeing the bill for breakfast for one set of meetings - was nearly $5000.

Yeah - I was off with the "ten times"... but not by a huge amount ;-)


Ah, yeah, you can rapidly run up the cost if you fly to your meetings. I was thinking about an all-in-one-office situation. Good point, thanks :)


'enterprise users are too dumb to use the keyboard' is what i heard a lot and it's blatantly untrue.

'look at what the iphone does with just one button' is a useless thing to say in an enterprise app.

etc.

it just shows that enterprise app developers don't use the software they make.


100% agree. I manage two teams of developers - one builds the infrastructure for a back-office (power user) interface - the other builds the infrastructure for an end-user (casual user) interface. While both are web apps, the usage patterns are VERY different.

The power users know all the special shortcuts. Keyboard mappings, tab order, and also app-specific shortcuts (forcing look-ups to behave certain ways, quick key combos to access forms, etc). They value speed of data entry above almost everything.

The casual users know none of these things, don't care, won't remember them if we try to teach them, and place a much higher value on easy navigation and discovery via visual cues and mouse.


Lots of business programs of the sort where people spend much of their day in them should probably look and act more or less like "bad" old DOS keyboard-focused business programs. Instant responses to input, shortcut-driven. You watch someone use one of those who's been at it for even a few weeks, man, they're fast.

A very good touchscreen interface would probably be as good or better for cases in which typing isn't required, except that most touchscreen interfaces are terrible (janky, frequent missed presses, laggy enough that it's often unclear whether a press has been missed or it's just being slow, confusing navigation—see those new McDonalds ordering kiosks for all of the above) so that doesn't seem to be a realistic option unless you put some kind of UX dictator with great taste in charge of the project.


I've yet to see a graphical enterprise app that approaches 1/10 of the interface speed of a mainframe terminal app.

All app developers should be required to sit and watch an expert user interact with a mainframe interface for at least 30 minutes before they're allowed to make arch decisions.


Neither do the buyers. No one cares about the ergonomics of enterprise applications, because the users are forced to use it.


Nothing makes me feel stupider than SAP Concur. I miss our old SAP "native GUI" travel reporting.


Concur is an impressive example of how you can make a really simple thing really, really painful.


It has to be complicated so they can justify the expense. If it were as simple as it needed to be, even execs would wonder where the money is being spent; and they don't even use the app!


I was once waiting in a doctor's office about ~4-5 years ago, and sitting close to me was an older lady, perhaps around 60.

A commercial came on the TV in the waiting room for SAP, and this lady started cursing about what a piece of trash it was. I was really shocked. She was not someone I expected to even know what SAP was, but I guess she was an unfortunate user stuck with having to use it in an office somewhere.


Really? I hated S/3 with a passion. Nothing describes itself, and the applications are laid out terribly.


I also thought I disliked it. Then we switched to Concur.


>a lot of what we consider to be unstructured interfaces are actually structured

But only IF you know the structure; otherwise it's effectively unstructured. The issue with structured interfaces is that the barrier to entry is high and the discoverability of how to do things is low. That works if you are able to force people to use it, but fails when you depend on people organically wanting to join.


Also, they are very fragile in a move-fast-break-things world. If you are going to be structured, and have people depend on your structure, then you have to commit to it and support that structure. Sometimes for far, far longer than you ever expected.


I got to that point and had the same reaction you did, but finished reading the thread and realized he's actually claiming something far more ambitious and interesting than I thought he was.

Particularly when he clarified that keyboard-only doesn't just mean text-based; it can be graphical and keyboard-only. I'm trying to imagine UIs that are graphical and beautiful but work with the keyboard only, and it could actually be a very interesting world if that were popular. It's not unlike games from the old days, where the means of input were limited and purpose-driven.

For sure, as you said, there's the discovery/exploratory aspect of using a software where GUI helps a lot. But there's definitely merit in what the author is saying.


Blender is a fairly good example of visual software usable with a keyboard only. Especially in older versions there were a lot of functions you couldn't even get to with a mouse. These days everything has a way to get to it from both the mouse and keyboard.

Turns out that an interface like that is really hard to learn but also really fast once you do. Extruding a face along the normal is as easy as getting it selected somehow (which I do with the mouse, but there's plenty of keyboard based selection tools too) and hitting e, followed by z twice, and then either entering a distance on the keyboard or moving the mouse to move it a bit. Personally I love the interplay between keyboard and mouse: you can pick whichever method is most effective. For selecting specific things or moving stuff until it looks right you use the mouse, for selecting in bulk or moving stuff specific distances you use the keyboard. Or a combination. Whichever is convenient.

So I think if you can solve the discoverability problem of keyboard shortcuts it'd be a real productivity boost for power users.


Blender is a great piece of software and I love it but speaking of the keyboard shortcuts in Blender, one thing that annoys me about it is the reliance on the numpad for some of the keyboard shortcuts I would like to use. The reason those shortcuts annoy me is that I don’t have a numpad, so I am unable to use them.


True, but you can turn on "numpad emulation" and then use the normal number keys as a substitute. It's a weak substitute though: the whole point of the numpad is that those shortcuts work somewhat like directional keys, which is lost when using the normal number row.

Additionally, I don't think it allows you to substitute a period for the numpad period key, which is a rather important button in day-to-day Blender usage. Of course you can re-map any key you like, but it'd be nice to have it set up right by default.

EDIT: and of course under a recent blender version (2.5+) there's the search box if you really don't want to use your mouse ;)


No longer an issue with the latest major release. All views and manipulations can be performed just like in your favorite other 3D software.

Also, I bought a separate mechanical 10-key years ago and never looked back. It's a fantastic addition to any 40-60% keyboard.


Emulate numpad+emulate 3 button mouse (IIRC)

https://blender.stackexchange.com/questions/124/how-to-emula...


My laptop has no built-in numpad. Yet I use all the Blender numpad related keymappings and do all my graphics using a mouse / pen. I've done this at airport lounges without issues. USB numpads aren't that rare. Neither are mice.


But you could easily acquire one. I'd imagine most physical keyboards do have numpads?


Not OP, but I don't have a numpad amongst my keyboards. As my computing experience changed, I had to get used to touchtyping numbers above the letters; previously, I had almost always used the numpad.

My laptop has no number pad. I use it when I want a portable keyboard device, so an external keyboard would be irrelevant and unnecessary. Buying a device with a numpad built in would be a compromise that makes the built in keyboard even less useful, due to its non-centred location - in the unlikely event that everything else was identical.

My external keyboards have no numpad. They're ergonomic keyboards. One built for size, the other built uncompromisingly ergonomically. Ergonomic design actually speaks against number pads, since it means you have to extend your arm further to reach the mouse, and an uncompromised ergonomic design is wider than a regular keyboard since you want more separation between the two hands - so the number pad becomes an even bigger issue.

I've sometimes thought about buying an external numpad. But so far I never have.


> Ergonomic design actually speaks against number pads

Either move the mouse or the number pad to your left hand. I've left-moused in the office for years (the Yahoo! ergo team preached it when I joined). It took a bit of getting used to, and it limits your mouse choices, but it's actually pretty nice sometimes to have one hand on the mouse and the other on the number pad. Mousing with one hand in the office and the other hand at home can help with overuse injuries too (although using good technique is much better for that). Of course, that still doesn't get you a number pad on a reasonably sized laptop, but laptop keyboards are an ergo nightmare anyway; reaching over today's enormous touchpads is awful.


> Ergonomic design actually speaks against number pads, since it means you have to extend you arm further to reach the mouse

I don’t have this problem with my Kinesis Advantage, since the numpad is embedded in the main keyboard (using a mode shift/switch to access, which I have mapped to a foot pedal)


Real keyboards do, but there are far fewer of us with proper full-size IBM-style keyboards than laptops that are cut down to 13-15 inches and have more-or-less eccentric layouts.


I wouldn't say most. I rarely come across laptops that support a numpad for example.


External number pads are cheap. If someone's spending enough time using programs for which a number pad would be very helpful, it's probably worth the few dollars they cost and the tiny amount of space they take up (similar to a normal-size corded mouse—slightly larger footprint, but thinner)


Blender is a perfect example of the ideal compromise between "takes 10 clicks to do anything" and "takes 10 days to learn to do anything".

You can start in Blender ploddingly wandering through the GUI as you discover features, then streamline your workflow with keyboard shortcuts as you go. As you do so, it feels a lot like leveling up in a particularly good game.

Very few pieces of software do this well these days. It's a breath of fresh air. Especially v2.8.


I'm not understanding the aversion to the keyboard in the comments here. It's just faster. I prefer not having to reach over and find the mouse or trackpad, then wiggle around to find the cursor. The mouse is useful only when the keyboard UX is bad.

Imagine playing piano while having to take a hand off the keys and find a mouse during the song. It would be impossible to keep up. Keyboard only is speedy when learned.


I think the point is that a keyboard is better when you know what you are doing, and a mouse is more suitable for "discovery", i.e. when you don't know what you are doing.

This jibes quite well for me. When I'm working, the keyboard is almost always better, but when I'm messing around with software I'm unfamiliar with and haven't yet learned the keys for, I'll poke around with a mouse. Once I figure out my workflow, I'll probably use the keys more.


Yes and no. I discovered just fine with a keyboard, arguably better than with a mouse. Tap the Alt key, use the arrows to explore the menus.

Unlike with a mouse, the keyboard never "fell off" the active menu tree requiring me to start over.

Unlike with a mouse, the keyboard couldn't hide things until I moused-over them. Recently I spent quite a while trying to find the zoom controls on a PDF because they were transparent until I randomly waved the mouse in a corner of the screen where there were no controls.

Mouse-driven discovery sucks, IMHO/IME.


> Tap the Alt key, use the arrows to explore the menus.

You have to know that this is possible as a user, and app developers have to support it, and/or not override the default operating system handling.

This is, without a doubt, an area that we have regressed on. I remember very clearly that when I was first learning to program VB6, UI conventions were pretty standardized on Windows, and so you just did a lot of this stuff by default - you had a &File menu, and &New, &Load, &Save, and E&xit commands on it.

Some software still does this, but it is increasingly rare; in Chrome right now, the Alt key doesn't do anything, and I find that Electron apps have to go out of their way to mimic native conventions, as VS Code does.

So there's less software that follows keyboard-driven conventions, so fewer users know about those conventions, creating a vicious circle. Not to mention that the bulk of your average person's "computer" usage is poking at a touchscreen phone or tablet that has no physical keys at all, just godawful on-screen keyboards.


I think the average user did know it was possible, because the hotkeys for everything were shown in the menu, and "alt-f" to open the file menu was just how things were done. Or you could click on it with the mouse, and still see the hotkeys to get back to it faster next time.

The user-hostile pattern of nested levels of mouse-maze menu, which collapse if you stray one pixel out of the required path, absolutely infuriates me. I can't imagine who thought this was a good idea or what sort of pointing device they used.


What you're really describing is bad UI design. It's not a fault with the interaction method at all. Your whole argument that "keys are well documented" can fall down if a developer doesn't bother their hole to document their keys, or to follow a standard keyboard pattern.


I completely agree it's bad UI design, but it's also become the standard. Some modern "standard" did away with buttons that look like the buttons we spent 20 years learning the look of, scroll bars that look like the scroll bars we spent 20 years learning the look of, hotkeys with underlined letters that we spent 20 years looking for and learning to speed our interaction with frequently-used programs.

It's absolutely bad UI design, but I think it's become the rule rather than the exception. It's just "prettier", according to some jerk who never used a hotkey in his life.


I understand you now. I think this has more to do with the prevalence of "touch" than with mice particularly.

> It's just "prettier", according to some jerk

Some would say "brave" and with jerk I think you're being too kind :-)


I'm not taking sides on this one, but thought you might find this article, posted here 2 days ago, interesting: https://www.asktog.com/TOI/toi06KeyboardVMouse1.html

The key take-away was they spent a lot of money on a research project specifically about mouse vs. keyboard and found the precise opposite to be true! Surprising to say the least.

* Test subjects consistently report that keyboarding is faster than mousing.

* The stopwatch consistently proves mousing is faster than keyboarding.


Considering all the time I have wasted in recent years wrestling with text selection and cursor positioning on touchscreen devices, this discussion seems delightfully pointless: they are both so much faster than the third option that any difference between the two must be insignificant.


> The stopwatch consistently proves mousing is faster than keyboarding

Without specifying how it was tested that's meaningless. For example I've read of a test where keyboard users were slower because they had to do a find/replace operation on a file manually, moving the cursor only with arrow keys. That's like saying using the mouse is slower after forcing mouse users to type text on a virtual keyboard.

I think in most scenarios both can be fast enough that the difference doesn't matter, provided there's proper support for both.


I have no idea why you're getting downvoted.

Supplying actual data is always useful.

Especially when it contradicts a common opinion ;)


Supplying actual data is always useful. If only that article did that :).

Sometimes, articles that contradict a common opinion do so because they're simply dead wrong. This is one of those cases.


I'll admit it's not much data, and there doesn't appear to be a source. But it vaguely refers to a study, and doesn't just reflect the author's preferences.

Did Apple do a study? Did it come to the conclusion that the mouse was faster? If so, why is it dead wrong?


I don't know if Apple did a study, and I'm not particularly inclined to look hard for it, because the argumentation from the article series itself is pretty much bogus and uses some weird test setups (like https://www.asktog.com/SunWorldColumns/S02KeyboardVMouse3.ht... and the e -> | replacement game); that, coupled with my real-world experience which every day proves superiority of keyboard over mouse in structured interfaces, leads me to assign very low prior to the validity of the conclusion of that Apple study, as reported by Tog.

(The study perhaps had a more narrow set of conclusions than presented in the article. Wouldn't surprise me.)


I double-dog dare you to do any significant photo editing using only the keyboard.


This goes back to structured vs unstructured from elsewhere in the thread. Applying most of the editing tools is unstructured, but navigating the UI is structured.

As a result, I'm very fast at navigating menus and switching tools in my program of choice (GIMP), because those are rapid keypresses rather than forced mouse clicks. It's just so much faster to press Alt+I > S, type a few numbers, hit Tab a couple of times, and press Enter (well, Space usually), than it is to navigate with the mouse to the Scale Image drop-down, put my hands back on the keys to type numbers in, and then switch back to the mouse to hit OK. Ctrl+Q is much easier and faster than clicking the Selection Editor button. Alt+L > T > 9 is the quickest and simplest way of rotating the current layer; a more mouse-based UI might even refuse to give me a 90-degree option and instead force me into manual control. And of course, keeping a hand on the keyboard so it can quickly type a key or shift-key is much faster and more accurate than mousing over to the Toolbox to select a tool.


You really need to learn how to use a tablet.


Or a 2-in-1 device with a touchscreen and a pen (e.g. Surface).

The non-graphical tablets with pens are really a different type of interaction, and one that supersedes keyboard + mouse for light graphics. Using one hand with the pen for pointer input + another hand for touch input is really powerful. Moving around, zooming, or rotating things on the screen is easier done with a hand. And a wheel menu (context or otherwise) really starts to shine with a pen.

I wish more software actually supported this. I currently own such a device (a Dell), and I'm searching for software that can utilize touch+pen input to full extent, but there aren't many programs that can.


These days, I'm doing all my editing with image-magick from the commandline.
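
For the curious, a typical command-line session might look something like this (a sketch assuming ImageMagick 7's `magick` binary; older installs use `convert` instead, and the filenames are hypothetical):

```shell
# Resize to 50% and auto-rotate according to the EXIF orientation tag
magick input.jpg -auto-orient -resize 50% output.jpg

# Crop an 800x600 region starting at (100,50), then sharpen slightly
magick input.jpg -crop 800x600+100+50 -unsharp 0x1 cropped.jpg

# Batch-convert every PNG in the current directory to JPEG at quality 85
mogrify -format jpg -quality 85 *.png
```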


Pretty late but a lot of significant photo editing is done with one hand on the keyboard for shortcuts and the other hand on the wacom pen, no mouse involved.


Nobody has an aversion to the keyboard, they just disagree with the "people that use mice are idiots" crowd.


Considering that the Qwerty keyboard was deliberately designed to be slow, then yes, I do have an aversion to the keyboard.

I'm a Pom (British/Australian) living in Germany, so most keyboards in shared locations are unusable for me. Using other people's computers is a massive pain as my carefully-learned muscle memory is now actively stopping me from pushing the right keys.

I keep looking at Dvorak or a chord input device, but I know that if I get used to that then I'll have to carry one around with me always so that I can plug it in to whatever computer I'm using at that point. This doesn't seem practical.

I'd love another input option. One that was actually designed for humans to communicate fast with.


> Considering that the Qwerty keyboard was deliberately designed to be slow

No, this is a myth that doesn't become more true just because many people perpetuate it without question.

QWERTY was designed to be as fast as possible for the first data entry jobs at the first companies that adopted typewriters and went through a couple iterations based on feedback of these first customers.

Is QWERTY optimal? No. But it's good enough that spending months learning an alternative layout like Dvorak may cost you more of your lifetime than you'll ever get back by typing marginally faster (not to mention such a switch gets more costly with every keyboard shortcut you've memorised so far).

Either way if you are waiting for a better alternative to become the standard so you don't need to carry your own keyboard you're going to need a lot of patience.


> Considering that the Qwerty keyboard was deliberately designed to be slow

Hasn't that origin story been debunked?

e.g. https://www.smithsonianmag.com/arts-culture/fact-of-fiction-...


I use Dvorak on a Kinesis and typing on a regular keyboard isn’t a big deal. It’s surprising, but learning Dvorak or another keyboard doesn’t really make you worse at the original.


It did for me when I tried it.


You're lucky. I couldn't even use Vim, because any time I entered text in any other setting, my muscle memory hit ESC as soon as I was done typing. On most computerised forms, that's a bad thing. I still find myself hitting ESC when I've finished a code edit, and I'm not even using Vim any more.

I don't want to know what my muscle memory would do if it got used to a Dvorak keyboard.


QWERTY was designed to cut down on jams caused by adjacent typewriter arms, which generally corresponded to adjacent keys.

While not designed for best human use, it's not completely terrible.

AZERTY vs. QWERTY vs. QWERTZ is a different issue entirely.


Isn't it more akin to playing piano (being like using a mouse), reaching for octaves, vs. playing, say, a saxophone where you press a "meta" key to change octave (and alter breathing, but that's just a limitation of the analogy) being more like using a keyboard?

The problems I have with keyboard are usually discoverability.


Nah, they're both like keyboards. Playing piano is like typing. Changing octave with a "meta" key is like using Meta (Alt) on your keyboard.


> I'm trying to imagine UI that are graphical and beautiful but works with keyboard only, and it actually could be a very interesting world if that were popular.

Windows is mostly mouse optional. You very rarely need a mouse, it's just way more common to use one. My memory is a bit off, but I know Win 95 was fine, I think it got a bit worse in XP (a lot more things that were hard to tab to), and 10 is ok, but not great.


I do not use Windows much, but when the search box was added to the start menu (in Windows Vista I think), it was a huge navigational improvement for me.


I've done a ton of keyboard only on XP and 7 - it's mostly fine.

I would like to murder the group of fools that managed to make a GUI for a touchscreen service program that was unusable without mouse and disabled the touch unit for some functionality, but that's neither here nor there.


Full keyboard navigation capability used to be the standard for Desktop applications. It still is, if the application is well-programmed and uses native controls, but unfortunately you can no longer count on it with all those browser-based monstrosities on the desktop nowadays.


CAD/CAM software would be difficult, possibly impossible, to use keyboard-only. I had a good left-hand hotkey setup for everything I could, but there were things I had to use the mouse for that would have been slow, painful and tedious on a keyboard, and I can't really think of anything better than a mouse for those.


Anything graphical is naturally difficult to use with the keyboard only, but since you bring up CAD/CAM --- that's an example of software which is equally difficult or even impossible to use with the mouse only, especially when you want precise control over things.


My dad had a digitiser tablet for doing CAD: this big slab in front of the keyboard with a mouse-like thing, but instead of a ball or optics it had a crosshair made out of copper to precisely pinpoint where you were on the surface. There was an area that represented the screen, and around it were all manner of UI shortcuts.

I remember it was much more accurate to use than a mouse, and better interface than having toolbars all over the display.


That's a graphics tablet/digitizer and a puck.

http://www.brisk.org.uk/gt1212b/index.html

Crazy high res and absolute rather than relative which makes them very useful for some applications.


Yep, that one and another less colourful model.


So did my dad, for a very long time. But in recent years he (and my brother as well) switched to a normal mouse. I haven't asked him why (I should!), but I suspect that lack of support in more recent OS revisions, and the hardware just being harder to find, might be reasons.

I have used the digitiser a few times and it seemed much more efficient than the mouse.

Also autocad has (or at least had, last time I looked at it years ago) a very prominent prompt where to input keyboard commands (in autolisp, no less!).


Maybe the current breed of optical/laser mice is just good enough.


Very possible. Also, with everything already digitised today, there is less need to precisely input points from a piece of paper.


True. I had my setup done FPS-style: most of my commands were within finger reach of WASD. When I first learned the software, no commands were set to hotkeys, and it was a lot slower using menus or toolbar buttons.


A drawing tablet is better than a literal mouse. They are pretty cheap, have better ergonomics, and you can use them proficiently pretty much instantly.


For you, yes. I (and a lot of other people) have a disorder that makes my hands shake when the muscles are tense [0]. This means that the only way I can interact precisely is with my arm lying on the desk and only the fingers moving the mouse. This is just one example of a wide variety of problems/annoyances. To everyone out there designing user interfaces: please remember that people are not either perfectly fine or disabled in a black-and-white fashion; there is a lot of grey in between.

[0] https://en.wikipedia.org/wiki/Essential_tremor


My father has ET. I have often watched him use the mouse two-handed: one arm driving tensely (with tremor) and the second hand holding the wrist of the first (with less tremor, as it's not engaged in fine work).

I have early signs. I often wonder if I will eventually be pushed into the grey disability area the parent mentioned, more so from UI "progression" than from ET progression.


I've lived with (not that bad) ET for more or less as long as I can remember, and I'm still under 30. Nonetheless, I encounter daily "difficulties" doing basic things like typing my code on the door keypad (I miss the correct button and need to re-enter the code), or signing on the POS terminal while the delivery guy keeps it floating mid-air in front of me. Those are obviously nothing compared to real disabilities, but some of them could be easily avoided with a bit of smart UI design.


Not for CAD/CAM. I’ve tried it, but, for my money and time a mouse is way more effective for managing multiple, detailed selections.


Are you using a 3D mouse?


For you, perhaps; I find these very hard to use. Something about the hand-eye coordination really does not work for me, while a mouse is no issue at all. And I did try, because the idea feels nice.


I use a browser extension called Vimium which is an astoundingly faster way to navigate web apps via keyboard. I somewhat reluctantly picked it up in large part to deal with some RSI issues but now I couldn't go back.


For example, take a look at qutebrowser (or Vimium if you're more into getting an extension for your current browser). It supports basically 100% keyboard-based browsing.


Or Tridactyl.


All the MS Office 365 products are already this, minus the beautiful part. I only use keyboard shortcuts to interact with them.


Same, and it's really intelligent design: you hit the Alt key and it shows the next key beside each of the usual buttons, so you can keep keyboard-navigating instead of being forced to remember every shortcut. You end up memorizing them pretty quickly for the most common use cases anyway, but can still keyboard-navigate very rapidly to novel locations. I'm actually a big fan, despite my reflexive aversion to Microsoft products left over from the '90s and 2000s.


There's a big challenge in usability with keyboard-only. I don't at all dispute that keyboard is faster, once you learn how to use it with whatever application(s), but that's exactly the problem.

I heavily use the keyboard, and for certain things (mostly apps I use daily) I'm highly proficient with it, barely touching the mouse at all. But once in a while I have to work with a spreadsheet, or edit an icon, or use some other piece of software I only touch maybe a couple of times per year. I simply don't remember the shortcuts, and I certainly can't justify spending the time to (re)learn enough to be more efficient keyboard-only than with a mouse, so I just rely on the mouse.

It's hard to imagine any keyboard-only (G)UI doing a better job at making usage discoverable than what can be done with a mouse-enabled one.


The problem is, and always was, the lack of a good online help system. Apple and Microsoft did really lousy jobs with those systems. The OS should provide a full-screen display that instantly gives you a cheat sheet of the most important keyboard shortcuts, clearly readable and intelligently organized. It's mysterious to me why this is not standard in every operating system.


> to pretend that all human/computer interaction fits that mold is to be willfully ignorant of the reality of personal computing

He doesn't. He has a whole section where he points out types of software that mice are great for, even with a picture. He then describes types of software where he thinks keyboard is better (i.e. linear stuff).


Not until close to the end (right after where I stopped reading, turns out). He still has some hardline views that I think drastically oversimplify the landscape, but apparently doesn't think mice are completely useless.


Mice are sort of the simplest generic pointing device that was technologically possible 50 years ago and that people could be taught to use.

Touch screens at least will be better.

A touch screen and a keyboard based interface is the fastest generic UI. I've been using touch laptops only for years. The mouse dies hard though. It's closer to the keyboard and I'm already proficient with it. I wonder if this will be true for kids growing up now.


Even if touchscreen is faster (I'm not convinced), it is much less precise. You need huge UI elements, sparsely laid out. Standard rolldown menus are fine with mouse, awkward with touchscreen.


Add a pen to the mix and you're golden. I have a touchscreen 2-in-1 device with a pen, and I barely ever use the touchpad anymore.


> A touch screen and a keyboard based interface is the fastest generic UI. I've been using touch laptops only for years. The mouse dies hard though.

Touch is OK with laptops, but when working on a desktop computer with real monitors, you need another interface. I can't reach my screens from where I'm sitting comfortably, and if I could, I'd place them further away.


Trying to place a cursor to insert text, or trying to select text in general, can lead one to having to do breathing exercises to keep from punching the screen. I couldn't imagine doing my current job (BI developer) with a touchscreen and no mouse.


Point of sale systems are painfully slow with a mouse. Touchscreens are ok, but the old text-based POSes were ugly but super fast.

When my beta testers learn about keyboard navigation in PhotoStructure, they find it so much faster than a mouse, and it's mostly just cursor keys, so it's not hard to remember.


I'm not saying it never happens; Blender is one example I've personally experienced where input velocity would be a major drain on productivity were it not for keyboard shortcuts. But suggesting that Twitter shouldn't involve a mouse just because it's "one-dimensional" content is laughable.


I’m not a heavy twitter user so perhaps my experience is limited, but the actions I perform with the mouse are:

Scrolling, click to view replies, click to reply, click to retweet, click to like, click to view a hashtag feed, click to view a person's profile.

All of these things could be done efficiently with the keyboard (there's also no real reason not to still support the mouse for people who prefer it). I use the online checklist tool Checkvist.com, and checklists are surprisingly similar to a linear feed like Twitter. I use it almost exclusively with the keyboard, including editing, reordering, creating new items, tagging, colouring... It's super efficient and comfortable. There is no reason why you couldn't have similar navigation and interactions on Twitter. Personally, I find your outright dismissiveness of a keyboard-centric UI for Twitter to be laughable *shrug*


Can you explain why it is laughable?


For one, there's a lot more on the page than the linear feed. And then each tweet has several actions you can take on it, a drop down menu, etc. What was a single click becomes two or more keystrokes. When you factor in the other views that are even less linear - settings, editing your profile, messaging - you run into a long tail of cases that are poorly suited to a keyboard alone. This tends to be the case: a hyper-focused UI may work a little better with a keyboard than with a mouse, but a mouse does a much better job supporting the wide range of different cases out there and giving users a way to explore and discover the UI.


There are a lot of people who never use a mouse while browsing the web! That's possible with browser extensions like Vimium, which show you a keyboard shortcut on every link when you press a key. "Two or more keystrokes" is a lot faster than using the mouse with that scheme.


The things you mention as less efficient with a keyboard actually sound like plus points for a keyboard to me, personally. Especially messaging.

Besides, you can create a keyboard centric UI without removing mouse support if you really want. It’s just that keyboard should be treated as a first class interaction method, instead of an almost tacked on accessibility feature.


It wasn't just speed for me, but uninterrupted flow. WordStar is probably the best example. Doing things like adjusting formatting, printing, saving a copy, etc...didn't pull my mind away from what I was writing.

It's no longer practical because of the initial learning curve. So I'm not arguing for going back. But it did have advantages beyond just being faster.


Agreed. Velocity of input is not nothing, but I think the original premise set up in the first tweet of the thread—that modern computer software makes us wait too much—is lost by focusing on an input mechanism, which would suggest the root problem is the human's ability to provide input.

I contend my chief complaint with software, in any context, is that it makes me wait while it responds to my input, regardless of how I provided that input.

Modern software:

* Often cannot keep up with typing.

* Often cannot keep up with mouse actions to, e.g., show highlights on hover or indicate interactive elements because there are too many third-party scripts executing (or whatever).

* Takes too long to display results, again because of being too busy executing software components (trackers, ads, bloated client-side frameworks, whatever) that is not expressly related to my action. Also because server software tends to be written using inefficient languages, platforms, and frameworks that theoretically optimize for developer efficiency (arguable) at the expense of user experience. Every time I interact with a web site that is slow to respond to network requests, I am usually right to guess the server is PHP, Ruby, or similar.

* Infantilizes the user by providing a too-narrow set of actions (the curse of "mobile first"), narrowing the functionality and in many cases causing things that should be relatively easy to require lots of steps. This may be where he was going with the remainder of his thread, but I think it went off the rails, as you said.


Spot on. Maybe the author didn't notice that most users don't interact with a computer via a keyboard. They use a mouse, a touchpad or a touch screen.

I use the keyboard a lot, but I keep my mouse. Why? Cognitive load, that's why. I could remember all the shortcuts for all the apps, but I don't want to. Remembering things has a cost. Switching context has a cost. I have a limited brain budget and want to balance it between execution efficiency and solving the problem I'm working on.


One big issue with a mouse is that it's completely unusable for blind users. A touch tablet interface with a 1:1 mapping to points on the screen is significantly more accessible since you don't need visual feedback to know where you are.

Mice work for a lot of us in a lot of circumstances, but there are UI options besides keyboard-only vs. mouse.


Mouse interfaces are usable, but not optimal, for blind users, largely thanks to accessibility technologies.


The standard mouse hardware lets you move the digital pointer relative to its current position. That means you need to know where the pointer currently is, and that's 100% visual. You could set up audio cues to give location feedback, but that's still not a great solution.

If you use the same digital pointer with a drawing tablet that maps its surface to the screen, then you can set its absolute position on the screen. You now have physical feedback about your pointer's position, as well as visual.


Even in development. In fact, probably especially in development!

I've done a lot of programming where my productivity was input-constrained, so I know that such jobs do exist.

I much prefer jobs where my productivity is idea-constrained. The final code tends to be a lot more interesting, the process of creating it a lot more joyful, and the final software artifact a lot more useful.

As a bonus, I tend to be paid at least an order of magnitude more for idea-constrained code.


I don't think keyboard-only coding is even faster; I can beat top competitive programmers on speed, and I mainly use the mouse to navigate and select code. Keyboarders might be faster than me if we had to write huge amounts of boilerplate, like one line of code a second, but otherwise using a mouse shouldn't be a disadvantage.

Tip for more productive mousing: disable "mouse acceleration" ("enhance pointer precision" on Windows); acceleration makes it harder for your body to adapt to the mouse. With it off, physical space and screen space stay aligned, so I can move the pointer to wherever I'm looking in an instant, with almost no adjustment needed at the end. I learned that skill selecting and ordering units quickly in RTS games, of course, but it works just as well for selecting and manipulating code.
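A toy model shows why acceleration breaks muscle memory (the curves below are made up for illustration, not any OS's actual transfer function):

```python
# Toy pointer transfer functions. With a flat gain, a given hand motion
# always covers the same on-screen distance; with acceleration, the
# distance also depends on how fast you moved, so muscle memory can't
# predict where the pointer will land.

def flat(counts, gain=2.0):
    """Screen pixels for a hand motion of `counts`, speed-independent."""
    return counts * gain

def accelerated(counts, speed, gain=2.0, accel=0.05):
    """Effective gain grows with speed: same motion, different distances."""
    return counts * gain * (1.0 + accel * speed)

motion = 100  # the same physical hand movement each time
print(flat(motion), flat(motion))  # 200.0 200.0 -- always identical
print(accelerated(motion, speed=5), accelerated(motion, speed=40))
# a slow flick vs a fast flick land in different places: 250.0 vs 600.0
```

With the flat curve, the mapping from hand motion to screen distance is fixed, which is what lets RTS-style "look, then flick" aiming work.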


What kind of code are you writing that requires or benefits from a mouse? For selecting code I can imagine it helps. A touchscreen is just as good. But for writing it?

I haven't used a mouse to drive coding since a C++ course in high school almost 20 years ago. The only thing I use it for is selecting and copying code.

For navigating and writing code it's all emacs, tmux, terminals and unix tools.


> A touchscreen is just as good.

A touchscreen is a nightmare for manipulating small text.

> For navigating and writing code it's all emacs, tmux, terminals and unix tools.

Well, if you never leave emacs/vim and related workflows, you never know anything else.

For writing code, yes, the keyboard is best. For navigating, debugging, and other things, a TUI does not cut it. And I spend a lot more time reading and thinking about code than actually writing it.

A particular example that stands out in the Linux world is GDB's CLI interface. It sucks. Its TUI is better, but still bad compared to a good GUI.


> navigating, debugging and other things a TUI does not cut it. And I spend a lot more time reading and thinking about code than actually writing it.

as an emacs user, I find it extremely painful watching my colleagues clicking around in eclipse to navigate the code base[1]. It feels just so inefficient and slow.

[1] I assume eclipse actually has keyboard shortcuts, but they do not seem to be used by my eclipse-using colleagues.


Slow? It is definitely faster to use a mouse to click a word in a text than to use the keyboard to get to it. (I have no idea how Eclipse does it, though.)

Otherwise we would be playing RTS games with the keyboard. (I am a competitive RTS player, so I can almost instantly hit any point in the screen, which helps; a good mouse also helps a lot).


> For navigating, debugging and other things a TUI does not cut it. And I spend a lot more time reading and thinking about code than actually writing it.

Pure TUI is not optimal, but it still beats the available mouse-based interfaces. At the very least, moving around the code is much faster with incremental search than with scrolling and spotting.

I'm still waiting for a code-reading program that supports drawing on top of the code and that otherwise behaves like an infinite desk with (searchable, linkable, annotatable) code printouts on it.
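The incremental-search point above can be sketched in a few lines (a toy model, not how any real editor implements it — real isearch also handles wraparound, highlighting, case folding, etc.):

```python
# Each extra keystroke narrows the match set, so you converge on a
# location in a few keystrokes instead of scrolling and scanning.

BUFFER = [
    "def load_config(path):",
    "    with open(path) as f:",
    "        return parse(f.read())",
    "",
    "def parse(text):",
    "    return dict(line.split('=') for line in text.splitlines())",
]

def isearch(query, buffer=BUFFER):
    """Return (line_number, line) pairs matching the query typed so far."""
    return [(i, line) for i, line in enumerate(buffer) if query in line]

# As the query grows "p" -> "pa" -> "pars", the candidate set shrinks:
for q in ("p", "pa", "pars"):
    print(q, "->", [i for i, _ in isearch(q)])
```

The cost of reaching any line is a handful of keystrokes, independent of how far away it is — which is the advantage over scroll-and-spot.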


> At the very least, moving around the code is much faster with incremental search than with scrolling and spotting.

A GUI does not necessarily imply scrolling to find something.


I use it all the time to cut, copy, paste, find-and-replace a selection, and indent or dedent specific lines. I can get by just fine using, for example, vim, but I am much faster with a mouse-based interface, since selecting with a mouse is quicker and more flexible. Sure, keyboards have shortcuts for specific kinds of selections, but with a mouse I can select exactly what I want in a fraction of a second.

The only real advantage of a keyboard-only interface is the ability to automate steps, but if you feel the need to automate your code writing, it would be better to refactor the code to require less boilerplate.


LabVIEW code is something you usually don't write so much as draw with the mouse. It's common enough in research when working with hardware.

https://en.wikipedia.org/wiki/LabVIEW

addendum: It also allows you to create visual spaghetti code with the data pipes, which is much closer to real spaghetti than the text-based version.


At least an order of magnitude? That's interesting; considering that even a mediocre code jockey can make $100k a year, that must mean you make at least a million a year?


That's not a terribly unreasonable claim for a consultant on a particular job. The issue is you can't usually fill your schedule with such jobs.


That sounds perfect to me, I'd love to just work 2-3 months out of every year and call it good.

Do you happen to have any insight as to what kinds of specialties allow for these kinds of lucrative short term opportunities?

My biggest problem with my erstwhile career was that the only good opportunities I've ever heard of were only full time.


In the software world? Not sure, I mostly do niche environmental consulting, engineering, modeling, etc. and have a reputation for solving weird problems for people in a variety of industries.

Typically the lucrative work I'm talking about happens when I just negotiate a flat fee and it doesn't take me that long. Companies often like that because it limits their risk compared to my hourly rate.

But honestly most consulting in most fields should have this type of negotiating opportunity. As long as it's not a field with some sort of existing convention of hiring lots of "independent contractors" that probably should be employees.

As for time in... It takes a lot for maybe a year or so to get enough reputation that you aren't hustling for clients. But now they call me and I decide whether or not I can take them. I get a lot of last minute rush gigs which are usually the best money if I want to take them. Often I don't, and I throw out a "fuck you go away" number and then they jump at it anyway.


I'm amused by the thought that, like the famous legend about the QWERTY keyboard, the mouse is designed to slow us down and distract us from how slow and delicate the computer is. Since 1983, we are still guiding the computer through a series of tiny little steps in order to get anything done, while the real shame is that even after 35+ years, we still can't tell the computer what we want in natural language.

Of course I realize the mouse has its uses.

Since I recently started using Linux for some things, I've noticed a difference in online commentary about how to use it. In Windows, the instructions for installing a software app are many pages of pictures, with circles and arrows, and a paragraph of text for each one. When the OS changes some of its menus, the documentation is obsolete and you have to guess what you're actually supposed to do.

Instructions for Linux are a few lines of text that you copy into your terminal. Now more and more instructions for Windows are going that way: Press Windows-cmd, enter some text, press return, done.


Linux still has some of the same issues with online help, however: all the variants of Linux, the versions of those variants, the insane number of possible installation configurations, and the amount of assumed knowledge behind a lot of the Linux "tutorials".


The point is that one user interface cannot work for everything, but we hyper-focus on one anyway.

All smartphones are touchscreen now, which is a terrible form of input for the vast majority of use cases. The blind can't feel it, a sighted person can't type without looking, it doesn't work with gloves, repairs are difficult and expensive, and typing requires complex software for it to not be slow and painful. Touch screens are not meant for 100% of user input, yet that's what we're left with.

The mouse is just a poor form of touchscreen. Hopefully we can convince industry that the user's ease of use is more important than an inconvenient "standard".


I spend a lot of time in an IDE at work. I spend a lot of time in vi at work and at home. I have the keyboard shortcuts for those apps, and a few more apps I use all the time, down. But I think if I had to memorize the keyboard shortcuts for every single app I use, I'd have a fit. Sometimes it is just better to use a mouse. Sometimes it's just nice to sit back and click around (web browser).


The rat has you in its clutches. Kill the rat!


I miss how tight my Amiga used to feel, and in the electronic music sector some people still prefer dedicated hardware rather than PC/Mac for these types of reasons. Click a button and the device responds instantly. When you're doing something creative the aesthetic quality of your tools can genuinely affect the output, and a tight response just feels right, not to mention how fast your workflow can become over time using dedicated hardware that responds instantly.


This is one reason I prefer simple guitar pedals to VSTs and such. With their instant response and tight controls, pedals feel like solid tools you can build muscle memory on. When unzipping plugins and awkwardly dragging sliders with a touchpad, it feels like it's myself who's the tool.


Knob-per-function with no menu dives is what makes an instrument.

I hate software.


Why do people keep writing essays on twitter instead of writing an essay and posting it as a blog post?


Twitter works quite well for capturing thoughts you have in the moment, though of course the reader experience is awful. A blog post somehow implies more work and preparation, whereas on Twitter you can just start with a short line, then continue message by message.

I don't know why Twitter doesn't even try to improve the reader experience; threads are just a complete mess.


Twitter essays are definitely perceptually worse than blog posts.


It's where the audience is. It's working better for getting messages out & people actually reading them than any alternative, despite sucking.


because it's cool


I didn't have a computer in 1983. I wasn't born even. But Windows 98 felt faster than Macintosh and even Linux.


Today Linux can be much faster than Windows 10.


It's such BS; UI in the '80s and '90s was beyond slow by today's standards. We just expected to wait more back then.

Computer magazines used to publish comparisons of how fast editors could scroll through a text document!



