If I understand correctly, the problem is that the hash created on the client side is used to derive the encryption key before the server-side hashes are applied. Only the master password gets the extra server-side hashing.
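Roughly like this, if I've got it right (a Python sketch; the names and iteration counts are made up for illustration, not the vendor's real parameters):

    import hashlib

    def client_side(master_password: bytes, email: bytes):
        # Client derives the vault encryption key with its own PBKDF2 rounds.
        vault_key = hashlib.pbkdf2_hmac("sha256", master_password, email, 100_000)
        # One more round produces the hash that is sent to the server for login.
        auth_hash = hashlib.pbkdf2_hmac("sha256", vault_key, master_password, 1)
        return vault_key, auth_hash

    def server_side(auth_hash: bytes, salt: bytes):
        # The server's extra hashing only hardens the stored login hash; it
        # can't retroactively harden vault_key, which an attacker holding a
        # stolen encrypted vault would brute-force offline.
        return hashlib.pbkdf2_hmac("sha256", auth_hash, salt, 600_000)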
I think you might have missed his point. The gist of it is that almost all modern software is at least 100x slower than it needs to be, for no good reason.
Conventional software practices result in unnecessarily complex code that takes much longer to write and to execute than needed. The problem is so endemic that there isn't really even a good programming language available that makes it easy to use the capabilities of computers.
Fortunately there are a few pioneers trying to improve things, constructively creating solutions such as Jai. Casey is fighting against the fact that not only are industry participants unaware of the problem, but they actually fight to write bad code and ship bad software to customers because they believe that is the pinnacle of the craft.
Considering the tools developers use in general (Visual Studio, VS Code, Eclipse, IntelliJ, NetBeans, various DB browsers, etc.), application startup time doesn't seem to be an important concern for the majority.
My IDE stays open (I keep all the common projects I work on open and just switch between them as needed). I rarely reboot. Startup time is completely irrelevant.
Why not start them asynchronously? That's how Neovim does it: the editor starts instantly, and the language server integration is enabled once the language server is active.
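Conceptually something like this asyncio sketch (the sleeps are stand-ins for the real server spawn and LSP handshake, not Neovim's actual implementation):

    import asyncio

    lsp_ready = False

    async def start_language_server():
        # Stand-in for spawning and initializing a real language server.
        await asyncio.sleep(2)        # pretend the handshake takes a while
        global lsp_ready
        lsp_ready = True
        print("language server attached; diagnostics enabled")

    async def run_editor_ui():
        print("editor is up, start typing")   # instant, no waiting on the LSP
        await asyncio.sleep(5)                # the editing session

    async def main():
        asyncio.ensure_future(start_language_server())  # fire and forget
        await run_editor_ui()                 # usable immediately

    asyncio.run(main())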
I use Vim with a language server and 10 other plugins that give me almost everything JetBrains offers. The only thing JetBrains is much better at is fixing merge conflicts.
Vim still starts instantly, and I never have to wait for it to index anything.
JetBrains is amazing, but I hate waiting for it. It stops me from starting work. Maybe that's just my problem.
Doesn't really matter in my daily life if some IDE takes 0.1 or 10 seconds to start, since it's done so rarely. It might be more annoying with an editor if you're opening files often; then it starts to add up. Or if the editor is slow at editing and scrolling big files, like VSCode is.
What about the cost of putting shitty, slow software into the world?
How about having respect for your users' time? Depending on how many users you have, shaving seconds or milliseconds off your response time will save humanity hundreds, thousands, or millions of hours spent waiting for your software to do something.
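Back-of-envelope, with completely made-up traffic numbers, just to show the scale:

    # Hypothetical numbers for illustration only.
    users_per_day = 1_000_000
    seconds_saved = 2                  # e.g. 200 ms shaved off 10 interactions
    hours_per_day = users_per_day * seconds_saved / 3600
    print(f"{hours_per_day:.0f} hours of collective waiting saved per day")
    # -> 556 hours per day, roughly 200k hours a year, from one small optimization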
It depends on the product and the user/customer impact: if it doesn't matter whether it takes 200ms or 300ms, why do you care? Also, if it does matter and you can scale horizontally to improve performance, do that first instead of spending valuable developer hours.
I think it's a difference between the hacker/product mindset and the engineer mindset.
For hackers or product-focused devs, making something work is the most important aspect, whereas for engineering-focused devs, it hurts to see such large inefficiencies that are solvable.
I empathize with the engineer mindset, but definitely align more with the hacker/product mindset.
After your first sentence I thought you'd continue like this:
For a hacker, making something fast and cool is the most important aspect, whereas for engineers, making careful tradeoffs between effort and customer impact is the main focus.
I think you could swap 'hacker' and 'engineer' in your sentence and it would still make sense.
I'm not saying "engineer bad, hacker good", just that we tend to value "good" code, architecture, performance, etc. highly, and sometimes that is to our detriment.
I feel the latter is just not being able to see the big picture.
People in the latter mindset don't get that providing real value to users is what matters.
Literally everything boils down to it. Even the example of making a faster application: it's only useful in that it increases how quickly you generate user value.
At some point, once you generate enough value for your end user, the utility will outweigh a given latency problem.
Even making your server costs cheaper through optimization only matters because you can transform the savings into user value that exceeds the cost of optimizing.
So it really doesn't make sense to look down your nose at people who are "just sticking stuff they don't understand together" in a vacuum, like some tend to do.
You don't know their runway.
You don't know how concrete their business case is.
You don't know their development budget.
The moment you're making a dichotomy between yourself and "those programmers who just put shiny legos they don't understand together", you're demonstrating a lack of understanding of the bigger picture that development fits in.
Because sometimes hiring someone who has little experience outside clicking those legos together is all that allows an idea to exist.
tl;dr: A service that loads with 100 requests instead of 1 because the developer doesn't know better still generates more value for the end user than one that doesn't exist.
The even bigger picture is that we (as people who all use various services in our daily lives) end up with all these services being slow and frustrating, even though they do provide value, and the result is an overall frustrating daily experience.
The non-existence of a bad service allows a better service to come into existence. All too often a market is dominated by some company that got lucky and now has no incentive to improve.
I happen to have it on good authority that there is an endpoint lurking somewhere in our API that has a p99 latency of almost 3 full hours per request. We clearly don't respect our users' time. Or our own. Or basic software engineering practices. Sigh.
Yeah, clients will time out for sure after a few minutes. The API doesn't know that, though... it'll just sit there wasting an ungodly amount of resources and causing tonnes of noisy neighbor problems. You look in the APM traces and see one with 8 million spans that enumerates the whole database. Just silly. It's easy to be a 10x engineer in an environment where there is a lot of 0.1x engineering :(
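The generic fix, sketched below with asyncio and a stand-in run_query (not our actual stack), is to put a deadline on every handler so runaway work gets cancelled instead of enumerating the whole database:

    import asyncio

    async def run_query(query):
        # Stand-in for the real database call that can run away for hours.
        await asyncio.sleep(10_000)

    async def handle_request(query):
        try:
            # Cancel the work once it exceeds what any client would wait for.
            return await asyncio.wait_for(run_query(query), timeout=30)
        except asyncio.TimeoutError:
            return {"error": "deadline exceeded"}  # fail fast, free the resources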
Steve Ballmer made some mistakes, but after enough years went by, he realized that and changed his approach, setting up MS for success once again. He just didn't have enough time to execute on his new vision before he got ousted.