Performance is a feature, and management often doesn't care to optimize for it. If the market valued performance more, we would probably see competitive services that optimize for it, but we generally don't. I'm sure there are plenty of developers who could deliver better performance; it's just a matter of tradeoffs.
Maybe the people who care this much about performance should start competing services or a consulting firm which optimizes for that. Better yet, they could devote their efforts to helping create educational content and improved frameworks or tooling which yields more performant apps.
One issue is that caring about performance is often not visible. How does management account for or measure how annoyed people get visiting their bloated websites? How many people don't even know how fast and snappy a non-bloated website can be, because they spend all their time on Instagram, FB, and co? Even if a company does measure it somehow, via some truly well-executed A/B test, other explanations for why a user left the website might be reached for before performance.
Isn't that what the tracking stuff is supposed to track? Measure things like how 'annoyed' people get by bounce rate and whatever other relevant metrics.
Yes, but how do you determine the actual reason for a bounce? The test would need the same starting conditions and then give some users a better-performing version, or something like that. But at that point you would probably just roll out the better-performing version anyway. Maybe artificially worsen the performance and observe how the metrics change. And then it's questionable whether the same amount by which performance decreased would have the same effect in reverse if performance increased by that amount. Maybe up to a certain point? In general, probably not.
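As a rough illustration of the "artificially worsen the performance" idea (a minimal sketch, not a recommendation): assuming an Express/TypeScript stack, a middleware could bucket visitors into cohorts and delay responses for one of them, so bounce rates can later be compared per cohort. The cookie name, header, and latency value below are made up for the example.

    // Sketch: assign each visitor to a cohort and add artificial latency
    // for the "slow" group; analytics can then compare bounce rates per cohort.
    // COHORT_COOKIE and EXTRA_LATENCY_MS are illustrative names, not a real API.
    import express from "express";
    import cookieParser from "cookie-parser";

    const app = express();
    app.use(cookieParser());

    const COHORT_COOKIE = "perf_cohort";
    const EXTRA_LATENCY_MS = 500; // artificial slowdown for the test group

    app.use((req, res, next) => {
      // Stick each visitor to a cohort on their first visit.
      let cohort = req.cookies[COHORT_COOKIE];
      if (!cohort) {
        cohort = Math.random() < 0.5 ? "slow" : "control";
        res.cookie(COHORT_COOKIE, cohort);
      }
      // Expose the cohort so the analytics pipeline can join it with bounce data.
      res.setHeader("X-Perf-Cohort", cohort);
      if (cohort === "slow") {
        setTimeout(next, EXTRA_LATENCY_MS); // delay handling for the test group
      } else {
        next();
      }
    });

    app.get("/", (_req, res) => res.send("hello"));
    app.listen(3000);

Of course, deliberately slowing down real traffic is a hard sell, which is part of why this kind of experiment rarely happens.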
In general it is difficult, because changing things to perform better is usually accompanied by visual and functionality changes as well.
I doubt a company would be willing to deliberately risk losing sales by testing a worse version. A/B tests are great in theory, but in practice, to test the current slow system against a faster one, you have to do the optimization work the test is supposed to justify. That's why A/B testing is often used for quick wins, price points, or purchase flows, but rarely for the big, costly questions.
Surveys could be used to explain the bounce rate, but people who leave are one of the hardest groups to recruit feedback from. Usability tests could help with that, though.
> If the market valued performance more then we would probably see competitive services which optimize for performance, but we generally don't.
I believe there is some nuance to this due to the winner-takes-all nature of modern software services. There simply isn't a lot of choice for users, or switching is expensive, so companies don't switch and employees are forced to suffer through horrible performance.
Or switching happens for 90% of the features and is called good enough, which results in now having two systems to maintain, because the old, deprecated one still has critical edge cases depending on it…
Performance is not a feature. Decisions about performance are part of every line of code we write. Some developers make good decisions and do their job right, many others half-ass it and we end up with the crap that ships in most places today.
This “blame the managers” attitude denies the agency all developers have to do our jobs competently or not. The manager probably doesn’t ultimately care about source control or code review either, but we use them because we’re professionals and we aim to do our jobs right. Maybe a better example is security: software is secure because of developers who do their jobs right, which has nothing to do with whether or not the manager cares about security.
I can agree to a point, but it's not very scalable. Imagine if the safety of every bridge and building came down to each construction worker caring on an individual level. At some point, there need to be processes that ensure success, not just individual workers caring enough.
Secure software happens because of a culture of building secure software, or because of processes and requirements. NASA doesn't depend on individual developers "just doing the right thing"; they have strict standards.