I was recently talking at a conference and got in an argument with another speaker (not publicly) because he had a technique to improve API response time. Thing is, said technique would delay the product's time to market and make the code needlessly complicated to maintain. And I pointed out that it is probably better for the bottom line of most companies to do it the way that is slightly slower but easier to maintain and faster to get to market.
At which point he accused me of being another product shill (not a real software engineer) and shut down the debate.
Thing is, I have a computer science degree and take software engineering very seriously. I just also understand that companies have a bottom line and sometimes good is good enough.
So I ask, this "world's fastest web site"... how much did it cost to develop in time/money? How long until the return on investment? And is it going to be more successful because it is faster? Is it maintainable?
I'm guessing the answers are: too much, never, no and no.
With that said, I fully appreciate making things fast as a challenge. If his goal was to challenge himself, like some sort of game where beating the benchmarks is the only way to win, then kudos. I love doing stuff just to show I can as much as the next person.
Of course the impact of latency is going to depend on your particular circumstances but there are certainly circumstances where it can make a big impact.
I really want more evidence of the importance of performance in web systems (to justify my hobby, if for no other reason), but things from ten years ago just don’t cut it.
Can we please stop citing articles from ten years ago and cite some studies that aren’t more than two years old?
Do people have more such studies?
(Also I’d be interested in more geographically diverse results. Figures like “three seconds to abandonment” are absurd when you’re in Australia or India as I am. Most e-commerce sites—especially US/international ones—haven’t even started drawing at that point, even with the best Internet connectivity and computer performance available, simply because of latency if the site is hosted in the US only. I visited the US a couple of years ago and was amazed at just how fast the Internet was, simply due to reduced latency.)
> I really want more evidence of the importance of performance in web systems (to justify my hobby, if for no other reason), but things from ten years ago just don’t cut it.
> Can we please stop citing articles from ten years ago and cite some studies that aren’t more than two years old?
Could you explain why? New data is good, sure, but why should we disregard old data? I think instead of asking people to simply discard old data, you should point out the issues you believe the data has so that they can be addressed (with newer data if necessary).
- "In 2008 the Aberdeen Group found that a 1-second delay in page load time costs 11% fewer page views, a 16% decrease in customer satisfaction, and a 7% decrease in searches that result in new customers."
- In 2009 Microsoft Bing found that introducing 500ms extra delay translated to 1.2% less advertising revenue.
The situation has changed. The nature of people’s consumption of the Web is different now, especially with the take-up of mobile browsing in the last couple of years. (The change in the nature of consumption on that count in particular is why I would like to see results from the last year, especially if they’re segmented by device.)
People depend on the web more than they did, so they might be willing to wait for things longer—or even just be resigned to the web being slow. Or perhaps the increased number of providers means that they won’t be willing to wait as long. I don’t know.
It does seem to me that the precious few studies that there are from recent times haven’t shown as severe a drop-off as ones from ten years ago did. But then the ranges of the results are so far out of my experience in non-US countries that I don’t feel I can judge it.
What I do know is that, within the last couple of years, I have had people complain about the age of the data I cited when citing the traditional examples of Amazon and Google. Their age and the indubitable ecosystem changes since their time make me a little leery of citing them now, because after due thought, I agree with the objections.
As I say, I want performance to be a commercially valuable factor, because it would justify one of my favourite two topics in IT. I just want us to be using solid, dependable studies of the matter so that we can prove our point beyond reasonable doubt, and justify expenditure on improving performance, rather than aged studies which are open to quibble.
> People depend on the web more than they did, so they might be willing to wait for things longer—or even just be resigned to the web being slow. Or perhaps the increased number of providers means that they won’t be willing to wait as long. I don’t know.
It is hard for me to disregard those big names and their claims, especially since I have no direct experience working for any of those companies that would let me say otherwise.
On the other hand, in my purely anecdotal world of friends, family, and colleagues, I have never seen/heard of such behaviors. I doubt any of them even realize what 100ms is, let alone abandon their purchase on Amazon because of that delay. Most people can't even react in 100ms to brake their cars.
I really have to wonder if we have causation vs correlation going on.
But it's probably very reasonable to believe that people's expectations of a faster response time have only risen since 10 years ago.
If the same studies from 10 years ago were performed today, and I had to guess whether the drop in page views from slower response times would be larger or smaller, I'd guess that today you'd see even fewer page views than 10 years ago.
I don’t believe it’s reasonable to assume that, having presented an alternative view that I find plausible in other comments in this thread. I hope that people’s expectations have increased, but I don’t believe it’s a reasonable assumption.
There is lots of recent data out there. The general trend is that those findings from the mid 2000s hold even more true today. People are much more sensitive to latency, while the problem has gotten harder because the last mile now flows through congested air.
https://www.doubleclickbygoogle.com/articles/mobile-speed-ma...
Why would you expect that visitors became more accepting of slower websites?
If anything, I would imagine it to be the opposite, with connections generally being faster these days. Especially since Google's old result was about the observed effects of a 100ms slowdown, so it wasn't about generally slow websites.
I’m not trying to say it will go one way or another—just that the variables in the equation have all changed (expectations, the nature of access to the Internet, ubiquity, &c.), so results from several years ago are obsolete (I’d even call results from over a year ago obsolescent) and we need new figures for credibility.
People don’t necessarily become more demanding; maybe in the earlier days they said, “I don’t need this, it’s taking too long, I’ll just give up,” whereas now they are resigned to the Internet being slow.
Also of interest would be making a site massively faster (e.g. by an apposite service worker implementation), but A/B test on the rollout of it. Over several months, ideally.
People have a threshold [1] on how long they stay on your site, say 5 minutes. If your site is faster, they will see more pages. More pages increase the likelihood that they find something to buy/comment/...
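To make that threshold idea concrete, here is a toy model (my own illustration in Python, not taken from any study): assume a fixed per-session time budget and a fixed "think time" per page, and see how much of the page count each extra second of load time eats.

    # Toy model: a visitor has a fixed time budget per session and spends some
    # "think time" reading each page, plus however long the page takes to load.
    # All numbers here are made up for illustration.

    def pages_per_session(budget_s=300.0, think_s=15.0, load_s=1.0):
        """Rough count of pages a visitor gets through before the budget runs out."""
        return int(budget_s // (think_s + load_s))

    for load in (0.5, 2.0, 5.0, 10.0):
        print(f"{load:>4}s load -> {pages_per_session(load_s=load)} pages in a 5-minute session")

Crude, but it matches the intuition above: every second of load time comes straight out of the number of pages seen, and therefore out of the chances of a purchase or a comment.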
I tested the impact of page speed on ads about 3 years ago. The CTR impact was pretty dramatic - I can't recall the exact numbers but they were in the order of 50% CTR drop with 500ms delay.
That's obviously specific to ads, but there's clearly still some impact from page speed there.
Ya, I have worked with the "optimize at the end" guys before. Really, you are gonna refactor a 100k Java codebase to be fast when it's done? Want to know how much easier it is to plan for speed from day 1?
As always, going too far either way is a bad idea. I've worked with "optimise everything ahead of time" people before and it can be an astonishing waste of time.
You shouldn't completely ignore performance all the way through and assume you can sprinkle it on top later, but you shouldn't spend months squeezing the last few ms out of a feature that the client will change / remove when they try it anyway.
> Really, you are gonna refactor a 100k Java codebase to be fast when it's done?
Maybe? Depends where the problems are. If the problems are huge architectural ones, no, that sounds terrible. If the problem can be solved by later improving an inner loop somewhere, then sure.
Speed is part of the user experience, but again it's more complex than that.
Nobody would have used Snapchat if it didn't take photos but the site loaded in 1ms. Nobody would use it if it had loads more features and took 2 hours.
Speed is a part of it, and must be traded off with all the other parts of the user experience.
It's helpful to separate performance from the ability to perform. It's generally not the greatest idea to prematurely optimize, but you should absolutely be making decisions that give you the flexibility to optimize in the future. For instance, adding a caching layer before it's needed may be a bad idea, but organizing your data in such a way that you could easily add a caching layer may be a good idea. Making your website in C because it's fast may be a bad idea, but choosing Python because you could easily add C bindings to the slow parts may be a good idea.
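To sketch that "flexibility to optimize" point (in Python, since the comment mentions it; the function and data here are hypothetical): route reads through a single seam so a cache can be slotted in later without a refactor.

    # Hypothetical data-access seam: callers fetch through one function instead
    # of scattering lookups everywhere. No cache yet -- but because every read
    # funnels through this seam, adding one later is a local change, not a rewrite.

    def fetch_product(product_id):
        # Placeholder for the real lookup (database query, service call, ...).
        return {"id": product_id, "name": f"product-{product_id}"}

    print(fetch_product(42))

    # Later, if profiling shows this read is hot, the same seam can be wrapped
    # without touching any caller, e.g.:
    #
    #   import functools
    #   fetch_product = functools.lru_cache(maxsize=1024)(fetch_product)

The point isn't the cache itself; it's that the structure makes the optimization cheap to add the day measurements say it's needed.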
I'm currently leading a team whose primary product is the front end service for the core SaaS business that does billions of monthly impressions (hundreds of millions of uniques). This product is over 10 years old, written in Spring, and has around 400,000 lines of Java.
A couple years ago, we underwent a massive overhaul to the system and refactored the application to use our realtime streaming platform instead of MySQL+Solr.
It took some time to get it done, mainly because at the time there were about 8 years of features baked into a monolithic web service.
Ultimately, we were successful in the endeavor. It took about 2.5 years and our response times are dramatically faster (and less expensive in terms of infrastructure cost).
So yeah, you can do it. It's expensive and takes time if you wait too long. And you need full support from the business. And some companies don't survive it. YMMV.
But sometimes that fix can only be done by switching to another tech stack or framework.
That is, a complete rewrite.
When that's not possible, we have to endure the constantly crashing and slow as molasses web sites like the ones used in universities by students to enroll in courses.
I wonder if Amazon has done other/more research since and softened their stance since that gigaspaces article...
... because I'm noticing, esp. over the last 1+ years, that Amazon's page loads are getting slower and slower (Win7, FF) with all of the "crud" they are loading with each/every page. I remember when it was all a lot snappier (not like an HN page load "snappy", but pretty good!)
This is reasonable. Aside from must-reads, I don't read articles or news stories, funny posts, memes, gifs, etc. if they take way too long to load (3000ms+), or if it's TLDR (unless it's a must-read). Most of the time, stuff ends up being TLDR, and it's a quick skim to make sure I'm not missing a must-read. On websites like Washington Post, Bloomberg, etc. there's just too much shit going on. Too many ads. Too many popups. Too many "please give me your email!" requests. Shit's everywhere on the internet. I'm not about to go to Rocket Fizz for a Snickers Bar.
An acceptable 2s load time on Wifi might turn into 2 minutes on Edge.
You might say, 95% of our customers have 3G, to which I'll reply, 100% of your customers sometimes don't have 3G.
And when your page takes a minute to load, it doesn't matter what your time to market was, because no one will look at it.
When your news website is sluggish every time I'm on the train, I'll stop reading it, and do something else, like browse hacker news, which is always fast.
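A quick back-of-envelope check of the "2s on WiFi vs. 2 minutes on EDGE" figure above (the page weight and link speeds are rough assumptions, not measurements):

    # Pure transfer-time arithmetic; it ignores latency, TCP slow start and
    # rendering, all of which make the slow links even worse in practice.

    PAGE_BYTES = 3 * 1024 * 1024  # a fairly typical ad-heavy page, ~3 MB

    links_kbit_s = {
        "wifi (~20 Mbit/s)": 20_000,
        "3g (~1 Mbit/s)": 1_000,
        "edge (~200 kbit/s)": 200,
    }

    for name, kbit_s in links_kbit_s.items():
        seconds = PAGE_BYTES * 8 / (kbit_s * 1000)
        print(f"{name:>20}: ~{seconds:.0f}s just to transfer the bytes")

That comes out to roughly a second on WiFi, about 25 seconds on 3G, and around two minutes on EDGE, which is exactly the kind of gap described above.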
Exactly. I'm at my parents' house, where there is only EDGE, and only if the wind goes in the right direction. :)
The supposedly "fastest web page" took 13 seconds to download and render, and that's really good compared to the rest. HN is slightly faster, but probably due to cached assets.
Have you never tried to browse the internet on a crowded hotel wifi, or on a train, or in the mountains? When you have network latencies measured in seconds and a bandwidth of a few KB/s, one-minute page load times are nothing special.
Most people just give up in those cases and think they have no signal. But it's not true: That bandwidth would be absolutely sufficient to transfer a few tweets or a newspaper article. It's just the bloated ad-tech infested websites used to serve the content that are breaking down.
Let's imagine for a while that an engineer is designing a bridge, or an architect a new building. The companies that pay for them are in a hurry and want to cut costs as much as any other.
Do you think it would be an ethical thing to build a less secure bridge or building just for the sake of getting them out quicker and/or cheaper?
So this is how I see it with software engineering. Of all the engineering branches, we take our job the least seriously and are not good at defending our decisions or taking the required time to build our software the way it should be built. We just assume that our customers know better and have better reasons to get to market sooner, and that there is nothing we, as software engineers, can do about it.
So in a way, that guy you talked to was kind of right, because it is your responsibility to defend the need for fast, efficient and maintainable software. It is the customer's responsibility to take care of the product and plan accordingly.
> Do you think it would be an ethical thing to build a less secure bridge or building just for the sake of getting them out quicker and/or cheaper?
Yes, and we do this constantly. Nothing is ever designed to be completely safe because we'd never get anything done. Bridges aren't all designed to withstand magnitude 11 earthquakes. We arguably go too far because the costs of the final safety features could save far more lives if used for something like vaccinations, but that's not key here.
> And is it going to be more successful because it is faster?
There's a lot of research out there about the link between page performance and user retention rate. And this makes sense: if Newegg is taking forever to browse, I'll switch to Amazon, and Newegg loses out on a decent chunk of change.
So, up to a point, yes, yes they are going to be more successful because it is faster. 200ms on my broadband-connected desktop isn't that much, but Google is able to measure its impact. And that might be a second or two on my cellular-connected phone.
> Is it maintainable?
A lot of optimizations I've seen involve simplifying stuff. Fast and maintainable don't have to be at odds. I wouldn't care to guess for the whole, but, for example, do you really think using system fonts instead of embedding your own is more complicated, harder to maintain, and more work? I doubt it, and that's one of the optimizations suggested.
Now, yes, with optimizing, there is a break-even point where it's no longer worth it to push further, but it's also not necessarily obvious where that is if you're just taking it a task at a time. Keep in mind: some of this is research into effective and ineffective techniques for optimizing other websites, and evaluating which ones are maintainable (or not) for future projects. To know what to bother with and what not to bother with when implementing the rest of the codebase. If you're just worried about the next JIRA milestone, you'll be sacrificing long term gains for short term metrics.
Is it worth micro-optimizing everything before launch? Probably not.
Is it worth testing out what techniques and technologies perform well enough before launch? I've been part of the mad scramble to do some major overhauls to optimize things that were unshippably slow before launch. Building it out right the first time would've saved us a lot of effort. I'd say "probably yes."
You would be right if making things fast took a lot of resources. But most of the time it doesn't.
I have made a lot of sites fast(er) in maybe 4 hours' time. Yes, slow frameworks are slow, so you cannot change that in 4 hours. But most frameworks aren't that slow.
My work involved: rewriting a slow query used on every page, changing PNGs to JPEG and reducing the resolution, moving an event that is fired on every DOM redraw, and so on.
And every single time I was just fixing someone's lazy programming.
Of course I agree that there should be a limit to optimizations, but most of the time simple fixes will shave off seconds.
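For the PNG-to-JPEG item in that list, the fix is usually only a few lines. A minimal sketch, assuming the Pillow imaging library is available (file names made up):

    # Re-encode a large photographic PNG as a resized JPEG. For photos this
    # often cuts the file to a small fraction of its size with no visible loss.

    from PIL import Image

    def shrink_to_jpeg(src_path, dst_path, max_px=1600, quality=80):
        img = Image.open(src_path).convert("RGB")   # JPEG has no alpha channel
        img.thumbnail((max_px, max_px))             # downscale, keeping aspect ratio
        img.save(dst_path, "JPEG", quality=quality, optimize=True)

    # shrink_to_jpeg("hero-banner.png", "hero-banner.jpg")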
> And every single time I was just fixing someone's lazy programming.
Often, the kinds of mistakes you mention are not due to laziness, but instead to ignorance. Many web developers come from a graphic design or old school "webmaster" background and some never really mastered programming as a skill. They're comfortable writing and/or copy-pasting small scripts using jQuery or some favorite library for minor enhancements to a page, but struggle when it comes to building a cohesive, performant, well-designed application for the front-end.
I myself was a member of this group until about 2007-2008, when I made a concerted effort to upgrade my programming skills. I did intensive self-study of algorithms, data structures, low-level programming, functional programming, SICP (most of it) and K&R C (all of it), etc.
More recently, Coursera and EdX have been a great resource for me to continue advancing my software development skills.
An engineer's job is to solve a problem within real-world constraints. Cost, implementation time, and maintainability are all parts of the equation an engineer has to solve.
Your approach was correct. Ideally you would take into account how response time affects a site's metrics and try to balance between all constraints.
Because it's all about making compromises to manage an app and achieve its goals. You are right that time to market matters and that launching the product sooner should be the number one priority. But of all the factors that make your product worthwhile, performance is a pretty darn good one.
There are several websites on the internet today that have the potential to become great, if only they paid some heed to the performance factor. Take the Upwork freelancing site, for example: its performance was really solid when it was oDesk, its predecessor. It's basically because of the earlier oDesk goodwill that it still has a sizable userbase today. Sometime in 2013, along the lines of your thinking, some management guru must have cut corners in the development of the repolished Upwork site, and the result was an absolute performance nightmare! As a freelancer, Upwork is a third or fourth priority for me now, whereas the former oDesk was actually number one.
Another example of nightmarish performance is Quora - it has a fantastic readership that supplies solid content to the site. It's solid proof that really good content is highly valued in the online world - despite its lagging performance, people are willing to endure a site with good content, but that doesn't mean it's ideal. Quora still has a lot of potential; it could match or maybe even surpass the levels of Reddit and HN, or even Facebook and LinkedIn, if it paid heed to the performance factor, but I don't see that happening soon!
I think performance is one of the reasons, if not the main reason why WhatsApp is the leading mobile chat application.
They can handle millions of messages per second, more than any of their competitors.
The 'build slower applications much faster' mantra has some value, except when everyone can build that application in a month and the market is full of clones.