If the request you are talking about only actually gets executed once (maybe it gets cached or something), then you shouldn't be pitching the time difference anyway.
If it gets executed hundreds of times a day per person, you can say: 50 ms * 100 = 5 s versus 200 ms * 100 = 20 s. And that's just per user.
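To make that multiplication concrete, here's a minimal sketch; the 100 calls/day figure is from the comment above, and the user count is purely an illustrative assumption:

```python
# Back-of-the-envelope for the "hundreds of calls per day" case.
# 100 calls/day comes from the comment above; the user count is an
# illustrative assumption, not a measured figure.
CALLS_PER_USER_PER_DAY = 100

def daily_wait_seconds(latency_ms: float, users: int = 1) -> float:
    """Total time spent waiting per day, in seconds."""
    return latency_ms / 1000 * CALLS_PER_USER_PER_DAY * users

print(daily_wait_seconds(50))               # 5.0 s per user per day
print(daily_wait_seconds(200))              # 20.0 s per user per day
print(daily_wait_seconds(200, users=1000))  # 20000.0 s, ~5.5 hours across 1,000 users
```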
Yeah, actually doing this math tends to surprise people. We had an automated background process that took ~2 s to run, which doesn't sound bad at all considering it includes an API call over the internet. But once you multiply that by the number of backlog items we actually had at the time, 30-40 hours doesn't sound so good.
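Same math as a quick sketch; the backlog size below is a hypothetical round number, not the actual count, just chosen so that ~2 s per item lands near the 30-40 hour range:

```python
SECONDS_PER_ITEM = 2.0    # ~2 s per item, including the API call
BACKLOG_ITEMS = 60_000    # hypothetical backlog size, not the actual figure

total_hours = SECONDS_PER_ITEM * BACKLOG_ITEMS / 3600
print(f"{total_hours:.1f} hours")  # ~33.3 hours
```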
Fair enough. Unless you're talking about lag in automated trading systems/algos, where even that single "I really can't measure this" sub-second difference counts.
I mean, sure, the precision matters for HFT, but at that scale the point would be moot since the time per trade is so minuscule. Unless you hyperscale it: “on 1,000,000 trades the 50 ms difference becomes very pronounced and could cost us $z”, or something of the sort. But I still think it loses “the spirit” of the method; that's the best way I can phrase it.
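The hyperscaled version of that arithmetic, with the dollar figure left out since the comment only gives it as "$z":

```python
SAVED_MS_PER_TRADE = 50   # the 50 ms difference from above
TRADES = 1_000_000

total_hours = SAVED_MS_PER_TRADE / 1000 * TRADES / 3600
print(f"{total_hours:.1f} hours")  # ~13.9 hours of cumulative latency

# Converting that into dollars needs a value-per-millisecond assumption,
# which is exactly the "$z" left unspecified above.
```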