"So do we all have to keep reinventing these wheels, but only after a production outage?"
Lotta cynical replies, and mine is going to sound like one of them at first, but I actually mean it in a relatively deep and profound way: Time is hard. You can even see it in pure math, where Logic is all fun and everyone's having a great time being clever and making all sorts of exciting systems and inferences in those systems... and then you try to build Temporal Logic and all the pretty just goes flying out the door.
Even "what if the reply takes ten seconds" is just the beginning. By the very nature of the question itself I can infer the response is expected to be small. What if it is large? What if it might legitimately take more than ten seconds to transfer even under ideal circumstances, but you still need to know when it's not going as fast as it should? Is your entry point open to the public? How does it do with slowloris attacks [1]? What if your system simply falls behind due to lack of resources? The difference between 97% capacity and 103% capacity in your real, time-bound systems can knock your socks off in ways you'd never model in an atemporal system that ignores how long things take to happen.
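To make the distinction concrete: a per-read inactivity timeout, a total wall-clock deadline, and a size cap are three separate defenses, and "wait ten seconds for the reply" only covers one of them. Here's a minimal sketch in Python (the function names and limits are illustrative, not from any particular library):

```python
import time

def read_all(read_chunk, idle_timeout=10.0, total_deadline=60.0, max_bytes=1 << 20):
    """Read a response in chunks, enforcing three separate limits:

    - idle_timeout:   max seconds to wait for any single chunk
                      (the slowloris case: bytes trickling in forever)
    - total_deadline: max wall-clock seconds for the whole transfer
                      (a legitimately slow but still-progressing reply)
    - max_bytes:      max response size (the "what if it is large?" case)

    `read_chunk(timeout=...)` is a hypothetical callable that returns the
    next chunk of bytes, or b"" at end of stream.
    """
    start = time.monotonic()
    buf = bytearray()
    while True:
        remaining = total_deadline - (time.monotonic() - start)
        if remaining <= 0:
            raise TimeoutError("total deadline exceeded")
        # Wait at most the smaller of the idle timeout and the time left
        # on the overall deadline.
        chunk = read_chunk(timeout=min(idle_timeout, remaining))
        if not chunk:  # clean end of stream
            return bytes(buf)
        buf.extend(chunk)
        if len(buf) > max_bytes:
            raise ValueError("response too large")
```

A server that sends one byte every nine seconds defeats the idle timeout alone; only the total deadline catches it. A server that sends gigabytes quickly defeats both timeouts; only the size cap catches that.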
Programming would be grungy enough even if we didn't have these considerations, but I'm not even scratching the surface on the number of ways that adding time as a real-world consideration complexifies a ton of things. Our most common response is often just to ignore it. This is... actually often quite rational, a lot of the failure cases can be feasibly addressed by various human interventions, e.g., while writing your service to be robust to "a slow internal network" might be a good idea, there's also a sense in which the only real solution is to speed up the internal network. But still, time is always sitting there crufting things up.
One of my favorites is the implicit dependency graph you accidentally start creating once your business systems guys start doing "daily processes" of this and that. We're going to do a daily process to run the bills, but that depends on all four of the daily dumps that feed the billing process having finished first. By the way, did you check that the dumps are actually done, and not still in progress as you're trying to use them? And those four daily dumps each have some other daily processes behind them, and if you're not very careful you'll create loops in those processes, which introduce all sorts of other problems... in the end, a set of processes that in perfect atemporal logic land wouldn't be too difficult to deal with becomes something very easy to sleepwalk into a nightmare world, where your dump is scheduled to run between 2:12 and 2:16 and it damned well better not fail for any reason, in your control or out of it, or we're not doing billing today. (Or even the nightmare world where your dump is scheduled to run after 3pm but before 1pm every day... that is, these dependency graphs don't have to get very complicated before literally impossible constraints start to appear if you're not careful!) Trying to explain this to a large number of teams at every level of engineering capability (frequently going all the way down to "a guy who distrusts and doesn't like computers and who, against his will, maintains a spreadsheet that is also one of the vital pillars of our business") is the sort of thing that may make you want to consider becoming a monk.
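At minimum, that accidental graph can be written down and checked. A sketch using Python's stdlib `graphlib` (all the process names here are made up for illustration): declare which jobs feed which, ask for a valid execution order, and let it blow up at planning time rather than at 2:12am when someone has quietly introduced a loop.

```python
from graphlib import TopologicalSorter, CycleError

# Map each daily process to the set of processes that must finish first.
# (Names are hypothetical; in practice this would come from a job config.)
deps = {
    "billing": {"dump_a", "dump_b", "dump_c", "dump_d"},
    "dump_a": {"extract_orders"},
    "dump_b": {"extract_orders"},
    "dump_c": set(),
    "dump_d": set(),
    "extract_orders": set(),
}

def run_order(deps):
    """Return a valid execution order for the daily processes.

    Raises graphlib.CycleError if the "daily processes" have quietly
    become circular -- the planning-time equivalent of "runs after 3pm
    but before 1pm".
    """
    return list(TopologicalSorter(deps).static_order())
```

This doesn't solve the "is the dump *done* or merely in progress" problem (that needs actual completion signals, not clock times), but an explicit graph at least surfaces cycles and lets you trigger billing on "all four dumps reported done" instead of "it's 2:16 now, hope for the best."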
I believe that, in terms of firm theory and how technology shapes the organizational side, we're reaching the limits of the current paradigms. Over the last three to four decades, transaction costs grew (more regulations on personal data, more complicated cross-border contracts as services became dominant in most economies - free trade agreements typically cover goods but not services) while coordination costs fell (most business-facing software can now be used as a metered service in the browser). This favored growing corporations.
In my lifetime I've seen conglomerates fall out of favor ('synergies' failed to materialize) and then rise up again, this time in the computer technology sector - are you in the Apple, Microsoft, or Google corporate tech garden?
But now interest rates are back, and investors can no longer just park wealth in businesses that grow revenue but not profit. So ballooning complexity can't be dealt with just by throwing bodies (and pay raises) at the problem anymore.
I hope this leads to more niche player offerings and less SaaS where small local outfits are just independent sales arms for the cloud borgs.
[1]: https://en.wikipedia.org/wiki/Slowloris_(computer_security)