20 minutes is nothing. Some orgs have CI runs that take hours or even days. And I'm pretty sure every org goes through the same set of tradeoffs and decisions; it's not unique to yours.
I was on the other side of that a few years ago as a tech lead/manager, and of course the team complained about CI speed because everyone always does in every software company. We staffed a team to work on build time improvements. It was the sensible choice. Why not just throw money at it? Well, because:
a. We'd already done that, more than once.
b. The tests weren't parallelized over multiple machines in a single test run anyway.
c. There was a lot of low hanging fruit to make things faster.
d. Developer time is not in fact infinitely expensive compared to cloud costs.
CI can easily turn into a furnace that burns infinite amounts of cash if you let it. The devs who set it up want to use the cloud because it's hip and easy and you can get new hardware without doing any planning, but cloud hardware comes at a steep premium over the actual hardware. Optimizing build times feels non-productive compared to working on features or bugs, and test times just expand to fill the available capacity. Why optimize, even a little, when you aren't paying for the hardware yourself? Better to close a ticket and move on to the next one, which is visible to managers and looks impressive. CI time, by contrast, is a commons, and it heads for the usual tragedy: everyone complains but nobody fixes anything, because the incentives are misaligned.
There are often some quick wins. For non-urgent, relatively stable workloads like CI it makes more sense to use dedicated hardware, since the instant scaling of the cloud isn't that important there, but to a lot of devs (especially younger ones?) that looks like obstructionist conservatism. They'd rather add a lot of complexity to auto-scale workers, shut them down at night, and so on. Maybe dedicated machines are coming back into fashion now that the cloud premium is getting absurd; I see more talk about Hetzner than I used to. At my new company CI runs on Hetzner plus our own hardware, and that works fine. It has the added advantage that the hardware itself is a lot faster, because it isn't being overcommitted by a cloud vendor, so builds and tests just magically get quicker and (just as importantly) performance becomes more predictable.
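The premium is easy to sanity-check. A throwaway back-of-the-envelope, where both prices are made-up placeholders (substitute real quotes for your own specs and providers):

    # Rough cloud-vs-dedicated comparison for an always-on CI worker.
    # Every number here is a placeholder, not a real price.
    cloud_per_hour = 0.40        # hypothetical on-demand instance price
    dedicated_per_month = 60.0   # hypothetical dedicated server price
    hours_per_month = 730

    cloud_per_month = cloud_per_hour * hours_per_month
    print(f"cloud:     ${cloud_per_month:.0f}/month")
    print(f"dedicated: ${dedicated_per_month:.0f}/month")
    print(f"premium:   {cloud_per_month / dedicated_per_month:.1f}x")

The point isn't the exact multiple, just that it's a five-minute calculation rather than a matter of taste.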
In other cases, enabling build caching and fixing whatever bugs that reveals can also be a big win. Again, it can make sense to staff that work, especially since much of it can be handed to junior devs.
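If you don't have a cache-aware build system handy, even something crude helps. A minimal sketch of content-keyed caching, assuming a shared cache directory on the runner and a lockfile to key on (the paths and names are illustrative, not any particular CI system's API):

    # Minimal content-keyed CI cache: key on the lockfile hash, restore the
    # dependency tree on a hit, save it after a clean install on a miss.
    # All paths are hypothetical placeholders for this sketch.
    import hashlib
    import shutil
    from pathlib import Path

    CACHE_ROOT = Path("/var/ci-cache")     # assumed shared cache location
    LOCKFILE = Path("package-lock.json")   # whatever pins your dependencies
    TARGET = Path("node_modules")          # the directory worth caching

    def cache_key() -> str:
        # Any dependency change produces a new key, so stale trees are
        # never reused; they just miss.
        return hashlib.sha256(LOCKFILE.read_bytes()).hexdigest()

    def restore() -> bool:
        entry = CACHE_ROOT / cache_key()
        if entry.is_dir():
            shutil.copytree(entry, TARGET, dirs_exist_ok=True)
            return True
        return False

    def save() -> None:
        entry = CACHE_ROOT / cache_key()
        if not entry.exists():
            shutil.copytree(TARGET, entry)

    if __name__ == "__main__":
        if restore():
            print("cache hit: skipping dependency install")
        else:
            print("cache miss: run the install, then call save()")

Most of the bugs caching reveals turn out to be hidden dependencies on state a previous build happened to leave behind, which a cache hit suddenly makes visible.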