> There is no reason data isolation should cost you 100x memory usage.
It really depends on what you mean by "memory usage".
The fundamental principle of any garbage collection system is that you allocate objects on the heap at will without freeing them until you really need to, and when that time comes you rely on garbage collection strategies to free and move objects. What this means is that processes end up holding more memory than is actually in use, simply because there is no need to free it yet.
Consequently, with garbage-collected languages you configure processes with a specific memory budget. The larger the budget, the less often these garbage collection strategies kick in.
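To make that concrete, here's a minimal Go sketch (Go chosen purely for illustration; the knobs shown are Go's runtime/debug APIs, and other runtimes expose equivalents such as the JVM's -Xmx):

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

// sink keeps allocations reachable from package scope so they escape to the heap.
var sink []byte

func main() {
	// Soft memory budget of 2 GiB: the runtime lets garbage accumulate up
	// to roughly this limit before collecting aggressively.
	debug.SetMemoryLimit(2 << 30)

	// Relax proportional growth too: collect only once the heap has grown
	// 400% past the live set, instead of the default 100%.
	debug.SetGCPercent(400)

	// Churn through short-lived objects; with a lax budget, most of this
	// garbage just sits in the heap until a collection finally runs.
	for i := 0; i < 1_000_000; i++ {
		sink = make([]byte, 1024)
	}

	var stats runtime.MemStats
	runtime.ReadMemStats(&stats)
	fmt.Printf("heap in use: %d MiB, GC cycles so far: %d\n",
		stats.HeapInuse>>20, stats.NumGC)
}
```

Raise the budget and the cycle count drops; lower it and collections fire constantly. That's the whole trade.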
I run a service written in a garbage-collected language. It barely uses more than 100MB of memory to handle a couple hundred requests per minute, yet the process takes up as much as 2GB of RAM before triggering generation 0 garbage collection events. These events trigger around 2 or 3 times per month. A simplistic critic would argue the service is wasting 20x the memory. That critic would be manifesting his ignorance, because there is absolutely nothing to gain by lowering the memory budget.
> That critic would be manifesting his ignorance, because there is absolutely nothing to gain by lowering the memory budget.
Given that compute is often priced in proportion to (maximum) memory usage, there is potentially a lot to be gained: dramatically cheaper hosting costs. Of course, if your hosting costs are small to begin with, then this likely isn't worthwhile.
In my example, lowering the frequency of these generation 0 events to twice a year would cost an extra $5 or so. If instead I let the frequency of these events double, in theory I would save perhaps $2/month.
If I ran a web-scale service with 10 times as many nodes, we're talking about a $50/month price difference.
How much does a company pay for an engineer's hourly labor? How many years would it take to recover the cost of having an engineer tune a service's garbage collection strategy?
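To put rough numbers on it (all figures hypothetical, none from this thread: a fully-loaded rate of $150/hour and one day of tuning, set against the $2/month savings above):

```go
package main

import "fmt"

func main() {
	// Hypothetical figures, chosen only to illustrate the break-even math.
	const hourlyCost = 150.0      // fully-loaded engineer cost, $/hour
	const tuningHours = 8.0       // one day spent tuning the GC budget
	const savingsPerMonth = 2.0   // optimistic hosting savings, $/month

	investment := hourlyCost * tuningHours
	months := investment / savingsPerMonth
	fmt.Printf("break-even after %.0f months (%.0f years)\n", months, months/12)
	// Output: break-even after 600 months (50 years)
}
```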
People need to think things through before discussing technical merits.
> That critic would be manifesting his ignorance, because there is absolutely nothing to gain by lowering the memory budget.
Well, that depends on information you haven't provided. Maybe your system does have an extra 900 MB of memory hanging around; I've certainly seen systems where the minimum provisionable memory[1] is more than what the system will use for program memory plus a full cache of the disk. If that's the case, then yeah, there's nothing to gain. In most systems, though, 900 MB of free memory could go towards caching more things from disk, or larger network buffers, or something more than absolutely nothing.
Even with all that, lowering your memory budget might mean more of your working memory fits in L1/L2/L3 cache, which could be a gain, although probably a pretty small one, since garbage isn't usually accessed. Absolutely nothing is a pretty low bar though, so I'm sure we could measure something. Probably not worth the engineering cost though.
There are also environments where you can get rather cheap freeing by setting up your garbage to be easily collected. PHP does a per-request garbage collection by (more or less) resetting to the pre-request state after the request is finished; this avoids accumulating garbage across requests, without spending a lot of effort on analysis. An Erlang system that spawns short lived BEAM processes to handle requests can drop the process heap in one fell swoop when the process dies; if you configure the initial heap size so no GCs are triggered during the lifetime of the process, there's very little processing overhead. If something like that fits your environment and model, it can keep your memory usage lower without a lot of cost.
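Neither runtime is needed to see the shape of the idea. Here's a loose Go analogy (not Erlang's actual mechanism, since Go has no per-process heaps): pre-size one scratch buffer per request so the handler itself never grows the heap, then let the whole thing become garbage in one piece when the handler returns.

```go
package main

import "fmt"

// arena is a toy bump allocator: one up-front allocation per request,
// carved up with no further heap growth, then dropped wholesale. This
// loosely mirrors sizing an Erlang process heap so no GC fires during
// the process's short life (it is NOT how Go's collector works).
type arena struct {
	buf []byte
	off int
}

func newArena(size int) *arena { return &arena{buf: make([]byte, size)} }

// alloc hands out n bytes from the pre-sized buffer.
func (a *arena) alloc(n int) []byte {
	if a.off+n > len(a.buf) {
		panic("arena exhausted: initial size was too small")
	}
	b := a.buf[a.off : a.off+n]
	a.off += n
	return b
}

func handleRequest(id int) {
	a := newArena(64 << 10) // 64 KiB: enough that this request never reallocates
	header := a.alloc(512)
	body := a.alloc(4096)
	_, _ = header, body
	fmt.Printf("request %d used %d of %d bytes\n", id, a.off, len(a.buf))
	// On return the whole arena becomes garbage at once: one object for
	// the collector to reclaim instead of many small ones.
}

func main() {
	for i := 0; i < 3; i++ {
		handleRequest(i)
	}
}
```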
[1] Clouds usually have a minimum amount of memory per vCPU; if you need a lot of CPUs and not a lot of memory, too bad. I don't think you can buy DDR4 DIMMs smaller than 4GB, or DDR5 smaller than 8GB. Etc.
> Well, that depends on information you haven't provided. Maybe your system does have an extra 900 MB of memory hanging around;
That's not how it works. You can't make sweeping statements about how something is bad when you fail to consider how it's used and what the actual real-world constraints are.
For example, you're arguing that minimizing memory consumption is somehow desirable, and if you're making that claim you need to actually make a case. I clearly refuted your point by clarifying how things work in the real world. If you feel you can come up with a corner case that refutes it, just do it. So far you haven't, but that hasn't stopped you from making sweeping statements.