Plenty of GC-enabled systems languages have proven their value, up to building full-stack graphical workstations. So far they have just lacked some big corp's political and monetary willingness to push them onto the anti-GC devs no matter what.
Thankfully, with the likes of Swift on iDevices, Java/Kotlin on Android (with an increasingly constrained NDK), COM/UWP über alles plus .NET on Windows, ChromeOS + gVisor, and Unreal with GCed C++, those devs will have a very tiny niche to contend with.
I give it about 10 years' time for pure manual memory management to become like Assembly and embedded development.
> I give it about 10 years' time for pure manual memory management to become like Assembly and embedded development.
For someone who has spent some time thinking about memory management strategies, manual MM isn't actually that much additional work. By far, most code doesn't allocate or free (and that's a good thing). So MM->GC is hardly like Assembly->Compiler. In Assembly you're constantly allocating and pigeonholing, and you can't have nice names for things. Assembly->Compiler is a huge step compared to MM->GC, and GC can cause a lot of headaches as well. (Disclaimer: I've done almost no assembly at all.)
> By far, most code doesn't allocate or free (and that's a good thing).
Depends on the code you write. If, like in C++, non-stack memory management is painful, programmers tend to react like you suggest.
In pure-by-default languages, you are creating new and destroying old objects all the time. (At least conceptually. A sufficiently smart compiler can eliminate most of that.)
Obviously depends on the use case, but since C++11 there is little to no pain involved in manual non-stack memory management. You clearly express the ownership semantics through things like std::unique_ptr and std::shared_ptr, and if those make sense then everything works (minus problems like circular shared_ptr references, which exist in similar forms with GCs).
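A minimal sketch of how that looks in practice (the Texture/Material names are just made up for illustration):

    #include <memory>
    #include <vector>

    struct Texture { /* pixel data, etc. */ };

    // Sole ownership: the Texture is freed when its unique_ptr goes away.
    std::unique_ptr<Texture> load_texture() {
        return std::make_unique<Texture>();
    }

    // Shared ownership: freed when the last shared_ptr holder is gone.
    struct Material { std::shared_ptr<Texture> texture; };

    int main() {
        auto tex = std::shared_ptr<Texture>(load_texture());
        std::vector<Material> materials{ {tex}, {tex} };  // refcount bumps
    }   // everything is released here, no explicit delete anywhere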
Yeah, I didn't elaborate on this enough (because it wasn't really the main point). When I said "similar" I meant "things holding onto things they should no longer be holding onto", not circular referencing in particular.
I realize that the kind of bug that leads to effective memory leaks with GCs has its own equivalent in manual memory management, but my overall point was that neither manual memory management nor GCs make you immune to leaks from badly designed or incorrectly implemented data structures. Each takes some aspect(s) of pain away.
That works on single-developer projects, as long as one doesn't stay away from them for too long. Scale it up to distributed teams of various sizes, add binary libraries, and you end up with double frees, leaks, and ownership issues all over the place.
Of course there will always be some issues somewhere. But, perfect memory safety aside, the issues are wildly overblown, to the extent that I find manually managing memory a lot easier than dealing with a GC once a project grows beyond a couple of KLOC.
It's all about proper planning and code organization. Use pooling, central manager structures, etc. If it can be avoided, then do not allocate and free stuff in a single function like you would carelessly do with automated GC. Structure the data such that you don't have to release stuff individually - put it in containers (such as vectors or maps), such that you can release everything at once at certain points in time, or such that you can quickly figure out what can be released at central code locations (that's much like automated GC, but it's staying in control and retaining room for optimization).
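A rough illustration of the "put it in containers and release everything at once" idea (the Particle type and the level boundary are invented just for the example):

    #include <vector>

    struct Particle { float x, y, dx, dy; };

    struct ParticleSystem {
        std::vector<Particle> particles;          // single contiguous owner

        void spawn(const Particle& p) { particles.push_back(p); }

        // No per-object frees anywhere in the gameplay code; one call
        // drops the whole batch at a well-defined point (e.g. level end).
        void clear() { particles.clear(); }
    };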
I don't think "multiple distributed teams" makes the challenge any harder. You certainly want to (and I'm sure you easily can) contain each ownership management domain wholly in one team.
Building a GC into your system means building nondeterministic amounts of latency into it. Those "full stack graphical workstations" were notorious for being slow, expensive, and coming to a dead halt whenever the heap filled up.
Thankfully, we have RAII as in C++ and Rust and ARC in Swift, which give you automatic memory management without a tracing GC.
If your language requires a GC, it is a complete failure as a systems programming language.
>Building a GC into your system means building nondeterministic amounts of latency into it
That's not true. You can pool, you can call GC when needed, you can build incremental GC with bounded times, and so on.
Check Stack Overflow: there are plenty of links to papers and real-world examples of fixed-time garbage collectors. Or check Google Scholar and read the papers.
Go has demonstrated a very efficient garbage collector.
Here [1] is one from Oracle for Java.
>If your language requires a GC, it is a complete failure as a systems programming language.
Plenty of OSes are in development and/or being researched using managed languages and GC. Singularity [2] is but one example. I suspect that in the future doing memory management by hand will be as obsolete as writing an OS in assembly. The benefits for security, robustness, and productivity will outweigh the costs, just like the benefits of developing in higher-level languages, while slower than hand-tuned assembly, far outweigh the costs.
Interestingly, I never saw that phenomenon on the ETHZ graphical workstations powered by AOS.
And apparently no one noticed that a part of Bing used Midori for a while.
That is alright; according to the Midori team, the Windows team also did not accept what Midori was capable of, even when proven wrong.
Having a GC is no different from malloc spending all its time doing context switches to reclaim more OS memory via the actual OS memory management APIs.
It is up to the developers to decide whether to use GC-based allocation, the stack, a global memory segment, or plain untraced heap allocations.
The tools are there; naturally there is a learning process that many seem unwilling to go through.
And by the way, Swift's reference counting implementation gets wiped out by tracing GCs in the ixy paper.
Well, GC latencies don't bother game developers who work with Unity, or people using Java or C# for high-speed trading.
Realistically, having the option to use a GC is a boon for many applications. Not everything is hard real-time all the time. Some complex applications tend to have a hard real-time part and parts where it doesn't matter. E.g. a CNC machine controller does not need a guaranteed response time for the HMI or G-code parser. But it needs a tight control loop for the tool movement.
D is a language where the GC is the default, but optional. And the compiler can give you a guarantee that code that explicitly opts out does not interact with the GC and, importantly, can't trigger a GC run that way. However, as this was an afterthought, parts of the language need to be disabled when opting out, and not a lot of library functionality works with that.
GC latencies don't bother them because they put a lot of effort into ensuring there is no garbage to collect, using tricks normally reserved for hard real-time embedded systems, like allocating all memory buffers at startup time.
GC is very useful for programs that don't have any form of real time, but games are real time, and thus you need to be careful to ensure that the worst case of the garbage collector doesn't harm you. Reference-counted garbage collection gives you this more easily than the other approaches. Note that I said worst case; the average case of garbage collection is better in most garbage-collected languages.
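The preallocation trick mentioned above looks roughly like this (a sketch in C++ terms; in a GC'd engine the same pattern is applied to pooled objects so the collector never sees fresh garbage):

    #include <array>

    struct Bullet { float x = 0, y = 0; bool alive = false; };

    // All storage is reserved once at startup; per-frame code only
    // recycles slots, so nothing is allocated (or collected) mid-game.
    struct BulletPool {
        std::array<Bullet, 1024> slots{};

        Bullet* acquire() {
            for (auto& b : slots)
                if (!b.alive) { b.alive = true; return &b; }
            return nullptr;                       // pool exhausted
        }
        void release(Bullet* b) { b->alive = false; }
    };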
I have never seen such memory management tricks employed in Unity scripts. I'm not saying that they don't exist; they are only rarely required. To be honest, I expected things to be much worse from previous experiences.
Using trees of resources is building a nondeterministic amount of latency into your system.
One of my first (after a while) forays into C++ was to analyze a big WFST graph to find various statistics. And I found that my program spent as much time freeing resources as doing actual work. Subjectively, of course, but still.
I don't think that your statement about C with C++ compilers holds true anymore. I have seen quite a few codebases that are definitely C++, but use bespoke memory management strategies where required. Pool allocators and allocation-only heaps are high on the list of things that are useful in this area, for various reasons.
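As an example of the second item, an allocation-only heap (arena/bump allocator) can be as small as this sketch (alignment handling kept deliberately minimal):

    #include <cstddef>
    #include <vector>

    // Bump allocator: individual frees are impossible by design;
    // the whole arena is released (or reset) in one shot.
    struct Arena {
        std::vector<std::byte> buffer;
        std::size_t used = 0;

        explicit Arena(std::size_t size) : buffer(size) {}

        void* allocate(std::size_t n,
                       std::size_t align = alignof(std::max_align_t)) {
            std::size_t start = (used + align - 1) / align * align;
            if (start + n > buffer.size()) return nullptr;
            used = start + n;
            return buffer.data() + start;
        }

        void reset() { used = 0; }                // "free everything" at once
    };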
Swift's GC is really a lot closer to modern C++ style memory management than the other languages you mentioned. If you use RAII & shared_ptr in C++ you are using the exact same techniques that Swift's "GC" uses.
The GC is realistically only an issue in rare fringe cases. Early on, the competing standard libraries were a massive problem, though. This was overcome with D2, which is already more than a decade old.
When this standard library competition existed, the library ecosystem had a very weird split where half of the libraries you would have liked to use in your project depended on the other standard library, which you couldn't link against at the same time. This prevented me from picking up and trying D for a long time, because I didn't want to deal with that. Now that this is over with and a ton of useful libraries exist, I'm glad that I started to use D, because this is now a language in which I'm very productive.
No. D failed due to a mandatory GC and the 'two standard libraries' idiocy.