>> Not true, soft and hard realtime garbage collectors exist. Your runtime simply needs to bound the amount of reclamation work done at any given time.
That doesn't change anything! You're just choosing a garbage collector whose pathological case is deterministic by default, a guarantee you can make about almost any GC by carefully tailoring your memory usage to your scenario and choice of algorithm. That's what realtime embedded software development is about: writing code that has predictable timing given your expected inputs and environment. If all you need to do is flip a bit once every 10 minutes with a precision of 1 second while reading 1 bps from a sensor, even a full-blown Linux distribution on a modern Intel i7 running a Python or Ruby daemon can be considered "realtime". The language doesn't matter as long as you can predict how long everything is going to take in the worst case and your microcontroller/microprocessor is fast enough to react.
>> For instance, the cascading free behaviour Rust is currently susceptible to can be broken up into a bounded series of free operations interleaved with ordinary program execution. Rust would then be realtime without truly changing its observable behaviour, except its timing in some programs.
You know that's what the Drop trait is for, right? Put whatever memory management code you'd have in your C program into the trait implementation and your memory deallocation will behave exactly as it would in any other low-level language. These low-level facilities have been part of Rust's design from the start; they just don't require you to call free() manually by default. Nothing in Rust stops you from managing memory yourself, and if you want to skip a value's destructor entirely you can wrap it in std::mem::ManuallyDrop (a blank Drop impl alone won't do it, since the fields' own destructors still run after yours returns). Beyond that, literally anything you can do in C you can also do in a Rust unsafe block.
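A minimal sketch of the point about Drop, with a hypothetical SensorBuffer type and a counter standing in for whatever bookkeeping you'd do before free() in C. The deallocation point is fully deterministic: it happens exactly where the value goes out of scope (or where you call drop()), never at some later collector pause.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts how many buffers have been released, so the timing
// of deallocation is observable.
static FREED: AtomicUsize = AtomicUsize::new(0);

// Hypothetical example type; the Vec is its heap-owned storage.
struct SensorBuffer {
    data: Vec<u8>,
}

impl Drop for SensorBuffer {
    fn drop(&mut self) {
        // Any bookkeeping you'd do before free() in C goes here;
        // the Vec's memory is released right after this returns.
        FREED.fetch_add(1, Ordering::Relaxed);
        let _ = self.data.len();
    }
}

fn main() {
    let buf = SensorBuffer { data: vec![0u8; 1024] };
    assert_eq!(FREED.load(Ordering::Relaxed), 0);
    drop(buf); // deallocation happens here, not at a later GC pause
    assert_eq!(FREED.load(Ordering::Relaxed), 1);
    println!("freed deterministically");
}
```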
> That doesn't change anything! You're just choosing a garbage collector with a default deterministic pathological case, which is a guarantee you can make about almost any GC by carefully tailoring your memory usage to your scenario and choice of algorithm.
The fact that you don't have to tailor anything is precisely the point. Latency is a property of a runtime, not a language. This has been my point all along. C/C++ or Rust don't guarantee low-latency realtime properties, and introducing tracing GC doesn't guarantee high-latency non-realtime properties.
> You know that's what the Drop trait is for, right? All you have to do is add whatever memory management code you'd have (in your C program) into the trait implementation and your memory deallocation will behave exactly as it would in any other low level language.
Great, but it doesn't guarantee any properties of code you haven't written, so it still can't achieve the global properties I've been talking about.
> C/C++ or Rust don't guarantee low-latency realtime properties, and introducing tracing GC doesn't guarantee high-latency non-realtime properties.
We completely agree.
> Great, but it doesn't guarantee any properties of code you haven't written, so it still can't achieve the global properties I've been talking about.
How is this any different from C/C++? They don't give you any guarantees that Rust takes away in this regard. Any library that uses Box::new or vec! internally is exactly the same as a C library that calls malloc/free internally, and you can implement the same heap-allocation-free algorithms in Rust as you can in C/C++.
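To illustrate the heap-allocation-free point, here's a sketch of a fixed-capacity ring buffer that lives entirely on the stack; it's the same static-array pattern you'd write in C, with no Box, Vec, or malloc anywhere.

```rust
// Sketch: a fixed-capacity ring buffer with no heap allocation --
// the same pattern as a static array + indices in C.
struct RingBuffer {
    buf: [u32; 8],
    head: usize,
    len: usize,
}

impl RingBuffer {
    const fn new() -> Self {
        RingBuffer { buf: [0; 8], head: 0, len: 0 }
    }

    // Returns false when full instead of growing (no allocation).
    fn push(&mut self, v: u32) -> bool {
        if self.len == self.buf.len() {
            return false;
        }
        let tail = (self.head + self.len) % self.buf.len();
        self.buf[tail] = v;
        self.len += 1;
        true
    }

    fn pop(&mut self) -> Option<u32> {
        if self.len == 0 {
            return None;
        }
        let v = self.buf[self.head];
        self.head = (self.head + 1) % self.buf.len();
        self.len -= 1;
        Some(v)
    }
}

fn main() {
    let mut rb = RingBuffer::new();
    rb.push(1);
    rb.push(2);
    println!("{:?} {:?}", rb.pop(), rb.pop()); // Some(1) Some(2)
}
```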
I don't understand what global properties you expect a low-level systems language to guarantee. It definitely can't guarantee that code you haven't written doesn't heap allocate; you have to check that it doesn't call malloc/free yourself.
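That said, the checking doesn't have to be a manual code audit. One hedged sketch of how you could verify it mechanically: wrap the system allocator in a counting #[global_allocator] and assert that a hot path performs zero allocations (names like CountingAlloc and stack_only_sum here are hypothetical, not standard API).

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// A wrapper allocator that counts every heap allocation, so you
// can check that a given code path never touches the heap.
struct CountingAlloc;

static ALLOCS: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCS.fetch_add(1, Ordering::Relaxed);
        System.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static A: CountingAlloc = CountingAlloc;

// A routine that should stay allocation-free.
fn stack_only_sum(xs: &[u32]) -> u32 {
    xs.iter().sum()
}

fn main() {
    let before = ALLOCS.load(Ordering::Relaxed);
    let s = stack_only_sum(&[1, 2, 3]);
    let after = ALLOCS.load(Ordering::Relaxed);
    // Measured before any I/O, so stdout's own buffers don't count.
    assert_eq!(after, before);
    println!("sum={}, no heap allocations in the hot path", s);
}
```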