The stuff Fil-C adds is on the same footing as `unsafe` code in Rust: its implementation isn't checked, but its surface area is designed so that (if the implementation is correct) the rest of the program can't break it.
Whether the amount and quality of this kind of code is comparable between the two approaches depends on the specific programs you're writing. Static checking, which can also be applied in more fine-grained ways to parts of the runtime (or its moral equivalent), is an interesting approach, depending on your goals.
> The stuff Fil-C adds is on the same footing as `unsafe` code in Rust: its implementation isn't checked, but its surface area is designed so that (if the implementation is correct) the rest of the program can't break it.
It’s not the same.
The Fil-C runtime is the same runtime in every client of Fil-C. It’s a single common trusted computing base, and there’s no reason for it to grow.
On the other hand Rust programmers use unsafe all over the place, not just in some core libraries.
Yeah, that's what I meant by "depends on the specific programs you're writing." Confining unsafe Rust to core libraries is totally something people do.
There’s no reason to believe that one program is inherently representative. sudo-rs eschews dependencies, so its proportion of unsafe code is likely higher than most programs’.
Furthermore, 170 uses in a 200-line program vs. in a one-million-line program are very different things. I don’t know offhand how big sudo-rs is.
Even in embedded OS kernels, it’s often around 1%-5% of code. Many programs have no direct unsafe code at all.
I mean, again, yeah. I specifically compared the safe API/unsafe implementation aspect, not who writes the unsafe implementation.
To me the interesting thing about Rust's approach is precisely this ability to compose unrelated pieces of trusted code. The type system and dynamic semantics are set up so that things don't just devolve into a yolo-C-style free-for-all when you combine two internally-unsafe APIs: if they are safe independently, they are automatically safe together as well.
The set of internally-unsafe APIs you choose to compose is a separate question on top of that. Maybe Rust, or its ecosystem, or its users, are too lax about this, but I'm not really trying to have that argument. Like I mentioned in my initial comment, I find this interesting even if you just apply it within a single trusted runtime.
That would require having constructors, which is not something Odin will ever have, nor should it. However, you can initialize with a constant or a variable, or use a procedure to do the initialization. Odin is a C alternative after all, so it's a fully imperative, procedural language.
Why would it require constructors? As opposed to simply enforcing that it always be initialized with a constant/variable/procedure/etc rather than zeroed.
> you can just initialize with a constant or variable or just use a procedure to initialize with.
Is there an option to leave something uninitialized? I often find the allocation of explicitly uninitialized objects to be a performance necessity in tight loops when I'm working with numpy.
That is true of `&mut T`, but `&mut T` is not the only way to do mutation in Rust. The set of possible safe patterns gets much wider when you include `&Cell<T>`. For example see this language that uses its equivalent of `&Cell<T>` as the primary mutable reference type, and uses its equivalent of `&mut T` more sparingly: https://antelang.org/blog/safe_shared_mutability/
You can explain this sort of pattern to the borrow checker quite trivially: slap a single `'arena` lifetime on all the references that point to something in that arena. This pattern is used all over the place, including rustc itself.
(To be clear I agree that this is an easy pattern to write correctly without a borrow checker as well. It's just not a good example of something that's any harder to do in Rust, either.)
I remember having multiple issues doing this in Rust, but can't recall the details. Are you sure I would just be able to have whatever refs I want and use them without the borrow checker complaining about things that are actually perfectly safe? I don't remember that being the case.
Edit: rereading wavemode's comment above ("Namely, in Rust it is undefined behavior for multiple mutable references to the same data to exist, ever. And it is also not enough for your program to not create multiple mut - the compiler also has to be able to prove that it can't."), I think that was at least one of the problems I had.
The main issue with using arenas in Rust right now is that the standard library collections use the still-unstable allocator API, so you cannot use those with them. However, this is a systems language, so you can use whatever you want for your own data structures.
> reading wavemode's comment above
This is true for `&mut T`, but that isn't directly related to arenas. Furthermore, you can have multiple mutable aliased references, but you need to not use `&mut T` while doing so: you can take advantage of some form of interior mutability and use `&T`, for example. What is needed depends on the circumstances.
wavemode's comment only applies to `&mut T`. You do not have to use `&mut T` to form the reference graph in your arena, which indeed would be unlikely to work out.
> And in Rust? Well, let's see. Turns out monomorphization isn't all you need. Turns out that it is, in fact, occasionally useful to unify an object and its behavior in a runtime-polymorphic way. I expect yeet_expr to go through eventually too. Others are trying to stabilize (i.e. ossify) the notionally unstable ABI, just like they did to poor C++, which is now stuck with runtime pessimization because somebody is too lazy to recompile a binary from 2010.
Not to make an argument either way on your general point, but these are really bad examples for Rust if you look at the specifics:
Monomorphization was never the only option. Trait objects and vtables predate the RFC process itself. Early Rust wanted more dynamic dispatch than it eventually wound up with.
All the various initiatives related to stable ABI are focused on opt-in mechanisms that work like `#[repr(C)]` and `extern "C"`.
The only way to interpret these as examples of "brash young project walks back its early stridency as it ages" is if you ignore the project's actual reasoning and design choices in favor of the popular lowest-common-denominator Reddit-comment-level understanding of those choices.
It means you can't just write `impl Bar for MyType` and get Foo pulled in automatically. You have to write both `impl`s yourself.
The inheritance-like syntax is shorthand for `trait Bar where Self: Foo`, and bounds like this can show up in lots of other places, where they follow the same rules: `MyType` has to impl all of them, and the traits and bounds can have fairly arbitrary (or at least un-tree-like) relationships.
The upcasting thing is a rare exception to the idea that this `Self: Foo` bound is just like any other bound.
Algebraic subtyping and structural typing are not the same thing. Algebraic subtyping is a specific approach to type inference and checking of subtyping, which may or may not be used with structural types.
I believe they only standardized the two's-complement representation (so casts to unsigned have a more specific behavior, for example) but they did not make overflow defined.
Yeah, signed integer overflow is as UB as ever. I've heard the primary reason for it is to avoid the possibility of wraparound on 'for (int i = 0; i < length; i++)' loops where the 'length' is bigger than an int. (Of course, the more straightforward option would be to use proper types like size_t for all your indices, but it's a classic tradition to use nothing but char and int, and people judge compilers based on existing code.)
What it means is that since `i` is monotonically increasing (the compiler may assume it never wraps, because signed overflow would be UB), an array indexing operation in the loop body can be replaced with an incrementing pointer, which eliminates quite a lot of code. An example here: https://pvs-studio.com/en/blog/posts/cpp/0374/
> Rust's definition of "unsafe" is basically just "whatever Rust prevents".
It's not that circular.
Rust defines data races as unsafe because they can lead to reads that produce corrupt values, outside the set of possibilities defined by their type. It defines memory leaks as safe because they cannot lead to this situation.
That is the yardstick for what makes something safe or unsafe. It is the same yardstick used by other memory-safe languages. For instance, despite your claims to the contrary, garbage collectors do not and cannot guarantee a total lack of garbage. They have a lot of slack to let the garbage build up and then collect it all at once, or in some situations never collect it at all.
There are plenty of undesirable behaviors that fall outside of this definition of unsafety. Memory leaks are simply one example.
> It defines memory leaks as safe because they cannot lead to this situation.
They can't now. They could up to and almost including 1.0. At that point the consensus was that memory leaks were unsafe and so unsafe code could rely on them not happening. That code was not incorrect! It just had assumptions that were false. One solution was to make those assumptions true by outlawing memory leaks. The original memory leak hack to trigger memory corruption was fairly fiendish in combination with scoped threads (IIRC).
> There are plenty of undesirable behaviors that fall outside of this definition of unsafety. Memory leaks are simply one example.
That is my whole point. It is a useless definition cherry-picked by Rust because it is what Rust, in theory, prevents. It does not precede Rust. Rust precedes it.
> It is the same yardstick used by other memory-safe languages. For instance, despite your claims to the contrary, garbage collectors do not and cannot guarantee a total lack of garbage. They have a lot of slack to let the garbage build up and then collect it all at once, or in some situations never collect it at all.
If it will eventually be collected then it isn't a memory leak.
Most actual safe languages don't let you write integer overflow.
> They can't now. They could up to and almost including 1.0. At that point the consensus was that memory leaks were unsafe and so unsafe code could rely on them not happening. That code was not incorrect!
This is not how it worked, no. It was never memory leaks per se that led to unsoundness there. It was skipping destructors. You could have the exact same unsoundness if you freed the object without running the rest of its destructor first.
That part was the design choice Rust made: make destructors optional and change the scoped threads API, or make destructors required and keep the scoped threads API.
There is an underlying definition of memory safety (or more generally "soundness") that precedes Rust. It is of course defined in terms of a language's "abstract machine," but that doesn't mean Rust has complete freedom to declare any behavior as safe. Memory safety is a particular type of consistency within that abstract machine.
This is why the exact set of undesirable-but-safe operations varies between memory-safe languages. Data races are unsafe in Rust, but they are safe in Java, because Java's abstract machine is defined in such a way that data races cannot lead to values that don't match their types.