
The first code snippet, which is as simple as it gets, perfectly illustrates why Rust is extremely annoying to work with. I understand why you need the into_iter bit and why the borrow checker complains about it, but the fact that even the simplest "for x in y" loop already makes you wrestle the compiler is just poor ergonomics.


  #include <vector>
  #include <iostream>

  int main() {
      std::vector<int> vec = {1, 2, 3};
      vec.reserve(3);

      std::cout << "Initial capacity: " << vec.capacity() << std::endl;
      std::cout << "Initial data address: " << (void*)vec.data() << std::endl;

      int* ptr = vec.data();

      std::cout << "Pointer before push_back: " << (void*)ptr << std::endl;
      std::cout << "Value via pointer before push_back: " << *ptr << std::endl;

      std::cout << "\nPushing back 4...\n" << std::endl;
      vec.push_back(4);

      std::cout << "New capacity: " << vec.capacity() << std::endl;
      std::cout << "New data address: " << (void*)vec.data() << std::endl;

      std::cout << "\nAttempting to access data via the old pointer..." << std::endl;
      std::cout << "Old pointer value: " << (void*)ptr << std::endl;
      int value = *ptr;
      std::cout << "Read from dangling pointer (UB): " << value << std::endl;

      return 0;
  }
  ./a.out
  Initial capacity: 3
  Initial data address: 0x517d2b0
  Pointer before push_back: 0x517d2b0
  Value via pointer before push_back: 1

  Pushing back 4...

  New capacity: 6
  New data address: 0x517d6e0

  Attempting to access data via the old pointer...
  Old pointer value: 0x517d2b0
  Read from dangling pointer (UB): 20861


You opted to use features of std::vector that are documented to be unsafe (notably ::data()). This is the actual C++ translation of the opening code in TFA:

  #include <vector> 
  #include <iostream>

  int main() {
      std::vector<int> vec = {1, 2, 3};
      
      for (auto const & i : vec) {
          std::cout << i << std::endl;
      }
  }
It is possible to use C++ to write unsafe code! Amazing! Some people want a language where this is not possible! Great!


> This is the actual C++ translation of the opening code in TFA:

No, it isn't: this is iterating over references, not moving. This is equivalent to

  fn main() {
      let x = vec![1, 2];

      for y in &x {
          println!("{}", y);
      }
      println!("{}", x.len());
  }
in Rust. Note the &, just like in your C++.


The purpose of the first code example in TFA:

> This is straightforward: we create a vector containing the values [1, 2], then iterate over it and print each element, and then finally print out the length of the vector. This is the kind of code people write every day.

The C++ code I provided does essentially this (I omitted printing the length, since it is so trivial), and is "the kind of code people write every day".

The fact that Rust requires you to consider move semantics for such simple code is precisely one of the central points of the article.


"C++ code that implements the problem, but in a different way" is not "the actual C++ translation of the opening code in TFA."


The C++ code implements the intended goal, not the problem TFA is trying to illustrate.

Changing between:

    for (auto i : vec)
and

    for (auto & i : vec)
has essentially no bearing on what the author is trying to show. If they were focused on how move semantics are always important, they would not use an integer type.


You are not fighting the C++ compiler or showing why the C++ compiler might be annoying. You are introducing a bug by misusing a library (which has nothing to do with writing and compiling C++). The ergonomics, I believe, are fine.

I'm struggling to understand what, if anything, your comment has to do with GP's comment. Perhaps you meant that the Rust compiler would have stopped you from producing a buggy program, but again, that has nothing to do with GP's comment.


I think 0xdeafbeef is roughly recreating the first code snippet from the article (which is one of the things diath is complaining about) in C++ to show that the compiler should produce an error or else undefined behavior could occur on resize.


I think the problem is really not the compiler so much as that you need to be thinking about what `for` desugars to.


The desugaring is an explanation of what's happening under the hood, but you don't need to think about that. Your mental model can simply be:

    for _ in x  // Consumes (moves) x and iterates over the values in x

    for _ in &x // Borrows x and iterates over references to the values in x


depends on what `into_iter` does in your type impl

I've always felt that there is something a bit strange about a keyword reaching into the stdlib (rust is not the only language that does this), especially if it's only for sugaring.


The alternative is duck-typing, hard-coding a method name, like Python’s __dunder__ methods. That’s just as strange. And for Rust, definitely unacceptable.

I think C++’s range-based for loops work this way, basically hard-coding .begin() and .end(). Oh, and only working on array-like things due to hard-coding ++ as the operator to go to the next item, so it’s less powerful than Rust.

It’s not a stable part of Rust, but you can provide your own definitions of lang items, not using even libcore. But they will need to be quite similar to the official one as the compiler will pretty much hard-code aspects of its definition.

An alternative view of it is that the standard library is reaching into the compiler. This feels just as accurate as the other way round. The two can’t reasonably be separated. A good fraction of Rust’s standard library is stuff that needs to be there, couldn’t be anywhere else.


> An alternative view of it is that the standard library is reaching into the compiler

Well, no not really, because it's a keyword (I don't know enough about rust internals to know if there are special parser rules for `for`)? You could put `for` in the stdlib instead. As rust has macros, you could in principle implement it as a macro.
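For what it's worth, the expansion such a macro would need is small. Roughly (a simplified sketch; the real desugaring also handles labels and `break` values):

  fn main() {
      let xs = vec![1, 2, 3];

      // Roughly what `for x in xs { println!("{x}"); }` becomes:
      let mut iter = IntoIterator::into_iter(xs);
      while let Some(x) = iter.next() {
          println!("{x}");
      }
  }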


> I understand why you need the into_iter bit and why the borrow checker complains about it

The borrow checker is not really involved in the first snippet (in fact the solution involves borrowing). The compiler literally just prevents a UAF.


The compiler isn't protecting them from anything in this particular example, we can all see this is valid code. It's provably valid code, just not provable by the Rust compiler.


Literally nothing in your comment is correct. In the Rust snippet vector is moved into the loop, and thus freed.

There are situations where you hit the limits of the borrow checker, plenty of them. This is not one of them. Again, the borrow checker is not even involved in the original snippet.


> In the Rust snippet vector is moved into the loop, and thus freed.

Do you see any code in the snippet that requires it to be? This could be a simple read-only borrow because the actual logic of the program requires only that, and the value could live on happily after the loop. Literally nothing in this snippet requires anything else, you've just sort of assumed that the way Rust does it is the only thing that makes sense.

It's not about what Rust currently does, really, it's about what it ought to do.


It’s important that a for loop takes ownership of the vec, because it’s the only way you can call things inside the loop body that require the element itself to be moved.

If you don’t want the loop to take ownership of the vec, there’s literally a one character change: put a & before the thing you’re iterating (ie. for x in &y). That borrows the vec instead of moving it.
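A small sketch of the kind of loop body that needs the owning form (the `send` helper here is made up for illustration):

  // Pretend this hands the String off to another thread or channel;
  // it takes the String by value, so the caller must give up ownership.
  fn send(_msg: String) {}

  fn main() {
      let queue = vec![String::from("a"), String::from("b")];

      // The loop owns each element, so it can hand them to `send`:
      for msg in queue {
          send(msg);
      }

      // With `for msg in &queue`, `msg` would be a &String and
      // `send(msg)` would not compile without an explicit clone.
  }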

You seem to want rust to decide for itself whether to borrow or own the contents, but that way lies madness… it will be really hard to reason about what more complicated code is doing, and changes would have very non-local effects on how the compiler decides to use your code.

For me, move-semantics-by-default is the key idea that rust got right, and it’s a very simple concept. It’s not intuitive, but it’s the key idea behind all of rust’s benefits for memory management and preventing concurrency bugs. “Learn one simple but non-intuitive thing and you get these huge benefits” is a tradeoff I’m very much willing to make, personally.


> You seem to want rust to decide for itself whether to borrow or own the contents, but that way lies madness…

Most of what Rust does already feels like madness, like the concept of implicit moves, etc., but I understand your point. I don't think the reasoning really makes sense in terms of actual logic, but as I wrote in another comment: It's possible that I've misunderstood the sales pitch of Rust trying to be GC-less GCd language.

> For me, move-semantics-by-default is the key idea that rust got right, and it’s a very simple concept. It’s not intuitive, but it’s the key idea behind all of rust’s benefits for memory management and preventing concurrency bugs. “Learn one simple but non-intuitive thing and you get these huge benefits” is a tradeoff I’m very much willing to make, personally.

I can respect that and seen this way (where we accept that we're simply going to have unintuitive and incorrect rejections of programs) it does make a lot more sense.


I find move by default refreshingly simple. I don't even understand what is so hard to understand about move.

A move is a simple memcpy + the certainty that the source is unreachable.

This is important if the memcopied object holds resources.

Any new type is move only by default (no copy or clone). This is so that you can opt in willingly.

Making a move-only type also Copy later is not an API/ABI-breaking change, whereas removing Copy (or Clone) is.

And it's the same for clone not being the default. With clone you get to run custom code for cloning instead of a memcpy.
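A minimal sketch of that opt-in (type names made up for illustration):

  struct Ticket(String);          // move-only by default: it owns a heap buffer

  #[derive(Clone)]
  struct Config { retries: u32 }  // opt in to Clone: explicit, may run custom code

  #[derive(Clone, Copy)]
  struct Point { x: i32, y: i32 } // opt in to Copy: plain memcpy, source stays usable

  fn main() {
      let t = Ticket(String::from("abc"));
      let t2 = t;              // move: `t` is no longer usable
      // println!("{}", t.0);  // compile error: use of moved value

      let c = Config { retries: 3 };
      let c2 = c.clone();      // explicit clone
      println!("{} {} {}", t2.0, c.retries, c2.retries);

      let p = Point { x: 1, y: 2 };
      let p2 = p;              // copy: `p` is still usable
      println!("{} {}", p.x, p2.y);
  }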

Remind me again how move, copy and clone works in C++ /s


C++ “move” semantics are quite complicated. That said, those C++ semantics are much better at handling some edge cases in systems software that Rust largely pretends don’t exist. It is a tradeoff. C++ is much uglier but also much better at handling cases where ownership and lifetimes are intrinsically ambiguous in a moved-from context because hardware has implicit ownership exogenous to the code.


The equivalent of the C++ move in Rust is the take/replace family of functions (like mem::replace and Option::take).

And it is fully memory safe.
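A small sketch of what that looks like (as with a C++ move, the source stays behind in a valid but empty state):

  use std::mem;

  fn main() {
      // mem::take / mem::replace: the source is reset to a default or
      // replacement value, much like a moved-from std::string.
      let mut s = String::from("hello");
      let stolen = mem::take(&mut s);
      assert_eq!(stolen, "hello");
      assert_eq!(s, "");            // `s` is still perfectly usable

      // Option::take: moves the value out and leaves None in its place.
      let mut slot = Some(vec![1, 2, 3]);
      let inner = slot.take();
      assert_eq!(inner, Some(vec![1, 2, 3]));
      assert_eq!(slot, None);
  }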

You can build all the ownership you want by using raw pointers in Rust. And there is nothing wrong with a specific problem requiring unsafe because the problem cannot be taught to the borrow checker. But there is a point in your stack of abstractions where you can expose a safe and ergonomic API.

If you have a concrete example I would love to get a crack at it.


> Remind me again how move, copy and clone works in C++ /s

Sarcasm, but it’s worth outlining… C++ “move semantics” are (1) precisely the opposite of rust, and (2) not move semantics at all.

- Rust doesn’t let you override what happens during a move, it’s just a memcpy

- C++ has an rvalue reference (&&) constructor, which lets you override how a thing is moved

- Rust doesn’t let you use the moved-from value

- C++ absolutely has no problem letting you used a value after wrapping it in std::move (which is really just a cast to an rvalue reference)

- Rust uses moves to allow simple memcpy’ing of values that track resources (heap space, etc) by simply making sure nobody can access the source, and not calling Drop on it.

- C++ requires you to write logic in your move constructor that “pillages” the moved-from value (for instance in std::string it has to set the source string’s pointer to nullptr and its length to 0.) This has the consequence of making the moved-from value still “valid”

For copies:

- Rust’s Copy is just “memcpy, but you can still use the original value”. Basically any type that doesn’t track some resource that gets freed on Drop. Rust simply doesn’t let you implement Copy for things that track other resources, like heap pointers.

- C++’s copy happens implicitly when you pass something by value, and you get to write arbitrary code to make it work (like copying a string will malloc a new place on the heap and copy the buffer over)

- Rust has an entirely different concept, Clone, which lets you write arbitrarily code to duplicate managed resources (analogous to how you’d use a C++ copy constructor)

- C++ has nothing to help you distinguish “deep copy that makes new copies of resources” from “dumb copy that is just a memcpy”… if your type has an expensive concept of deep copying, callers will (perhaps inadvertently) use it every time they pass your type by value.

IMO C++’s “move” still letting you touch the moved-from value is what made me realize how much C++ had lost the plot when C++11 came out. Rust’s semantics here are basically what happens when you look at what C++ was trying to do, and learn from its mistakes.
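A small sketch of those points on the Rust side (the type here is made up; Drop runs exactly once, wherever the value ends up):

  struct Loud(&'static str);

  impl Drop for Loud {
      fn drop(&mut self) {
          println!("dropping {}", self.0);
      }
  }

  fn consume(x: Loud) {
      println!("consumed {}", x.0);
  } // `x` is dropped here, exactly once

  fn main() {
      let a = Loud("a");
      consume(a);              // moved into `consume`; no drop runs back here
      // println!("{}", a.0);  // compile error: use of moved value

      let b = Loud("b");
      let c = b;               // the move is just a memcpy; only `c` gets dropped
      println!("still have {}", c.0);
  }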


I couldn't have said it better or more thoroughly.


Are you suggesting Rust should automatically insert the borrow annotation because it is able to see that a borrow is sufficient? That would be quite unintuitive and make it ambiguous whether a for loop is borrowing or consuming the iterator without reviewing the body. I'd strongly argue that it should unambiguously do either one or the other and not try and read the author's mind.


Yes, I'm suggesting it should do the right thing for the code the loop is actually trying to execute. I personally think this is exactly what Rust and its users have signed up for. I might be mistaken about that, but I think it's in line with the more general view that Rust is attempting to be as close as it can get to a language that reads like it has a garbage collector without having one.


> the more general view that Rust is attempting to be as close as it can get to a language that reads like it has a garbage collector without having one.

I've used Rust a fair amount, and I've never seen that expressed as a goal.

A couple of general principles followed by Rust are to prefer explicit code over implicit conversions and to support local reasoning. Those are both present here: the borrow needs to be made explicitly, rather than implicitly based on code later on.


This is not at all what we signed up for. The explicit-ness is the point.


The code says to call into_iter and consume the iteratee, so rust does that. If you want a reference, use the &, just like in zig/c/c++/etc. You are saying an even more extreme version of "If there's a way what I wrote could possibly be interpreted that could compile, it should do that" ignoring the fact that there's almost assuredly _many_ ways that your code can be interpreted that could compile.

Slowing down type resolution/compilation (by making every unannotated loop a generic T|&T) and adding more syntax (since rust would need new annotations to explicitly specify borrow/take in for loops), in order to save a single character that matches the behavior of most other related languages and is perfectly clearly explained by the compiler, is maybe a bad move. Considering compile time and complicated syntax are two of the biggest things people who actually write rust complain about.


> wrestle the compiler

This is quite literally a skill issue, no offense.

'wrestle the compiler' implies you think you know better; this is usually not the case and the compiler is here to tell you about it. It's annoying to be bad at tracking ownership and guess what: most people are. The ones who aren't have decades of experience in C/C++ and employ much the same techniques that Rust guides you towards. If you really know better, there are ways to get around the compiler. They're verbose and marked unsafe to 1) discourage you from doing that and 2) warn others to be extra careful here.

If this is all unnecessary for you - and I want to underscore that I agree it should be in most software development work - stick to GC languages. With some elbow grease they can be made as performant as low-level languages if you write the critical parts the way you'd have to write them in Rust, and you're free to write the rest in a way that doesn't require years of experience tracking ownership manually. (Note that it won't hurt to track ownership anyway; it's just much less of an issue if you have to put a weakref somewhere once every couple of years vs. being aware at all times of what owns what.)


> 'wrestle the compiler' implies you think you know better; this is usually not the case and the compiler is here to tell you about it.

Well, yes and no. The way type systems work to soundly guarantee that some program property P holds is by guaranteeing some stronger property Q, such that Q => P. This is because type systems generally enforce what we call an "inductive invariant", i.e. a property that is preserved by all program statements [1], while most interesting program properties are not inductive. To give an example, suppose we're interested in the property that a program produces an even number; an inductive invariant that implies that property is one that makes sure that the outcome of every computation in the program is even. A program that satisfies the latter property obviously satisfies the former, but the converse isn't true.

Similarly, the way Rust guarantees that all programs don't have, say, use-after-free, is by enforcing a stronger property around ownership. So all safe Rust programs don't have use-after-free, but many programs that don't have use-after-free don't satisfy the stronger ownership property. This means that sometimes (and this is true for all sound type systems) you have to "wrestle" the compiler, which insists on the stronger property, even though you know that the weaker property -- the one you're interested in -- holds. In other words, sometimes you do know better than the compiler.

That is not to say that the approach where the compiler enforces stronger invariants is always right or always wrong, or even right or wrong most of the time, but that "wrestling the compiler" is something that even the most skilled programmers confront from time to time.
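A concrete Rust-flavoured sketch of that gap (not from the article): the program below is clearly fine at runtime, but the commented-out line doesn't satisfy the stronger property the checker enforces, so you restructure it into a form that does.

  fn main() {
      let mut v = vec![1, 2, 3];

      // Two mutable borrows of distinct elements would be fine at runtime,
      // but the checker can't prove the indices are distinct:
      // let (a, b) = (&mut v[0], &mut v[1]);  // rejected: second mutable borrow

      // Restructured so the stronger, checkable property holds:
      let (first, rest) = v.split_first_mut().unwrap();
      *first += rest[0];

      println!("{:?}", v); // [3, 2, 3]
  }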

[1]: This is because inductive invariants are compositional, i.e. they hold for some composition of program terms t and s iff they hold for t and s and their composition operator, and type systems want to be compositional.


No, GP can just not use Rust, they don't have to use GC languages to have something that makes sense and doesn't force you to always have a debate with the compiler about even simple things.

If they used Odin (or Zig) they could've looped through that dynamic array no problem, in fact:

    package example
    
    import "core:fmt"
    
    main :: proc() {
        xs: [dynamic]int
        append(&xs, 1, 2)

        for x in xs {
            fmt.println(x)
        }
        
        fmt.println(len(xs))
    }
It is ridiculous that Rust complains even about the simple for loop, and saying that this somehow comes down to "Well, everyone would do it this way if they cared about memory safety" is just not really true or valuable input. It sounds like what someone would say if their only systems programming experience came from Rust and they had post-rationalized everything they've seen in Rust as being how you have to do it.

My tips to people who maybe feel like Rust seems a bit overwrought:

Look for something else, check out Odin or Zig, they've got tons of ways of dealing with memory that simply sidestep everything that Rust is about (because inherently Rust and everything that uses RAII has a broken model of how resources should be managed).

I learned Odin just by reading its Overview page (https://odin-lang.org/docs/overview/) and trying stuff out (nowadays there are also good videos about Odin on YouTube), then found myself productively writing code after a weekend. Now I create 3D engines using just Odin (and we in fact use only a subset of what is on that Overview page). Things can be simple, straight forward and more about the thing you're solving than the language you're using.


I dunno; I've never tried Zig before, and it wasn't hard to check whether this kind of bug was easy to have:

  const std = @import("std");
  
  pub fn main() !void {
      var gpa: std.heap.GeneralPurposeAllocator(.{})=.{};
      const alloc=gpa.allocator();
  
      var list = try std.ArrayList(u8).initCapacity(alloc, 1);
      const a = try list.addOne();
      a.* = 0;
      std.debug.print("a={}\n", .{a.*});
      const b = try list.addOne();
      b.* = 0;    
      std.debug.print("a={}\n", .{a.*});
      std.debug.print("b={}\n", .{b.*});
  }


  a=0
  Segmentation fault at address 0x7f9f7b240000


I think it is important to note that in 59nadir's example, the reason Rust gives an error and Odin doesn't is not memory safety. Rust uses move semantics by default in a loop while Odin appears to use copy semantics by default. I don't really know Odin, but it seems like it is a language that doesn't have RAII. In which case, copy semantics are fine for Odin, but in Rust they could result in a lot of extra allocations if your vector was holding RAII heap allocating objects. Obviously that means you would need to be careful about how to use pointers in Odin, but the choice of moving or copying by default for a loop has nothing to do with this. For reference:

Odin (from documentation):

  for x in some_array { // copy semantics
  for &x in some_array { // reference semantics
  // no move semantics? (could be wrong on this)
Rust:

  for x in vec.iter().copied() { // bytewise copy semantics (only for Copy types)
  for x in vec.iter().cloned() { // RAII copy semantics
  for x in &vec { // reference semantics
  for x in vec { // move semantics
C++:

  for (auto x : vec) { // copy semantics
  for (auto &x : vec) { // reference semantics
  for (auto &&x : vec) { // move semantics
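To make the allocation point above concrete, a small sketch assuming the vector holds heap-owning Strings:

  fn main() {
      let names = vec![String::from("alice"), String::from("bob")];

      for name in &names {                // reference semantics: no allocations
          println!("{name}");
      }

      for name in names.iter().cloned() { // RAII copy semantics: one fresh
          println!("{name}");             // heap allocation per element
      }

      for name in names {                 // move semantics: no copies, but
          println!("{name}");             // `names` is consumed by the loop
      }
  }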


And why do you think that bug is relevant in the case of a loop that prints the elements of a container? We can all see and verify at a glance that the code is valid, it's just not provably valid by the Rust compiler.

I feel like these posts trying to show possible memory issues with re-allocated dynamic arrays are missing the point: There is no code changing the underlying array, there is no risk of any kind of use-after-free error. This is exactly the kind of case where all of this jumping through hoops shouldn't be needed.


> There is no code changing the underlying array, there is no risk of any kind of use-after-free error.

There is none of this code, until there is.


Ok, so we've established that the loop can be verified as not changing the container in any way; what makes you believe this shouldn't be obvious to the Rust compiler?

When code that modifies the container is added, it should be understood and then correctly errored about; I don't get why this is such a crazy concept to people.

The point here is that you pay the cost for an error that can't happen. It's just a micro example of a much more general issue that boils down to:

The Rust compiler does a lot to find and mitigate bugs, it's amazing, but it also rejects completely valid programs and solutions because it simply isn't good enough (and it's a difficult enough problem where I'm prepared to say it will never be good enough). You can either pay that cost constantly and for certain problems be dealing with it (a lot) for no gain whatsoever (because the bugs it was trying to prevent weren't actual issues or are in fact imperatives because the thing you're doing requires them) or you can choose not to.

I don't think it's particularly useful to make excuses for the compiler not understanding very basic things in simple examples and indirectly argue that it would be too complicated to see what the loop is doing and act accordingly. Rust already signed up for a very complicated compiler that does all kinds of crazy things in order to mitigate bugs; this type of introspection would increase the accuracy of it a lot.


> You can either pay that cost constantly and for certain problems be dealing with it (a lot) for no gain whatsoever (because the bugs it was trying to prevent weren't actual issues or are in fact imperatives because the thing you're doing requires them) or you can choose not to.

Alternatively, you can use Rust so much that these limitations become second nature, and you stop running into them in the first place.

> I don't think it's particularly useful to make excuses for the compiler not understanding very basic things in simple examples and indirectly argue that it would be too complicated to see what the loop is doing and act accordingly.

Great idea, until it stops working. It runs into the paraphrased quote: "Any sufficiently complicated borrow checker is indistinguishable from Dark Magic".

First you say, well, the compiler should be sufficiently smart to figure out that case A1 should work, so you add that; but then another case A2 arises that the compiler should be sufficiently smart to figure out, and so on.

However, once you add a bunch of these "sufficiently smart" borrow rules, you end up with a mess: A1 and A2 don't work if A432 is applied, but do work if A49324 is given and A4 and A2 are satisfied.

The harder the borrow checker is to understand, the more difficult it is to construct a mental model that's useful.

In summary: I'm not against improving the borrow checker, but it needs to be balanced against the cost of being able to understand, at least approximately, how it works.


> Ok, so we've established that the loop can be verified as not changing the container in any way; what makes you believe this shouldn't be obvious to the Rust compiler?

I would be quite happy for the Rust compiler to be able to perform more powerful analysis and make writing code easier. What I object to, and what I think that quite small Zig code snippet highlights, is the claim that dealing with those shortcomings is

> for no gain whatsoever

That claim is plainly wrong.


I make custom 3D engines and I can tell you that it would not be a net benefit for us to use Rust. That's why I added "for certain problems" as a qualifier; there are use cases where Rust would be a net negative.

There are also plenty of use cases where Rust is actually useful and provides guarantees about things that you want guarantees about.


For anyone curious about Odin and graphics, it seems to work really well:

https://gist.github.com/NotKyon/6dbd5e4234bce967f7350457c1e9...

https://www.youtube.com/watch?v=gp_ECHhEDiA


And how should resources be managed?


In bulk, i.e. not one-by-one, which is what RAII implies and is mostly used for. RAII works best for a one-by-one use case, and in well-designed, performant systems the one-by-one use case is either irrelevant, rare, or an anti-pattern.


Why does RAII work best in a one-by-one use case? You can have an Arena and it can be managed by RAII. That doesn't mean you're freeing each piece of memory one at a time.


if you want bulk, you can use arrays, vecs, arenas, etc.


Rust, in many ways, is a terrible first systems programming language.

To program a system is to engage with how the real devices of a computer work, and very little of their operation is exposed via Rust or even can be exposed. The space of all possible valid/safe Rust programs is tiny compared to the space of all useful machine behaviours.

The world of "safe Rust" is a very distorted image of the real machine.


> Rust, in many ways, is a terrible first systems programming language.

Contrariwise, Rust is, in many ways, an awesome first systems programming language. Because it tells you and forces you to consider all the issues upfront.

For instance in 59nadir's example, what if the vector is a vector of heap-allocated objects, and the loop frees them? In Rust this makes essentially no difference, because at iteration you tell the compiler whether the vector is borrowed or moved and the rest of the lifecycle falls out of that regardless of what's in the vector: with a borrowing iteration, you simply could not free the contents. The vector generally works and is used the same whether its contents are copiable or not.
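A small sketch of that distinction with a vector of heap-allocated Strings:

  fn main() {
      let v = vec![String::from("a"), String::from("b")];

      // Borrowing iteration: there is no way to free an element through
      // a shared reference, so the contents are guaranteed to survive.
      for s in &v {
          println!("{s}");
      }
      println!("{}", v.len()); // still fine

      // Moving iteration: each element is owned, so the body may consume it.
      for s in v {
          drop(s); // frees the String right here
      }
      // `v` no longer exists; using it again would be a compile error.
  }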


A lot of idiomatic systems code is intrinsically memory unsafe. The hardware owns direct references to objects in your address space and completely disregards the ownership semantics of your programming language. It is the same reason immediately destroying moved-from objects can be problematic: it isn’t sufficient to statically verify that the code no longer references that memory. Hardware can and sometimes does hold references to moved-from objects such that deferred destruction is required for correctness.

How is someone supposed to learn idiomatic systems programming in a language that struggles to express basic elements of systems programming? Having no GC is necessary but not sufficient to be a usable systems language but it feels like some in the Rust community are tacitly defining it that way. Being a systems programmer means being comfortable with handling ambiguous object ownership and lifetimes. Some performance and scalability engineering essentially requires this, regardless of the language you use.


None of these "issues" are systems issues; they're memory safety issues. If you think systems programming is about memory safety, then you're demonstrating the problem.

Eg., some drivers cannot be memory safe, because memory is arranged outside of the driver to be picked up "at the right time, in the right place" and so on.

Statically-provable memory safety is, ironically, quite a bad property to have for a systems programming language, as it prevents actually controlling the devices of the machine. This is, of course, why rust has "unsafe" and why anything actually systems-level is going to have a fair amount of it.

The operation of machine devices isn't memory safe -- memory safety is a static property of a program's source code that prevents describing the full behaviour of devices correctly.


Water is wet.

Yes, touching hardware directly often requires memory unsafety. Rust allows that, but encourages you to come up with an abstraction that can be used safely and thereby minimize the amount of surface area which has to do unsafe things. You still have to manually assert / verify the correctness of that wrapper, obviously.
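A sketch of what that looks like in practice (the register address and layout here are entirely made up): the unsafety is asserted once, in one place, and everything else calls a safe API.

  pub struct StatusLed {
      reg: *mut u32,
  }

  impl StatusLed {
      /// Safety: `addr` must be the MMIO address of the LED register,
      /// mapped and valid for volatile writes for the lifetime of this value.
      pub unsafe fn new(addr: usize) -> Self {
          Self { reg: addr as *mut u32 }
      }

      /// Safe for callers: the invariant was established once, in `new`.
      pub fn set(&mut self, on: bool) {
          unsafe { core::ptr::write_volatile(self.reg, on as u32) }
      }
  }

  fn main() {
      // On real hardware this would be something like:
      // let mut led = unsafe { StatusLed::new(0x4000_0000) };
      // led.set(true);
  }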

> This is, of course, why rust has "unsafe" and why anything actually systems-level is going to have a fair amount of it.

There are entire kernels written in Rust with less than 10% unsafe code. The standard library is less than 3% unsafe, last I checked. People overestimate how much "unsafe" is actually required and therefore they underestimate how much value Rust provides. Minimizing the amount of code doing unsafe things is good practice no matter what programming language you use, Rust just pushes hard in that direction.


> For instance in 59nadir's example, what if the vector is a vector of heap-allocated objects, and the loop frees them?

But the loop doesn't free them. This is trivial for us to see and honestly shouldn't be difficult for Rust to figure out either. Once you've adopted overwrought tools they should be designed to handle these types of issues, otherwise you're just shuffling an esoteric burden onto the user in a shape that doesn't match the code that was written.

With less complicated languages we take on the more general burden of making sure things make sense (pinky-promise, etc.) and that is one that we've signed up for, so we take care in the places that have actually been identified, but they need to be found manually; that's the tradeoff. The argument I'm making is that Rust really ought to be smarter about this, there is no real reason it shouldn't be able to understand what the loop does and treat the iteration portion accordingly, but it's difficult to make overcomplicated things because they are exactly that.

I doubt that most Rust users feel this lack of basic introspection as to what is happening in the loop makes sense once you actually ask them, and I'd bet money most of them feel that Rust ought to understand the loop (though in reading these posts I realize that there are actual humans that don't seem to understand the issue as well, when it's as simple as just reading the code in front of them and actually taking into account what it does).


> But the loop doesn't free them.

What if it did free them in a function you don't directly control?


> forces you to consider all the issues upfront.

Ever wonder why we do not train pilots in 737s as their first planes? Plenty of complex issues do NOT, in fact, need to be considered upfront.


YMMV, naturally, but I've found that some embedded devices have really excellent hardware abstraction layers in Rust that wrap the majority of the device's functionality in an effectively zero-overhead layer. Timers? GPIO? Serial protocols? Interrupts? It's all there.

- https://docs.rs/atsamd-hal/

- https://docs.rs/rp2040-hal/


> It's annoying to be bad at tracking ownership and guess what: most people are. The ones who aren't have decades of experience in C/C++ and employ much the same techniques that Rust guides you towards.

You wouldn't need to do that here, in SPARK, or Oberon, or just about any other memory safe language. This is unique to Rust and its model - and it absolutely is not required for safety. It's not a skill issue. It's a language design problem.


doesn't spark do something inspired by rust to get safe dynamic memory allocation? https://docs.adacore.com/spark2014-docs/html/ug/en/source/ac...

what does oberon do?


SPARK had it before Rust existed. However, gnatprove doesn't require you, the programmer, to change anything. The compiler does the work to ensure safety, not you.

Oberon is similar. The typesolver will determine if something is safe, without the need for explicitly borrowing anything.


> Oberon is similar. The typesolver will determine if something is safe, without the need for explicitly borrowing anything.

so what does oberon do to prevent you from resizing a possibly reallocating array while holding a reference into it?


It doesn't allow those operations to occur at the same time. If you can't meet compile time guarantees, then it does not compile.

Pointers aren't the same as in C. A pointer has an explicit type, not just a size. A pointer cannot change where it is located while it is in scope anywhere else.

If you then make your pointer local, it will get cloned. As there's no concept of a void pointer, every type supports cloning, and so your thread-local variable will have nothing to do with the parent any longer.

So if you try to grab a local reference, to something in another thread, you'll get a copy, or a compile time error if you don't copy it.

If you try to modify something you're looping over, it won't compile at all.

However, in all of this, there's no extra syntax. The compiler can deal with what is permitted. The programmer can just write.

This is multithreaded (by compiler switch):

        module example610a;
        type
        Vector = array * of integer;
        var
          i, n: integer;
          a: Vector;
        begin
          a := new Vector(n);
          for i := 0 to len(a) - 1 do
            write("a[",i:2,"]: "); read(a[i])
          end;
          writeln;
          for i := 0 to len(a) - 1 do
            write(a[i]:3);
          end;
        writeln;
        end example610a.


That systems languages have to establish (1) memory safety, (2) statically, (3) via the highly specific kind of type system given in Rust, and (4) with limited inference -- suggests a lack of imagination.

The space of all possible robust systems languages is vastly larger than Rust.

Its specific choices force confronting the need to statically prove memory safety via a cumbersome type system very early -- this is not a divine command upon language design.


> The space of all possible robust systems languages is vastly larger than Rust.

The space of possible CVEs is also vastly larger outside of Rust.

My biggest takeaway from Rust isn't that it's a better C++, but that it's an extremely fast (no runtime-limited GC) and less footgunny Java.


sure, rust is not the final answer to eliminating memory safety bugs from systems programming. but what are the alternatives, that aren't even more onerous and/or limited in scope (ats, frama-c, proofs)?


My preference is to have better dynamic models of devices (e.g., how many memory cells does RAM have, how do they work dynamically, etc.) and line them up with well-defined input/output boundaries of programs. Kinda "better fuzzing".

I mean, can we run a program in a well-defined "debugging operating system" in a VM, with simulated "debugging devices" and so on?

I don't know much about that idea, or the degree to which that vision is viable. However, it's increasingly how the most robust software is tested -- by "massive-scale simulation". My guess is it isn't a major part of, say, academic study because it's building tools over years rather than writing one-off papers over months.

However, if we had this "debuggable device environment", I'd say it'd be vastly more powerful than Rust's static guarantees and allow for a kind of "fearless systems programming" without each loop becoming a sudoku puzzle.


This is yet another issue with Rust: nowhere in my post did I mention C++, and I made no effort to compare the two languages. I just pointed out poor developer ergonomics in Rust, and you're instigating a language flame war as if you took valid criticism as a personal attack; you can do better than that.


> poor developer ergonomics

I don't think it's poor developer ergonomics. The compiler tells you "Hey, try adding &x at this position".

It's unfamiliarity with Rust's type system and syntax sugars.

<HYPERBOLE>

I'll take 1000 compiler errors over a single non-deterministic bug that only happens on ARM at 6 o'clock on Blue Moon when Mercury is in Orion (it's UB).

And I'd ritually sacrifice my first unborn werekid to the Dark Compiler Gods for a compiler error that actually suggests a correct fix.

</HYPERBOLE>


My 2¢, it’s perfectly reasonable to bring up other languages in defense of criticism, because it explains why these decisions were made in the first place. GP literally said that rust isn’t a good fit for you if you’re in a position to use a GC. The comparison to C++ is important because it’s one of very, very few contemporary languages that also doesn’t require a GC/refcounting everywhere. So it’s useful to compare to how C++ does it.

Yet another issue with people who criticize rust: they don’t want anyone to defend rust, and complain loudly about anyone defending rust as being a literal problem with the language. You can do better than that.



