Hacker News
2017 Rust Roadmap (github.com/aturon)
332 points by muizelaar on Oct 22, 2016 | hide | past | favorite | 197 comments



I read through the Rust book, and the problem I had with it and the other docs is that it was hard to map the Rust concepts to what actually runs when the code is compiled. For a language that touts "uncompromising performance", it was difficult for me to find performance characteristics of the underlying abstractions and std library (for example, are algebraic data structures just tagged unions, or does the compiler do fancier things with them? What about iterators?). I'd really like to see a "Rust for C/C++ devs" guide that helps you figure out: if you were using [some C++ feature], here's the way to get that behavior/performance with idiomatic Rust.

Another thing that is still tricky for me is figuring out when I should use 'unsafe' blocks in my code. Is it to be avoided if at all possible, or should I go there any time the 'safe' part of the language is making it difficult to express what I want? The meme that Rust is C++ without segfaults or race conditions is a bit misleading, since the actual guarantee is that you don't get segfaults or race conditions outside of unsafe blocks, and any nontrivial project will make use of unsafe blocks.


> for example, are algebraic data structures just tagged unions

They're tagged unions with no implicit heap allocations. I guess we should at least document that in the reference (though we don't want to overspecify, because we do some tricks in the compiler to try to avoid leaving space for the tag if we can). But I don't think it'd be a good idea to document this straight away in the book: the goal is to make Rust easy to pick up, and adding more information than necessary when introducing enums (which a lot of folks will only see for the first time in Rust) isn't going to do people any favors.
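To make this concrete, here is a small check you can run yourself (a sketch; the printed size assumes a typical 64-bit target, since layout is mostly unspecified, but the `Option<Box<T>>` equality is a documented guarantee):

```rust
use std::mem::size_of;

// A data-carrying enum is laid out as a tagged union: a discriminant
// plus space for the largest variant's payload, with no heap allocation.
#[allow(dead_code)]
enum Shape {
    Circle(f64),      // 8-byte payload
    Rect(f64, f64),   // 16-byte payload
}

fn main() {
    // Largest payload (16 bytes) + tag, rounded up to alignment.
    println!("Shape: {} bytes", size_of::<Shape>());
    // One of the "tricks to avoid leaving space for the tag": Option of a
    // non-nullable pointer needs no separate tag -- None is the null pointer.
    assert_eq!(size_of::<Option<Box<u8>>>(), size_of::<Box<u8>>());
}
```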

> What about iterators?

The documentation for Iter explains this pretty well, I think: https://doc.rust-lang.org/stable/std/iter/

It even shows the exact implementation of (basically) Range, to give you an idea.

That said, it would probably be worth calling out that most iterators are guaranteed not to allocate. Note, though, that that isn't a hard-and-fast constraint that implementors of the trait have to abide by—you can make your own iterators and implement them however efficiently or inefficiently you like.
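For anyone curious what such an iterator looks like, here is a minimal hand-rolled one (a hypothetical `Countdown` type, not from the stdlib); all of its state lives on the stack, and nothing allocates until you ask for a `Vec`:

```rust
// A custom iterator is just a struct plus a next() method.
struct Countdown(u32);

impl Iterator for Countdown {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        if self.0 == 0 {
            None
        } else {
            self.0 -= 1;
            Some(self.0 + 1) // yield the value before decrementing
        }
    }
}

fn main() {
    // collect() allocates a Vec, but the iterator itself never does.
    let v: Vec<u32> = Countdown(3).collect();
    assert_eq!(v, vec![3, 2, 1]);
}
```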

> The meme that Rust is C++ without SegFaults and or race conditions is a bit misleading since the actual guarantee is that you don't get SegFaults or Race conditions outside of Unsafe blocks, and any nontrivial project will make use of unsafe blocks.

They'll make use of unsafe blocks transitively, by using unsafe code in the standard library or well-tested crates. Think of these unsafe blocks as part of the compiler: you trust the compiler to generate correct machine code when you type "a + b", so likewise you trust the standard library to do the right thing when you say "HashMap::new()".

It is not the case that most projects should use unsafe themselves everywhere: that usually just makes life harder. The primary exception is if you're binding to C code, in which case unsafe is unavoidable.


Speaking of "guaranteed not to allocate", is there a way that you could express that in a type? Seems like that might be nice to have.


Not in the type system itself, but you could write a lint to forbid heap allocation. This way, you could annotate a function (with e.g. `#[forbid(allocations)]`) to get a compile error when your function (or code your function calls) tries to allocate. This might not be easy, though :)


Not in Rust's type system. In a pure language, you could have some sort of "Heap" monad similar to the "IO" type in Haskell.


So Rust has no way to mark side effects and global dependencies of functions? Allowing singletons in a language that is supposed to be safe sounds like a huge design flaw.


Mutable statics are unsafe to access or update. You can use interior mutability with something like a Mutex to get a mutable-but-not-to-rustc value, which is safe.

Systems programming languages need this kind of functionality.
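A minimal sketch of that pattern (note: declaring `Mutex::new` in a `static` relies on it being a `const fn`, which is true on recent Rust; older code reached for `lazy_static` instead):

```rust
use std::sync::Mutex;

// A global with interior mutability: safe to touch from safe code,
// because the Mutex serializes all access.
static COUNTER: Mutex<u64> = Mutex::new(0);

fn bump() -> u64 {
    let mut n = COUNTER.lock().unwrap();
    *n += 1;
    *n
}

fn main() {
    let a = bump();
    let b = bump();
    assert_eq!(b, a + 1); // each call sees the shared state
}
```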


Putting a mutex around a global variable doesn't change the fact that it is still a global variable.

Memory access might be safe but you get spaghetti code and combinatorial state explosion due to all the potential side effects.

Allowing singletons for edge cases is fine, but with no proper way to enforce that except code review, you really have no idea what the underlying code might potentially do.


I agree with you that using globals as sparingly as possible is good, but your original claim was about safety, so that's what I focused on.


I am not sure if you have come across this "Rust tutorial for c/c++ programmers" https://github.com/nrc/r4cppp but I found it to be nice when I was first exploring Rust (I had prior experience with C++).

I haven't had to resort to "unsafe" blocks in the Rust I have written so far but "ffi" is one use case for unsafe blocks. Another resource that I have yet to read is "https://doc.rust-lang.org/nomicon/" which seems to explain how to write unsafe Rust code.


Is there an equivalent guide to the compile-time representation of important constructs for those trying to learn C/C++? I haven't seen anything like that and it seems like most devs in that realm instead rely on experience and tribal knowledge (which is, AFAICT, how it often works in Rust-land right now). I agree it'd be great for Rust to have clearer official docs about some of these things, but it doesn't seem to me like this is readily available for most languages or runtimes.

Re: unsafe, I think that's tough. My personal feeling is that many of Rust's selling points rely on minimizing the use of unsafe (i.e. limiting segfault-relevant portions of the code), and that there are frequently ways to make things work and also make them fast without using unsafe. What's an example where you found yourself thinking about using unsafe instead of a more complex safe construct?

(Somewhat related to this, and especially for anyone reading who might try Rust, I cannot recommend getting on IRC strongly enough. The Rust IRC channels are by and large incredibly friendly and helpful, and for better or worse that's where a lot of the knowledge in the community is currently collected, not as much SO or blogs)


> Is there an equivalent guide to the compile-time representation of important constructs for those trying to learn C/C++?

Definitely. It was taught in school and there's pretty good guides for it online (maybe not caught up to c++11 and beyond, but the fundamentals are there). You're right that it is not readily available for most languages, but when you need to get serious about performance you either are going to have a guide or spend a lot of time looking at assembly/bytecode. To be fair, I'd probably still have to inspect generated code sometimes, but it's nice to have good instincts for how things run to guide your design/implementation so you can spend less time looking at assembly.

http://www.agner.org/optimize/optimizing_cpp.pdf


I definitely haven't seen anything as comprehensive as the linked PDF for Rust (although that shouldn't be surprising given the extreme thoroughness of that guide and the age of C++). Probably a good project!

When doing very performance sensitive things in Rust, I usually find myself asking questions a lot on IRC and looking at disassembly in perf.

To answer some specific examples you cited above: I'm pretty sure that enums are (almost?) always equivalent to tagged unions. If you have an enum which doesn't contain any data, then I believe its representation is just the tag. Iterators are just structs with methods; the various generic functions they implement are monomorphized and then optimized by LLVM.


> any nontrivial project will make use of unsafe blocks.

Sure! But that's okay. Just don't use 'quantity of unsafe blocks' as a metric of quality and you'll be all set. Think of it like so: don't use it until you have to and try not to have to. For me, that means consulting experts on IRC (etc), "How can I express this goal in idiomatic rust?" No different from learning C/C++ for the first time, IMO. And if no good way exists you may have to use unsafe blocks.

Unsafe blocks aren't bad, just like #pragma-disable-this-warning and --static-checker-I-did-it-this-way-by-design aren't bad. They mean that you've thought critically about the pros and cons and you are going into this decision well aware of the risk. On the flip side they should be the first blocks to closely examine in the face of failures like segfaults/races/etc.


> > any nontrivial project will make use of unsafe blocks.

I don't think that's actually true? Most projects make use of no unsafe outside of stdlib and a handful of crates.io crates.


This has been my experience also. I've written at least 40kloc of rust over the past couple years (including complex graphs with cycles, low-level DSP) and I could probably count the number of unsafe blocks I've needed on one hand.

edit: This is not counting FFI though.


Excluding FFI I might just be able to count the number on unsafe blocks I’ve needed on one hand. But honesty compels me to declare that for reasons of performance micro-optimisation, I’ve written a lot more.


How do you handle cycles? I've seen discussions on places like /r/rust where people didn't seem to have any pleasant answers.


I tend to use petgraph[1] when I need a graph-like data structure (the only time I've needed cycles). Super fast, distinguishes `Node`s from `Edge`s, lots of useful items for different kinds of traversal.

[1]: https://github.com/bluss/petgraph
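One common safe-Rust pattern, which petgraph itself generalizes with its `NodeIndex`/`EdgeIndex` types, is to store nodes in a `Vec` and treat indices as pointers, so cycles are just repeated indices. A toy sketch (not petgraph's actual API):

```rust
// An index-based graph: no Rc/RefCell, no unsafe, cycles are fine.
struct Graph {
    nodes: Vec<&'static str>,
    edges: Vec<(usize, usize)>, // (from, to) node indices
}

impl Graph {
    fn add_node(&mut self, label: &'static str) -> usize {
        self.nodes.push(label);
        self.nodes.len() - 1
    }
    fn neighbors(&self, n: usize) -> Vec<usize> {
        self.edges.iter().filter(|e| e.0 == n).map(|e| e.1).collect()
    }
}

fn main() {
    let mut g = Graph { nodes: vec![], edges: vec![] };
    let a = g.add_node("a");
    let b = g.add_node("b");
    g.edges.push((a, b));
    g.edges.push((b, a)); // a cycle -- the borrow checker doesn't care
    assert_eq!(g.neighbors(b), vec![a]);
}
```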


About the only time I end up using it is for ffi and uninitialized arrays on the stack.


stdlib has plenty of unsafe blocks inside it, as do many crates. Claiming one doesn't use unsafe blocks because none are visible in one's lib.rs or whatever doesn't mean they aren't there.


If you're going to consider unsafe blocks in other libraries (and especially the standard library) as just as "bad" as ones in your own, you have to include the compiler itself too ("oh your compiler generates machine code, that's unsafe!!!"). This logic of course applies to every language, as everything bottoms out in machine code/hardware, and thus it is a fairly uninteresting point.

The power of Rust is the ability to wrap dangerous code into safe abstractions without cost, and unsafe blocks are essentially a flag for "this is dangerous, make sure it's contained".


I think of libstd as basically what would be part of the compiler or runtime in other languages. In some languages (e.g. Go), things like hash maps are in the language and implemented directly with unsafe code, and nobody thinks the language is less safe because of it.


Every language that exists, compiled or interpreted, typed or untyped, ultimately relies on code which could violate every guarantee that language makes. Most often, that code is written in C or C++, and is a part of the language's runtime or compiler toolchain.


There's certainly a difference between writing one's own unsafe blocks and relying on functionality which was implemented using unsafe in a community project (which has hopefully been vetted by some community members).


Most projects use C bindings; it's not unfair to say that the quality of many of the C bindings on crates.io doesn't match the quality of stuff in the standard library.

(ie sure, maybe you're not writing unsafe yourself, but you'll quite possibly hit an issue where you have to dig into a crate that does)


Most crates don't make use of unsafe, the stuff that I see that does use unsafe is mostly either embedded applications or stuff like the std lib.

And even if you do use unsafe, you can still use it to build 'safe' abstractions on top of. The idea is that your unsafe code is quarantined and abstracted, and you build on top of it. std::collections is a great example of this.
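The classic illustration of this (adapted from the Rust book's `split_at_mut` example) is a function that needs `unsafe` internally, because the borrow checker can't see that the two halves don't overlap, yet exposes a perfectly safe API:

```rust
// A simplified split_at_mut: unsafe inside, safe at the boundary.
fn split_at_mut(slice: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
    let len = slice.len();
    assert!(mid <= len); // uphold the invariant the unsafe block relies on
    let ptr = slice.as_mut_ptr();
    unsafe {
        (
            std::slice::from_raw_parts_mut(ptr, mid),
            std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}

fn main() {
    let mut v = [1, 2, 3, 4, 5];
    let (left, right) = split_at_mut(&mut v, 2);
    left[0] = 10;
    right[0] = 30;
    assert_eq!(v, [10, 2, 30, 4, 5]);
}
```

Callers never see the raw pointers; the `assert!` is what makes the unsafe block sound.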


This is an area in which I haven't quite figured out how to communicate properly; I feel like I have a good understanding of how Rust maps to asm, but I don't know how to transfer that understanding to other people.

I'll certainly be reading your link below, thanks for that!


Would an updated and expanded Rust for C++ Programmers make sense as a companion to the book, with references to concepts in the book along with low-level details on data structures and implementation? It would be nice to see that and the Nomicon more closely related and fully up-to-date.

It sounds like an informal "specification" of #[repr(C)] or #[repr(packed)] for common platforms would also be useful for FFI.


Yeah it'd be great to have.


> it was hard to map the Rust concepts with what actually runs when it is compiled.

This. The primary reason to choose Rust is performance -- that is, you want more advanced abstraction/safety capabilities than C++, and you want that with the same or better performance. And performance implies control over CPU cost and memory usage/layout. There really isn't any point otherwise.

Therefore, going into at least a little bit of detail on the idioms and their performance impact is important. Rust is supposed to be a systems programming language that replaces C; do not pretend it's as abstract as ML in the documentation.

What throws me for a loop when learning Rust isn't high-level details like the borrow checker or sum types, it's what is happening with CPU & memory[1]: when things are copied, when they aren't, how much the std derives cost, matching cost, sum type storage cost, etc. Because while the semantics are similar to C++, they are not the same. And you don't have to go nuts specifying it (compilers will differ), but at least give a general understanding/hint of what to do and what to avoid.

To be fair, the doc does have this scattered about, but it doesn't feel like a priority (there are many things I've had to search the web for, ask on #rust about, or just look at disassembly for).

[1] To take an extremely simple example, RVO (return value optimization) is something Rust supposedly does much more consistently than C++, so returning a new struct by value to be placed wherever the caller wants is idiomatic. However this isn't called out very clearly (the last time I checked) in the doc, and to a C programmer it feels very wrong. Stuff like this is extremely important; otherwise we'd just ignore performance and use a JVM-based language with the same (or better) abstraction features and a faster compiler. :-)
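For example, the idiomatic Rust version of a C-style "out parameter" is simply returning the struct by value and letting the compiler construct it in place (a sketch with hypothetical names; whether the copy is actually elided is up to the optimizer):

```rust
// A struct big enough that a C programmer would reach for an out-pointer.
struct Matrix {
    data: [f64; 16],
}

// Returning by value is the idiomatic signature; no &mut out-param needed.
fn identity() -> Matrix {
    let mut m = Matrix { data: [0.0; 16] };
    for i in 0..4 {
        m.data[i * 4 + i] = 1.0; // set the diagonal
    }
    m
}

fn main() {
    let m = identity();
    assert_eq!(m.data[0], 1.0);
    assert_eq!(m.data[5], 1.0);
}
```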


> Is it to be avoided if at all possible, or should I go there any time the 'safe' part of the language is making it difficult to express what I want?

The main reasons to use unsafe code are when you're doing FFI and regrettably have to talk to C/C++ libraries, or when designing new abstractions with a safe API boundary. It's tricky to ensure that the former is safe, since you eventually have to trust the C++ (but then, that's not Rust's fault). It's not hard to ensure that the latter is safe: looking at a page of code and ensuring that it can't cause segfaults is a much easier task than doing it for an entire codebase.

This is almost all the unsafe code out there. There's a bit of it used for doing manual optimizations. When Rust doesn't let you do what you want, often there are abstractions like RefCell that have a small cost that you can use (and they contribute to the overall safety). In case this happens in performance critical code, you can use unsafe again, but this is very rare.
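A small example of the RefCell trade-off mentioned above: the borrow check moves from compile time to runtime, at the cost of a flag check rather than any unsafety:

```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(vec![1, 2]);

    // A dynamically-checked mutable borrow, dropped at end of statement.
    cell.borrow_mut().push(3);
    assert_eq!(cell.borrow().len(), 3);

    // Overlapping borrows are caught at runtime rather than compile time:
    let _r = cell.borrow();                 // shared borrow held...
    assert!(cell.try_borrow_mut().is_err()); // ...so a mutable one fails
}
```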

In Servo, for example, almost all of the unsafe code is of the first two kinds. I've been hacking on Servo for years and didn't write much unsafe code at all -- when I did, it had to do with talking to Spidermonkey, and even that was pretty rare. More recently I'm working on integrating Servo's style system into Firefox (which is C++), and only now have I been regularly writing unsafe code. Even for this project the unsafe code I'm writing abstracts away the inherent unsafety of Firefox's C++ so that others can talk to Firefox with safe Rust code.

But many projects have no unsafe code at all. It's not that common to have unsafe code.

> it was difficult for me to find performance characteristics of the underlying abstractions and std library (for example, are algebraic data structures just tagged unions or does the compiler do more fancy things with them? What about iterators?).

Note that a C++ book won't help here for C++ either. What is a switch compiled down to? Does it use a jump table? :)

But yeah, it would be nice to have a thing for this. I don't think it belongs in the official book, but it should exist :)

ADTs are tagged unions. When non-nullable pointers are involved, sometimes the tag is encoded as a null pointer (e.g. `Option<Box<Foo>>` is a single pointer, and is None when null). Aside from that, nothing fancy.

Iterators compile down to the equivalent for loops. I can't think of any stdlib iterator which implicitly allocates; they all operate on the stack. In general these are just zero-cost abstractions; they will compile down to the code you would have written with manual loops. This is a recurring theme with the stdlib and even crates from the ecosystem. "Extra" costs for abstractions are eschewed in Rust and will often be documented when they exist. So a good rule of thumb is to assume that a random abstraction has no overhead unless explicitly mentioned.
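As a concrete sanity check, these two functions (hypothetical names) compute the same thing, and the adapter chain involves no allocation -- it is the manual loop, just written declaratively:

```rust
// Iterator-adapter version: filter + map + sum, all on the stack.
fn sum_of_even_squares(xs: &[i64]) -> i64 {
    xs.iter().filter(|&&x| x % 2 == 0).map(|&x| x * x).sum()
}

// The loop you would have written by hand.
fn sum_manual(xs: &[i64]) -> i64 {
    let mut total = 0;
    for &x in xs {
        if x % 2 == 0 {
            total += x * x;
        }
    }
    total
}

fn main() {
    let xs = [1, 2, 3, 4];
    assert_eq!(sum_of_even_squares(&xs), 20); // 4 + 16
    assert_eq!(sum_of_even_squares(&xs), sum_manual(&xs));
}
```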


I would also suggest taking a page out of Apple's playbook and providing some official sample apps like https://developer.apple.com/library/content/navigation/#sect.... Reading the books is one thing, but seeing the patterns actually used is another. You don't need as many as Apple, only a couple really, but make sure they are well written and straddle a couple of use cases. And like go crazy with the idioms, I want to see the most idiomatic Rust code.

When learning a new platform, it's always frustrating having to hunt for good code. I actually find these similarly important as docs (maybe even more because I tend to reference good open source for much longer than docs).


I agree with the example apps. Some annotated source would be great, too. Something similar to the way dc.js annotates this example: https://dc-js.github.io/dc.js/docs/stock.html


+1 for using dc.js-style examples. I've wished many times that "Rust by Example" could follow this type of detailed explanation and complex example.


Personally I think Rust should take this one step further. Actually build templates into the platform itself.

For example one for creating a microservice complete with routes, test cases, JSON support etc. Another for a command line application.

In languages with a steeper learning curve, like Rust, there needs to be a more opinionated approach to teaching users.


I'm not sure how valuable this type of thing really is. Templates tend to fall out of date: since they're not actively used, they're not actively updated to new practices. I've seen this with lots of templates in other languages.

The libraries having excellent documentation and pointing to apps that are similarly implemented tends to be better maintained, IMO.


Maybe this could be some kind of cargo command:

    cargo template-gen name something



Thanks for pointing it out; it just isn't clear what the current state is.

It looks kind of dropped, with the GitHub repository only having a basic example.


Yeah. Nobody has championed it as of yet. That's open source :)


I would love to see Rust with a REPL. I use Python professionally a lot and have done a fair amount of OCaml in my spare time, and both have excellent REPLs in the form of `ipython` and `coretop`. Quick experiments with auto-complete are incredibly helpful for exploring a language. That's the only thing I really miss from those languages; when I want to wrap my head around a bit of syntax or a library feature in Rust, I build out a quick little experiment and see if it compiles and behaves as I expect. With a good enough REPL, that's unneeded.


Although not exactly a REPL, the rust playground has proven very useful to me for similar purposes. Of course, you need to be online.

https://play.rust-lang.org/


For languages that do not have REPLs I usually just write unit tests.

This works particularly well for Java, as the Eclipse compiler compiles incrementally and most IDEs will let you run a single method.

I imagine a similar stopgap could be used for Rust. I know it's not the same, but in some ways I have sort of disliked REPLs because it is very easy to accidentally delete or lose what you have typed, and you might as well make your playing around a test (just my 2 cents).



Scala does this extremely well too. Not sure whether or not a snippet will work? Run 'sbt console' and you're in a REPL with all of your code imported.


Something like `cargo shell` or `cargo repl` that automatically imports whatever crate you happen to be working in would be fantastic.

If you had something like that, I imagine you could probably write a Kernel for Jupyter which would be a huge win.


Coming from Haskell too, I missed this. GHCi is really fantastic.


I heard a while back that there was some work going into an interpreter for rust's mid-level IR (MIR), with the intention being to support a REPL. Not sure how that is going though.


The project you're thinking of is called miri, I don't really know the state of it either.

https://github.com/solson/miri


The Dyon[1] language would probably make a good basis for a repl for Rust. It shares many concepts with Rust including a version of ownership.

[1] https://github.com/PistonDevelopers/dyon

You might also be able to link the repl with Rust Language Server which could be linked with incremental compilation, letting you define modules or pub fn's dynamically.


I assume you mean OCaml's "utop" ?


Technically yes. "utop" is the extended REPL, but "coretop" is "utop" except it already has Jane Street Core loaded up.


It's important to note that this is an RFC which has been proposed, and there's likely to be a good deal of discussion and revision before it's merged/adopted as the official 2017 roadmap.

The conversation can be followed/joined/whatever here: https://github.com/rust-lang/rfcs/pull/1774


"accordance with RFC 1728" - could they have chosen a naming scheme so their documents are not confused with the _real_ RFCs? Even RRFC 1728 would be helpful. Or maybe just #1728?


Lexical scope. You should assume an RFC cited in a Rust RFC is a Rust RFC, you're in the Rust RFC scope. If an IETF RFC is cited in a Rust RFC, it won't be ambiguous.


Why do "the _real_ RFC's" have exclusive rights to that term?


Awesome. I have used Iron a few times to write small web services, but I can't wait for Rust to really have a strong story for the backend. If that happens it'll be the first language I reach for whenever I need to write a backend.

I think it has a lot of great bonuses already. Its language ergonomics are those of a high-level language, yet it is extremely fast, and I can be very confident in my code if it compiles.

Rust is one of my favorite languages, keep up the great work guys.


Yup, happy to see everything on the list.

If one thing stands out more than any other, it's Rust's commitment not just to building a language but to building an incredible community, from how seriously the release process is taken to the way new people are welcomed. Serious kudos; I know it's not the most technically engaging work, but it makes a huge difference.


Yes, as far I can tell, Rust is probably (pun intended) the gold standard in these areas. I only looked at Rust briefly, because the language itself felt too complex for my simple needs, but I was hugely impressed by how much concern and effort the core team seem to put into every aspect of the developer experience. The only other group that I know with that level of focus is the .NET people at Microsoft.


Are you saying that Rust, a system programming language, can compete with the scripting languages in speed of developing features?!

Or are you saying that it is a pleasure to use, so you'll use it for smaller backends where development speed isn't that critical?

(Asking, not flaming. :-) )


> Are you saying that Rust, a system programming language, can compete with the scripting languages in speed of developing features?!

I'll say it. At work, the backend is Rails, and I am a Rust contributor in my free time. I am in the early stages of working on a framework for web apps in Rust, and I believe it will be comparably productive to Rails. Only your code will be faster and many bugs will be caught at compile time.


Have you heard about Helix? - http://blog.skylight.io/introducing-helix/

I think this is a great way to introduce large existing Ruby codebases to Rust in increments, much like Sentry demonstrated recently, but for Python: https://blog.sentry.io/2016/10/19/fixing-python-performance-...


Yep. :) Different use case from what I'm working on; Helix is for embedding Rust inside of a Ruby program, while my library is for writing an HTTP service in Rust, so it will sit on top of tokio + hyper.


Bold. :-) I do hope you're correct.

(Given a modest test suite and checking of parameters at external API interfaces, scripting languages didn't have many problems with bugs that could be found at compile time, imho.)


> (Given a modest test suite and checking parameters at external API interfaces, scripting languages didn't have much problems with bugs that could be found at compile time, imho.)

Not my experience at all.


I've given JavaScript about six years to prove that tests make up for static analysis and all I've learned is that we need MORE static analysis.

I've met about four people in a dozen years that write better tests than me, and pretty much everyone else is writing garbage. And when I look back at my own I'm never satisfied.

Essentially the only thing worse than no tests is thinking you have test safety when you don't.

Particularly, negative tests in a dynamic language are easy to write but are virtually guaranteed to break at a later date without anyone noticing, because you are now looking in the wrong spot for a piece of data.


Coming from dynamic land, including JavaScript, one thing I noticed about Rust's types and compile-time safety was how it would probably eliminate 90% of the unit tests needed for an equivalent JavaScript program, essentially allowing the programmer to focus on less pedantic and more useful-looking unit tests.


Good point about tests and dynamic data structures. Thanks.


"Bugs that can be found at compile time" is a complicated category, since it depends on how much you lean on your compiler to help. For example, wrapping primitive types in semantic wrapper types can be an easy way to catch errors where you mix up parameters. And with a sufficiently powerful type system like Idris's, you can theoretically catch literally any possible bug at compile time.
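A quick sketch of that wrapper-type idea in Rust (hypothetical `Meters`/`Seconds` newtypes):

```rust
// Newtype wrappers: distinct types over the same representation, so
// mixed-up arguments become compile errors instead of runtime bugs.
struct Meters(f64);
struct Seconds(f64);

fn speed(distance: Meters, time: Seconds) -> f64 {
    distance.0 / time.0
}

fn main() {
    let v = speed(Meters(100.0), Seconds(9.58));
    assert!((v - 10.438).abs() < 0.01);
    // speed(Seconds(9.58), Meters(100.0)); // swapped args: does not compile
}
```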


Absolutely. But the trade-off is not so simple. I can be fairly confident that if my code compiles, there are far fewer bugs than if I had written it in Node.js or PHP. So while it may take longer to get to the initial point where I have something running, I save time on subsequent refactors.

And the added bonus is that my code runs 10-20x faster. So I really didn't give up anything to get the increased confidence.

However, I have to backtrack a bit here, because the largest Rust backend I've written was probably a little less than 10k lines. So while it was non-trivial it wasn't a massive undertaking.


There's a substantial discount in many developers' heads on the ongoing maintenance costs of their code. To them the up-front costs are the only thing that matters.

If you're not spending significant time on rework it means your coworkers are doing it for you.

As version control has gotten better it's harder to hide who is to blame but most people still don't bother to look. Which means you get no feedback about how good your code is.


It is possible to have both capabilities in a programming language.


I note the long list of obvious examples...? :-)

But sure, the world would be a better place if Rust, Go and/or Swift could fulfill that niche. I really don't know enough. (Go seems a bit boring. Rust is too new. Swift is a bit limited re Linux etc support.)

(I've seen the claim that some C++ gurus could develop features as quick as seasoned scripters, but... how many years did it take to get to that state? :-( )

(Sorry for coming back late.)


Go doesn't have any of the same safety guarantees as Rust. If you look at most production Go code, very few projects make use of channels; most Go code is actually locking/unlocking memory, and the Go compiler doesn't validate against data races the way Rust does. So I don't believe that Go actually makes things better, in that respect anyway.


OCaml and Lisp come to mind as possible examples.

Also note that for me personally, I don't see any value in using scripting languages for anything other than system administration instead of the more cryptic shell variants.


Ah, of course.

I have long wondered why (Common) Lisp didn't take over the world at the end of the 90s or the beginning of the millennium. :-( They have a GC, but so does Java. There have been good compilers since at least the 80s.

But the lisp variants aren't really a system programming language that will replace C. (You can do bit fiddling, but... here the GC comes in, especially for embedded applications with KB of RAM. And so on.)

>> I don't see any value in using scripting languages for anything other than system administration

The point with scripting languages is that you can develop quickly. For applications where most time is in the DB anyway, they make a good solution. (I.e. for the last 20 years, system administration and web development.)


A thing called UNIX, where the source code was available for free, happened, but let's forget that fact.

GC bashers would be surprised how much their lives actually depend on embedded systems running real-time Java deployments.


Huh? There were early Lisps with free source, too.

Also: 1. I am not a GC basher. 2. I was wondering why Lisp didn't take over most of the world 15+ years ago. Overhead was more expensive then. (And small embedded systems will always have too little RAM.)


Overhead was more expensive and there was a cheaper alternative called UNIX.

I wonder how it would have turned out if it wasn't available.


Uh... did I miss something? You don't need a lisp machine or a PDP-10 to run a lisp variant?

(Do you argue that it was too late when sh had scripting? The sh is gone now, bash will probably be replaced too.)


Scripting via the UNIX shell is a poor man's REPL.

Personally I don't remember the last time I used it for anything other than setting environment variables, for anything else there are better actual programming languages.


I never argued anything positive about shell programming. Ever. :-)

I still don't get your point, but assume you are saying that Unix is a poor replacement for a real lisp machine environment?


No, I am saying that if UNIX was available at the same price as the other OS of the time, it would never have been as successful as it was.

In general, UNIX is a poor replacement for any of the OSes designed at Xerox PARC, they did at least three remarkable ones.

Or at ETHZ for that matter.


Uhm, I still don't get why the development (/execution) environment would have been different for Lisp?

I am aware that Unix/Linux is/was quite a simple solution. It is designed to be. It was still (more or less) the basis for NeXT and the present macOS. Quite nice.

(Disclaimer: Today I really only use Linux computers, for apt and compatibility with servers. No MacOS flames, please. :-) )


> Uhm, I still don't get why the development (/execution) environment would have been different for Lisp?

I still don't get why you keep focusing this discussion on Lisp alone, I explicitly referred to several OSes.

All of them more expensive to buy than UNIX, which was available for peanuts.

As for OS X.

UNIX was the base for NeXTSTEP, because NeXT was after the workstation market owned by Sun and SGI workstations.

But anyone that has bothered to learn the NeXTSTEP stack and respective APIs, knows how little UNIX culture it had, beyond "bring your stuff to our platform".

The hybrid Mach/BSD kernel, device drivers written in Objective-C and the whole user space frameworks and GUI workflows.

Mac OS X follows that tradition as any developer committed to OS X technology stack knows.

Drivers are written in a C++ subset; we have the frameworks, automation via AppleScript, and Objective-C and now Swift for GUI applications.


I thought UNIX originally belonged to AT&T and they sold licenses. See the UNIX history.


They were prevented from selling them in the beginning, which was when some of the AT&T guys brought the code into universities like Berkeley and Stanford.

EDIT: Well, they actually did sell licenses, but I guess $99 even in the '70s was quite cheap vs. what the alternatives were asking for.

http://engineering2.berkeley.edu/labnotes/history_unix.html

"This led to requests for the system, but under a 1956 consent decree in settlement of an antitrust case, the Bell System (the parent organization of Bell Labs) was forbidden from entering any business other than "common carrier communications services", and was required to license any patents it had upon request.[6] Unix could not, therefore, be turned into a product. Bell Labs instead shipped the system for the cost of media and shipping."

-- Wikipedia


cheap, but not 'free'.

See also the later fate of UNIX (-> Novell, ...).


Other than that, if we had some tool inside the Rust ecosystem to support server-side rendering for the frontend, Rust would be the ultimate language of the web.


Awesome to see the learning curve addressed with such high priority.

I like Rust and I'd like to use it more, but I generally lean on Go or C for writing anything that needs performance. I can't quite get past the awkward, fumbling stage with Rust. Go, on the other hand, was really easy to get acquainted with and not too substantial of an effort to get to strong expertise. I've been able to teach Go to junior devs with no background and hand off projects within a couple weeks. Rust is awesome, but fits in a very different bracket IMO.


It's important to note that Go and C have very different design goals from Rust. Go was never designed to have zero-cost abstractions (garbage collection being the most obvious outcome of this, but there are many others), and it has a runtime. C was never designed for safety and security, memory safety or otherwise. In short, Go sacrifices performance and C sacrifices safety, while Rust's goal is to sacrifice neither.

Addressing the learning curve of Rust is important, but we aren't going to be able to do it by adopting concepts from Go or C. By and large, they just don't apply.


What are your thought on making `rustc` poly-lingual?

Ideally, all libraries (which should be written by experts) are written in Rust, but applications are allowed to be written in RustScript.

RustScript would be optimized for a lower learning curve. Examples[0]:

- No `unsafe` allowed.

- Everything is implicitly an `Arc<_>`.

- All numerics are BigNum.

- etc.

This is inspired by the popularity of using Rust within other languages, like Ruby. The way this would differ from using another language: All the Rust tooling would be the same. Realistically it would just be a different parser for the Rust AST. Therefore no memory layout issues, or FFI involved. Like syntax-sugar to the extreme, the compiler (except the parser) wouldn't even know the difference.

[0] I'm just listing simplifications, regardless of whether they are a good idea.


So I'm actually a big fan of writing apps in multiple languages, and I wholeheartedly support writing apps in higher-level languages that incorporate low-level components written in Rust. (This is, in fact, what Servo-based Web apps are!)

But I'm not sure I agree that there would be benefits in introducing a new language as opposed to just using an existing one. I think a Rust that used automatic memory management would effectively be a new language. There would be all the costs that come with introducing a new language: new tooling (even if based on Rust tooling, it'd have to be heavily modified), bootstrapping an ecosystem and community, and so forth. While making new languages is awesome, we have our hands full with the one we have :)


There are a few Rust hosted dynamic languages already, see: https://github.com/ruse-lang/langs-in-rust

Dyon in particular sounds interesting for realtime apps as it uses a lifetime checker instead of GC.


not the OP, but i strongly disagree. you don't need to be an expert to write rust. Many of its features are in a huge amount of other languages: parametric polymorphism, ad hoc polymorphism, type inference, etc. there's nothing new and groundbreaking besides ownership and lifetimes, from a language syntax perspective. it really just takes a little bit to get used to ownership and you're off to the races. the compiler helps you out in a big way with writing correct programs.

i also feel fracturing the community would be terrible. i came from basically a dynamic programming background (had done java and c++ in soft. eng. in school and that's it) and I had no problem learning rust.


https://github.com/jonathandturner/rhai ?

It doesn't do exactly this, but it's similar I think.


It's also important to note that Go was designed for programmer productivity and Rust wasn't.

GC is an incredible productivity boost, which is why no recent language (except Rust) does manual memory management.

Compile times in Go are a fraction of Rust compile times, which is another productivity boost.

Go gives you one good way to do concurrency, Rust believes in tyranny of choice.

Without mentioning that you don't paint the whole picture, just the parts that are favorable to Rust.


> It's also important to note that Go was designed for programmer productivity and Rust wasn't.

Rust was absolutely designed for programmer productivity. Source: I was one of the designers.

> GC is incredible productivity boost which is why no recent language (except Rust) does manual memory management.

That's true, it is a productivity boost. But it also has a cost. Rust didn't want to pay that cost, especially when the same mechanisms are useful for preventing data races. (In fact, removal of GC was something that happened as a consequence of the data race freedom features; it wasn't a design goal from the beginning.)

> Compile times in Go are a fraction of Rust compile times, which is another productivity boost.

And rustc performs much more optimization. Again, a tradeoff.

> Go gives you one good way to do concurrency, Rust believes in tyranny of choice.

That's untrue. Go has threads, channels, mutexes, condition variables, and atomics. Rust has threads, channels, mutexes, condition variables, and atomics. The only thing I can think of that Rust has that Go doesn't is, like, SIMD, and I'm sure your comment wasn't just referring to SIMD.
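For what it's worth, the overlap in vocabulary shows up directly in safe Rust. A rough sketch of my own (not from either language's docs) using threads, channels, and a shared mutex-guarded counter:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Fan work out over a channel and collect the partial results,
// much like you would with goroutines sending on a chan.
fn sum_via_channel() -> i32 {
    let (tx, rx) = mpsc::channel();
    for i in 0..4 {
        let tx = tx.clone();
        thread::spawn(move || tx.send(i * 10).unwrap());
    }
    drop(tx); // close the sending side so rx.iter() terminates
    rx.iter().sum()
}

// Share a mutable counter across threads with Arc<Mutex<_>>.
fn count_via_mutex() -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || *counter.lock().unwrap() += 1)
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let n = *counter.lock().unwrap();
    n
}

fn main() {
    println!("channel sum: {}", sum_via_channel()); // 0 + 10 + 20 + 30 = 60
    println!("mutex count: {}", count_via_mutex()); // 4
}
```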


> The only thing I can think of that Rust has that Go doesn't is, like, SIMD, and I'm sure your comment wasn't just referring to SIMD.

Just to clarify, are you referring to Rust using SIMD by way of LLVM, or by way of being able to use SIMD primitives / intrinsics directly in Rust code?

The former works much better than I had anticipated. I've been surprised by the extent that my iterator code ends up vectorized without me doing much work.

The latter does not give me warm Rust feelings today. There's a SIMD crate, but it doesn't look maintained and only works with the nightly compiler releases. I didn't think there was any stable way to do inline assembly, so I think linking C is my best bet here?
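To illustrate the former: plain iterator pipelines like these compile to tight loops that LLVM can often auto-vectorize in release builds, with no explicit SIMD in the source. (Whether it actually vectorizes depends on the target and optimization level; this is my own sketch, not a benchmark.)

```rust
// Summing squares with iterator adapters; LLVM frequently turns this
// into SIMD code when compiled with --release.
fn sum_of_squares(xs: &[f32]) -> f32 {
    xs.iter().map(|x| x * x).sum()
}

// A dot product written the same way.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

fn main() {
    let v: Vec<f32> = (0..8).map(|i| i as f32).collect();
    println!("{}", sum_of_squares(&v)); // 0 + 1 + 4 + ... + 49 = 140
    println!("{}", dot(&v, &v)); // same computation here: 140
}
```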


> The latter does not give me warm Rust feelings today. There's a SIMD crate, but it doesn't look maintained and only works with the nightly compiler releases. I didn't think there was any stable way to do inline assembly, so I think linking C is my best bet here?

Yeah, the person formerly working on simd is no longer able to contribute to random open source projects. It's still something the Rust team wants to make stable, just not right now. SIMD is a bit complicated because you need to address target support in a straightforward way.


> Rust was absolutely designed for programmer productivity. Source: I was one of the designers.

If you mean productivity as "memory safe and no null pointer errors" then yes, Rust is productive compared to C, C++, or even Go. If you mean productivity as "ease of use" then no, Rust is not easy to learn or use compared to Go.


I agree with some of your points, but think this is phrased a bit harsh. FWIW, I write both Go and Rust on a regular basis.

Here's where I would agree with you:

- Go makes it harder for someone to write overly abstract code (a common affliction!).

- Being able to occasionally do type assertions in an ergonomic way is surprisingly nice.

- I wish Rust had something in the stdlib like net/http.

- I like that go fmt is so unconfigurable and canonical.

Here's where I would disagree:

- I find ADTs (Rust's enums) super helpful for productivity.

- Removing nil pointer derefs is wonderful, particularly for refactoring.

- I spend too much time in Go rewriting bits of code that I would just use generics for in Rust or C++. Rust's iterators are wonderful and I end up using them over and over again.

- Maybe it's my C background, but I like being able to occasionally use macros. Even for tests it makes things much more readable.

- The borrow checker ends up moving many concurrency issues from runtime debugging to compile time debugging.

- I think cargo is more pleasant to use to manage code than using go + godeps/glide/etc.
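As a toy illustration of the ADT and iterator points (my own example, not from any real codebase): the compiler's exhaustiveness check on `match` is what makes refactoring enums so safe, and the iterator adapters replace the per-type loops you'd rewrite by hand in Go.

```rust
// An ADT (Rust enum): adding a variant later makes every non-exhaustive
// match a compile error, so refactors can't silently miss a case.
#[derive(Debug)]
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    let shapes = vec![
        Shape::Rect { w: 2.0, h: 3.0 },
        Shape::Rect { w: 1.0, h: 4.0 },
    ];
    // Generic iterator adapters instead of a hand-rolled accumulation loop.
    let total: f64 = shapes.iter().map(area).sum();
    println!("total area: {}", total); // 6 + 4 = 10
}
```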


I forgot to add - I partially agree with you about the productivity of GC. For non-performance sensitive code, GC simplifies a lot. Otherwise, I find Go much nicer to reason about than Java since it has real arrays (value types!).

But I find it a mixed bag of whether the naive Go version of something that shares memory by GC is simpler than the naive Rust version of something that shares memory. Sometimes ref-counting (Rc<T> in Rust) is fine, although that's more expensive than GC. Sometimes Rust's ownership model nudges you to make the code much simpler and makes it clear that something only has a single writer. Sometimes you wish you were in C and just did it yourself...


> Sometimes ref-counting (Rc<T> in Rust) is fine, although that's more expensive than GC.

To nitpick: The jury's still out on that one, because Rc in Rust isn't thread-safe reference counting. I believe that non-thread-safe reference counting is quite competitive with global, cross-thread tracing GC.

When people (rightly) talk about how much slower reference counting is than tracing GC, they're almost always talking about either thread-safe tracing GC vs. thread-safe reference counting or single-threaded tracing GC vs. single-threaded reference counting. When you compare Rc to a typical multithreaded GC'd language, you're comparing multithreaded tracing GC to single-threaded reference counting, which is a much more interesting comparison.


> When people (rightly) talk about how much slower reference counting is than tracing GC, they're almost always talking about either thread-safe tracing GC vs. thread-safe reference counting or single-threaded tracing GC vs. single-threaded reference counting.

Reference counting has always been slower, even in single-threaded cases. This should be obvious because pure reference counting requires modifying counts whenever locals are assigned, which happen orders of magnitude more often than main memory updates, and now each local assignment requires touching main memory too.

As soon as you defer these updates somehow to recover that cost, you've introduced partial tracing. You can find papers from way back acknowledging this overhead, and suggesting optimizations [1].

[1] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.25.9...


> This should be obvious because pure reference counting requires modifying counts whenever locals are assigned, which happen orders of magnitude more often than main memory updates, and now each local assignment requires touching main memory too.

I guess so, but it's worth noting that this isn't the case in Rust, because Rc benefits from move semantics. Most assignments don't touch the reference counts at all.
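A small illustration of that point (the `strong_count` calls are only there to make the counts observable):

```rust
use std::rc::Rc;

fn main() {
    let a = Rc::new(vec![1, 2, 3]);
    assert_eq!(Rc::strong_count(&a), 1);

    // A move transfers ownership of the handle: no count update needed.
    let b = a; // `a` is gone; the count is still 1
    assert_eq!(Rc::strong_count(&b), 1);

    // Only an explicit clone touches the reference count.
    let c = Rc::clone(&b);
    assert_eq!(Rc::strong_count(&b), 2);
    drop(c);
    assert_eq!(Rc::strong_count(&b), 1);
}
```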


Right, Rust's borrowing is like the optimization link I provided, just slightly less general IIRC.


Yes, this is a great point! I was a bit sloppy in my wording above.


> Sometimes ref-counting (Rc<T> in Rust) is fine, although that's more expensive than GC.

This is a very interesting and hard comparison to make.

The marginal cost of GCing one more thing is much less than the marginal cost of RCing one more thing. But you need to pay the cost of the GC runtime to get there, and Rust doesn't, so it's a rather hard comparison to make.

However, the cost of RCing something itself is pretty small (and you don't RC very often in Rust anyway), so this rarely matters :)


As a long-time C++ developer I find I can be very productive writing Rust. Many modern C++ idioms translate almost directly to out-of-the-box Rust-isms. Some (including some important ones) don't, but these are relatively easy to understand and work with.

Compile times are actually not that bad. I've seen some real doozies in my day, and plenty of hacks to overcome C++ compile times (distributed compiling, pre-compile steps to collect many dozens or more source files into a single TU, etc.). Rust is just fine. Besides, for the kinds of programs Rust is targeting being productive isn't synonymous with belting out as much code as fast as one can.

As for concurrency, I think having options is great. There is rarely a one size fits all solution for concurrency and having choice mirrors real use cases very nicely.

Furthermore I really enjoy generic programming. One of the worst parts of C is the lack of real generics. It gets tiresome writing the same array, hashtable, etc. code for each different type. Hiding it behind macros is terrible. It's one of many things that make both C++ and Rust superior to Go, in my view.
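As a sketch of the duplication generics remove (the function and names here are mine, just for illustration): one type-parameterized function covers every hashable key type, where C would need a copy per type or a macro.

```rust
use std::collections::HashMap;

// One generic function works for any hashable, comparable item type --
// no per-type rewrites and no macro tricks.
fn count_items<T: std::hash::Hash + Eq>(items: &[T]) -> HashMap<&T, usize> {
    let mut counts = HashMap::new();
    for item in items {
        *counts.entry(item).or_insert(0) += 1;
    }
    counts
}

fn main() {
    let words = ["a", "b", "a"];
    assert_eq!(count_items(&words)[&"a"], 2);

    let nums = [1, 2, 2, 2];
    assert_eq!(count_items(&nums)[&2], 3);
    println!("same code counts &str and i32 alike");
}
```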

That said, I do perceive a certain "feature my preferred language offers is the most important thing ever" myopia about some aspects of the Rust community. It's similar to a certain segment of the C++ community zealously beating the drum of "performance, performance, performance" to the exclusion of other important things. Rust hasn't the popularity or use in the wild for its warts to be exposed yet, but time will tell. Every language has flaws and proper (and improper) uses.


I would offer a counterpoint -- Rust's guarantees and strong typing offer a different kind of productivity to an intermediate or experienced user. One where a compiled program is much more likely to be "correct" than in any other mainstream-esque language I've used. The properties which combine to create this effect make refactoring much less scary, allow easier navigation in large codebases (IMO), and allow for other valuable properties for productivity.


This is why I think there's room for another bigtime language that hits the rust sweet spot of functional-flavored imperative programming with a GC.


GC for in-memory data structures, RAII and lifetimes for system resources, ML-style type and module systems. That would be very close to the ideal language IMO.


The key problem: in-memory data structures can embed ownership of system resources.


I'd take an approach inspired by Standard ML's eqtypes. In Standard ML:

(0) Some types are eqtypes (akin to Rust types that implement the Eq trait, but managed entirely by the language, you can't define custom Eq impls).

(1) If a type constructor is an eqtype, then the result of applying it to an eqtype is an eqtype, but the result of applying it to a non-eqtype is a non-eqtype. For example, “list of ints” is an eqtype, but “list of functions” isn't.

Similarly, I propose that:

(0) Some types are copytypes (akin to Rust types that implement the Copy trait, again, managed entirely by the language).

(1) If a type constructor is a copytype, the result of applying it to a copytype is a copytype, but the result of applying it to a non-copytype is a non-copytype. For example, “list of ints” is a copytype, but “list of file objects” isn't.

Subtleties:

(0) If we have first-class functions, functions must be parameterized over whether they can be called more than once (akin to the distinction between FnOnce and Fn in Rust).

(1) When a value goes out of scope, its destructor is called. For copytypes, the destructor is guaranteed to be trivial, and can be optimized away. For “base non-copytypes” (e.g., file objects), the destructor is explicitly implemented by the programmer. For “derived non-copytypes” (e.g., lists of file objects, closures that have captured file objects), the destructor is automatically generated, and it does the obvious thing (destroy all file objects in the list, or captured by the closure).


FWIW, Copy already behaves like that in Rust: a "custom" Copy impl is just opting in to the default copy implementation, there's no way for the programmer to inject any code.

The reason the impl is required is to ensure people write what they mean: it is backwards incompatible to go from Copy to non-Copy, so it would be unfortunate for a type to accidentally be Copy because an early version of the type happened to only contain Copy types. (The trait is really just a marker for "this type can be safely duplicated with memcpy".)

The Send and Sync traits are similar, and are in fact almost identical to eqtypes in that an explicit implementation is not required.
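A minimal illustration of Copy as a pure opt-in marker:

```rust
// Deriving Copy just opts in to bitwise copying; it only compiles
// because every field is itself Copy.
#[derive(Clone, Copy)]
struct Point {
    x: i32,
    y: i32,
}

// This would NOT compile: String is not Copy, so the derive is rejected.
// #[derive(Clone, Copy)]
// struct Labeled { p: Point, name: String }

fn main() {
    let p = Point { x: 1, y: 2 };
    let q = p; // a copy, not a move: `p` stays usable
    println!("{} {}", p.x + q.x, p.y + q.y); // 2 4
}
```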


> Copy already behaves like that in Rust: a "custom" Copy impl is just opting in to the default copy implementation

Yeah, I realized that, then deleted that part of my post.

> The reason the impl is required is to ensure people write what they mean: it is backwards incompatible to go from Copy to non-Copy, so it would be unfortunate for a type to accidentally be Copy because an early version of the type happened to only contain Copy types.

In my proposal, with an ML-style module system, you can define an abstract non-copytype whose internal representation is a copytype, just like in Standard ML you can define an abstract non-eqtype whose internal representation is an eqtype.


If you want a modern language with (optional) GC, with seamless FFI to C, and with almost-C performance, then you should take a closer look at Nim. Rust can learn a lot from Nim regarding productivity.

http://nim-lang.org


Arguably, programmer productivity is one consequence of safety (and productivity isn't necessarily only gained by increased safety either of course!).

Are there different productivity levels between Go and Rust? Probably. But to say Rust wasn't "designed for programmer productivity" definitely isn't right from where I'm standing.


Honestly, the more you program in rust, the more you get used to its mannerisms. I don't at this point find that I'm less productive in it than in Go, and with generics I'm generally able to reuse code more often, meaning that I usually write less code. Similarly, in C I lose time screwing up basic things that I have to debug in testing, whereas in Rust I don't have to even worry about that.

I'm actually not buying the productivity argument after you've learned the language. Claiming that it is less productive when you barely know it isn't fair; it's similar to saying that Italian (as an English speaker) is much more complex and hard to communicate in than English when you've only studied the language for a month.

It does have a steep learning curve, so it's either worth it to you to learn it, or not. But once you know it, I think your overall productivity is higher or the same as other languages, and on top of that you end up with fewer production issues!


I'd say my experience matches this. I've written well over 50 KLOC of both Go and Rust in the last few years (I love both), and I don't personally experience a big productivity difference between them. There is one dimension where I feel like Rust makes me more productive though: when writing code that I want to go as fast as possible.

I bet there are things that would be more productive in Go, maybe building web applications, but I haven't built any in Rust yet, so it's hard to compare.

As a reference point, I feel much more productive in Rust/Go than I do in any unityped language. I suspect the same is true of C, but I've never been a heavy C practitioner.

I'll stop here though, because this is all pretty subjective and wishy washy, and it will undoubtedly vary from person to person and be dependent on what kinds of problems you're solving most frequently.


    Claiming that it is less productive when you barely know it isn't fair...
I'm really quite familiar with both go and rust, and I can say straight out that go is more productive than rust.

...but not because rust is in any way a worse or less productive language (that remains to be seen), but because it has an immature ecosystem with few high-quality crates and virtually no tooling.

Go has a lot of very polished tooling (eg. gocode) that is supported in multiple editors, and a variety of high level packages for all kinds of things.

Certainly I'm willing to acknowledge that if you're using vim without any plugins to write your code, or if you're evaluating the effectiveness of writing individual functions, it's a much more abstract kind of thing.

...but have you actually benchmarked yourself coding a real-world project, from scratch, in both, from idea to delivery? I have, and for me rust was an order of magnitude less productive than go. At least.

Once the tooling for rust gets up and going we'll be in a much better position, but...

You're effectively saying that there is zero productivity boost in the extensive go tooling ecosystem (which rust currently doesn't have); that's a pretty hard sell...


> Go has a lot of very polished tooling (eg. gocode) that is supported in multiple editors

Have you tried racer? It seems quite similar, and apparently it does more (see [1]).

I mean, I agree in the abstract that yes, having tooling and a bigger library ecosystem matters a lot. But the things you've cited so far (code completion and an AWS library) are things that Rust does have. I'd be more interested in hearing specific problems with those.

[1]: https://github.com/nsf/gocode/issues/307#issuecomment-155080...


     It seems quite similar, and *apparently it does more*
Have you tried the go-plus (https://atom.io/packages/go-plus) tooling?

Really, please try it out. That's the kind of experience that's really missing from rust; autocomplete, fmt, test watcher, linter, code coverage, debugger, doc summary; you hit install and it works.

Racer isn't there yet.


> You're effectively saying that there is zero productivity boost in the extensive go tooling ecosystem (which rust currently doesn't have); that's pretty hard sell...

You could say something like "you're effectively saying that there is zero productivity drain from not having features like generics (which Rust currently does have); that's a pretty hard sell..."

Tooling matters a lot, and that's why we're investing in it. But "tooling" is not a synonym for "productivity." There's a lot of factors.


You'd have a much easier time arguing that go not having a package manager makes you less productive compared to cargo (very true); you're kidding yourself if you think having generics makes up for having to write your own AWS client in terms of productivity.

Can you build something more quickly in go right now, than in rust?

For many applications, the answer is, right now: Yes.

Building (using the tooling that does exist, and the libraries that do exist) makes completing a project successfully easier and quicker than in rust.

Does that make go 'more productive' than rust?

Or is 'productivity' just how effective you are at expressing logic as code?

Well, I guess it depends what word games you want to play.

My personal experience has been that using rust is great, fun (if verbose) and you can spend 5 or 6 hours writing a nice crate that doesn't do anything meaningful; and when you do have to do something meaningful, it's a tonne of work to get it to compile (ugh, c libraries...) and you end up having to write or modify/fix the packages to do many of the things yourself.

That is not productive.

Perhaps you see a different side to it from where you sit, but I do think #rust suffers from a certain degree of confirmation bias.

I think 'productivity' depends on your problem domain.

For building actual applications rust simply isn't even in the same space as java, c++, c# or go at this point in terms of 'productivity'.


> you're kidding yourself if you think having generics makes up for having to write your own AWS client in terms of productivity.

It doesn't appear you need to write your own AWS client - or at least, not from scratch: https://github.com/rusoto/rusoto

> Perhaps you see a different side to it from where you sit, but I do think #rust suffers from a certain degree of confirmation bias.

From the outsider perspective of one who knows neither Go nor Rust well - Rust seems to acknowledge it has problems and gaps they're working on. Poor IDE support is one close to my heart. Compile times are another. Heck, I'm just rehashing the roadmap, aren't I.

With Go I hear more about how they've intentionally avoided things in the name of simplicity, and having some opinionated stance on having one correct way to do things. I get the impression that Go will never have generics, by design. C# already covers most of the things I'd use Go for pretty well - I don't see much advantage to switching to Go.

Rust, even in its relatively untooled state, already has me seriously considering trying my hand at nontrivial projects in it. I've been chasing static analysis and appropriate annotations to catch threading bugs, data races, iterator invalidation, potential null derefs, etc. in C++ for some time, to great effect - and who knows how much time saved - in bugs avoided. I'm convinced it's a question of when, not if, I'll try switching over properly.


I'm arguing _against_ any one feature; I picked generics because it's the classic criticism of Go. I do think that package management might be a bigger factor, but my point is that there are multiple factors. You can't say "well Rust doesn't have tooling, so it's not productive."

I think we're mostly in agreement here, other than that I think your first post put too much weight on one specific part of a complex equation, and that you feel there's a clear-cut answer, and I don't.

(I do agree that "there's a package for this" vs "there isn't" is a huge factor of productivity. It's actually the one I personally cite most often when talking about my own productivity in Rust.)


Not sure why you were downvoted, but it probably has something to do with connotation. I agree though -- library maturity and tooling maturity are also very important factors when it comes to productivity. A steep learning curve, however, probably has less to do with productivity and more to do with adoption, which I think is what the parent was trying to get across.

But, of course, adoption rate can indirectly impact library and tooling maturity. :)


> GC is incredible productivity boost which is why no recent language (except Rust) does manual memory management.

Swift is a recent high-level productive language without a GC.


Swift still has automatic memory management. It may not have a tracing GC, but that's not super relevant from an ergonomics perspective (except that potential cycles require extra work/annotations).


If syntax isn't going to be changed, is that really addressing the learning curve of the language?

Edit: I'm going to answer my own question here: yes. I just sometimes think Rust is a victim of its own success when people expect to learn Rust quickly because they learned Python in two days.

No one expects to master C++ in a weekend.


Go outperforms rust on all kinds of benchmarks, has a larger community, better documentation, more third party libraries, and significantly better tooling.

What's the value proposition for using rust over go?


Where are these benchmarks showing Go outperforming Rust? I've only seen the opposite, with Rust convincingly outperforming Go; it would also have more predictable performance thanks to the lack of a GC.

At a language level, Rust has generics, a better type system, and is better suited to functional programming.

Systems-level programming, high-performance computing, and resource-constrained embedded devices seem to be areas where Rust would shine over Go.


Source?

First Google result for "Rust vs Go speed": https://benchmarksgame.alioth.debian.org/u64q/compare.php?la...

Rust seems to perform significantly better on some workloads while Go outperforms it marginally in other ones (except reverse-complement where Go is almost 50% faster than Rust).

Edit: As for the other points you make – they are often subjective and I've read the opposite to your statement at least as often.


The ones where Go outperforms us currently are the ones with heavy SIMD use; explicit SIMD is still unstable in Rust, so you're at the whims of LLVM generating it. We'll get there...



I assume that they're better about generating it implicitly. That is, it's about why Rust is slow, not about why Go is fast.


I don't think Go emits SIMD at all. Their assembler doesn't even support parsing it.

I think these are probably just bugs that we need to look at. The benchmarks where we do worse are the string benchmarks; perhaps it's our Unicode correctness that is hurting us, or something like that.


It definitely does: https://github.com/golang/go/blob/b851ded09a300033849b60ab47...

I should look into those benchmarks if you think there might be string problems. To be frank, I've never looked too closely at anything except for regex-dna.


Oh, interesting. I was inferring that from the Go code out there that uses machine code directly: https://github.com/minio/blake2b-simd/blob/master/compressAv...

I guess it must be a recent addition.


I think it was recent, yeah. I also recall there being problems with their assembler not being able to parse various SIMD instructions in the past. I also recall seeing code like in your link too.

Hmm, Go 1.7 introduced support[1] for various AVX instructions (plus at least one SSE 4.2 instruction), some of which are used in blake2b-simd.

I can't wait to get SIMD on Rust stable. It's going to be exciting.

[1] - https://golang.org/doc/go1.7#tools


Oh, by the way, if you want to investigate the string benchmarks, that would be really awesome--I'm curious to know what's going on there :)


> about why Rust is slow

If the Go programs don't use explicit SIMD, then "explicit SIMD is still unstable in Rust" does not explain the difference.


> What's the value proposition for using rust over go?

Productivity features like generics, a more mature optimization pipeline, freestanding (runtime-less use), highly optimized libraries like serialization frameworks and regular expressions, etc. etc.


The hard part doesn't go away unless you pull in a GC or something similar to take care of lifetime bugs. In Go you always use a GC and circumvent the explicit lifetime management syntax of Rust. Some data structures are very hard to write without something akin to a GC, so my bets are on a few opcodes trickling down into mainstream CPUs making GC easier to implement in optimal time and space. I'm surprised Android hasn't forced anything like that in their supported ARM designs, since such CPU support has been available in production systems since the 1970s and hence isn't anything revolutionary, just not on the radar like vector instructions have been.


Is this based on your experience learning lifetimes in Rust? I very rarely find myself needing to do convoluted lifetime tricks -- most of the code I write doesn't even need explicit annotations. So I'd be curious to hear what project you may have worked on as a beginner where lifetime management became a serious obstacle.


I'm not talking about Rust lifetimes but lifetimes as a common term used in many languages. What I mean is that in Rust you're forced to explicitly deal with the lifetime aspects of variables before the compiler will generate any code. In C and C++ you're free to skip that and introduce leaks or use after free bugs, which are accepted by the compiler due to the language definitions. I didn't mean to say it's more difficult. In fact, it's less work to deal with these issues in Rust before the code gets generated rather than debugging mysterious bugs. Put differently, you're debugging your code while trying to compile it instead of debugging it after hopefully someone was able to trigger it and provide enough clues for you to identify the issue.
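A small sketch of that "debugging at compile time" point. The function below returns a reference tied (via elided lifetimes) to its input, so the compiler rejects any caller that lets the owning `String` die too early -- the class of bug that in C/C++ only shows up at runtime:

```rust
// The returned &str borrows from the input; the lifetime relationship is
// inferred (elided), no explicit annotations needed.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    let owned = String::from("hello world");
    let w = first_word(&owned); // `w` borrows from `owned`
    println!("{}", w);

    // Moving `drop(owned)` above the println! would be a use-after-free
    // in C/C++; here the compiler rejects it, because `owned` is still
    // borrowed at that point. In this order, it's fine.
    drop(owned);
}
```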


Could you provide references for the claim that hardware support for GCs had been in CPUs for decades?


The Burroughs line of machines, and the Intel iAPX 432 and its derivative the i960 (which is actually still in use). The project for which the Intel CPU was designed, together with Siemens, would have given us a safer foundation: an Ada-based operating system running on this processor in the 1980s. But then UNIX won and gave us the hegemony of C and all the avoidable security bugs associated with it. Lisp machines had similar designs. These days there are RISC-V designs with similar features. Just as DJB has been wishing for a few extra arithmetic instructions for crypto in x86 processors, I wish for instructions that would make garbage collectors more efficient and also provide support for memory safety features.

https://en.wikipedia.org/wiki/Intel_iAPX_432

https://en.wikipedia.org/wiki/Intel_i960

https://en.wikipedia.org/wiki/Burroughs_large_systems#Tagged...

http://www.smecc.org/The%20Architecture%20%20of%20the%20Burr...

I'm sure there were similar designs in the 1990s, but I don't have references ready off the top of my head like the above.


Thanks. I will definitely look into these when I have the time. I'm actually a grad student in EE but this topic interests me. Maybe I can get something published on hardware accelerated GCs... one day!


Genuinely curious: do they not teach historic designs in EE curricula? Studying them lets you learn from and improve on past work rather than reinvent it half-way, or worse. Also, older designs are much easier to study completely compared to the super-complex logic inside a current-day x86.

Personally, I would call it hardware-assisted gc if we consider hardware acceleration to be things like GPUs, crypto accelerators, etc.


An undergrad computer architecture course would typically gloss over the history of CPUs and focus either on MIPS or x86 (or both). For example, I did two courses on x86, starting from 8086 (and 8088), through Pentium, and some bits of Itanium.

You're right that starting with a simple architecture makes things much easier. 8086 for example operated in 16-bit real mode (segmented memory), and so the memory layout was trivial compared to 32-bit protected mode in x86.

I haven't taken any graduate architecture courses yet, but my assumption is that they would go into more detail on the development of CPUs through history.


> Rust should integrate easily with C++ code

The day that Rust manages to have always-up-to-date Qt bindings that don't force you to make compromises compared to the C++ API, I'd say the C++ support will be at a comfortable level. Right now there are fresh efforts in the Rust and Haskell camps to solve this cleanly in a modern way. It would certainly help if consuming C++ APIs via LLVM (which Rust already uses heavily) were made a first-class feature, the way you can import and export C APIs. Exporting C++ classes is a whole different story and may not map in any reasonable way to Rust modules or crates.


I'm ignorant of Rust's ABI/requirements, but can the C ABI be used as a lingua franca, or are there necessary facilities that are unsupportable there?


FFI through a C ABI is how Rust <-> C++ currently works, but IIUC it's quite limiting when one might want to make use of C++ specific features. Especially important when a given C++ project doesn't have a fully-featured C API, or any C API at all.
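For the curious, a minimal sketch of crossing that C ABI boundary from Rust, here just declaring and calling libc's `strlen` directly:

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// The C ABI as lingua franca: declare the foreign function's signature,
// and the linker resolves it against libc.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

fn c_strlen(s: &str) -> usize {
    let c = CString::new(s).expect("no interior NUL bytes");
    // Unsafe because the compiler can't verify the foreign function's
    // contract (e.g. that the pointer is valid and NUL-terminated).
    unsafe { strlen(c.as_ptr()) }
}

fn main() {
    println!("{}", c_strlen("hello")); // 5
}
```

Anything C++-specific (templates, exceptions, overloading) has to be flattened into C-style functions like this first, which is the limitation the parent describes.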


There is no general C++ ABI; the ABI depends on the compiler and the standard library.


One way to improve productivity of Rust with respect to the learning curve is to add more screencasts/tutorials to http://intorust.com

I found the existing ones very helpful and instructive.


Thanks, I'm watching now. I recommend it.

"The primary author ... and the narrator on the screencasts, is Nicholas Matsakis ... Niko has been involved in Rust since 2011 and is a member of the Rust core team as well as the leader of the language and compiler teams"


This is a better link, as it will include the discussion: https://github.com/rust-lang/rfcs/pull/1774


> Rust should provide a solid, but basic IDE experience

After using the IntelliJ IDEA plugin [1] (works in the free Community edition too), I don't understand what else people need :) The plugin is pretty powerful. Racer also works great in VS Code and other editors.

> Rust should have a lower learning curve

Rust is easy enough to learn; the only issue is the steep start. After a couple of months everything is no harder than JS, and the smart compiler actually makes me more and more lazy :) So I think the only thing we need is books/articles that help people switch from other languages and get over the initial difficulties.

[1] https://github.com/intellij-rust/intellij-rust


It's amazing how much the Rust team has nailed it. They know exactly what the problems are. I do not know of any other team as talented. The language is also quite brilliant. I have started rewriting all my code in Rust these days. It just makes sense.


> 1 in 19: lack of IDEs (1 in 4 non-users)

I expected that to be higher, actually, but it's still the only reason I don't use the language. I've come to rely heavily on IDEs. Whether rightly or not, it's a huge factor in my decision to wait on the language.


I've been writing Java in Vim from time to time for 3 years. Rust is much more pleasant to deal with in a plain text editor. I encourage you to give it a try.


If that's the case I highly recommend giving VSCode a spin. With racer I've got solid code complete, goto def and docs.

They've also been making leaps and bounds in terms of features so I can only imagine it improving even more.


What kind of IDE support do you typically use that isn't handled by a modern text editor (such as go to definition, variable renaming, find and replace, etc)? I'm starting to use CLion for my C++ work now, because "go to definition" is much more reliable than most editors can manage with C++ code, but I find that most of the fancy refactoring features like extracting functions or subclasses are not used often enough to be really useful.


The features I need are:

   - Refactoring Support (rename variable, class, function)
   - Code Folding
   - Automatic type checking
   - Automatic error detection (tells me when syntactical or other errors are made)
   - Built-in documentation loading/display on hover of cursor. 
   - Same thing as above except with any documentation I add myself on a function. 
   - Debugger with comment overlay of values (like PyCharm)
   - Dependency management/Building/Deploying all a *button* (not 10 commands) away 
   - VCS integration
   - Automatic code formatting 
   - Automatic code "cleaner" (change for (int i = 0; i < arr.length; i++) to an enhanced for-each loop, for example)
   - Automatic import cleaning
   - Automatic project creation for basic use cases (console, GUI, maybe a web framework)
   - Change CWD/environment variable for running the program
   - GOTO definition 
   - A Graphical User Interface (Graphical being the biggest requirement) 
Outside of all of those, there is a much much much more important feature: Auto Complete. A smart auto complete. An autocomplete that understands the type system and the function's return types.

For instance if I am typing:

   String sub = someString.
It shouldn't recommend "length()" (which returns an int), whereas in the case of:

   String lengthStr = Integer.toString(someString.
It should recommend things that return an integer. This needs to be fast and needs to be case-insensitive. Also, if I am typing

   Lis
It should recommend List, ArrayList, etc. in that order, with the closest match up front. The recommendation popup should also show documentation, a link to the official documentation, and possibly an example of how to initialize/use the code. If something isn't imported, and so isn't currently usable, pressing enter on its auto-complete entry should import it. If an IDE has that, working quickly and completely, it's a "real" IDE in my book. The earlier list is just "wants"; this one thing is a "need".

If I had that for rust, I'd switch over to rust as my main language. Anything that doesn't have something like that will never compete with Eclipse in ease-of-adoption in my book.


>We asked both current and potential users what most stands in the way of their using Rust, and got some pretty clear answers:

>1 in 4: learning curve

>1 in 7: lack of libraries

>1 in 9: general “maturity” concerns

>1 in 19: lack of IDEs (1 in 4 non-users)

>1 in 20: compiler performance

>None of these obstacles is directly about the core language or std; people are generally happy with what the language offers today. Instead, the connecting theme is productivity—how quickly can I start writing real code? bring up a team? prototype and iterate? debug my code? And so on.

I don't really see how this conclusion was reached from that data. The "learning curve" is almost certainly a reflection of the language being a little esoteric and of the demands of the borrow checker. "Lack of libraries" is partly a comment on the ecosystem, but also partly shows that std is obviously lacking for some needs. Finally, "general maturity concerns" refers to worries that the language or std will keep changing, since it is assumed they have not been perfected yet.

Those are the top 3 concerns, and to me all of them seem to relate directly to the language or standard library.


I also don't think library maturity is the only problem that makes working with libraries difficult at times. You can get burned pretty easily by someone not having e.g. Send + Sync on an essential library data structure. (Though maybe there's a way around it other than forking?) That metadata ends up being very important.


Send + Sync are automatically implemented for a type X if all of X's constituents are also Send + Sync. In practice, this means that unless you're using unsafe code, Cell/RefCell or Rc, then the Right Thing will just happen automatically. If you are using Cell/RefCell/Rc/unsafe, then you need to think carefully about your synchronization story anyway, so it's typically not something you just forget to do.

For more details, see: https://doc.rust-lang.org/std/marker/trait.Sync.html and https://doc.rust-lang.org/std/marker/trait.Send.html
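The "automatic" part can be demonstrated with a compile-time probe (the type names here are just illustrative):

```rust
use std::rc::Rc;

// All fields are Send + Sync, so the auto traits apply with no annotation.
#[allow(dead_code)]
struct Plain {
    id: u64,
    name: String,
}

// Rc is neither Send nor Sync, so this type automatically isn't either.
#[allow(dead_code)]
struct NotThreadSafe {
    shared: Rc<u64>,
}

// Compile-time check: only instantiable for types that are Send + Sync.
fn assert_send_sync<T: Send + Sync>() {}

fn main() {
    assert_send_sync::<Plain>(); // compiles: derived automatically
    // assert_send_sync::<NotThreadSafe>(); // would be a compile error
    println!("ok");
}
```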


Key word is "directly". Of course everything is about the language, eventually. But that doesn't mean that the solution for something exists inside the language. Learning curve can be addressed with docs and diagnostics, for example. Not everything needs to be (or can be, because stability) addressed by the language.


Overall it looks good.

When I look at this :

"Production use measures our design success; it's the ultimate reality check. Rust takes a unique stance on a number of tradeoffs, which we believe to position it well for writing fast and reliable software. The real test of those beliefs is people using Rust to build large, production systems, on which they're betting time and money."

I see a little bit of haste towards production use. From "Rust 1.0 is out just last year" to "We really need successful production use", it does seem to be a push from management to justify continued investment in the Rust project.


The intent wasn't to say that we should be seeing big production use right this second, but rather to set out an overall "north star" for work on Rust. The investments in Rust today should be made with an eye toward driving or enabling production use in the future.

I've updated the RFC to clarify this; thanks for the comment!


We want Rust to be successful, and that means being adopted and used.

It has nothing to do with management justifying investment in Rust. We are fortunate that Mozilla invests in Rust in order to use Rust, and is currently very happy doing so.


Re: "Here are some strategies we might take to lower the learning curve: ... Improved docs. While the existing Rust book has been successful, we've learned a lot about teaching Rust, and there's a rewrite in the works. The effort is laser-focused on the key areas that trip people up today (ownership, modules, strings, errors)."

Yes, please. I get better every day with Rust. I'd ask that the docs on lifetimes be improved with more elaboration on the ideas and more examples. Frankly, I still find the details confusing, so I can't offer more particulars right now.


For what it's worth, I hope serde (or whatever the default serializer ends up being) gets improved. I had some serious issues trying to combine libraries because serde was changing too much. Though this was about a year ago, so it has probably been fixed.

Second, it would be nice to see some best practices, not just for libraries but for application development: what to do and what not to do. Maybe a cookbook, or even... design patterns. I had a hard time figuring out whether I should embrace closures or go with traits (at the time closures weren't working as well, so maybe this is no longer an issue either).


We have already landed the underlying things Serde needs to be better in nightly Rust, and it should be stable soon.

I would love to see such a thing as well; I wonder if we're not quite old enough to have said patterns really gel yet. We'll see if someone comes up with something.


I'd also like to see a push towards applications in scientific computing.


First class multi-dimensional arrays with natural indexing syntax would be a huge draw. You get that in Numpy, Julia, Fortran and it is extremely powerful in terms of expressing the types of operations that are common across many scientific domains.
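Rust's operator overloading can already get partway there in a library. A toy sketch of a row-major 2-D array with `m[(i, j)]` indexing (real work would reach for a dedicated crate; this only shows the `Index`/`IndexMut` hook):

```rust
use std::ops::{Index, IndexMut};

struct Matrix {
    rows: usize,
    cols: usize,
    data: Vec<f64>,
}

impl Matrix {
    fn zeros(rows: usize, cols: usize) -> Matrix {
        Matrix { rows, cols, data: vec![0.0; rows * cols] }
    }
}

impl Index<(usize, usize)> for Matrix {
    type Output = f64;
    fn index(&self, (r, c): (usize, usize)) -> &f64 {
        assert!(r < self.rows && c < self.cols);
        &self.data[r * self.cols + c] // row-major layout
    }
}

impl IndexMut<(usize, usize)> for Matrix {
    fn index_mut(&mut self, (r, c): (usize, usize)) -> &mut f64 {
        assert!(r < self.rows && c < self.cols);
        &mut self.data[r * self.cols + c]
    }
}

fn main() {
    let mut m = Matrix::zeros(2, 3);
    m[(1, 2)] = 4.5;
    println!("{}", m[(1, 2)]);
}
```

What this can't express (and what Numpy/Julia/Fortran give you) is slicing syntax and broadcasting, which is where first-class language support would pay off.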


Which shouldn't be hard, since most scientific libraries rely on a standard body of Fortran code being reused underneath. The nice bits are the interactive development environment, or visual REPL if you will, and Rust could reuse the efforts of Servo here to make something shiny and comfortable that is easier to get running on many platforms than existing solutions.


The focus on high-performance servers, async I/O, and C integration is very promising. These are Go's weak points; Rust could become a good alternative and direct competition for Go.


That async IO is hidden from you is one of the few nice things about go.

As long as you split your code into goroutines, you just treat IO as synchronous, while it's actually async underneath.


Care to elaborate on Go having weak async I/O support?


The programming model exposed to the user is synchronous, which happens to be implemented on top of the asynchronous primitives the OS provides.


It doesn't have it, Go uses a different concurrency model.


I manage a Lua framework that wraps Linux epoll, BSD kqueue, and Solaris Ports. Lua provides asymmetric coroutines, which the framework uses as logical "threads" for execution state. A Lua coroutine is just a couple hundred bytes of state, likely smaller than a JavaScript/NodeJS closure all things considered, though slightly larger than a Lua closure.

Lua also has very clean and elegant bindings to C. Lua is designed to be implemented in strict ISO C, yet also to make the C API a first-class citizen--not a hack around the VM like Python, Ruby, etc. The one caveat to this equivalence is coroutines--when a coroutine yields and resumes, it can't revive any C invocation frames (e.g. when Lua calls C calls Lua). So the C API to invoke a Lua routine can take a callback and cookie. If the VM yields and then resumes a coroutine across a C API boundary, the "return" from the Lua routine invocation happens by invoking the C callback. (See lua_callk in the manual.)

From the perspective of C, as well as the kernel, this is a classic asynchronous I/O pattern. From the perspective of Lua-script code, everything is transparent.

I take it that because Go implements light-weight threads (goroutines, which is likely a pun on coroutines) you do not perceive it as offering async I/O. And yet from the perspective of the implementation as well as the kernel it's classic async I/O using epoll or kqueue, whether goroutines are bound to a single CPU core or not. The send and receive operations on a Go "channel" are very similar to the resume and yield operations of classic coroutines, and semantically identical in the context of async I/O programming (because with both goroutines and async I/O there are no guarantees about the order of resumption).

Do you equate async I/O with callback-style programming? Do you think callback-style is somehow intrinsically less costly in terms of CPU or memory? I would dispute both of those contentions.


I think the difference is in how people define "async I/O". When saying "Go doesn't use async I/O", what that really expands to is: "While Go can use epoll[0], it doesn't abstract over epoll optimally." i.e., there is a difference between "zero-overhead async I/O" and goroutines.

I'm not an expert but I'll try to describe why goroutines would have overhead (Someone correct me):

The poster boys of async I/O are Nginx and Redis. I'm probably simplifying, but this is the basic way they work: when using epoll directly, the optimal way to store per-connection state is a state machine. The state machine is usually some C-like struct to which the compiler can give a fixed size. Each state machine is then constructed with X memory and is unable to grow. In theory (I don't believe anyone actually does this), if the program accepted some fixed connection limit, you could fit all of these state machines on the program's stack and require no heap allocations.

Meanwhile, Go's goroutines have stacks which can grow. Each goroutine stack has some max size and some initial size, both pre-set by the Go team (I'm sure they are configurable). Since the stack can grow, it has to be heap-allocated, and needs to be either segmented or copied when it needs more space[1]. Additionally, there is "internal fragmentation", because a growable stack needs to be consistently over-allocated, which "wastes" memory.

Very quick Googling suggests that Lua has growable stacks as well.

[0] FWIW: Go could use M:N scheduling of kernel threads to implement goroutines, which is another reason why saying goroutines are async I/O could be incorrect. I don't know how it's actually implemented.

[1] https://blog.cloudflare.com/how-stacks-are-handled-in-go/
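The fixed-size state-machine idea described above is easy to sketch in Rust, where an enum's size is fixed at compile time (all names here are illustrative, not from any real server):

```rust
// Hypothetical per-connection state machine in the Nginx/Redis style:
// the enum is a tagged union with a fixed compile-time size, so
// connections can sit in a flat array with no per-connection heap
// allocation and no growable stack.
#[allow(dead_code)]
enum ConnState {
    ReadingRequest { buf: [u8; 64], filled: usize },
    WritingResponse { sent: usize },
    Closed,
}

fn state_size() -> usize {
    std::mem::size_of::<ConnState>()
}

fn main() {
    // Four connections, all inline; the states need zero heap allocations.
    let _conns = [ConnState::Closed, ConnState::Closed,
                  ConnState::Closed, ConnState::Closed];
    println!("fixed per-connection size: {} bytes", state_size());
}
```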


Really nice! I try to learn as few languages at the same time as possible, so sadly I have no time for Rust at the moment, but it looks like it'll only get better.


Idiomatic Rust guides would be very helpful


My personal code base is C++/cmake and Python.

It would be really nice to have a way to slowly start writing code in Rust instead.


You might be interested in this post about using rust with python: https://blog.sentry.io/2016/10/19/fixing-python-performance-...


> Rust should have a lower learning curve
> Rust should have a pleasant edit-compile-debug cycle
> Rust should provide a solid, but basic IDE experience

I know the list isn't necessarily prioritized, but these 3 feel backwards.

A good IDE can help facilitate an easier edit/compile/debug cycle, which makes learning the language faster because it's cheaper (time-wise) to figure things out.


They're not ordered.


[flagged]


>From a langtheory perspective, rust is one of the most backwards, mixed up, kludgy languages to come to light recently..

Would really like to know more about this. It is not very often that you come across Rust criticism in this forum.

>C++ already works, why don't we just improve on whatever's lacking?

But can it be done in such a way that it works with the rest of the language and standard library?


C++ at this point can only be principally improved by drastically reducing the language, and that will never happen. Ever.


A Java "death", where it becomes one of the most popular programming languages in industry? Rust should be so lucky.



