I briefly considered Rust when shopping for the right language for Snabb Switch [1]. I decided against it in part because the community did a good job of setting my expectations. "They" told me that Rust will especially appeal to hardcore C++ hackers, which is a very different profile to our target group of casual programmers. So I was able to move on quickly from a choice that really would not have suited me, with no ill will along the way.
If for whatever reason I had been pressured to move ahead with Rust then I would have spent a lot of time swearing about the size and compilation time of LLVM, etc, before finally moving on to a tool that is better suited to the peculiar job at hand.
So from my perspective that is great work by the Rust community - defining and communicating who the target user is - and now I would certainly consider Rust for other projects in the future.

[1] http://lukego.github.io/blog/2012/09/25/lukes-highly-opinion...
Holy hell, that title. When did critique become "hate-writing"? I do mostly agree with the article, though.
> If the person is spewing anger, try to minimize the harm done by that anger by either asking them to stop or removing yourself and others from observing or participating in it.
The fact that this thought is in the original piece -- that people need to be told not to get passionate about a tool -- is deeply worrying.
> Imagine Rust was a kid you were sending to preschool.
No, it's not. It's a tool. The problem here is that people are attaching emotions to tools and code, as though they were their spouse or children. Emotions are an irrational/illogical process; programming is rational/logical. Don't mix the two.
Rust is doing just fine, and some criticism leveled against it is healthy. The tone of the criticism is completely irrelevant. Ignoring that hurtful criticism is merely going to do the beloved tool harm in the long run - no matter how much "<3" was put into it.
So, the first thing to recognize is that everybody has feelings. This seems like kind of an obvious and maybe a bit of a condescending thing to say, but the fact of the matter is that when you pour your life into something over 4 years, you're not going to come out of it with a "rational/logical" process—no matter what your ideal of programming is.
Is ignoring criticism a good thing? No, obviously not. But there's a wide gulf between constructive criticism and posts like the one being obliquely referenced here[0]. When an author makes only the most cursory effort to examine a language, and then pronounces it inferior to what they prefer with only very vague (and in many cases flat-out incorrect) arguments, I think it's safe to assume that they're not doing it out of the kindness of their heart.
When I look at code I wrote in the past I frequently think "what utter horse shit." I'm a better coder than I was 4 years ago and I will be a better coder in 4 years' time. By extension that means that, yes, the code I write today will be terrible by the standards of my future self. So yes, I am right now writing terrible code, and that's fine. If someone takes the time to critique my code, maybe it will be less terrible and in 4 years' time I will be proud of it.
This doesn't mean you can't take pride in what you do.
> I think it's safe to assume that they're not doing it out of the kindness of their heart.
Whether criticism is valuable is not determined by whether it was offered out of the kindness of someone's heart. Your most valuable critique can often come from your worst enemy. You might have to read between the lines/vitriol, but there is gold in there somewhere.
It's some guy's opinion about a programming language, not some hate-filled screed. There's no kindness or unkindness factor here. Certainly if you want to be unkind towards Rust people, be it the sharks or the remoras, writing an article in Russian isn't the way to go. There are plenty of reasons to say that Rust sucks and he hit a few of them.
I think you're completely missing the point. Graydon isn't talking about the receipt of constructive criticism. This is literally the first sentence:
"Each now and then someone on the internet decides to write a screed or rant in a comment section about how Rust is a terrible thing full of mistakes and stupidity, and they cannot wait for it to die soon enough."
He's also not talking about "ignoring" criticism, hurtful or otherwise. This entire post is his take on how to respond calmly to irrational antipathy.
Yeah, I did read the article. I only dedicated one line to the title. On reflection, maybe he's attempting to click-bait the people who would agree with the title (and hence need to read the article).
Someone made the point yesterday, a good point I think, that the Go and Rust crowd would be better served by showing positive blogs and examples (e.g. here is how to build a small app) rather than ones that serve only to whine and complain and bemoan C/C++.
You don't win friends with criticism.
As a seasoned C++ programmer, I am interested in Go and Rust, but not because a few ardent posters told me I have to.
> that the Go and Rust crowd would be better served by showing positive blogs and examples
Are we reading the same Hacker News? The community practically fellates itself over these two languages, with any project/blog with Go or Rust in the title shooting up to the front page.
If you simply search "Rust" on HN search, the first 4 pages all contain positive blogs and examples, with 200+ points and 100+ comments, and the first negative one on page 4 being "Author of “Unix in Rust” Abandons Rust in Favour of Nim"[1].
If you aren't seeing the "positive" stuff, I'd have to say it's because you don't want to see it. Even more strange, you say that the Go crowd just whines and complains about C/C++ when the community gave up trying to be a replacement for C++ years ago and has embraced the "I need compiled Python" crowd.[2]
Each of these has been at the top of HN on the day of publication; they've been relatively hard to miss. None of them bemoan C++. In fact, they cite C++ as inspiration.
I refuse to consider it a test, thank you, because I'd end up failing it. I was genuinely curious when I asked the grandparent poster where he was seeing people whine about C++, because I just haven't seen them. And I'm a moderator of /r/rust, so I like to think that I see every Rust-related blog post that springs forth from the bowels of the internet.
In the course of researching this comment I ranked the top-rated posts on the subreddit over the past month, and not only did I not find any that were bashing C++, but the fourth-highest-rated post is actually a criticism of Rust by a Boost developer: https://plus.google.com/+nialldouglas/posts/AXFJRSM8u2t
So I ask again: who is going around unjustly denouncing C++ in Rust's name? I ask because I want to stop them!
> Technical and human pluralism enriches the world. Monocultures do not.
Unlike human cultures, tech cultures can't communicate to enrich one another. (The people in them can, but this is a different thing.)
To explain what I mean: you can't import a Ruby library into a Python program. You can import a C library in a Rust program, or vice-versa—but that's a special case. Even then, you can't write a patch for a C library in Rust, or vice-versa, because the original maintainers of the library you want to contribute to are only going to accept contributions in languages they understand and can maintain directly.
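To make that special case concrete, here's a minimal sketch (mine, purely illustrative) of binding a C function from Rust -- libc's strlen, taken on faith via an extern declaration:

    use std::ffi::CString;
    use std::os::raw::c_char;

    // Declare the C symbol; rustc trusts this prototype blindly,
    // which is exactly why the interop stops at the ABI boundary.
    extern "C" {
        fn strlen(s: *const c_char) -> usize;
    }

    fn main() {
        // CString guarantees the trailing NUL byte the C side expects.
        let s = CString::new("hello").unwrap();
        // The compiler can't verify the declaration against the real
        // symbol, so the call site must be marked `unsafe`.
        let len = unsafe { strlen(s.as_ptr()) };
        println!("strlen = {}", len); // prints 5
    }

The C side neither knows nor cares that the caller is Rust; and, conversely, nothing in this file would help you contribute a patch to the C library itself.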
I really believe that fixing this—decoupling "runtime/platform" from "language semantics", and "language semantics" from "syntax"—is the single most important thing that will happen in software engineering in the next 20 years.
Imagine a program somewhat like go-fmt(1), that would run on checkout of a source repo, to transform a base-level AST into the syntactic representation of your choice, and pattern-match-decompile any low-level statements with high-level "shape" into the macro-statements in your chosen language that would generate them[1]. Imagine every library being not only available in every language, but able to be contributed to by programmers who know any language. Etc.
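To make the shape of that idea concrete, here's a toy sketch (entirely hypothetical, in Rust just to pick something) of the core trick -- one base AST, multiple surface renderings:

    // One shared expression tree, rendered into two different
    // surface syntaxes. A real tool would also parse each syntax
    // back into the AST, round-tripping on checkout/checkin.
    enum Expr {
        Num(i64),
        Add(Box<Expr>, Box<Expr>),
    }

    fn render_infix(e: &Expr) -> String {
        match e {
            Expr::Num(n) => n.to_string(),
            Expr::Add(a, b) => format!("({} + {})", render_infix(a), render_infix(b)),
        }
    }

    fn render_prefix(e: &Expr) -> String {
        match e {
            Expr::Num(n) => n.to_string(),
            Expr::Add(a, b) => format!("(+ {} {})", render_prefix(a), render_prefix(b)),
        }
    }

    fn main() {
        let ast = Expr::Add(Box::new(Expr::Num(1)), Box::new(Expr::Num(2)));
        println!("{}", render_infix(&ast));  // (1 + 2)
        println!("{}", render_prefix(&ast)); // (+ 1 2)
    }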
The result would be, in one sense, a "monoculture"—a single ecosystem of libraries, rather than 12 ecosystems between the CRT, the JVM, the CLR, the Erlang VM, etc. But it wouldn't disallow competition between versions of said libraries—just allow obvious winners to win once-and-for-all, instead of limiting their success to their technological "country of citizenship."
---
[1] Or, better yet, a FUSE server where an underlying directory of AST files gets mounted as syntaxified files. Much cleaner from the rest of the toolchain's perspective.
Unfortunately, that's not how languages work. There may be language equivalence classes (that would put, say, Ruby, Python and JS in the same class, C, C++ and Rust in another, and Java, C# and Go in a third), but languages from different classes can't just be translated from one to the other. First, a program written in one language isn't just a description of an algorithm, but a description of the algorithm plus language-specific information that's supposed to help the particular platform execute the algorithm efficiently. Second, there are essential runtime differences that can't be easily ported, such as a threading model and a memory model -- and those may significantly affect the suitability of an algorithm for a certain language. Finally, there are fundamental differences in the appropriateness of certain algorithms; specifically, imperative algorithms can't be automatically translated to pure-functional algorithms without possibly changing their time and space asymptotic complexity (the classic example being an in-place array update: O(1) imperatively, but O(log n) with the persistent tree structures pure-functional code must use instead).
In short, there's a lot more to a computer program than a description of an algorithm, and even algorithms vary considerably in their suitability depending on the language/runtime executing them. Even if you could solve the first problem, you're still left with the second.
I do agree with the premise, though: unless your new language/runtime is so revolutionary that it reduces development cost dramatically, fracturing the ecosystem is often too high a price to pay for merely a "nice" improvement. The problem is that many languages claim dramatic cost reductions, but it usually takes years to learn whether or not they deliver. The reason for that is not just the number of data points, but the fact that the cost of software is spread across a codebase's lifetime (10 years is about the average lifespan of a production system codebase), and how development looks -- and what activities contribute to costs -- in the first and second year is very different from how it looks in the sixth.
"unless your new language/runtime is so revolutionary that it reduces development cost dramatically, fracturing the ecosystem is often too high a price to pay for merely a "nice" improvement. "
I agree. Yet I do think both Rust and Go will probably end up meeting the standard.
Go is a great technical alternative to Java. For a Python 2 user like myself there are a lot of improvements, and Go is so easy to get started with that it has replaced Python 3 as what I'll move on to someday. That I can make that judgement tells me boring old Go got simplicity-as-a-feature right. Rust may also live up to the hype, but more for the C(++) use cases.
Both Java and C++ were getting quite long in the tooth and I feel Go and Rust respectively offer enough to move away from the former two in time.
> Go is a great technical alternative to Java
What? It doesn't require warmup -- that's why it's great for short command-line apps -- but other than that? It's slower than Java, doesn't have anywhere near the same level of deep monitoring, doesn't have any live code loading capabilities, and has really bad interoperation with other languages. I use it when I need a simple script to run faster than Python, but a Java replacement??
> Both Java and C++ were getting quite long in the tooth
I don't know if you're aware, but OpenJDK's JVM is not only the most advanced runtime environment around by a longshot, but the one that's moving fastest with some really incredible research that's gradually finding its way into production. Even Google is putting more effort into improving OpenJDK than into Go. The JVM is much more modern (in terms of new code and concepts being integrated into it) than Go (whose runtime is almost primitive by comparison).
Go is a great solution for that middleground between a script and a "heavyweight" application, but let's not compare its power to that of the JVM. It is a nice tool -- with excellent beginner friendliness -- but it doesn't offer any technical breakthroughs.
As to Rust vs. C++ -- I'm hopeful too, but like I said, there's no way we can know how big its impact is until it's been used by several organizations in production for quite a few years.
" It's slower than Java, doesn't have anywhere near the same level of deep monitoring, doesn't have any live code loading capabilities, and has really bad interoperation with other languages. "
Curious, what projects have you worked on where Go's runtime performance was slow, requiring a Java rewrite?
Link to your scrapped repo or didn't happen.
" I use it when I need a simple script to run faster than Python,"
Same thing, source to a situation where Python was too slow and it necessitated a Go rewrite for you?
"OpenJDK's JVM is not only the most advanced runtime environment around by a longshot,"
Most advanced for memory usage? I'm not sure what your point is about being "advanced". MIT theory at its worst.
"but let's not compare its power to that of the JVM"
Why not? JVMs are overengineered piles. Simple is better.
"It (Go) is a nice tool -- with excellent beginner friendliness -- but it doesn't offer any technical breakthroughs."
And thank Zeus it doesn't. I think I've had enough "technical breakthroughs" with OpenJDK's JVM, Node.js and whatever else is being sold today.
Few are building new products on Java, other than those already entrenched. Android is the biggest draw, and once Go is a first-class citizen there, expect the life to be sucked out. Even with OpenJDK's JVM being the best thing since sliced bread.
I don't think you know what JVMs do if you think they're overengineered. Go's "simplicity", while great for small apps, means we don't get anywhere near the same level of monitoring we get with Java, slower performance (though not a dramatic difference), and much less flexibility (no polyglotism, no hot code swapping, no dependency-conflict resolution) -- not that you should notice the JVM's "overengineering" if you don't need it.

Besides, everything is overengineered until you need exactly what it's engineered to do. A helicopter may seem "overengineered" if all you need is a car, but if you need a helicopter, then a car may be seriously lacking. In the case of JVM vs Go, if you need large data sets in memory, interesting concurrent data structures (even concurrent hash maps), or even if you need to know exactly what your server is doing -- the JVM doesn't seem so overengineered any more.

As to new projects being written in Java vs other platforms (Go included), you have no idea what you're talking about. All of Android doesn't even amount to 5% of Java developers.
I don't think performance costs are that big of a deal in general. The real costs with those sorts of services is perhaps centralization and dependency on an outside entity.
If we did manage to create some magical cross-language library (compile everything to C?) then maybe libraries would be back in style.
I don't know, at least on the JVM, libraries are very much in style. I know Google writes most large internal libraries in C++ with a C API and uses them from Java, Python, Go and C++.
> but a description of the algorithm plus language-specific information that's supposed to help the particular platform execute the algorithm efficiently
Tracing JITs should be able to spit out, as they run, the tables of Bayesian confidences they've built up for various static properties of the code, which can sit alongside things like source maps, and be confirmed/rejected by the programmer in their IDE, or just live-reloaded into a new VM like a Smalltalk image. (Future-tech, remember.) You can see the potential for this in things like Erlang's typer+dialyzer system.
Likewise, static analysis tools should be able to work on foreign code after transpilation. There's nothing stopping you from transforming C code into Rust code in order to get the Rust compiler's opinion on its ownership semantics.
Note that I'm not saying that there's one universal underlying language semantics. Just that language semantics (which consist of such things as a type system, a threading/memory model, etc.) have no reason to be tied to either a particular syntax, or a particular VM. (Effectively, a language semantics forms an abstract machine that executes more or less efficiently on the substrate of any given VM. MRI Ruby is a direct substrate for Ruby semantics; IronRuby is less-clear substrate; etc.)
> those may significantly affect the suitability of an algorithm for a certain language
This is a problem of transparent distribution. I've been working on an Elixir DSL for writing Haskell "inside" Elixir for efficiency. The result is not a Haskell compiled module getting linked into the Erlang VM, but rather a separate Erlang-VM-managed Haskell "application server" being run as a port program. I foresee much more of this, and much more cleverness about it: writing code that compiles to a bunch of separate modules for separate VMs, which then form a micro-distributed-system all within a Docker container or somesuch.
Again, it's not about eliminating the plurality of runtimes—it's about rendering that plurality moot, abstracting it away from the perspective of the programmer and leaving it up to the implementation to decide how to optimize abstract-machine-to-VM allocation.
There's no reason that you can't have every language semantics available to any library, to use in any combination (this parameterized type system with that green-threading and this other GC, etc.) It's just that, in so doing, you're either transparently importing into your single runtime some virtualization layers for all the other abstract machines you've coded in terms of (somewhat like writing a Windows app on Linux by linking it to Wine), or you've let the Sufficiently Smart Code-generator go beyond the premise of having a single target platform.
Not so much future tech. The most sophisticated JIT in production use nowadays is HotSpot's optimizing JIT, and next year it's getting this: http://openjdk.java.net/jeps/165 (basically, metadata that the programmer can use to help the JIT with some decisions). HotSpot's next-gen JIT, Graal, takes five more steps forward and allows fully programmer-controlled JITting that can suggest and improve various speculations on the semantic AST: https://wiki.openjdk.java.net/display/Graal/Publications+and...
I have full confidence in the general utility of JITs for server-side applications, but JITs themselves are a tradeoff (require warmup or caching, more RAM and more energy).
> There's nothing stopping you from transforming C code into Rust code
Not really, because Rust requires manual type annotations. Either they are added manually -- in which case you may as well translate manually -- or automatically, in which case the translator would need to infer the very same properties you want to verify.
> Just that language semantics (which consist of such things as a type system, a threading/memory model, etc.) have no reason to be tied to either a particular syntax, or a particular VM.
But they do because it's not just syntax -- the language and runtime are really a part of the algorithm (in fact, most algorithms presuppose a certain execution context like random, constant-time access to memory). The threading/memory model is what determines whether certain algorithms can be implemented at all; functional purity can completely change the complexity (by more than a constant) etc.
> I foresee much more of this, and much more cleverness about it: writing code that compiles to a bunch of separate modules for separate VMs, which then form a micro-distributed-system all within a Docker container or somesuch.
The problem with that is that once you go distributed (even on the same machine) there are tremendous performance costs involved. Not only marshalling/demarshalling of data, but fanning-in and then fanning out your concurrency on each end. There's no way you can use, say, a Java parallel stream, whose parallel operation is performed over the wire. There are only very specific places where you can place that boundary.
I do, however, think that a platform such as OpenJDK, with such powerful GCs and JITs, can -- and indeed does -- contribute significantly to the ability to interoperate among various languages -- but certainly not all of them.
> But they do because it's not just syntax -- the language and runtime are really a part of the algorithm.
I don't think you understood my statement above—what you're attributing to the runtime (memory-model et al) is part of the abstract machine specified by the language's semantics, but that's orthogonal to the runtime.
A runtime can be a better or worse fit for a given language's semantics, but any VM can virtualize any abstract machine with enough glue. VMs are Turing machines, after all. (In practice, the overhead can be surprisingly low; you don't have to emulate what you can trace or dynamically recompile. The work around ASM.js is rich in data about how to squeeze performance out of the not-particularly-direct mapping of the C abstract-machine to the JS VM.)
The abstract machine formed by the language semantics can, of course, demand live support machinery of the underlying runtime (like a GC, say) that may have to be "polyfilled." This is what we've been doing forever—a PIT signalling a processor interrupt which is checked for between each cycle is just a polyfill for having concurrent processes on other processors sitting in blocking-sleep for a bounded time, for example. Any hardware emulator is full of these. Usually you can find ways to make them less necessary—IronRuby doesn't contain its own GC with Ruby semantics, it just translates Ruby's GCing requirements into calls to the CLR allocator+GC and the result works "well enough."
> Not only marshalling/demarshalling of data, but fanning-in and then fanning out your concurrency on each end.
Why are you assuming message-passing distribution? Shared-memory cross-runtime distribution works too. That's how ZeroMQ works, for example.
> what you're attributing to the runtime (memory-model et al) is part of the abstract machine specified by the language's semantics
OK, I see where you're going with this. But now you'll need a "bottom representation", which, even if it is easier to transform than machine instructions, will be pretty hard to decompile to another language. For example, most interesting lock-free algorithms require some sort of garbage collection; how do you describe that GC behavior, which doesn't necessarily need to be general purpose, in a way that can be ported to both C and Go?
Theoretically it may be doable, but in practice the problem is very, very hard.
The big question, then, is why bother? The JVM already provides excellent interoperation with excellent performance that can cover the vast majority of (at least) server-side applications out there, and Graal/Truffle are extending the range of languages that the JVM can provide the fastest implementation for (it's early days and it's already pretty much on par with V8 when running JS, and faster than PyPy for Python). Those applications not within this profile range will use other languages (like C++), but those languages are already more expensive to develop in, and their developers happily pay more for more specialized algorithm implementations.
> Shared-memory cross-runtime distribution works too.
That's true (provided both sides can agree on an ownership and concurrency behavior).
> To explain what I mean: you can't import a Ruby library into a Python program.
Not to detract from your overall point, but there's actually a lot of Ruby <-> Python interop stuff out there. At one point I was playing around with importing C++ modules exported via boost::python to consume from Ruby (admittedly only for a toy project.) I'm not sure if I was using RubyPython[1] - searching google turns up multiple options. And I'm not sure how well any of them hold up in production.
And then on the CLR there's both IronPython and IronRuby, which might play well together.
I think the "problem" you describe will get worse, not better. Software will get more fractured indefinitely; I believe it's an economic inevitability. There is plenty of old code that works and nobody has the time or knowledge to rewrite it.
Go isn't going to replace Python; you'll have Go AND Python. Moreover you'll have Python 2 and Python 3, and perhaps Go 1 and Go 2. Julia won't replace R or Matlab; you'll have all 3.
You're basically advocating a modernist view, which I hear a lot of programmers espouse. I think this is confusing what you want to happen with what will happen.
A more accurate picture of the future of software is something like Richard Gabriel's "Design Beyond Human Abilities". It's long so I won't try to summarize it, but it paints a unique picture, and I think an accurate one.
We are going to build greater and greater things, but they will be more balkanized and full of conflicting forces. They will look a little more like the evolved human body than a perfect crystal.
I think the deep wisdom of Unix goes a long way toward easing the pain. Text is the lowest common denominator; it bridges the gap between all these conflicting systems. There will be no universal model; only localized models and subcultures that talk to each other through sloppy protocols.
You're probably right. Something like Urbit[1] is more a romantic notion than a way forward—people won't give up the many for the one.
Still, though, I'm heartened by the Unified Theory Of Gradual Protocol Ossification -- the idea that even though we've ratcheted from computer networks speaking any kind of IP-based transport protocol down to only being able to speak TCP, and then in some places only HTTP over TCP, what this really implies is that HTTP is commoditizing the transport layer. As soon as the abstraction is airtight -- which it increasingly seems to be with HTTP2 -- we'll be able to sweep the redundancy under the rug by replacing the IP+TCP+HTTP2 mess with a single cleaned-up protocol that presents the same abstraction.
In this case, I imagine that the way forward is meta-languages that are effectively DSLs targeting multiple abstract machines, allowing you to mix-and-match semantics in isolated subsets of your programs, and resulting in mixed-target code generation. Once people are on that abstraction—where "language" is not 1:1 with "abstract machine", and "abstract machine" is not 1:1 with deployment platform—then we can come along and sweep the redundancy under the rug.
Huh? It sounds like you're engaging in the same kind of wishful thinking about HTTP2 now, rather than programming languages. What you are describing will never happen.
Doesn't work. There are fundamental differences between Haskell and Ruby, or between Java and Clojure, or between Scala and JavaScript.
Solutions written in these languages will look vastly different. Nobody is attempting the actor model in Haskell because the actor model is anti-static-typing, non-deterministic, and there's nothing FP about it. You don't see the State or the IO monads in many other languages. Or type-classes and higher-kinded types, for that matter. LISP macros are a language feature, not a runtime feature, so you can forget about interoperability. Ditto for LINQ in .NET or for scala-async. ActiveRecord / ActiveModel has been a good idea for Ruby on Rails, but would be terrible in Scala. Garbage collection changes everything -- for one, building immutable data structures becomes something you can do. Etc, etc...
Point is, programming languages aren't equal and the universe of languages is not unidimensional. And how a library gets implemented is not orthogonal to the picked language.
I answered most of your other objections in a sibling comment, but to this:
> Nobody is attempting the actor model in Haskell because the actor model is anti static typing
People write servers in Haskell, no? Servers that speak to the internet? If so, they're writing actor-modelled programs; each actor is just an entire OS process.
Now, if you want to write several servers at once, and have them scheduled efficiently, you might want a DSL that allows you to specify the code for several Haskell server processes, and also specify their relationships. Sounds like you're going to be writing some actor-modelled code!
Which is all to say, Haskell (and strongly-statically-typed FP et al) works perfectly fine as a language for specifying what goes on within an actor.
The actor boundary is precisely where the tradeoff between static typing and dynamic typing is made: when you want to communicate between actors (where an actor may also represent a socket or somesuch), you have to stop speaking in terms of typed data and start speaking in terms of pattern-matching on binary messages, the totality of which a given version of an actor might not recognize.
Haskell could have actors. Erlang could have strong-typing within functions. Both languages would basically look like they're embedding the other by doing so. The right solution probably involves exactly the kind of mixing of semantics that that sounds like.
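To make that concrete, here's a toy sketch (hypothetical, in Rust simply because it's the statically-typed language this thread keeps reaching for) of an actor whose mailbox stays fully typed because both ends share one runtime; across a socket, that enum would degrade into pattern-matching over bytes:

    use std::sync::mpsc;
    use std::thread;

    // The messages this actor understands. Anything else simply
    // can't be constructed -- the boundary stays typed because
    // sender and receiver live in the same runtime.
    enum Msg {
        Add(i64),
        Get(mpsc::Sender<i64>),
    }

    fn main() {
        let (tx, rx) = mpsc::channel();
        // The "actor": owns its state, handles one message at a time.
        let actor = thread::spawn(move || {
            let mut total = 0;
            for msg in rx {
                match msg {
                    Msg::Add(n) => total += n,
                    Msg::Get(reply) => { let _ = reply.send(total); }
                }
            }
        });

        tx.send(Msg::Add(2)).unwrap();
        tx.send(Msg::Add(3)).unwrap();
        let (rtx, rrx) = mpsc::channel();
        tx.send(Msg::Get(rtx)).unwrap();
        println!("{}", rrx.recv().unwrap()); // prints 5

        drop(tx); // close the mailbox so the actor's loop ends
        actor.join().unwrap();
    }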
It is definitely possible (though not trivial in many cases) to detach the language syntax from the language semantics, allowing multiple syntaxes for the same language semantics. But detaching the semantics from the runtime, something you do not describe above, would be a very different thing: the runtime is a /part/ of the language semantics, and while you can definitely create multiple languages that share a runtime (as the JVM languages do, for example), any decision about the shape of the runtime is going to involve tradeoffs in capabilities, so there will never be one runtime environment to rule them all.
What's needed are means of easily communicating between runtime environments, which is of course what protocols are.
So basically you're advocating a single LLVM-like solution over diversity? I don't think that would be able to provide the "perfect" solution to everyone's problem.
There are so many solutions because there are so many problems.
And to say that programming languages and platforms don't influence each other is no more correct than saying human languages don't influence each other.
> Imagine a program somewhat like go-fmt(1), that would run on checkout of a source repo, to transform a base-level AST into the syntactic representation of your choice, and pattern-match-decompile any higher-level structures into the macro-statements in your chosen language that would generate them[1].
People have been imagining this for a long time, and realistically it isn't even all that hard to do. But the result of this would, honestly, probably be about as useful as running Shakespeare through Google Translate to Chinese and back again [1]. Code is communication, and like all communication it has nuance.
It is absolutely not clear that programming as we understand it, without a massive paradigm shift in the entire field, can really work with all that nuance stripped out.
Heh. "People are idiots" ... "C++ doesn't restrict programmers".
Seriously though, the article was just negative without a whole lot of real points. It's also amusing to watch C++'s more and more convoluted codegen (see the VS2015+ObjV video) trying to make up for fundamental mistakes users keep making no matter what.
I can see how this might be discouraging, but I'd take it as a good thing if a poor rant is all they got.
This happens whenever a Go post crops up on HN too, the amount of vitriol some people have over a language they evidently don't want/need to use is shocking.
I don't cope well with light text on dark background because it hurts my eyes. And I haven't personally encountered Rust hate-writing yet, as the author describes it.
Apart from that, I recommend this text to everyone. If you find yourself in a situation that feels conflict-shaped, read this text before you crawl up the walls. It's good advice.