" so it’s just-in-time compiled" and "As a fully compiled language, Rust isn’t as portable as Java (in theory)"
Look, JIT and static compilation are just compilation mechanisms. You can apply them to pretty much any language you want. You can statically compile Java[1]. You can JIT Rust. You can compile Java with LLVM (Azul does it). You can interpret Java, Rust, C++, etc.
You can have hybrids where you start with an optimized statically compiled form and reoptimize it at runtime.
Don't ever tie current optimization mechanisms to comparisons of languages.
If it's really important to gain performance, someone will do it.
For example, JIT compilation of C++ is usually not done because it's not worth it compared to profile-guided optimization and other reoptimization mechanisms. But it can be done :)
If you want to compare languages, compare languages.
If you want to compare particular implementations of languages, compare particular implementations of languages. Don't say you are doing one and do the other, because it's confusing, and, well, if it matters enough, someone will come along and make you wrong.
:)
[1] If you are willing to guarantee no class loading. Otherwise, you can statically compile everything that is loaded, run an interpreter, or do whatever you want with new code at runtime.
Most languages have a single implementation that everyone uses. Unless I'm interested in writing a new implementation (I'm not), I'm going to be interested in comparisons of languages based on their well-supported implementations. Otherwise we get into discussions about sufficiently smart compilers (http://c2.com/cgi/wiki?SufficientlySmartCompiler). I get what you're saying, but there are tons of people who are justified in only caring about implementations.
But for the most part, those alternative implementations have similar designs and performance characteristics to the most popular one. E.g. for Java, all major modern implementations use a JIT, and the language lends itself to a JIT runtime, since virtual-by-default method dispatch performs poorly when statically compiled.
> If it's really important to gain performance, someone will do it.
Counter-examples:
I love Python, but it's slow. People thought performance in Python was important, and... well, here we are years later and no one has made Python performant. PyPy is a splendid display of engineering and optimization, but it doesn't make Python truly competitive with C/C++ on real-world applications. It's more of a stop-gap to delay the inevitable (which is: if you find yourself needing performance, you'll eventually have to bite the bullet and rewrite in C++).
The same goes for JavaScript. Massive companies have brought their engineering might down upon JavaScript to build what are perhaps the most impressive optimizing compilers of our modern age. Yet JavaScript is still slow, and now what are we doing? Turning to asm.js and WebAssembly...
So while it's true that any language could be performant, given a sufficiently adept compiler, the real question is whether such a compiler is actually practical. And more importantly, should we pour the world's engineering resources into building such a compiler, when we could just build a better language instead?
Oh, come on. I think it was a nice article, and all the points you made were just pedantic. Of course we all know that, can't we just enjoy a nice article? Leave it to HN comments to say things like "Don't ever do this thing...".
In theory, a programming language only has a syntax and a semantics, and everything else is an implementation detail. In practice, programming languages are designed for specific use cases, and lend themselves to specific implementation strategies. For example:
(0) Type-checking C++ templates requires expanding them. However, nothing forces a C++ implementor to translate the individual template instantiations to machine code (or whatever the target language is). A C++ implementor could use a strategy similar to what is used in Java and C#, and re-instantiate every template on demand at runtime. Of course, nobody does this, because it's bad for the implementor (more work, since instantiating templates is more complex than instantiating generics, so it's better done once at compile time), bad for the user (worse runtime performance), and good for no one (except perhaps C++ detractors).
(1) A Python implementor could use whole-program analysis to assign variables and expressions more useful static types than `AnyObject` - similar to what STALIN does for Scheme. But, again, this is bad for the implementor (more work), bad for the user (less interactivity and instant gratification, due to rechecking the whole program every time a change is made), and good for no one (except perhaps Python detractors).
Now, there exist language designs that are less biased towards a fixed implementation strategy. For example, Common Lisp, Standard ML and Haskell. But these languages are also markedly less popular, which perhaps suggests that programmers usually prefer languages that have a concrete story about what use cases their design optimizes for.
> A C++ implementor could use a strategy similar to what is used in Java and C#, and re-instantiate every template on demand at runtime.
Are you sure? Expanding templates can affect the parsing of subsequent code, as this example illustrates: http://yosefk.com/c++fqa/web-vs-c++.html#misfeature-3. You pretty much have to expand them at compile time.
(This sort of complexity illustrates why I prefer generics to templates, incidentally.)
I'd say it's in large part a question of marketing. SML and Haskell are largely academic languages, even if Haskell now runs in production at various places. Likewise, you can hardly accuse OCaml of having the kind of marketing muscle and reach that a company like Mozilla can bring to bear (also, it's pretty old, which makes it difficult to sell as the new shiny).
> If you want to compare languages, compare languages. If you want to compare particular implementations of languages, compare particular implementations of languages.
Well, to be fair, it says this at the top:
This post compares Rust-1.8.0 nightly to OpenJDK-1.8.0_60
I think it's pretty clear it's comparing particular implementations of the languages.
The article compares the only currently available Rust implementation, at version 1.8.0 nightly, with OpenJDK 1.8.0_60, which is sufficiently similar to Oracle's JRE, the predominantly used VM. So I'm comparing the languages as the majority of users actually use them. If you think this is unfair, I'd like to read your blog post where you enlighten us about your implementation of choice.
> ... if it matters enough, someone will come along and make you wrong.
I'll be gladly proven wrong if someone comes around and speeds up my JVM. :-D