Yehuda Katz and Steve Klabnik Are Joining the Rust Core Team (rust-lang.org)
423 points by wycats on Dec 12, 2014 | 131 comments



Congratulations Yehuda and Steve!

We've been using Rust in production for a while now, specifically because of its combo of native speed and memory safety[1]. This bending of traditional tradeoffs has let us implement features that would otherwise have been impossible.

Interfacing with Ruby via its C APIs, we are able to do some pretty crazy stuff, like sample memory allocations in production with imperceptible overhead[2]. Most importantly, we can do it without fear of segfaulting our customers' servers.

It's really great to see people other than low-level bitbangers being added to the core team. In this case, having a production user and someone focused on new users demonstrates the Rust team's commitment to building a functionally diverse community.

Lastly, I think opening up the core team to a wide coalition of companies is the right way to build a robust, long-lived open source project. Yehuda talked a little bit about this in his Indie OSS talk[3]. There are several "pathogens" that can affect open source projects, and the more diverse the distribution of power, the more likely you are to be immune to these pathogens.

1: http://blog.skylight.io/bending-the-curve-writing-safe-fast-...

2: http://blog.skylight.io/announcing-memory-traces/

3: https://www.youtube.com/watch?v=YqXU4o24Hkg


Could you define "low-level bitbangers"? It sounds kind of derogatory, but maybe I am reading it wrong.


It's a sort of nickname derived from a technique for communication on very cheap microcontrollers.

https://en.wikipedia.org/wiki/Bit-banging
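
For a rough picture of the technique itself, here's a minimal sketch in Rust; the pin-setting closure stands in for whatever a real microcontroller HAL would provide (it's a made-up interface for illustration):

    // Bit-banging: driving a serial protocol in software by toggling a
    // pin directly, instead of using dedicated UART/SPI hardware.
    fn bitbang_byte(byte: u8, set_pin: &mut impl FnMut(bool)) {
        for i in (0..8).rev() {
            let bit = (byte >> i) & 1 == 1;
            set_pin(bit); // drive the data line high or low for one bit period
            // on real hardware you'd busy-wait here for the bit duration
        }
    }

    fn main() {
        // Record the pin levels instead of driving real hardware.
        let mut line = Vec::new();
        bitbang_byte(0b1010_0011, &mut |level| line.push(level));
        println!("{:?}", line); // [true, false, true, false, false, false, true, true]
    }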


Low-level in terms of programming abstraction, not any kind of hierarchy: banging your head against bits and bytes rather than monads and functors.


> This bending of traditional tradeoffs has let us implement features that would have been otherwise impossible.

Could you talk more about what sort of features these are?


Err, he gave examples, as well as links.


The parent was probably just nitpicking because the word "impossible" rubs him the wrong way. Swap "impossible" for "infeasible", add "for us", and everyone should be happy.


Thus continues Yehuda's quest to join every cool core team on the Internet. :) See also: Rails, Bundler, Ember.js, jQuery, W3C TAG, TC39.


Yeah, I wonder how he manages to juggle all of that. What's your secret wycats? (I'm not really expecting there to be a single secret).


For what it's worth, a big part of this is slowly winding down earlier commitments over time. I joined TAG a couple years ago, and decided not to run again this year because I had largely achieved my original goals of changing the composition and mission of the TAG, and others can carry the torch forward. I also retired from the Rails team this year after a somewhat long period of inactivity (from day-to-day decision-making) for similar reasons.

The closest I can get to a single "secret" is aggressively delegating. What I mean by this is delegating before you feel comfortable delegating. The only way I know of to get everything done that I want to get done is to set the vision for something, do some initial implementation work, and find people who share the vision to help as early as possible. The primary reason I end up working on so many things is that things are way more connected than you might expect, and the full picture of something like Ember involves both day-to-day work on Ember and advocacy around a whole host of related technologies. I'm very grateful for the number of groups that have welcomed my participation.

I got involved in Rust largely because it was the shortest path to a low-overhead, non-crashy agent for Skylight, even counting the work we had to do to keep up with ongoing development over the past year. When getting involved in a technology so early, I tend to get really invested, and try to find holes that I know how to fill and fill them. That's what happened with Cargo and a whole bunch of other areas in Rust.


How would you respond to those who see you merely as somebody who just hops on the bandwagon, so to speak, quite frequently?

Whether this perception is right or wrong, there are a number of prominent individuals within the open source community who are widely seen as rapidly jumping between trendy or hyped projects as they arise, to benefit from the exposure that this involvement can bring.

I'm not passing any judgment, mind you. I personally don't care which projects you choose to get involved with. However, I do know that other people have noticed certain trends around how certain open source developers move among projects, and this does negatively affect the impressions these people have of these open source developers and the projects they get involved with.


To be blunt, I would not have picked Ember, Handlebars, TAG or TC39 due to their "hype factor".

In the cases of TAG and TC39, I saw an opportunity to take low-hype, low-bandwagon organizations and revitalize their missions and purpose. In particular, bringing on web developers as active participants and the follow-on effects of having them join GitHub and modern practices (slowly), helped increase their profile and stature. In both cases, I hardly did most of the work, but I did spend a lot of time articulating a vision for these organizations as ones that could be far more effective by involving more practitioners. I think it has worked.

In the cases of Ember and Handlebars, I saw something missing in the ecosystem and built my own tools. In both cases, the tools were hardly instant winners, and I had to spend a ton of time recruiting like-minded collaborators who shared a vision for the future. If I were in it for the bandwagon-hopping, I would have tried to join Mustache or Angular, not spent years building up my own, relatively small-in-comparison ecosystems.

My real MO is to try to envision a better future for something related to the web or my product, and then either find existing projects that already share a part of that vision or create them if they don't. I'm involved in many different projects because big-picture ideas involve improvements to multiple technologies.


high five bro!


Not sure I agree with your analysis of the situation. W3C TAG, for example, was perceived as a stodgy, out of touch standards body. Yehuda led the charge to reform it and bring it back into the realm of practitioners.

In this case, I think you've got the causality mixed up. Projects often gain prominence thanks to the work Yehuda puts into them, as he's particularly good at articulating their strengths to a wider audience. It's not that he's bandwagon hopping—it's that he's helping to create the bandwagon. ;)


Their loss. It's illogical to fault entrepreneurs, or the businesses they are a part of, simply because one of an entrepreneur's core strengths and motivators is getting projects off the ground.

Likewise, it is illogical to fault a developer for possessing that same drive when it comes to improving emerging technologies.

Squandering one's skills to appease some anti-progress group is silly.


Jumping between projects can also be seen as spreading knowledge and good practices around.

It would probably be much easier for them to stick to one thing but that wouldn't fully utilize them as a resource.


Bandwagon hopping is a Good Thing.

Furthermore, if you continue to bring something to the bandwagon, people will continue asking you to join them.


Well, they've obviously never seen wycats' work. As for the others you speak of, time will tell.


"...delegating before you feel comfortable delegating."

That just might be the best piece of advice I've heard in a long time.


Thanks, good to know. So I guess the real secret is being able to attract people to delegate work to.


Would you mind explaining more about how you find and recruit like-minded collaborators and eventually delegate tasks to them?


Thanks for your answer! wycats rulez. You're doing a good job in TC39, can't wait to get the final spec.


Maybe "wycats" is plural and he has a team of people implementing him, like Stephen King or TJ Holowaychuk.


Note that nobody reads every post in linux-kernel. In fact, nobody who expects to have time left over to actually do any real kernel work will read even half. Except Alan Cox, but he's actually not human, but about a thousand gnomes working in under-ground caves in Swansea. None of the individual gnomes read all the postings either, they just work together really well.

—Linus Torvalds (2000-05-02)


I love that quote.


I have met him in person many times, so if that's it, they also have a single person handling the interface. :)


Might be multiplets. Or maybe wycats is a clonal colony, like Pando.


What separates TJ from a lot of us is that he isn't afraid to actually go and read some source code, no matter what the language is. That's what separates the good devs from the average ones. How many of us actually read the source of the libs or frameworks we use every day? Not that many. But he also has a thing for simplicity, and that is a matter of taste.


I found his talk Endurance (https://www.youtube.com/watch?v=yYihop9gHj4) pretty amazing and inspiring. He talked about some really important points that made me think and reflect on myself. A must watch.


And each of them is lucky to have a share of his time. His contributions to open source are immense.

Much of the software our company uses has been shaped, at least in part, by Yehuda. I hope we'll grow the company so that at some point we'll be able to reciprocate.


Yes, on top of a relatively small number of years coding -- impressive indeed.


How many years?


I believe Yehuda only started coding in 2006? Maybe 2008?


I did my first real coding in around 2005, and quickly got interested in both jQuery and Rails. Really good timing in retrospect!


Considering you've been coding less than a decade, can you comment on what you think the biggest factor was that led you to develop such a high level of mastery of solid coding principles? Books, mentoring, reading framework and library code thoroughly?


Wow, you've asked just the question I've been wondering about ever since I read that date. Very curious, because it seems like there's been a reliably intuitive focus since the very beginning, which generally takes years to develop.


I think the best I can do here is my usual answer: take some time every week to work on a problem that is a little bit harder than what you already know how to do. It requires discipline, and it's hard, but it reliably results in levelling up.

It also helps with humility: if you're constantly working on things that are too hard for you to do, it's hard to build up an unhealthy belief in your own abilities. And humility will keep you open to unexpected learnings from collaborators and people working on related things.


I think this can be great advice, but it depends on your level of humility; it can also be a recipe for a serious case of imposter syndrome.


What do you recommend someone do / read / learn if they want to follow your path?


Guess who makes them cool :)


Congrats! I actually didn't know they weren't already part of the core team. What exactly does this change entail, then?

The Rust community is lucky to have these two. I recently went through the Rust guide and it's terrific! And as a happy RubyGems user, I'm glad to see crates following a similar path.


Thanks! (especially re: the guide, which is my baby)

> What exactly does this change entail, then?

The core team is listed at https://github.com/rust-lang/rust/wiki/Note-core-team . Basically, as in most projects, it just adds weight to our opinions in the decisions that are made, and ensures that we're involved in making those decisions. In general, we try to get rough consensus for major decisions from the team + community.


steve, does that mean there's a chance you'll get a contract extension to keep working on rust, assuming you'd like to?


I don't want to talk too much about these details yet, but rest assured that I will be quite heavily involved in Rust regardless of what shakes out. :)


Congrats Yehuda! You are a true inspiration, and I just want to thank you for everything you've done. I'm sure you'll take Rust to another level. Congrats to Steve too. 2015, the year of Rust(?) :P


Congratulations! The two of you easily became two of my biggest role models after I was introduced to web development; it's been a pleasure to follow your general trajectories. Thanks for showing us what is possible when you're willing to actually do something!


<3


How will this affect Yehuda's involvement with Ember? I'm kinda scared he will jump ship and leave it hanging. :/


Most definitely not. Ember (and its ecosystem) and Rust are the two open source projects that are currently consuming most of my time. I have easily spent as much time on the planning for Ember 2.0, HTMLBars, and Server-Side Rendering as I have on Cargo and Rust over the past six months or so.

Also, both Ember and Rust are critical for my business (Skylight), and keeping them both strong and effective is key to my business interests ;)


Thank you sincerely, Ember has been amazing and it's great to hear you don't plan to drop it.


Good to hear - building a project on Ember now and it (Ember) is an absolute joy to work with. Keep it going!


This is reassuring - I'm happy to know that you've bet your personal business on it, and really glad that they've got you and Steve on board. Looking forward to great things from Rust, and excited to get more involved!


Nice! As a reluctant Go developer, I look at Rust with envy and excitement. I am very eager to see this project progress, and it seems like they've got some great people on it.


I share your envy and excitement.

I'm looking forward to dumping Go in favor of Rust once it's competitive in terms of libraries/stdlib. I've been building some big stuff in Go recently (mostly around financial market simulation) and the language shortcomings have hurt a lot. I've been using Go since January 2013 as my primary language, but everything I've seen of and tried with Rust is amazing.


What are the shortcomings?


From what I gather, the lack of user-defined generics is the primary reason, followed by the loose typing in interfaces.

I'm not an expert in Go, so feel free to correct me there. Additionally, IIRC Rust wanted to take the same route as Go regarding interfaces and generics, but Niko made an argument that it was a wrong turn to make.
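
For reference, a tiny sketch of what user-defined generics buy you in Rust: the bound is checked at compile time and monomorphized, with no casts or runtime reflection (an illustrative example I made up, not from either language's docs):

    use std::ops::Add;

    // Generic over any type implementing `Add` (plus `Copy`/`Default`
    // for simplicity); specialized per type at compile time.
    fn sum<T: Add<Output = T> + Copy + Default>(items: &[T]) -> T {
        items.iter().fold(T::default(), |acc, &x| acc + x)
    }

    fn main() {
        println!("{}", sum(&[1, 2, 3]));  // 6
        println!("{}", sum(&[1.5, 2.5])); // 4
    }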


  > IIRC Rust wanted to take the same route like GO 
  > regarding interfaces and generics, but Niko made an 
  > argument that it is wrong turn to make.
This is a misconception, as Rust has had something approximating generics since the very beginning (though they have changed drastically since the outset), and traits have always been explicit rather than implicit (and it was pcwalton who initially conceived traits, not nmatsakis (and traits as well have changed drastically since their inception (the bottom line is that pretty much everything about Rust has changed drastically since its inception))).


Personally, I just don't enjoy developing in it. And really, it's all the "meta" features that I can't stand, like the package/module system that is too opinionated and not intuitive enough, the annoying install system, and the folder structure. The language itself isn't fun enough to warrant looking beyond these downsides.


Anyone care to mention the best reasons to start using/learning Rust? There must be a lot with these guys joining the team.


It allows you to predictably squeeze more out of your system than you ever could in a higher level language without the worry of weird undefined behavior, buffer overflows, use-after-frees or segfaults. It also has an expressive static type system that draws a great deal from ML and Haskell.
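
To make the safety point concrete, here's a small illustration: the canonical use-after-free is a compile-time error in Rust. Note this snippet is deliberately one that rustc rejects:

    fn main() {
        let dangling;
        {
            let s = String::from("freed too soon");
            dangling = &s; // borrow of `s`...
        } // ...but `s` is dropped (freed) here, while still borrowed
        // rustc refuses to compile this:
        // error[E0597]: `s` does not live long enough
        println!("{}", dangling);
    }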


What would be the reason to learn Rust over Haskell? They seem to be about in the same speed ballpark:

http://benchmarksgame.alioth.debian.org/u64q/compare.php?lan...

Is that not representative, or is the idea that Rust eases the learning curve with its similarity to Ruby?


If you actually look at the source code, the benchmarks where Haskell beats Rust rely on unsafe functionality, calling out to C libraries, and are heavily optimised and hard to follow. The Rust code on the other hand is idiomatic, safe, and relatively easy to understand.


(1) Even in that benchmark, Haskell is over twice as slow as Rust on average.

(2) The Haskell code is nowhere near idiomatic.

It is still useful to know that Haskell can be quite fast if necessary, but it likely does not reflect what your actual experience using Haskell would be.


> (2) The Haskell code is nowhere near idiomatic

This isn't meant as a personal attack firstly. Secondly, I've heard this said more than a few times lately and am generally curious:

Do you know which Haskell examples are idiomatic, which aren't, and why?

I feel like in the case of some it's just parroting someone else's argument.


Just in the first two examples on that page, you'll find allocaArray, mallocByteString, writeAlu, unsafeUseAsCString, unsafeIndex, withForeignPtr, unsafeDrop, and more. You really don't have to know Haskell very well (and I'm not claiming to be an expert by any stretch) to understand that this is far from idiomatic in a lazily evaluated, pure functional language. No parroting required.


You should know it at least a tiny bit though, or else you lump perfectly "idiomatic" functions in with low-level functions that aren't often used and pretend they're a problem. How is indexing an array without a bounds check somehow counter to lazy evaluation or functional programming? It is the exact same thing as indexing it normally, just without checking the length.


How is indexing an array OK with functional programming of the level of purity Haskell strives for (not to mention without bounds checking)?

In "Real World Haskell" they advise you to forgo of arrays, and even has a chapter "Life without arrays", saying "arrays and hash tables are often used as collections indexed by key, and in Haskell we use trees for that purpose".


>How is it not? That is a complete non-sequitur and makes no sense. What sort of definition of functional programming are you using that you think it restricts what data types you are allowed to use?

The normal definition of functional programming, in which not all data structures used in imperative programming are idiomatic (or even considered "functional").

For starters, mutable data structures would be considered not purely functional. E.g:

"Most books on data structures assume an imperative language such as C or C++. However, data structures for these languages do not always translate well to functional languages such as Standard ML, Haskell, or Scheme. This book describes data structures from the point of view of functional languages, with examples, and presents design techniques that allow programmers to develop their own functional data structures" [1]

[1] http://www.cambridge.org/us/academic/subjects/computer-scien...


>The normal definition of functional programming, in which not all data structures used in imperative programming are idiomatic

I can't find any reference anywhere to any sort of definition like that. Or even one that mentions restricting data structures or memory layout at all. The only "normal" definitions I can find are: "functions are first class" which is a pretty weak definition and doesn't really exclude much these days, and: "the return values of functions depend only on the functions arguments" which is what everyone in the haskell world thinks of as "functional programming". Neither of these definitions exclude arrays.

>For starters, mutable data structures would be considered not purely functional

A data structure cannot be functional or imperative, only what you do with it can. Mutation is perfectly fine if it is localized (remember, the function's return value needs to depend only on its arguments, nothing more).
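
(Sketching the same principle in Rust terms, since this is a Rust thread: the function below mutates freely inside, but its result depends only on its argument, so it is pure from the caller's perspective.)

    // Externally pure: the caller's data is never touched, and the
    // result is determined entirely by the input.
    fn sorted(input: &[i32]) -> Vec<i32> {
        let mut v = input.to_vec(); // local, mutable copy
        v.sort();                   // in-place mutation, invisible outside
        v
    }

    fn main() {
        let xs = [3, 1, 2];
        assert_eq!(sorted(&xs), vec![1, 2, 3]);
        assert_eq!(xs, [3, 1, 2]); // original untouched
    }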


If you look at the context you can see that they are saying that using immutable arrays has a performance hit compared to mutable ones. That doesn't mean you shouldn't use arrays if they suit your purpose.


>How is indexing an array OK with functional programming of the level of purity Haskell strives for (not to mention without bounds checking)?

How is it not? That is a complete non-sequitur and makes no sense. What sort of definition of functional programming are you using that you think it restricts what data types you are allowed to use? Do you think you aren't allowed to use lists in C?

>In "Real World Haskell" they advise you to forgo of arrays

You might want to read it instead of misrepresenting something out of context. You might also wish to consider that many of the Haskell shootout submissions came from the authors of RWH.


The point is that it is unsafe. You don't need to do that in idiomatic Rust code, because the bounds-check-free indexing is encapsulated inside the iterator library. So to re-frame it: fast, idiomatic Rust code is safer and easier to write than fast, idiomatic Haskell code.
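
A sketch of what that encapsulation looks like from the user's side (illustrative only, not from the benchmark code):

    fn main() {
        let data = vec![1u64, 2, 3, 4];

        // Explicit indexing: each `data[i]` carries a bounds check
        // (it panics if `i` is ever out of range).
        let mut by_index = 0;
        for i in 0..data.len() {
            by_index += data[i];
        }

        // Iterator: any unchecked element access lives inside the
        // standard library behind a safe interface, so no `unsafe`
        // appears here and no per-element check is needed.
        let by_iter: u64 = data.iter().sum();

        assert_eq!(by_index, by_iter);
    }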


>The point is that is unsafe

No, the point was that it is "far from idiomatic". Did you read the post I was replying to?


>Do you know which Haskell examples are idiomatic, which aren't, and why?

Why wouldn't he?

With a little familiarity with programming at large and Haskell syntax, you can learn to discern the two cases even if you're not a Haskell programmer.


A glance at the source quickly reveals an unsafe implementation.


I didn't see an import for unsafe or foreign in:

fannkuch-redux, pidigits, or binary-trees.


fannkuch-redux heavily utilizes unsafe operations on a mutable vector.

binary-trees uses artificial strictness for the sake of the benchmark.

Nine of the ten Haskell implementations are demonstrations of writing C in Haskell (without reaching C's performance).


> binary-trees uses artificial strictness for the sake of the benchmark

Using strict evaluation when appropriate is most certainly idiomatic Haskell code.

> Nine of the ten Haskell implementations are demonstrations of writing C in Haskell (without reaching C's performance).

This (C in Haskell accusation) is debatable, though I'm not sure there's much to be gained even given a totally successful discussion where we both communicate our points 100% effectively.


There probably isn't a point to it, as we're likely trying to argue different points.

I posit that using Haskell in such a way is a disservice to the language. If you want to code in such a way, why use Haskell?

Haskell is plenty fast as is and doesn't need to resort to unsafe code. What's the point?


Mutability and strictness aren't a disservice to Haskell. They're both considered to be things which are available as choices and should be taken when needed. In particular, the entire ST monad is all about mutability and is very well behaved.

Furthermore, even if you write nearly everything in the IO monad, you'll still gain a lot from Haskell's type system and from easily peeling out small, meaningful pure segments.


>fannkuch-redux heavily utilizes unsafe operations on a mutable vector.

They are just not bounds checked. That is not "writing C in Haskell".

>binary-trees uses artificial strictness for the sake of the benchmark

What on earth is "artificial strictness"? Forcing evaluation when you need something to be evaluated immediately is not artificial, not unidiomatic, and not "writing C in Haskell".

>Nine of the ten Haskell implementations are demonstrations of writing C in Haskell

And they end up shorter than Rust? The idea that writing lower-level code is "writing C in Haskell" is nonsense; the ability to write more verbose but faster code is not harmful, it is useful. But the fact that lower-level Haskell is still shorter than Rust, and yet you want to act like it is some sort of horrible thing, makes it hard to believe that you are really concerned about the horrors of writing slightly more verbose code.


The Rust code is both safer and faster at the cost of more keystrokes.

I'm not certain of the point of writing unsafe code in Haskell when there are better tools for the job. Haskell is fast enough as is, and using it to hammer screws just reeks of the wrong tool for the job.


As an aside, if you're interested in safer and faster, I'm benchmarking some of the ATS2 code examples[0] which were created for (but are not yet in[1][2]) the computer language benchmarks game. It seems to beat C and C++ in some (many?) cases and create TINY binaries. I was trying to benchmark all the examples but got tired of doing it ;) Here are a couple anyway:

Pidigits:

    $ patscc -I/home/cody/sources/ATS-Postiats-contrib/contrib -pipe -O3 -fomit-frame-pointer -march=native pidigits.dats -o bin/pidigits.ats_run -lgmp
    $ gcc -pipe -Wall -O3 -fomit-frame-pointer -march=native  pidigits.c -o bin/pidigits.gcc_run -lgmp
    $ time ./bin/pidigits.gcc_run 10000 > /dev/null
    
    real	0m0.969s
    user	0m0.963s
    sys	0m0.004s
    $ time ./bin/pidigits.ats_run 10000 > /dev/null
    
    real	0m0.972s
    user	0m0.968s
    sys	0m0.004s
    $ ls -larth bin/pidigits.*
    -rwxrwxr-x 1 cody cody 15K Dec 13 18:59 bin/pidigits.ats_run
    -rwxrwxr-x 1 cody cody 14K Dec 13 18:59 bin/pidigits.gcc_run

k-nucleotide:

    $ g++ -c -pipe -O3 -fomit-frame-pointer -march=native -std=c++0x k-nucleotide_gpp3.c++ 
    $ patscc -DATS_MEMALLOC_LIBC -pipe -O3 -fomit-frame-pointer -march=native -std=c99 k-nucleotide.dats 
    $ time ./k-nucleotide_gpp3 < ~/Downloads/knucleotide-input.txt 
    
    real	0m0.177s
    user	0m0.300s
    sys	0m0.055s
    $ time ./k-nucleotide < ~/Downloads/knucleotide-input.txt 
    
    real	0m0.056s
    user	0m0.036s
    sys	0m0.020s
    $ ls -larth k-nucleotide k-nucleotide_gpp3
    -rwxrwxr-x 1 cody cody 38K Dec 13 19:07 k-nucleotide
    -rwxrwxr-x 1 cody cody 96K Dec 13 19:07 k-nucleotide_gpp3


[0]: https://github.com/githwxi/ATS-Postiats-contrib/tree/master/...

[1]: https://alioth.debian.org/forum/forum.php?thread_id=14942&fo...

[2]: https://groups.google.com/forum/#!topic/ats-lang-users/QdwKp...


I don't know much about ATS, but you've given me reason to investigate. Thank you!


Your comment says absolutely nothing, and that appears to be the goal. What is the point of writing any code in any language? It is a benchmark, the point of it is to write the fastest code you can. You aren't obligated to write lower level code in haskell, nobody has a gun to your head. So why do you pretend it is a problem, and claim other languages are "better tools" for that job despite haskell clearly being very good at that job?


>>It is a benchmark, the point of it is to write the fastest code you can.<<

The point of it is to show the performance differences, but that doesn't exclude a program like --

http://benchmarksgame.alioth.debian.org/u64q/program.php?tes...

-- or like --

http://benchmarksgame.alioth.debian.org/u64q/program.php?tes...

etc


It doesn't exclude "idiomatic" versions, but the data is presented so as to make them useless. How do I compare "idiomatic" rust to "idiomatic" haskell? All you can get is "here's the fastest versions from each language". If you want to compare any other version you have to do it on an individual basis, one at a time. So there's absolutely no reason for people to bother adding "idiomatic" versions.


"Idiomatic" in whose opinion? "Idiomatic" is a slogan not a well-defined property.

The reason for people to bother adding "idiomatic" versions, is that other people really do work through looking at the source code "on an individual basis, one at a time."


This exact thread came up last year when we started talking about the shootout and Haskell. And it's always the same people who bring up the shootout and defend the existing entries.

Please stop bringing up the shootout (especially when Haskell is mentioned) and dragging everyone through this conversation again.

Benchmarks are great to debate but this one is causing more harm than good; let's talk about other benchmarks.


>>Please stop bringing up the shootout…<<

I didn't!

Your accusation is completely wrong!

(And it hasn't been called that for over 7 years.)


>"Idiomatic" in whose opinion?

That's the point I'm making. People constantly dismiss the results they don't like as "unidiomatic" and embrace the ones they do as "idiomatic".

>is that other people really do work through looking at the source code "on an individual basis, one at a time."

Except that people don't bother adding them, so your reasoning for why isn't relevant. If you want to claim the shootout is useful for this, you need to allow for grouping benchmarks together to be presented against the "standard" ones. Let Bob and Sally submit "Bob and Sally version of Foo benchmarks" and let me compare them against Foo and against other languages.


>>Except that people don't bother adding them …<<

I already provided links to two programs and you're still saying that people don't bother adding them.


Do you think that is contradictory for some reason?


>I feel like in the case of some it's just parroting someone elses argument.

It usually is. The reality is that, like most memes of this kind, there's a grain of truth at the center, but most of the people repeating it don't know where the truth ends and the exaggeration begins.

Regex-dna is using low-level ByteString operations to squeeze out a bit of extra performance. This makes it quite a bit longer and would rarely be done in practice since the gain is small.

Fasta again does some low-level ByteString operations, but not nearly as significantly as regex-dna. Not that bad.

Others like fannkuch-redux are just using "unsafe" vector functions. All that is doing is leaving out bounds checks. That makes a minuscule difference in performance most of the time, but the way the shootout works there's only incentive to completely optimize for speed, so you get 1% optimizations you wouldn't normally see. I'd hardly call it unidiomatic though; you just normally use "read" instead of "unsafeRead", etc. In some cases I would say the unsafe calls are idiomatic. If you are going to be doing an operation repeatedly in a loop, doing the bounds checking once before looping and using the non-bounds-checked calls in the loop is pretty standard.
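
(For anyone following along from the Rust side, the same check-once-then-skip pattern looks like this; a sketch with a made-up function, using slice::get_unchecked:)

    fn sum_first_n(data: &[u64], n: usize) -> u64 {
        assert!(n <= data.len()); // one up-front bounds check
        let mut total = 0;
        for i in 0..n {
            // Safe because of the assert above; skips the per-element
            // check, much like "unsafeRead" next to "read" in Haskell.
            total += unsafe { *data.get_unchecked(i) };
        }
        total
    }

    fn main() {
        println!("{}", sum_first_n(&[1, 2, 3, 4], 3)); // 6
    }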

Pidigits is normal Haskell (although you could argue the lack of spacing isn't idiomatic, but that hardly matters).

I stopped there since the rest get enough slower than Rust, but looking them over there's still nothing outrageous, just more low-level ByteString usage. Overall none of them are that far from idiomatic, none of them do anything particularly crazy, and I think the speed gains from most of the optimizations they are doing are fairly small. I find it particularly odd that people act like the ability to write lower-level code that is faster is a problem with Haskell. Especially given that the "low level" Haskell versions are often still shorter and no harder to read than the "idiomatic" Rust versions.


>I'd hardly call it unidiomatic though, you just normally use "read" instead of "unsafeRead", etc.

Isn't "idiomatic" defined by what you "normally" use?


Does using a less commonly used function for its intended purpose make the code unidiomatic?


If the goal of using the less commonly used functions is performance, then it probably does.

I mean: it's not like the specific problem domain needed a "less commonly used function" (e.g. you need a function to do X, where X is something that you rarely need to perform).

Rather it's: "we need to code A, but we will use less commonly used functions instead of what we'd normally use for A, just to get more performance".

Plus, it's not like you merely exchange function f with g, while all other factors stay the same. The choice of those "less commonly used functions" also affects other aspects of the program's design (e.g. making it into a more imperative style, going for unsafe, opting for mutability, etc).

So the interesting question to call it idiomatic or not, for me, is:

"If ultimate performance wasn't a factor, would a Haskell programmer write this program in the same way?"


>Plus, it's not like you merely exchange function f with g, while all other factors stay the same

Yes, it is exactly like that. That was the point. It is literally changing out the name of a function. It has absolutely no effect on the rest of the code. Which is why the code is idiomatic by any reasonable definition.


If the only differences between the functions are the names and the performance characteristics, then why even bother having the slower versions? There has to be some difference somewhere, right?


Are you serious? Read the thread.


Ok, so perhaps I grant you the fact that unsafe indexing is idiomatic. The Rust code is still safer and more robust against buffer overflows in the long term. Also, isn't regex-dna using libpcre? That's a C library, no? Rust's regex library is pure Rust, with no unsafe code.

I love Haskell, but I still don't think it occupies the same space as Rust.


> I love Haskell, but I still don't think it occupies the same space as Rust.

My opinion is that Haskell can overlap, but something that occupies the same space as Rust would be Ivory[0] or ATS[1].

0: http://hackage.haskell.org/package/ivory

1: http://www.ats-lang.org/


>Also, isn't regex-dna using libpcre?

It is in virtually every language on the shootout. That's one of the reasons the shootout is so terrible. A regex benchmark doesn't tell you much of anything unless the task itself is to write the regex engine.

>I love Haskell, but I still don't think it occupies the same space as Rust.

Neither do I. I am simply correcting this weird meme that "Haskell is really slow and the shootout shows you have to write insane code to be fast", when the whole point of the shootout is to write the fastest code you can, the Haskell code is pretty ordinary and still significantly better than most other languages, and an "idiomatic" version is not 10 times slower, it is 10% slower.


> A regex benchmark doesn't tell you much of anything…

And yet, the programs don't all perform the same.

One regex task doesn't tell you much of anything because so much can be different with a different task. Then again, people are usually surprised by V8 and Irregexp.

The GHC version has been updated so many times since the last programs were contributed that I hope the code would look better if it were written to use the latest, greatest Haskell.


>And yet, the programs don't all perform the same.

Because they don't all use pcre. Duh?

>The GHC version has been updated so many times since the last programs were contributed, that I hope the code would look better if it was written to use the latest greatest Haskell.

Why on earth would the code change because of a new compiler release? Do people rewrite the C one every time a new GCC is released? And the code looks fine, what "better" do you want?


>>Because they don't all use pcre. Duh?<<

"Duh?" because you haven't looked at the code?

http://benchmarksgame.alioth.debian.org/u64q/program.php?tes...

http://benchmarksgame.alioth.debian.org/u64q/program.php?tes...

>>Why on earth would the code change because of a new compiler release?<<

New GHC releases (and libraries) have in the past provided new ways to write the code.

>>what "better" do you want?<<

Any "better" that 6 or 7 releases has provided.


>"Duh?" because you haven't looked at the code?

Me: "Not all use pcre" You: "Two that use pcre"

What on earth is that supposed to show?

>Any "better" that 6 or 7 releases has provided.

You have some weird ideas about haskell/ghc.


>>What on earth is that supposed to show?<<

Those two that both use pcre don't perform the same.

>>You have some weird ideas about haskell/ghc.<<

If I have, I got them from seeing programs being re-written when new stuff became available.


You got them from seeing someone write a faster version. The part where you thought it had anything to do with "new stuff" is entirely your own imagination.


Idiomatic, typical Rust code does not involve a garbage collector, which makes it suitable for embedding in environments with a garbage collector as well as any application with real-time (no pauses) requirements.


Do you have an opinion on how Rust compares to Go?


Rust and Go do not overlap much, other than that they are both natively compiled, have curly braces, and are backed by 'web' companies. Go was designed for building server software and for encouraging standardized, easy-to-maintain software. Rust, on the other hand, is generally targeted at the same areas where C and C++ are used today.


The main similarity is that Go was called a systems language for some reason, and the idea is now cached in people's minds.


Could you explain what differences make Go optimized for server software and Rust for systems software/C replacement? I'm not being rhetorical -- I genuinely don't understand and would like to.

Related: Would Rust be a bad choice for backend web development (ie, an alternative for Ruby/Python) and why or why not?


The mandatory garbage collection by default in Go would be a good example of a trade-off they make differently.

A Rust programmer must be aware of the complexity of memory management. Although they are not forced to use all of it, the standard library exposes enough of it to force every Rust programmer to be comfortable with that complexity.

In return, they get the opportunity to write code that can compete with C/C++ in terms of performance and control over latency, while still retaining safety.

>Would Rust be a bad choice for backend web development (ie, an alternative for Ruby/Python) and why or why not?

Depends on how much you care about maturity and what timescale the code has to run in production.

Rust code will be much faster than equivalent Ruby/Python code, but its memory management will be more complicated. I would argue Go offers a better trade-off between performance, complexity, and maturity for those types of projects, even though Rust is a much more interesting language.

So, if you can afford to not micro-optimize your allocation strategies and just use a garbage collector, I would always choose to do so. Even with a language design as pretty as Rust's.


It's not mature enough to be a contender for Ruby/Rails or Python/Django yet, but assuming it gets there, it may offer incredible performance. Teepee (a Rust HTTP library under development) put together a page that explains the current state of affairs: http://arewewebyet.com/


I think some people use "systems language" to mean a language you would write services and back-end code in, like Java, rather than a language you would use to write a kernel driver or a render engine, like C.

It confused me a bit initially.


Thanks for that reply. It hasn't made it much clearer in my mind, though, because C and C++ can be used to write fast web servers.


I replied to this a bit on the other HN thread on this topic: https://news.ycombinator.com/item?id=8741987

TL;DR embedding small libraries with high-performance or low-memory requirements into programs written in high-level languages.


How much of the Ruby C extension API boilerplate do you have to write to get Rust to interop with Ruby nicely? I've played with this a bit but still found myself writing a bit of C to wire it all together. It would be nice to hear your approach to this.


wycats has actually been teasing us for a while with a library for exactly that purpose. Here's hoping he gets to releasing it soon. :)


Good to know. I look forward to him releasing that. I was about to start writing one myself, which would probably have turned out really ugly.
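
In the meantime, one way to avoid the C shim entirely is to expose a plain C-ABI function from Rust and attach it from Ruby with the "ffi" gem. A minimal sketch; the library and function names here are made up:

    // lib.rs, compiled as a dynamic library. No Ruby C API involved:
    // just an exported C-ABI symbol that Ruby's ffi gem can attach to.
    #[no_mangle]
    pub extern "C" fn fast_sum(ptr: *const i64, len: usize) -> i64 {
        // Rebuild a slice from the raw pointer/length Ruby hands us.
        // (A real binding would validate the pointer first.)
        let nums = unsafe { std::slice::from_raw_parts(ptr, len) };
        nums.iter().sum()
    }

    // Ruby side, using the ffi gem:
    //
    //   module Native
    //     extend FFI::Library
    //     ffi_lib './libnative.so'
    //     attach_function :fast_sum, [:pointer, :size_t], :int64
    //   end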


Since you know a lot about web frameworks: do you see Rust becoming a good language for building a modern web framework anytime soon?


One of the nice things about Rust is that it provides a lot of high-level abstraction capability in a way that doesn't impose any cost on the abstraction.

This means that, in theory, one could make a pretty good web framework in Rust that would be both quite fast and ergonomic. That said, writing those abstractions will take time, so we'll probably see lower-level HTTP libraries, followed by Express-style abstractions, followed further on by full-stack solutions like Rails.

The earlier parts of the stack are coming along quite well, so give it a few years and I think we'll have a pretty good story to tell. In the meantime, you can use Rust today very effectively as a language that you embed inside a high-level language, and I think a sizable chunk of the Rust community will use Rust for those kinds of embedding use-cases in perpetuity.
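
To make "Express-style" concrete, here's a toy sketch of what such a layer might look like in Rust (purely illustrative, not any real framework's API):

    use std::collections::HashMap;

    // A toy, Express-style router: handlers are ordinary closures,
    // registered and dispatched via a plain hash-map lookup.
    struct Router {
        routes: HashMap<String, Box<dyn Fn() -> String>>,
    }

    impl Router {
        fn new() -> Router {
            Router { routes: HashMap::new() }
        }

        fn get<F>(&mut self, path: &str, handler: F)
        where
            F: Fn() -> String + 'static,
        {
            self.routes.insert(path.to_string(), Box::new(handler));
        }

        fn handle(&self, path: &str) -> String {
            match self.routes.get(path) {
                Some(handler) => handler(),
                None => String::from("404 Not Found"),
            }
        }
    }

    fn main() {
        let mut app = Router::new();
        app.get("/", || String::from("hello from Rust"));
        println!("{}", app.handle("/"));  // hello from Rust
        println!("{}", app.handle("/x")); // 404 Not Found
    }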


Having programmed a bit of Rust: while Rust will definitely be a very capable language for web frameworks/applications, I don't think it'll ever be the most popular choice.

To be honest, if you think about your Rails controllers/models: 1. There isn't really all that much there (fat models/controllers are pretty slim when your language was built for web browsers/operating systems). 2. Think about your priorities. I can guarantee it's not performance or safety. You want to build a content-rich, feature-full application. Basically, prioritize iteration speed, at all costs. No matter how good the Rust frameworks get, I don't think Rust will be the "iteration speed" language.

Meanwhile, in the spirit of wycats' comment, imagine if ActiveRecord, Unicorn, Rack, the template renderers, etc. were all built with Rust but still used through a Ruby interface. That would be quite the performance improvement for Rails, and it would be interesting to compare its performance characteristics to a fully-Rust web framework/application.


From the parent answer I would presume he thinks it's more useful as a "bottleneck fixer" in high-level web frameworks (when the bottleneck is the language of the framework).

For example, you might implement a few parts of your rails app with rust to make them faster.


Yes, I have high hopes for Rust as an excellent way to write natively implemented functions I can call from Elixir or Erlang.


The best reason is that it's getting released soon.


Congratulations!


Congratulations to both of you!


Very exciting!



