
Erlang is glacially slow. Even on a 20-core machine, a multithreaded Erlang implementation will usually be trounced by a good singlethreaded C++/Go/Java implementation. All this stuff about multicore scaling is baloney - who cares if it scales when it's still slow?


> multithreaded Erlang implementation will usually be trounced by a good singlethreaded C++/Go/Java implementation

And a Go/Java implementation can probably be trounced by hand-written assembly and ASIC accelerators too.

> All this stuff about multicore scaling is baloney

You say baloney, I say money in the pocket. I've seen it scale, I've seen it work reliably in large clusters, and I've been able to inspect, debug and hotpatch systems while they were still running. I have seen systems whose non-critical components crashed and auto-restarted for days without impacting customers and without needing teams of "devops" to babysit them.
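
To make the crash-and-restart point concrete, here is a minimal sketch of an OTP supervisor; the module and child names (my_sup, my_worker) are made up for illustration:

    %% my_sup.erl - a one_for_one supervisor that restarts its worker
    %% automatically when it crashes (names are illustrative)
    -module(my_sup).
    -behaviour(supervisor).
    -export([start_link/0, init/1]).

    start_link() ->
        supervisor:start_link({local, ?MODULE}, ?MODULE, []).

    init([]) ->
        %% restart a crashed child, tolerating up to 5 crashes in 10 seconds
        SupFlags = #{strategy => one_for_one, intensity => 5, period => 10},
        Worker = #{id => my_worker,
                   start => {my_worker, start_link, []},
                   restart => permanent},
        {ok, {SupFlags, [Worker]}}.

If my_worker dies because of a bug in some rarely exercised code path, the supervisor restarts it and the rest of the system keeps serving traffic, which is exactly the behaviour described above.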

Moreover, I've seen single- and multi-threaded C++ and Java applications with threading bugs and data races which take weeks or months to find. Or they are screaming fast until they take a nosedive and segfault (often in some stupid minor new feature which nobody uses). You know what the transaction processing rate of a segfaulted process is? 0 tps.

That's why a team like WhatsApp's could get by with only 10 or so back-end engineers handling billions of messages a day from all sorts of devices, new and old, while other companies need 10x or even 20x more than that.

It is not just about being able to run fast. Assembly runs very fast. It is also about having the right tools and abstractions to express the problem. Erlang has those, and they come built-in (the OTP library, the distribution protocol, etc.). C++ doesn't, so you have to start from the STL and Boost and so on, then bootstrap serialization, monitoring, supervision, etc. yourself.
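
As a rough sketch of what "comes built-in" means for the distribution side (node names are illustrative, and both nodes are assumed to be started with e.g. `erl -sname ...` and a shared cookie):

    %% from the shell of node 'a@host': reach node 'b@host' and run code there
    pong = net_adm:ping('b@host'),
    Remote = spawn('b@host', fun() ->
        receive {From, Msg} -> From ! {self(), {echo, Msg}} end
    end),
    Remote ! {self(), hello},
    receive {Remote, {echo, hello}} -> ok end.

Getting the equivalent in C++ means choosing a serialization format, a transport, a registry and a supervision story, and wiring them together yourself.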


speed isn't a scalar. sometimes you care about latency. sometimes you care about throughput. sometimes you care about raw arithmetic.

http://www.phoenixframework.org/blog/the-road-to-2-million-w...


Because at some point the volume of 'work' you process will be more than what a single CPU can handle.

I think of it like big corps vs. startups. Startups may have more efficient engineers and development cycles, and may produce higher-quality work because of how selectively they hire, but the sheer amount of work a big corp can get through is significantly greater.

(Just an example, not saying big corps don't have talent - far from it)

When designing your system you need to decide which model it needs to follow. There are places for both.



