Erlang may be useful for coordinating computation tasks, but, yes, even with HiPE it is not a good numerical language on its own.
It would be interesting to re-engineer a language today that tries to fit into Erlang's niche but has a stronger performance focus. Rust, Cloud Haskell, and Go all sort of cluster in the area I'm thinking about, but none are quite what I'm thinking of. Cloud Haskell is probably closest, but writing high-performing Haskell can be harder than you'd like. Rust shows you don't have to clone Erlang's immutability and all the associated issues it brings for inter-process safety, but Rust of course is "just" a standard programming language next to what Erlang brings for multi-node communication.
This is an easily solved problem. You've been around, so what I'm about to tell you is nothing new, but...
In the aughts, Ruby and Python were really slow, so if you had computation-heavy problems you had to drop into C.
It worked, but C isn't great. The thing is, we now have a lot of languages that can handle computational problems easily - Rust, Go, Nim, and the list goes on...
It becomes relatively trivial to create libraries that wrap these languages for Erlang/Elixir. In the few cases where Elixir isn't fast enough, just drop down to something else: write a small piece of code in Rust (you may as well call Elixir and Rust peanut butter and jelly). Optionally, you could create some kind of DSL in Elixir which compiles down to another language. I did the same with xjs (Elixir syntax, JavaScript semantics) [0], and I must say, for a 200-line hack it works really well.
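To make that concrete, here is a minimal sketch of the Elixir side of such a wrapper, assuming the Rustler library and made-up module/crate names (the companion Rust crate would live alongside the Mix project):

    defmodule MyApp.Native do
      # Rustler compiles the companion Rust crate and loads it as a NIF.
      use Rustler, otp_app: :my_app, crate: "myapp_native"

      # Stub that the Rust implementation replaces once the NIF is loaded;
      # if loading fails, callers see :nif_not_loaded instead of a crash.
      def fast_sum(_list), do: :erlang.nif_error(:nif_not_loaded)
    end

From the BEAM's point of view MyApp.Native.fast_sum/1 is just another function call; the heavy lifting happens in Rust.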
This isn't even considering the fact that a lot more work could be invested in Erlang's VM. Yes, Ericsson is its corporate sponsor, but imagine if you had companies trying to make it fast the way they do with Ruby, Python, JS, or any number of other, more complicated languages.
I think this is relatively low-hanging fruit. You can't tell me that Erlang and Elixir are harder to make fast than other dynamic languages (in most contexts).
The drives for fault-tolerance and distributed computing on the one hand, and speed on the other, are all becoming prominent, leaving Erlang in a good place right now, but in need of other major changes in the computing world. Wrapping languages or 'dropping down to C' is no longer going to cut it, even if it is low-hanging fruit. Rust's or Pony's guarantees only hold if you stay in their pen. We need a way of marrying PLs like Erlang/LFE/Elixir and Pony to newer hardware paradigms to take advantage of all those multicores, and potentially the custom FPGA and ASIC rigs that will be arriving to market.

Why Erlang matters is that it showed you can allow for failure, albeit brief and inconsequential failure, and still succeed. There is no zero risk or zero failure here, just acceptable bounds that are easy to see now but were revolutionary at the time.

Custom hardware is already in use at HFT and bitcoin-mining companies. The U.S. is going to try to beat China's Tianhe-2, currently the world's fastest supercomputer. I'm not sure why, since the Chinese scientists say it would take a decade of programming to utilize the potential of the Tianhe-2's hardware. If you think I'm calling the spirit of Lisp Machines from the dead, you're close ;) I think the von Neumann hardware architecture, and the type of OS straddled across it, are straining at the edges of high-stakes usage, not for the common user. We don't need supercomputers; we need new hardware architectures at a lower level than 'super' that can be programmed in months, not decades.

Programming languages in the OTP/BEAM category, and old, battle-tested languages like APL and J, which have always dealt with the array as their unit of computation, will be the basis for new languages, or they will be adapted into a new one. The money, big data, and mission-critical business needs will drive it to market.
You could write NIFs in Rust (well not sure if you can now, but I don't see any reason it couldn't be supported) for the high perf bits and use Erlang to coordinate, I figure. At least Rust code is less likely to explode and bring down the whole VM than C.
And yes, I am doing this; the personal reason is fault-tolerance, and the professional reason (or how I get the time to do it) is based on security criteria.
Any idea if a network-level FFI has been started? I'm thinking along the lines of the Haskell erlang-ffi [0].
> Speaks the Erlang network protocol and impersonates an Erlang node on the network. Fully capable of bi-directional communication with Erlang.
NIFs still limit you to < ~1ms computations, from what I understand, but impersonating a node (on another machine, even) seems a lot more flexible. Just wondering; NIFs in Rust are still a great idea.
There's support for "dirty NIFs" in newer versions; R19 will make it the default. Dirty NIFs allow for long-running NIFs managed by the VM. In older versions, you can use enif_thread_create to create background workers, and your NIF will only block for as long as it takes to acquire a lock on your queue.
You can also use c nodes (or the JVM interface, which is pretty similar to c nodes I think).
... You can also use ports, which define an interface for communicating with external OS processes.
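For what it's worth, here's a minimal sketch of a port from the Elixir side, using cat as a stand-in for a real external program:

    # The external program runs as a separate OS process, so if it
    # crashes it cannot take the VM down with it.
    port = Port.open({:spawn, "cat"}, [:binary])
    Port.command(port, "hello from the BEAM\n")

    receive do
      {^port, {:data, data}} -> IO.puts("external program echoed: #{data}")
    end

    Port.close(port)

The replies land in the owning process's mailbox, so a port slots straight into the usual receive loops or GenServers.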
You can't really achieve Erlang's goals with a statically typed language; at the least, it would be very hard to make it easy to intuit whether a live reload will be sound.
A language that allows mutable state anywhere other than at process scope is also a no-go. In Erlang you're not supposed to think about which machines your code will run on as you write the business logic, so you have to assume that every process is running on a separate machine with no shared memory; hence no mutable state above the process level. And mutable state at local scope makes hot-swapping code messier, although that's easier to work with than static typing.
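As a minimal Elixir illustration of "state only at process scope" (Agent is just a thin wrapper around a GenServer holding state in its own process):

    # The counter's "mutable" state lives inside a separate process;
    # the only way to touch it is by sending that process a message.
    {:ok, counter} = Agent.start_link(fn -> 0 end)

    Agent.update(counter, &(&1 + 1))   # ask the process to apply a function
    Agent.get(counter, & &1)           # => 1

Nothing outside the process can reach that state directly, which is what lets you pretend every process might be on its own machine.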
Nothing about Erlang is inherently slow. Someone could hire a bunch of developers from V8 or SpiderMonkey, or maybe just Mike Pall, to write a better Erlang runtime.
Live reload is a funny thing with Erlang. When I claim it's a feature, but describe the pain it is to use, I get told nearly nobody uses it. When I claim that nobody uses it, I get told that lots of people use it. I'm not sure it's something that an Erlang competitor would have to get right. And it would be valid to use a different mechanism for live reloads, perhaps something that explicitly migrates state between OS processes instead. At the very least I think the Erlang community would have to agree that it's a dodgy, improvable process.
"A language that allows mutable state aside from at process scope is also a no-go."
Well, I did say the language I'm spec'ing probably doesn't exist. Rust is an interesting example of what can be done to make it so that not actually copying memory is safe, but you'd still have to do some work to make it do the copying more transparently across nodes, which is why I said it's "just" a regular language from Erlang's point of view.
"Nothing about Erlang is inherently slow."
I now believe that dynamically-typed languages that are not built for speed from the beginning (LuaJIT being pretty much the only reason I even have to add that parenthetical) are inherently slow. I've been hearing people claim for 20 years that "languages aren't slow, only implementations are"; I've even echoed this myself in my younger days. Yet (almost) none of the dynamic languages go faster than "an order of magnitude slower than C with a huge memory penalty" even today, after a ton of effort has been poured into them. Some of them still clock in in the 15-20x-slower range. Erlang is a great deal simpler than most of them, and I don't know whether that would net an advantage (fewer things to have to constantly check dynamically, although "code reloading" works against some of these) or a disadvantage (less information for the JIT to work with). Still, at this point, if someone's going to claim that Erlang could go C speed or even close to it in the general case, I'm very firmly in the "show me and I'll believe it" camp.
At some point it's time to just accept the inevitable and admit that, yes, languages can be slow. If there is a hypothetical JS or Python or PHP interpreter that could run on a modern computer and be "C-fast", humans do not seem to be capable of producing it on a useful time scale.
> When I claim it's a feature, but describe the pain it is to use, I get told nearly nobody uses it. When I claim that nobody uses it, I get told that lots of people use it.
Heh, it is funny. I might have an idea why. It is hard to use regularly for day-to-day releases, simply because building correct appups and so on takes a lot of time. Most systems are already decomposed into separate nodes and can handle single-node maintenance, so that is what we do, at least: take a node down, upgrade, bring it back up. Care has to be taken to handle mixed versions in a cluster, but that is easier than a proper, 100% clean hot upgrade.
But having said that, I have used hotpatching by hand probably 5-6 times in the last couple of months, once on a 12-node live cluster. That was to fix a one-off bug for that customer before having to wait for a full release; another time it was to catch a function_clause error that was crashing a gen_server, and so on. It is very valuable having that ability.
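For anyone curious, hotpatching by hand can be roughly this simple (an Elixir-flavored sketch with a made-up module name; the same calls exist from an Erlang shell):

    # Grab the object code for the freshly compiled, fixed module...
    {mod, bin, file} = :code.get_object_code(MyApp.Worker)

    # ...and load it on every connected node in the cluster, plus this one.
    {_replies, _bad_nodes} =
      :rpc.multicall([node() | Node.list()], :code, :load_binary, [mod, file, bin])

Processes running the old version keep going until their next fully-qualified call into the module, at which point they pick up the new code.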
> Still, at this point, if someone's going to claim that Erlang could go C speed or even close to it in the general case, I'm very firmly in the "show me and I'll believe it" camp.
It doesn't matter if it goes C speed; it has the fault tolerance, an expressive language, it is battle-tested, and it has good runtime inspection and monitoring capabilities. If someone came one day and said you lose all those but you gain C speed, I wouldn't make that trade.
I think you are seeing two definitions of "live reload".
One is where you live-upgrade a full running release, including all applications and their versions, migrating any state whose format has changed. All this in production, without any downtime. This is incredibly hard to get right. Erlang gives you a lot of tools (OTP & friends) to achieve this, but it is still very complex.
The other is reloading Erlang code in a running system, i.e. recompiling and reloading one or several modules. This is usually done during development (see Phoenix for Elixir, for example) or perhaps even in production when you know what you're doing. This is relatively easy, with some risks of course if you are doing it in production.
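In development this is just the usual IEx workflow (module name invented for illustration):

    # In an IEx session started with `iex -S mix`:
    iex> r MyApp.Worker    # recompile and reload a single module
    iex> recompile()       # or recompile the whole Mix project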
I haven't seen it used directly, but it seems like Elixir macro based code could be altered and recompiled based on runtime configuration.
An example would be changing log-level settings. Normally, Elixir log calls can be compiled out entirely when running in production mode. But it should be possible to fairly safely recompile with debug logs enabled and reload without missing a beat.
We do this in our project, for two reasons: one for logging (as you mentioned) and the other for configuration (compiling configuration into a module for efficiency reasons). The Elixir primitives make this a breeze.
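For anyone who hasn't seen the trick, here's a hypothetical sketch of compiling configuration into a module at runtime (names invented for illustration; the commenters' actual approach may differ):

    defmodule MyApp.ConfigCompiler do
      # Rebuild MyApp.CompiledConfig so that lookups become plain function
      # calls over literal data instead of Application env or ETS lookups.
      def recompile(settings) when is_map(settings) do
        body =
          quote do
            def get(key), do: Map.get(unquote(Macro.escape(settings)), key)
          end

        # Defining the module again hot-swaps the old version in the running VM.
        Module.create(MyApp.CompiledConfig, body, Macro.Env.location(__ENV__))
      end
    end

After MyApp.ConfigCompiler.recompile(%{log_level: :debug}), a call to MyApp.CompiledConfig.get(:log_level) returns :debug at the cost of a single function call.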
I don't know if live reload is widely used or not, but the other features of Erlang allow you to create systems that see decades of uptime. Without code hot-swapping, though, your uptime is limited by how often you ship new code. Typically a distributed application is made of independent programs on various machines that you can upgrade, spawn, and kill at your leisure. In Erlang, your entire distributed application is kind of like just one program, and what was an executable in the traditional model is now a module, so in order to match the capabilities of a traditional system you need to be able to upgrade modules without killing everything.
---
There are a number of other dynamically typed languages that have fast implementations: JavaScript has a few, and Common Lisp, Self, and Julia, to name some others. They'll never be as fast as C, certainly not when comparing highly optimized programs, but they're fast. It looks like most dynamically typed languages can be made to run around 10x slower than C. Compare that to CPython and HiPE, which are more like 100x slower.
I don't think code reloading would hurt JIT performance too much. The prerequisites for runtime specialization of procedures basically accommodate everything hot swapping would need. I also think the way people use Erlang's type system is probably more amenable to conservative type inference than in the existing fast dynamic languages, and that's one of the more important metrics.
> yet (almost) none of the dynamic languages go faster than "an order of magnitude slower than C with a huge memory penalty" even today, after a ton of effort has been poured into them
Interesting. What are the exceptions you have in mind to warrant the "(almost)" qualifier? You mention LuaJIT. I've also heard that Q/KDB is quite fast. Anything else?
Agreed. It'd be really interesting to see such a language.
One language that I think is woefully underappreciated, given how ubiquitous it is, is GLSL. With OpenGL 4 compute shaders you get surprisingly close to general-purpose use for the kind of tasks that benefit from massive parallelism. And GLSL is really quite a nice language; driver bugs are the main thing holding it back.
Sure, it's not very good for task parallelism (though some of the extensions AMD is introducing for APUs are very interesting!), but if you've got an embarrassingly data-parallel problem, you can't beat its performance.
That performance is largely dependent on drivers + HW though, right?
Then again, I'm used to mobile GPUs, where any conditional statement used to cause the shader to be evaluated 2^n times (once per branch combination) with the results gathered at the end, a.k.a. forget about any branching.
For my 2c, I'm a fan of Elixir + Rust; Rust has a nice C ABI that should make it easy to embed.
Pony [http://www.ponylang.org/] follows the actors-everywhere model and has a strong performance focus. Not sure how well it fits into Erlang's niche, but at least the built-in actor model means a lot of Erlang idioms and design patterns should port over easily.