Using Node.js again instead of my usual Elixir and all I can think is "Y'all live like this?".
I feel for anyone using a Node.js backend. Y'all really don't know what you're missing.
Also, dealing with bugs is infinitely harder. Try/catch is so much more wordy and messy. And you lose out on so many valuable CI steps the Elixir community has, from Sobelow to Credo.
And then there's writing tests. In the Node.js community it is an actual joke. It is so hard to write tests and I have yet to see any framework create a nice sandboxed test environment that just works.
All of this stuff in Elixir is free.
And like someone else said, the best ORM is Ecto and it's hard to disagree with that.
Making a reliable and organized Node.js application is actually so difficult.
If my pet project was in Node.js I would not even be able to accomplish it. And that's a fact. Elixir makes me so much more productive and the code I write is so much less buggy and less complex.
I love that HN loves Elixir. If only it was more common in the workplace.
Were you using Typescript with Node? I really can't imagine working on a project that's even moderately complex without static typing. Having a static type checker is like having an assistant that writes automated tests on its own, and having those tests constantly running in the background.
Refactoring code without static type checking is a nightmare. If you're using a strict statically typed ORM like Prisma [1], you can change your data model then run tsc --noEmit --watch in your terminal and the compiler notifies you of every error with their associated file names and line numbers. In VSCode you can even click the line number and it'll open the specific file where the error occurred. Before adopting Typescript, I basically never attempted large refactors, and when I did it consisted of a long process of grepping for specific symbols in the project, and manually writing lots of tedious and verbose tests to do a fraction of what the compiler does automatically.
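To make that concrete, here's a minimal sketch (the `User` type and `contact` function are hypothetical, not from any real project): if you rename `email` in the type, `tsc --noEmit` flags every usage below with a file name and line number, no hand-written tests required.

```typescript
// Hypothetical data model -- imagine this was generated from a schema.
type User = { id: number; email: string };

// If `email` is renamed to `emailAddress` in the type above, the
// compiler immediately reports an error on the line below.
function contact(user: User): string {
  return `mailto:${user.email}`;
}

console.log(contact({ id: 1, email: "jose@example.com" }));
// → mailto:jose@example.com
```

Running this in watch mode (`tsc --noEmit --watch`) turns every refactor into a checklist of compiler errors to burn down.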
I remember developers from the Elm community years ago constantly advocating for static typing, and thinking "this has to be hyperbolic". Richard Feldman gave a talk about how he wrote code on an airplane for hours without an internet connection, and without actually running the code, and how the compiler ensured that the code just worked [2]. Typescript is not as sound as a Hindley-Milner type system, but the benefits are similar (especially if you're running Typescript in strict mode).
Or in other words, the presence of a static typing ecosystem is one of the advantages Node has over Elixir. So while the original commenter might not care that Elixir lacks static typing, they should care because it increases productivity.
The only reason I would want types is for specific error types that I could ideally match against, instead of having them raised (as happens in Node.js or Typescript).
In Elixir you have the option of calling functions that raise vs functions that return an error tuple, so there's no contention there: by convention, function() returns an error tuple while function!() raises.
Elixir also has this idea of a typespec and I have never felt I needed more, personally.
Typespecs in Elixir (combined with Dialyzer) give you a very limited version of what TypeScript offers. From what I understand Dialyzer is designed to only raise an error when it's absolutely certain that you're mistaken about the types in your program. While this does prevent Dialyzer from complaining about false positives, it only catches the most trivial of bugs [1].
Also even with Elixir's typespecs, static typing is only fully useful if the entire ecosystem around a language embraces it. If some of the libraries you use don't ship with type definitions, it's going to be difficult for any static analysis tool to do a decent job. Almost every major JS library either ships with its own types, or has quality third party types available via Definitely Typed. It doesn't seem like the Elixir community has really embraced static typing to the same extent that TypeScript developers have. The creator of Phoenix for example hates dialyzer [2].
I was referring to Elixir's blog post on static types where the creator of Elixir José Valim wrote: "The Dialyzer project, implemented in Erlang and available for Elixir projects, chose to have no false positives. However, that implies certain bugs may not be caught. At this point in time, it seems the overall community would prefer a system that flags more potential bugs, even if it means more false positives." [1]
From my experience with TypeScript, you really want your type checker to be eager about complaining about type issues, which is why so many people turn on strict mode. In fact, occasionally I'll have to suppress a false positive error with a @ts-expect-error comment.
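As a small illustration of why that eagerness pays off (a sketch with made-up names, not from the discussion): under `strictNullChecks` the compiler refuses to let you call a string method on a possibly-undefined value until you handle that case.

```typescript
// With `strict` mode on, calling msg.toUpperCase() without the guard
// below is a compile error, because `msg` may be undefined.
function shout(msg?: string): string {
  if (msg === undefined) return "(silence)";
  // Here the compiler has narrowed `msg` from `string | undefined`
  // down to `string`, so the method call is allowed.
  return msg.toUpperCase();
}

console.log(shout("hello")); // HELLO
console.log(shout());        // (silence)
```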
But even assuming Dialyzer is as good as TypeScript, have you found that the libraries you use actually have quality type definitions? The few times I've used a JS library without type definitions it ended up breaking type inference, and infecting the codebase with the any type, which renders the type checker basically useless. How has your experience been in regards to that with Elixir?
> have you found that the libraries you use actually have quality type definitions?
I have some experience with Typescript and it was way better than vanilla JavaScript. Typescript type definitions were available for most libraries, though for a couple they were not up to date. Similarly for Elixir, the coverage for nearly all APIs is pretty good but also has some holes. In my experience it hasn’t been a big deal either. Any serious library developer puts types alongside their function signatures/implementation.
> The few times I've used a JS library without type definitions it ended up breaking type inference, and infecting the codebase with the any type, which renders the type checker basically useless. How has your experience been in regards to that with Elixir?
Because Elixir is a functional language, Dialyzer can easily make inferences on untyped code based on ‘success typing’ (what José talks about), though it can make for some very obtuse and verbose warning messages. This inference is in addition to the safety provided by typespecs, which Elixir library developers typically intermingle with their API surface.
If I would level criticisms at Dialyzer, it would be its sometimes difficult to read warnings, its speed (despite being multithreaded) and the race conditions in the VS Code plugin (which is looking for extra maintainers – if I had time I would help). For the weird warnings I find I just need to add extra typing to my code to narrow it down, which I should be doing anyway.
> If I would level criticisms at Dialyzer, it would be its sometimes difficult to read warnings, its speed (despite being multithreaded) and the race conditions in the VS Code plugin (which is looking for extra maintainers – if I had time I would help).
One of the advantages of TypeScript is that VSCode is written in TypeScript, and both VSCode and TypeScript are developed by the same company, so there's a really nice synergy there. I imagine Kotlin users feel the same way using Jetbrains products, and Swift users feel the same way about XCode.
Dialyzer looks interesting, but I can't imagine giving up on the expressiveness of TypeScript. Some of the things you can do with generics, mapped types, intersection types, template literal types, conditional types, and utility types are almost mind boggling. It's difficult to reap all of the benefits of static analysis without some of these advanced type operators. The type manipulation section of the TS manual is really underrated.
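A tiny taste of what mapped types buy you (an illustrative sketch; the `Account` model is hypothetical): you can derive a whole new shape from an existing type without restating its fields.

```typescript
// Hypothetical model type for illustration.
type Account = { id: number; name: string; email: string };

// A homemade mapped type: every field becomes optional and read-only,
// roughly what combining the built-in Partial<T> and Readonly<T> gives.
type Patch<T> = { readonly [K in keyof T]?: T[K] };

// A valid patch may set any subset of Account's fields; a typo'd or
// extra field would be rejected at compile time.
const p: Patch<Account> = { name: "new name" };
console.log(Object.keys(p).length); // 1
```

If `Account` gains or loses a field, `Patch<Account>` tracks it automatically.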
Someone for example wrote an SQL parser in TypeScript that requires no runtime code [1]. It can infer the types of an SQL query's result based on an SQL string without any runtime code execution. There was a similar project where someone built a JSON parser entirely using the type system [2]. There's also an ongoing discussion on Github about the fact that TypeScript's type system appears to be a Turing-complete language, with some other cool examples [3]. My point is that the type system is incredibly expressive. You rarely run into an idiom that can't be typed effectively.
The SQL stuff sounds cool, but Ecto is so expressive I don’t even need to resort to raw SQL like I do in other languages. Elixir typing can handle a subset of intersection types, which is rather niche, and of course handles union types, which are more prevalent. Generics and inheritance are replaced with a more permissive style of polymorphism with Elixir behaviors and are also part of the typing system.
All languages have their own flavor, and their own pros and cons, and if the advanced types of Typescript work for you, then great! In my own experience, I have found Elixir and its typing capabilities to work well for me.
> The SQL stuff sounds cool, but Ecto is so expressive I don’t even need to resort to raw SQL like I do in other languages.
The SQL example I linked to isn't something you'd use to interact with a database in production, for that you'd probably reach for an ORM like Prisma. I was just trying to demonstrate the level of type inference you can achieve with TS. Going from a pure string of SQL or JSON to a concrete type without actually executing any runtime code is pretty crazy.
> Elixir typing can handle a subset of intersection types, which is rather niche
I personally use intersection types quite a bit. If union types are like the logical or operator, then intersection types are like the logical and operator. Being able to define a type that combines one type and another is not a niche workflow for me.
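A minimal sketch of that AND/OR analogy (the types here are invented for illustration):

```typescript
type Timestamped = { createdAt: string };
type Named = { name: string };

// Intersection (&) behaves like logical AND: the value must satisfy both.
type NamedRecord = Timestamped & Named;
// Union (|) behaves like logical OR: either shape is acceptable.
type EitherRecord = Timestamped | Named;

// Omitting either field here would be a compile error.
const rec: NamedRecord = { createdAt: "2023-01-01", name: "demo" };
// One of the two shapes is enough for the union.
const maybe: EitherRecord = { name: "demo" };

console.log(rec.name, "createdAt" in maybe); // demo false
```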
> In my own experience, I have found Elixir and its typing capabilities to work well for me
Can't argue with that! Everyone has their preferences.
> Going from a pure string of SQL or JSON to a concrete type without actually executing any runtime code is pretty crazy.
Going from a JSON string to a type for it is actually one of the easier examples of inference I can imagine. JSON is a data description format in which all the base types are syntactically distinguishable; it has no variables, no arrows (i.e. functions), no generics. On the topic of type inference, you can't ask for a much easier example.
SQL is more complex, indeed, but still doesn't seem too crazy if you have access to table schemas. It's also a matter of whether triggers and stored procedures are taken into account, but I assume they're not.
There's a lot of prior art described in literature as well as practical programming implementations with much crazier, yet successfully working type inference.
I just want to make sure we're on the same page here. The JSON example I linked to isn't inferring the types of JSON that's already been parsed and deserialized, that would be trivially easy in any language (including TS). If I have an object that's been parsed from JSON, I can just use the typeof operator in TypeScript to infer the type of that object.
The example I linked to is taking a serialized JSON string, and parsing the literal characters in the string (characters like double quotes, commas, whitespace, etc) into a type, purely using type annotations. And the structure of that JSON can be of arbitrary nested depth.
All of this is accomplished using template literal types which allow you to make assertions about the contents of a string. In TypeScript you can assert more than just "this value should be a string". You can make detailed assertions about the structure of strings, and that's what allows these parser demos to exist.
When you combine these template literal types with recursive types, conditional types, and TypeScript's infer keyword you can do some pretty interesting type level programming.
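Here's a miniature of that machinery (a self-contained sketch, not taken from the linked parsers): a recursive conditional type that splits a string on commas entirely at the type level, using a template literal pattern and `infer`.

```typescript
// Splits a string into a tuple of its comma-separated parts, purely in
// the type system. `infer Head` binds everything before the first
// comma; the type then recurses on the rest.
type Split<S extends string> =
  S extends `${infer Head},${infer Rest}` ? [Head, ...Split<Rest>] : [S];

// The compiler infers Cols as the tuple ["id", "name", "email"] --
// no runtime code was needed to compute that.
type Cols = Split<"id,name,email">;

// Any other tuple assigned here would be a compile error.
const cols: Cols = ["id", "name", "email"];
console.log(cols.join("|")); // id|name|email
```

The JSON and SQL parser demos are essentially this idea scaled up: many such recursive template-literal types composed together.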
Just to further demonstrate the point, there's an interpreter for the BF programming language, written entirely using TypeScript type annotations [1].
> There's a lot of prior art described in literature as well as practical programming implementations with much crazier, yet successfully working type inference.
I’m not the GP, but Elixir can already accomplish compile time text processing with metaprogramming (like it does for inline HTML templating with ‘heex’ functions and files), and it’s not a huge stretch for it to be able to call a JSON parser on any string knowable at compile time and convert it into a type. That it hasn’t been done yet is probably because no one has deemed it worthwhile to implement. It does sound cool, though.
Metaprogramming is also why generics aren’t really needed in Elixir or Erlang. All specializations boil down to their reified types using macros
> I’m not the GP, but Elixir can already accomplish compile time text processing with metaprogramming (like it does for inline HTML templating with ‘heex’ functions and files)
It's not the compile time text processing that's interesting, it's the fact that this compile time code can yield incredibly detailed types, and the fact that it's all done purely using type annotations. Almost every language that's used to build websites has some sort of server side templating language where you can sprinkle some code into your html templates.
> That it hasn’t been done yet is probably because no one has deemed it worthwhile to implement.
Maybe, or maybe it's less feasible without an advanced type system. The final version of the JSON example we're talking about ended up being 61 lines of code. Without some concrete Elixir code to inspect, this conversation is becoming very abstract.
Honestly even if some of this were possible with metaprogramming, my intuition is that it would be much more verbose and complex. I feel like if I tried to implement all of the features of TypeScript by mucking around with an abstract syntax tree using metaprogramming, I would end up with a difficult to maintain crude approximation of the real thing. I don't think we're giving compiler developers enough credit by saying everything they've worked on can be replaced with a quick macro. José Valim's blog post on static typing ended with an announcement that they've sponsored a PhD student to work on these problems.
After two days of discussion, I think we're reaching a bit of an impasse. Honestly just use what works for you!
> It's not the compile time text processing that's interesting
You've talked right past me. Metaprogramming is not compile time text processing.
> Honestly even if some of this were possible with metaprogramming, my intuition is that it would be much more verbose and complex.
No. It's probably even how the Microsoft team are achieving the typing you're talking about. The special sauce here isn't as special as you think.
> I think we're reaching a bit of an impasse
Not really. I was just saying that what you're talking about is totally possible with metaprogramming. Not making some rhetorical play.
> José Valim's blog post on static typing ended with an announcement that they've sponsored a PhD student to work on these problems.
That's specifically because strict static typing with guards^ is at least difficult, but maybe impossible, and that's what the PhDs are trying to figure out. However, dialyzer isn't as hobbled as you imagine/purport it to be – the crux of my counterargument – and actual experience trumps speculation or casual reading on the topic.
FYI, guards and pattern matching are among the reasons why Elixir is so damn expressive, and this kind of function typing isn't available/possible in most non-functional languages.
> You've talked right past me. Metaprogramming is not compile time text processing.
I'm not talking past you. I never said metaprogramming is compile time text processing. You said "Elixir can already accomplish compile time text processing with metaprogramming", and I was just pointing out that the text processing itself is not the most interesting part of the example, it's the resultant types.
> No. It's probably even how the Microsoft team are achieving the typing you're talking about. The special sauce here isn't as special as you think.
But I don't have to reimplement any of this, because Microsoft has already written it. With enough time maybe you could implement dependent typing using metaprogramming for example, but would you then say that Elixir is just as good at dependent typing as Idris, which has that feature built in?
> I was just saying that what you're talking about is totally possible with metaprogramming.
You've graduated from saying "I think someone more savvy with Elixir would know more." to "totally possible"[1]. This does not sound like an argument from experience.
> That's specifically because strict static typing with guards^ is at least difficult, but maybe impossible, and that's what the PhDs are trying to figure out.
A lot of what you're talking about has already been implemented in other languages. This is not untrodden ground. There's no syntactic sugar for guards in JS (which TS just adds type annotations to), but semantically it's very similar to type narrowing in TS [2], which I use daily. This is not something I've read about casually on the internet.
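For anyone unfamiliar with the term, here's a small sketch of type narrowing on a discriminated union (the `Shape` type is invented for illustration); each branch sees a refined type, which plays a role semantically similar to clause guards in Elixir:

```typescript
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

function area(s: Shape): number {
  // Checking the `kind` discriminant narrows `s` to the circle variant
  // in this branch, so `s.radius` is accessible.
  if (s.kind === "circle") return Math.PI * s.radius ** 2;
  // Here the compiler knows `s` can only be the square variant.
  return s.side ** 2;
}

console.log(area({ kind: "square", side: 3 })); // 9
```

Referencing `s.radius` outside the narrowed branch would be a compile error, which is the safety property being compared to guards.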
> This is not something I've read about casually on the internet
Was speaking specifically about your opinions on Dialyzer and Elixir. It comes across as a cursory Google around, paired with stronger opinions about the limits of a language than those held by people who actually spend time with it.
> You've graduated from saying "I think someone more savvy with Elixir would know more."
It took further reflection on the idea. It helps to start with at least some epistemic humility, once again the crux of this thread. I'm willing to revert to "It ought to be possible", given that macros boil down to the final generated code (including typespecs), which is then analyzed by Dialyzer.
> Elixir is just as good at dependent typing as Idris, which has that feature built in?
Metaprogramming allows features to become 'built in'. e.g., being able to integrate type checking and compiler errors for HTML/XML or other DSLs.
> Was speaking specifically about your opinions on Dialyzer and Elixir. It comes across as a cursory Google around, paired with stronger opinions about the limits of a language than those held by people who actually spend time with it.
I think your experience with Elixir and Dialyzer (or perhaps emotional attachment to these tools) is blinding you to the fact that there are features that your favorite language lacks. I gave you a concrete example of something that's a research topic in Elixir (inferring types from guards), which has an existent analogue in TS (type narrowing). You completely ignored that and fixated on the one sentence about experience. Your entire argument is that I should defer to your authority on the topic as an anonymous commenter, because you're supposedly very experienced. It feels like a very hollow appeal to authority.
> It took further reflection on the idea.
But by your standards simple reflection is not enough to establish authority on a topic. You need to have deep personal experience with using macros to parse text before you're qualified to make a firm judgement. You even went so far as to speculate on how TypeScript implements type checking and claimed that the "special sauce" is probably metaprogramming, despite having never worked on the compiler. This all feels incredibly contradictory.
> Metaprogramming allows features to become 'built in'. e.g., being able to integrate type checking and compiler errors for HTML/XML or other DSLs.
Built in does not mean "I might hypothetically be able to write my own macro that does this". It means it already exists in the language today.
Honestly this discussion feels like it's devolved into repetitive bickering, which is why I let your original comment about metaprogramming stand unanswered, and you reacted by following me into a different comment thread to reiterate the point.
> Going from a pure string of SQL or JSON to a concrete type without actually executing any runtime code is pretty crazy.
Elixir/Erlang might already be able to do something like this with metaprogramming. It’s certainly possible to generate and run Elixir at compile time, and map types are already a good superset for JSON objects, so a compile time JSON-to-map conversion could then provide an inferrable type. I think someone more savvy with Elixir would know more. It’s certainly not something that I’ve needed.
> But even assuming Dialyzer is as good as TypeScript, have you found that the libraries you use actually have quality type definitions?
Strictly speaking, Dialyzer has two big benefits over more traditional type systems:
1. It doesn't require type definitions to be present, it can type check code having none at all.
2. "Dialyzer is never wrong", i.e. it never returns false positives, i.e. it's an underapproximating checker.
These are the design decisions taken by the tool authors. There's rationale behind them, though the experience of using checkers for other languages might not be 1-to-1 applicable because of that. These decisions come with downsides, of course, some of which are covered in this thread, so I won't repeat them, but in general they were taken deliberately due to characteristics of the language(s) (it was originally created for Erlang, not Elixir) and the state of type system theory at the time. Please bear in mind Dialyzer was created in 2006, some 6-7 years before TypeScript.
If you're into trying out static typing in Elixir, please check out https://github.com/esl/gradient. It's still experimental, but already functional. We're happy to get any feedback or, better yet, contributions.
Ecto is good but I think the Django ORM does it better. The ORM in Django[0] is dead simple and easy to understand. The only caveat being that Django doesn't have changesets, which I do like.
It also integrates nicely with the Django Admin. You set up your models, and you can very easily generate an admin interface to CRUD on those models.
One annoyance I have with Ecto is that migration files are handwritten. There's a generator tool, but it only gives you a skeleton migration that you then have to manually fill in. This gives you 2 sources of truth, which is error prone. Automating this process like in Django solves this issue.
Ash[1] (which uses Ecto under the hood) does solve most of these issues, although at the moment the documentation can be hard to navigate.
> Ecto is good but I think the Django ORM does it better.
Oh no, I strongly disagree. I know both very well, and Ecto is much better and more flexible. SQLAlchemy was great when I last used it two aeons ago, but still subpar compared to Ecto.
But also apples and oranges. Django has an ORM (i.e. stateful), Ecto is a stateless data mapper.
I agree with all you've said so far, and even based on the limited info available to me I strongly suspect you are a well-versed Elixir dev. However, nothing beats the short-term dev speed of scripting up some JS with npm. Have you seen the latest test runner efforts in Node.js and the advancements in Deno? I think this gap is slowly closing, but Node is still not near Elixir in terms of long-term productivity. In the future I think it will take over, though, if types don't get into the Elixir language fast.
What's the point of selling languages like Elixir as silver bullets like this? Each language has its own advantages and disadvantages. I wouldn't want to work in the same place as someone who makes such stupid comments. A programming perspective should not be so shallow.
It isn't a silver bullet. But it does have a lot of distinct advantages. I can come into any Elixir project and very easily write tests. I can't do this in Node.
This is a huge disadvantage, and extremely dangerous, and in my opinion should render the entire language useless as a backend language for any serious project.
Elixir may not be a silver bullet. But Node.js is definitely a vampire...that isn't on your side.
We sold it to our Fortune 100. Our Elixir application tracks a fair percentage of healthcare patients across the US in real-time, can handle loss of systems or data centers with zero downtime, data loss or interruption and we update it live. https://www.erlang-solutions.com/blog/how-hca-healthcare-use...
I love Elixir! One of the main reasons I haven’t been able to sell it as much was its lack of good numerical computation support, but with the release of Nx now getting to a very mature point it’s really hard for me not to get excited about this ecosystem.
I also think that Livebook has a couple of really killer features now that make anyone who spends a significant amount of their time in Jupyter notebooks stop and take a second look (e.g. it’s so easy to share a livebook compared with sharing a Jupyter notebook, and it’s even easier to extract code from a livebook prototype to turn into a resilient production Elixir module).
Yes there is still a ways to go to catch up to the maturity and breadth of the community that Python has in this domain, but that gap is closing way faster than I could have thought.
I think the biggest challenge for adoption is its functional nature. If you take the time to learn about Elixir and the BEAM, you are rewarded with a pretty complete runtime which ships with its own DB, in-memory store and message queue (with a very easy path to adopting more battle-tested versions of these when you outgrow what is included in OTP). This means spin-up time prototyping new systems is spent less on vendor choices (which type of DB should I use?) and more on solving your problem domain.
I wish I had known about all this stuff 5 years ago…
Regarding numeric stuff: I discussed this matter with some folks and it was quite interesting, because just like Erlang/OTP was used as the control plane for phone switches while the data plane was implemented in something else, one could theoretically leverage it to oversee ML processes (perhaps Python programs written with PyTorch or something else).
The topology is exactly the same.
In fact, porting your vetted numeric code to Elixir would probably send the risk sky-high, but running the same code under an Elixir supervision structure would in fact reduce risk, so it's a no-brainer.
I do exactly this. Elixir supervises Ports to python and C binaries and manages all the communication between processes including buffering and maintaining state when necessary. Using libraries like Horde even allows you to run these processes on remote nodes very easily. It has worked fantastically well.
Elixir and Explorer, using the Rust implementation of Polars under the hood, is practically the same as Python with the Polars Python bindings. The rest of the programming language makes a difference though. Python rejected a large portion of CS research and declined to do a proper implementation of many things like pattern matching or lambda functions. I am pretty sure they had sound reasons to do this.
There may be such a source already, but it would help if the powerful BEAM features had a guide showing what good use cases were (for each of the features it provides). Bonus would be examples.
I just implemented a trading system in Elixir with backend + JS free frontend and I have to admit that this was the most pleasant experience for a long time.
I think pattern matching and the pipe operator combined with the let it crash approach work very well in practice. There are amazing libraries for Erlang/Elixir including database client, dataframes, JWT, http, AMQP client and many more. I haven't had a single library that was not working out of the box the way we wanted.
The code is pretty easy to follow and we can onboard a new engineer with ease. The usual ratio for our codebase compared to Python is 1:10 in favour of Elixir. The engineers on the team have 10+ years of Python experience and had only used Elixir for toy projects before.
This is great. I think it's crucial to keep convincing the overall tech community about the good sides of Elixir.
And yeah, resilient design de-risks itself and that alone is a huge gain because it also reduces other forms of complexity.
Still, I think we have to keep pushing to debunk other myths, from "it's impossible to find talent" to the fear that "no one uses it", which are common about any technology that is not the #1 most used.
We use Elixir. Everyone we’ve hired, from junior to principal, has had no previous Elixir experience. They’ve mostly come from Ruby/Rails or JavaScript backgrounds. They’ve become productive in Elixir quickly and once they’ve understood some of the unique patterns and concepts, all thoroughly enjoyed working in it.
Of course, those with more experience tend to have heard of Elixir and be keen to work in it.
People with less experience are more concerned about whether investing in this language is a good career move, so we emphasise how learning something like Elixir will ultimately make you a more valuable developer.
If you’re considering adopting Elixir for your new project/startup and worried about finding talented developers, I don’t think Elixir will make it harder and might even make it easier.
I submitted a few patches to an elixir project and from that a company reached out to hire me. This was a while ago but I got the feeling that companies were starved for talent.
It's also hard to get a job as an elixir junior or senior. There is a disconnect and companies are trying to hire by cargo culting things that other startups/faang are doing, and repeatedly turning down perfectly good candidates. Also recruiters don't know how to find/screen elixir devs, etc.
Elixir/Erlang is not just "not the #1". I don't think it's even among the commonly used languages of 99% of the programmer population. Once you've removed people using C/C++, Java/Kotlin, .NET, JavaScript, Ruby, PHP and Python, you're left with a really fringe minority of people. And even in that minority, I would say Erlang/Elixir isn't in the top 2 (my guess is Go and Rust have a much wider audience).
We're in the fintech space and use Go and Elixir, which seems to be pretty common among other companies. The company is out of Utah. It's pretty easy to find talent for Go and a little more difficult for Elixir.
Go people might cringe but why wouldn't you just be 100% Rust?
Actual response: Some similarities, different goals, different strengths, different abstraction levels. Go is cool I hear but I don't see a need from over here doing Elixir.
No need to suppose, there are decent sources for this.[1][2] Go seems to be much more popular than you expect. Elixir and Erlang are both pretty obscure.
The irony is that for all its obscurity, most startups are using at least one mission critical service that runs on elixir -- slab, pagerduty, elixir, ramp, divvy, discord. brex is probably still half-ish elixir, though they claimed to want to replace it with kotlin when they got some new tech leadership. I wonder how that's going. I remember working at a startup where the CTO was unsure about putting trust in elixir, claiming it was an "unproven entity". I pointed out to him that we used THREE of those services, so if he didn't trust elixir we should probably pivot off of those products.
I doubt the reservation is with the language/runtime itself, especially with elixir and the BEAM. More likely, it’s with the maturity of the community. Especially at a small startup, building on elixir even today might still mean having to build things in house you may not in Django/Rails/JS.
The moat comment depends on the company, not all or even most businesses depend on actual innovation as their moat.
Also, I think we’re agreeing here, but there are a huge set of things that you may need to build an application that aren’t the core value prop of your company. Buying into a more mature ecosystem makes it more likely that you don’t have to build those things and can spend more time on the moat stuff.
> The community has proven to be sustainable, with ample social proof of further adoption and conditions amenable to generational succession
It depends on how you measure sustainable. What would happen to Elixir without its BDFL? Has Elixir achieved escape velocity and will continue? Rust was pushed out of the Mozilla nest and managed to live on. I don't see a similar capacity for Elixir handling such existential risk. Are there succession plans? Is there mentoring among members of the core?
I also think that it's necessary to point out that just because OTP facilitates resilience doesn't mean that the system you'll build with it will exhibit that behavior. You have to design your system to use OTP effectively. Easier said than done. I've seen Elixir systems reach a critical failure state following maximum restarts of its internal processes. However, at least Elixir has the potential of being hardened. It's just not going to act as expected without a lot of up front wisdom applied.
Agreed. There is no alternative to thinking. However, a good collection of conventions, primitives, and building blocks can assist greatly in architecting a robust system.
We do test systems to the point of breakage. It is not something we skip just because the language is good.
As to existential risk:
I will try to be succinct. If there has been no exposure then there is no need to hedge.
Further, there are commercial firms which will maintain your critical systems. In other words, a long term maintenance contract for Elixir systems can be procured from more than one source.
I love Elixir, but I feel like there's something off about the focus on reliability and error handling. At least for web applications, the promise that a single error won't take down the entire system is a pretty boring one. At worst, a single error might take down an entire process, but that's really not that terrible. Also, error-handling code that is not colocated with the logic that threw seems like a mess. Error handlers have to decide whether we can recover and whether we need to report the problem, and that doesn't really change just because an error won't bork an entire system. IME error handling is mostly about deciding what is best for the user.
What I love most about elixir is the ridiculously dead-simple vertical scaling. A close second is The World’s Best Take On ORMs (aka Ecto).
Almost every system scales simply vertically; vertical scaling is (to a first approximation) throwing faster hardware at the problem. If you're IO bound you get a faster IO bus (this is not necessarily easy to do, but if you do it the system scales to the limit of it). If you're CPU bound you get a faster CPU. And so on.
Horizontal scaling is what's hard for a lot of systems, but simple with Erlang and Elixir since they are, by design, meant to support concurrent and distributed workloads. That's why I was asking if they meant horizontal and not vertical.
Erlang and Elixir are not uniquely easy to scale horizontally, but they are (if you don't fight their nature and produce programs that consist of one giant execution thread) ridiculously easy to scale horizontally by tacking on additional CPU cores or physical CPUs.
> Almost every system scales simply vertically, vertical scaling is (to a first approximation) throwing faster hardware at the problem. If you're IO bound you get a faster IO bus (this is not necessarily easy to do, but if you do it the system scales to the limit of it). If you're CPU bound you get a faster CPU. And so on.
My interpretation was vertical scaling is within a single node and adding cores or CPUs to a single node is vertical. Multi-core and multi-node really present similar problems, so adding more cores or adding more nodes isn't terribly different from a design perspective.
> ...easy to scale horizontally by tacking on additional CPU cores or physical CPUs.
If you define adding more cores to a single machine is actually horizontal scaling, then we agree in principle, just not in the definition of scaling.
Not true in practice. You can scale hardware easily to hundreds of cores, if your app is able to spread the load, but the performance ceiling is very low if your app is single-threaded.
The BEAM with its design is very parallel, and the more cores, the happier it is.
I think it makes a lot of sense in theory to consider adding CPUs as horizontal scaling, but in practice I'm not sure it makes sense. Moving your app to a machine with twice as many CPUs is more like moving it to a machine with twice as much RAM than it is like doubling the number of machines your app is deployed to.
I meant vertical scaling, but it sounds like our understanding of vertical scaling is different. My framing is "horizontal scaling = more machines, vertical scaling = more powerful machines". I guess one way to think about it is that Elixir makes scaling the number of cores a strictly vertical kind of scaling.
Yeah, additional cores is a fuzzy area. That's why I asked. Elixir takes advantage of additional cores trivially, so upgrading a CPU (how many people see vertical scaling) to have additional cores helps your Elixir workload. But it's doing that by distribution (more typically falls under horizontal scaling) across those additional cores. Again, it's fuzzy. When the world consisted of mostly 1-4 core CPUs it was easier to separate the two concepts because we didn't have the option of throwing 128 cores into a single server, we had to buy 32 to 128 servers to get that many CPUs.
Erlang/Elixir have a tremendous story to tell: in a world that digitizes increasingly vital aspects of its core infrastructure, the focus on risk management that is on full display in this post is a unique and important aspect.
The challenge is the prevalence of hype, which, as insightfully pointed out, can lead to irrational choices.
What? Phoenix is dope because you get tailwind and esbuild without having to run a single npm install, then LiveView does most of the interactivity.
My stack these days is just Phoenix, Liveview and a sprinkle of client-side interactivity with Stimulus.js, which is super-simple, composable, progressive (i.e. just add it where you need it) and plays excellently with Liveview.
I am designing a super interactive dashboard, fully synchronized in real time, with no more than 100 lines of JavaScript code to handle: dropdown menus, show/hide burger menu, scroll to top after pagination. Just as it was originally intended, JS should only be the icing on the cake, not the entire buffet.
Yeah, I'm so hipster I'm already living in a post-Javascript world. Feels good, man.
The thing I don't like with Elixir/Phoenix is the code generators.
I hate code generators with a passion. They obfuscate the inner workings of the library. Have you tried to find a tutorial that doesn't use the Mix task "phx.new"? (An up-to-date one, that is.)
For example, if I need an API, I will pass `--no-assets --no-html --no-live --no-mailer --no-dashboard` and the generator still creates a static files folder and static files related code.
If I want the strict minimum, I have to take some time to remove the unnecessary code.
Don't misinterpret me, I think Phoenix is a great framework, just like Python/Django, and Ruby on Rails are.
But sometimes, I just want Flask/FastAPI. You can say "just use plug_cowboy", but IMHO, "plug_cowboy" is more like "express.js", not really a FastAPI-like framework.
Also, Absinthe is the only GraphQL implementation that natively supports subscriptions (via phoenix + phoenix_pubsub). In Django (graphene), you need so much more boilerplate with django-channels and yet another library. In JS land, Apollo Server also does not support them natively; you need graphql-ws and a pubsub implementation as well. And most of those pubsub implementations will need a Redis (or whatever) to distribute the queue between multiple web workers, while Erlang/Elixir was designed for this (which actually simplifies your infrastructure).
In the end, I often go with Django async (via ASGI) with Server-Sent Events (via an async StreamingHttpResponse) to avoid the complexity of GraphQL subscriptions. But even then, I need a pubsub via Redis to make my multiple workers aware of each other, while with Elixir, a simple libcluster dependency and a `receive` in the endpoint handler would be enough :)
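For the curious, a minimal sketch of that in Elixir (the topic name and payload are made up; `MyApp.PubSub` is the pubsub process a generated Phoenix app starts by default):

```elixir
# Once nodes are clustered (e.g. via libcluster), Phoenix.PubSub
# broadcasts reach subscribers on every node -- no Redis required.
Phoenix.PubSub.subscribe(MyApp.PubSub, "dashboard:events")

# Any process, on any node in the cluster, can publish:
Phoenix.PubSub.broadcast(MyApp.PubSub, "dashboard:events", {:new_event, %{id: 1}})

# The subscribing process receives the message in its mailbox:
receive do
  {:new_event, event} -> IO.inspect(event)
end
```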
I have to agree. I never use generators, and really dislike that authentication is only available with `mix phx.gen.auth`.
But speaking of generators, I love this site: https://www.phoenixdiff.org/ — diff your local Phoenix version with the latest one to bring your codebase up to date. I use it every major release to adopt the latest best practices, even if Phoenix has decent backwards compatibility.
You're right, and I have used both. I meant to say that phx.gen.auth is a good starting point, but only available as a generator. There is no easy way to add it to an established project.
> I hate code generators with a passion. They obfuscate the inner workings of the library.
I can argue the exact opposite: you can just look at the generated code to understand what is going on. No magic involved.
> For example, if I need an API, I will pass `--no-assets --no-html --no-live --no-mailer --no-dashboard` and the generator still creates a static files folder and static files related code.
To me it sounds more like an issue with phoenix rather than with codegen in general. I would expect the static folder to not be generated if we pass the "--no-assets" flag.
Your code generation point is good - it's great when you already know the different parts, but for learning the tools it's horrible: just a lot of files you don't have a clear understanding of.
The problem is that the code generation tools do not let you learn the different parts. On top of that, all the documentation just tells you to use the codegen tools, and unless you go look at the API reference (or the source code directly), there are no resources for learning the different parts.
Take a look at how django does it. `django-admin startproject` creates:
- settings.py
- wsgi.py
- asgi.py
- urls.py
Then you just need to add your models and views. There is the `startapp` command, but you don't really need it.
Even with the codegen tools, you still need to learn what a plug router is, what a phoenix controller/view is, what an ecto repository is, etc...
The codegen tools just overwhelm you with irrelevant code (I'm making an API, so why the hell do you generate static files? And why do I need to edit so many files to take them out?)
Another point I don't like about Ecto is that you can't easily switch the database backend depending on the environment.
With Django, I can have an sqlite3 DB for the dev environment and a pgsql one for production. I don't want to run a Docker image for pgsql on my machine. I just need to read the DATABASE_URL environment variable, and the ORM takes care of the compatibility. If later on I need some pgsql-specific feature, I will have to run the Docker image on my computer anyway, but until then, please keep the complexity out of my way.
Personally I want to move away from JavaScript and its ecosystem. There is https://hexdocs.pm/phoenix_live_view/js-interop.html though. For the record, Phoenix does use JavaScript for its "LiveView"[1], but I think it is optional? At least I remember someone telling me this when I was complaining that generating a basic Phoenix project (using "mix") is too bloated due to JavaScript and such.
Phoenix LiveView uses JS because the core of the tech requires the server to compute diffs & the client to patch the DOM accordingly. This is why a JS package is needed alongside the other default package from Phoenix, which brings Phoenix Sockets & Channels capability (which builds on top of transports such as Websocket).
For anyone wondering, this is where I complain about Phoenix being bloated[1], and others have informed me that "There’s no JavaScript unless you write it into the template.", and that I can reduce the bloat in this way: "You pass the `--no-assets` flag when using `mix phx.new`"[2]
As one of those others, I can confirm this is indeed still true as of Phoenix 1.7. `--no-assets` will not add any CSS or JS to the project other than empty `app.js` and `app.css` files in `/priv/static/assets` for you to fill in yourself if you want. You are completely on your own to implement an asset pipeline of your choosing. Of course, if you want LiveView you will have to bring some JS back in.
It depends on what you do. If all you want is HTTP requests and assets, no JS is needed, be it HTML or an API.
If you want to handle a websocket and have two-way binding, then there's no way around it. But that is not Phoenix's fault; it is the reality of the browser APIs. It is pretty minimal JS though.
It uses websockets, which as I understand it, browsers only allow interfacing with them via JavaScript (at least easily). Is there a tech stack/protocol that gets comparable functionality to websockets but without using JavaScript?
> I think Elixir/Phoenix could use a little bit of Javascript (and modern PHP) enthusiasm to help with adoption
Definitely, although as someone who finds that level of "enthusiasm" or fanboism a turn off, I like that it's not there. But it has to be there to ensure the success and growth of the language/ecosystem.
Might want to look into LiveSvelte[0]. LiveView by itself is already really nice. LiveSvelte on top of LiveView gets you into JS land for really rich interactive content.
People say it is hard to find talented Elixir developers, but I know of two people who would love to jump ship to work with Elixir. Also, I think the BEAM is the most sensible platform for the modern web, and it puzzles me a bit that it is not more widely adopted.
Question for anyone who has switched to Elixir from a typesafe language: doesn't the lack of compile-time type checking bother people working on Elixir? Any opinions on that?
Yes it is the only thing that really bothers me about Elixir. In my personal projects, I heavily utilise typespecs which makes it somewhat bearable. But in my previous day job that had a high turnover rate (and hence a huge mess) doing any refactoring on the Elixir codebase was a nightmare even with tests.
I found maintaining the Typescript codebase in the same company much easier.
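For anyone unfamiliar, typespecs are optional annotations that tools like Dialyzer check statically; a small illustrative sketch (the module and types here are invented for illustration):

```elixir
defmodule Geometry do
  @type shape :: {:circle, number()} | {:rect, number(), number()}

  # Dialyzer will flag callers that pass something outside shape(),
  # e.g. Geometry.area({:triangle, 3, 4}), without running the code.
  @spec area(shape()) :: number()
  def area({:circle, r}), do: :math.pi() * r * r
  def area({:rect, w, h}), do: w * h
end
```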
Not really. Working on a 2+ year, large, monolithic phoenix liveview app at Spotify. VS Code runs dialyzer on file save, which shows type mismatches. I also run dialyzer on pre-commit for a sweeping check. Ideally, we'd have the same optional typing flexibility with a compile time check, but it seems like a marginal gain over the current setup.
My favorite languages are Rust and Elixir, and yes the lack of static types is my one gripe with Elixir.
It's a little bit better than what it might look like on the surface level as you do a lot of pattern matching and dialyzer. It's far from what you get from Rust of course, but it's better than with Python for instance.
Worked at a startup for the last nine months as a founding engineer building everything in Elixir + Phoenix and LiveView. I was able to ship more features and fix more problems faster than I ever have. The startup is out of runway and it breaks my heart to think I might have to go back to the painful world of node.js.
I'm commenting from the perspective of someone with zero experience with Elixir and Erlang. I've heard of them, but not used them.
So, I leafed through this document, and did not see how it would help people sell Elixir. Engineers will not be offered reasons to prefer it to other languages (popular or otherwise), and managers / bean-counters will not be convinced by the claims of it being less risky and a robust choice for a glue language - when very few people know it, so maintaining the elixir code will depend on some local guru and/or training people to use elixir.
... and this all is not to disparage Elixir! I just don't feel I've been told almost anything about it.
This was written to talk to people already in the community, as a way to explain the current situation. It was not targeted at external people discovering it. We do have multiple other places for that if you are interested though :)
For someone completely new to development, the DockYard Academy course resources [0] are open source and an excellent showcase for Livebook.
For an experienced dev wanting to pick up Elixir I would suggest picking up Elixir in Action [1]. The third edition is currently in MEAP. Much more concise and great discussion of some Erlang/OTP internals too.
It's better to focus on the content than the medium (pun intended, as this kind of comment usually appears when people post something on Medium, as if the author didn't have another copy of their post backed up somewhere else).
But I'll bite: What's wrong exactly with using Github's markdown renderer?
This is a plain markdown file, the author could host it on S3 and it'd look like:
There are a ton of markdown renderers, author could write one in a few hours for the feature used (#, ##, ###, >, and unordered lists), or even better, paste the thing in one of the many markdown-to-html tools (https://markdowntohtml.com/).
This is not like the author used some proprietary syntax, that only displays on Github.
The author isn't in control of the chrome around it. It becomes an advertisement for the proprietary service. You can slap Markdown through Pandoc and host it statically somewhere where you do control the chrome. This is a similar objection to, say, Medium, but Medium authors aren't usually developers and don't have the know-how.
GitHub has forked Markdown in a way that is incompatible with CommonMark, GitLab, etc. So the author might accidentally buy into that fork which there is no quality control for (just look at the lack of care put into their attempt at admonitions).
I've always found myself in fundamental disagreement with the Erlang/Elixir philosophy.
Resilience, within the context of this presentation, and the philosophy of Erlang, would be the capability to recover from errors while unattended.
I prefer systems that don't fail quite so smoothly. Take this example:
The application runs out of memory due to too many transactions being processed at once. The problem happens once every day. However the system gracefully recovers, and the team is able to resolve the issue peacefully without the problem escalating to an active incident.
If I was funding development of such a system, I would find this situation unacceptable. I'm paying for an application that doesn't crap out in production every single day, not one that "gracefully recovers" after each such event so the dev team can "peacefully" manage it. The users whose data is lost during these incidents probably wouldn't feel quite so peaceful about it, either.
Hard to respond to this, it's too high level and generic to disagree with.
But in practice, if you're on the BEAM and use OTP as intended, your code is fully parallel, split into fault-tolerant pieces, and one crash does not affect the entire system.
I have been responsible for two Elixir monoliths in the last 6 years, and I have not seen them crash even once. Not even during prototyping, not even during load testing, never once. Without having to adopt defensive coding strategies. I write 1/4 the error-checking code I would in Rust, and 1/10 of what I would in Python or Go.
How many platforms can realistically promise this?
Note, I haven't said my code is bug free. It's buggy because I'm only human and make plenty of mistakes. But the BEAM saves me from myself, saves me from third-party JSON endpoints that for a split second return corrupted data, saves me because the DB connection dropped for a second, and saves me from the myriad of transient heisenbugs that happen in production.
The bugs are still there. But the system is fault-tolerant because it has been designed to be.
> If Saša Jurić can't prove it to you, then no one can
I think it would have been better if he had more time. The first part was nothing special as such; it could have been easily replicated in even crusty old things like Delphi, never mind modern C# or similar.
It only became interesting when he showed the introspection and when he scaled out to another instance. These things you cannot do so easily in other systems. So would have been more interesting to spend more time on that stuff.
I'm sure there are other videos doing just that, just my thoughts on this one as a "PR video".
Did you use LiveView for those projects? If so, at what sort of scale? I've been seriously considering learning Elixir + Phoenix, but I don't know whether LiveView is suitable for serious projects (where serious blog project ∉ serious projects).
After wondering this myself, I finally just went all in on LiveView and was extremely surprised at how efficient it was. I have had over 4,000 concurrent users on a single LiveView with real-time messaging, streaming transcriptions, viewer counter updates (don't use presence for this), and the typical interactivity (signing on, navigating, etc.). On a single 8-core machine it maxed at 20% CPU during the few minutes everyone was signing on (via OAuth) and idled at less than 5%. RAM never exceeded 900MB. I've since distributed the same application into EMEA and APAC to provide better latency for those users (which takes libcluster and about 20 lines of config to set up), and zero problems with that either.
You'd have to share what your view looks like, because a large part of memory consumption comes from the diffs the server keeps. Each session will have its own similar diffs.
Of course, each case will be unique. I heavily optimized with temporary assigns so the message lists would not be persisted in server memory. This is even better now with the new streams functionality. The knobs to tweak are there.
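For context, temporary assigns are a one-line opt-in in a LiveView's `mount/3` (the `:messages` key here is a hypothetical example):

```elixir
def mount(_params, _session, socket) do
  # After each render, :messages is reset to [] on the server, so the
  # full message list is never retained in memory for each session;
  # the client keeps the already-rendered DOM.
  {:ok, assign(socket, messages: []), temporary_assigns: [messages: []]}
end
```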
Not a direct answer but the BEAM, Elixir and LiveView are ridiculously easy to scale vertically (i.e. add more CPU and RAM, the scheduler takes care of everything else), and horizontally (cluster forming and distribution of processes among multiple nodes is a first-class feature)
If people build multi-billion dollar companies on top of PHP and Node.js, I would not even be concerned about scaling Phoenix.
I'm sure your software is great, and I don't mean this personally, but I just find this philosophy really strange. You don't write defensive code, and you don't often check for errors. You just let pieces of your application fail, and feel that this is OK because the overall application continues on. IMHO, Elixir isn't "saving" you from "transient heisenbugs", it's encouraging you not worry about them, which means that they will proliferate.
The "let it crash" philosophy has a misleading name. It's not "don't handle errors", but it's "delegate the error handling to a component dedicated to this".
With OTP, you have what we call a "supervision tree". Your software is divided into components (erlang processes) and organized in a tree where the leafs are your components, and the nodes are the "supervisors" which will catch the error, and restart/retry the operation.
What's the result? Imagine you have a component that is polling data from the database and then sending a message to another component based on that data. For a split second:
- there is a network error
- you lose the connection to the database
- the component crashes
- its supervisor notices the crash and restarts the process
If the network error was temporary, everything goes back to normal, and your system did not stop for a temporary network error. And your component does not need to have "retry code" (it is delegated to the parent supervisor).
If the network error was not temporary, the parent supervisor will notice that the component crashes far too often and will crash as a result, letting its own parent supervisor deal with the problem.
In Erlang/Elixir, processes can "monitor" other processes. They will receive a message when the target processes dies (normally, or abnormally). This allows you to delegate the error handling so yes, you write less defensive code, because that code is located in a dedicated component that will be notified automatically.
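A minimal sketch of that shape, with hypothetical module names (the poller crashes on a failed DB call and its supervisor restarts it):

```elixir
defmodule Poller do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(opts) do
    :timer.send_interval(5_000, :poll)
    {:ok, opts}
  end

  @impl true
  def handle_info(:poll, state) do
    # No retry code here: if the query raises, the process crashes
    # and the supervisor restarts it with a clean state.
    _rows = fetch_rows!()
    {:noreply, state}
  end

  # Placeholder for a real DB query; raises on failure.
  defp fetch_rows!, do: raise("db connection dropped")
end

# In the application's supervision tree:
# :one_for_one restarts only the crashed child; if it crashes more than
# 3 times in 5 seconds, the supervisor itself crashes and escalates.
Supervisor.start_link([Poller],
  strategy: :one_for_one,
  max_restarts: 3,
  max_seconds: 5
)
```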
> You don't write defensive code, and you don't often check for errors.
There is a limit to how defensive you can make your code. As soon as you call out to another service for data, the universe of error situations they can create goes to infinity. For example, they could happily send you a 200 but return mistyped JSON. What do you do in that case? Are you genuinely going to write that as a test condition to make sure the transactional chain in your system handles this and rolls back the DB inserts that preceded it? I can tell you that in 99% of cases idiomatically written Elixir will "do the right thing" if this happens, and that is a consequence of the "let it fail" philosophy. Moreover, if you try to write error-handling code in the traditional sense, you will probably get it wrong, it will be hard to understand, and it will be hard to debug if your implementation is not quite right.
How can I ever defend my position of lead engineer responsible for these systems, when someone on the Internet, that has never seen my code running, is adamant my systems are unstable and lousy with bugs.
Trying to continue this conversation is a waste of our time. Feel free not to use Elixir.
The "let it crash" is a sound and proven design philosophy, but it would be easier to grok if you were to spend time learning the platform, rather than comparing it in a vacuum with the things you already know.
It is quite frustrating when someone has already decided something they don't know is dumb or makes no sense. It is more plausible the unknown unknowns are blinding your judgement, than Erlang and Elixir developers worldwide suffering a collective hallucination, is it not?
I'm an Elixirist, love the idea of "let it crash", and I still code defensively.
To me these phrases are more about the guarantees—if it does crash it recovers—and less about dogmatic advice that everyone follows.
But there's time and place for different levels of defensive programs. I get to pick and choose based on where I see risk, but I know the whole app isn't going to blow up if I missed something.
This example doesn't even rise to the level of an "active incident" in the Erlang philosophy. In other words, it's not a bug, so there's no urgency to improve it.
I agree. However, the linked article that I was quoting from seems to see things differently. It describes a situation in which transactions are failing (i.e. data is being lost), but it's not an incident.
A transaction failing does not mean data loss. If you think it does, you do not understand what graceful recovery means.
Graceful recovery means that something handles the failure after those transactions failed. There is no data loss. They may have been slower, but I think we can agree that slight temporary latency, with no data loss and graceful handling of unexpected stuff like your database machine being on fire, is not so bad?
> The users whose data is lost during these incidents probably wouldn't feel quite so peaceful about it, either.
Who says data has been lost? Only if your protocol is fire-and-forget with no checking. But then you'd have data loss no matter the language(s) you implemented it in if your system became inundated with requests and crashed.