Hacker News | sorentwo's comments

This is absolutely true.

I can confirm, from firsthand knowledge, that Elixir is used at dozens of Fortune 500 companies in the US.


The roadmap is purely about AI, and reads like it was written by AI. It’s trendy and myopic.

My graduate university had an IBM speaker a few years ago who spoke of a future on the Blockchain.

Well, they have automation on there; that's clearly distinct from AI.

It's funny because automation is the only thing you can expect from AI.

The architecture is remarkable. The lengths they’ve gone to for language version compatibility and protecting app namespaces are especially impressive.

https://github.com/elixir-lang/expert/blob/main/pages/archit...


What are namespaces in Elixir?


Namespaces aren't so much a concept in Elixir; here it refers to the names used for things like modules. Expert will rewrite the code of its "engine" so that the engine's code and dependencies don't overlap with those of the application it is embedded into.
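A toy illustration of the idea (module names here are invented, and this isn't Expert's actual mechanism): if both the engine and the host application depend on the same library, the engine's copy gets a prefix so the two never collide.

```elixir
# The host application keeps calling its own dependency as usual:
defmodule MyApp.Parser do
  def parse(json), do: Jason.decode!(json)
end

# The embedded engine's sources are rewritten into a prefixed namespace,
# so its modules and its dependency tree can't clash with the host's:
defmodule ExpertEngine.Parser do
  def parse(json), do: ExpertEngine.Jason.decode!(json)
end
```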


Absolutely love monodraw for diagrams in documentation! All of the diagrams for Oban and Oban Pro are done this way:

Job Lifecycle: https://hexdocs.pm/oban/job_lifecycle.html

Composition: https://oban.pro/docs/pro/1.6.4/composition.html


Sidenote: thanks so much for taking the time to write the Oban docs. I'm a big user (and fan) of Oban, and the docs are fantastic.


Oban doesn't use advisory locks for fetching jobs (unless there is uniqueness involved)—it uses `FOR UPDATE SKIP LOCKED` as well to pull jobs.
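For anyone curious, that pattern looks roughly like this; a minimal sketch, not Oban's actual SQL, with a simplified table and columns:

```elixir
# Claim up to 10 available jobs without blocking on rows that other
# fetchers have already locked. SKIP LOCKED is what lets concurrent
# consumers cooperate instead of serializing on the same rows.
{:ok, %{rows: rows}} =
  MyApp.Repo.query(
    """
    UPDATE jobs
    SET state = 'executing'
    WHERE id IN (
      SELECT id
      FROM jobs
      WHERE state = 'available'
      ORDER BY priority, id
      LIMIT $1
      FOR UPDATE SKIP LOCKED
    )
    RETURNING id, args
    """,
    [10]
  )
```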


Ping requires something persistent to check. That requires creating tuples, and most likely deleting them after they’ve been consumed. That puts pressure on the database and requires vacuuming in ways that pubsub doesn’t because it’s entirely ephemeral.

Not to mention that pubsub allows multiple consumers for a single message, whereas FOR UPDATE is single consumer by design.


Postgres LISTEN/NOTIFY was a consistent pain point for Oban (background job processing framework for Elixir) for a while. The payload size limitations and connection pooler issues alone would cause subtle breakage.

It was particularly ironic because Elixir has a fantastic distribution and pubsub story thanks to distributed Erlang. That’s much more commonly used in apps now compared to 5 or so years ago, when 40-50% of apps weren’t clustered, thanks to the rise of platforms like Fly that made clustering easier and the decline of Heroku, which made it nearly impossible.
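For anyone who hasn't touched it from Elixir, the raw LISTEN/NOTIFY flow looks roughly like this; a minimal sketch using Postgrex directly, with connection details and channel names made up. The payload cap (under 8000 bytes by default) is one of the limitations mentioned above.

```elixir
# Listener: a dedicated connection subscribes to a channel and receives
# notifications as ordinary Erlang messages.
{:ok, conn} =
  Postgrex.Notifications.start_link(
    hostname: "localhost",
    database: "my_app_dev",
    username: "postgres"
  )

{:ok, _ref} = Postgrex.Notifications.listen(conn, "jobs")

# Publisher: payloads are capped, so large job args can't be pushed
# through NOTIFY directly.
MyApp.Repo.query!("SELECT pg_notify('jobs', $1)", ["job 123 available"])

receive do
  {:notification, _conn, _ref, "jobs", payload} -> IO.puts("got: " <> payload)
end
```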


How did you resolve this? Did you consider listening to the WAL?


We have Postgres based pubsub, but encourage people to use a distributed Erlang based notifier instead whenever possible. Another important change was removing insert triggers, partially for the exact reasons mentioned in this post.
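In practice that's a small config change; a sketch, with the app name and queues as placeholders:

```elixir
# config/config.exs — use the distributed Erlang (:pg) based notifier
# when nodes are clustered, rather than Postgres LISTEN/NOTIFY.
config :my_app, Oban,
  repo: MyApp.Repo,
  notifier: Oban.Notifiers.PG,
  queues: [default: 10]
```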


> Another important change was removing insert triggers, partially for the exact reasons mentioned in this post.

What did you replace them with instead?


In-app notifications, which can be disabled. Our triggers were only used to get subsecond job dispatching though.


Distributed Erlang if the application is clustered, Redis if it is not.

Source: Dev at one of the companies that hit this issue with Oban


What about Heroku made Erlang clustering difficult? It's had the same DNS clustering feature that Fly has, and they've had it since 2017: https://devcenter.heroku.com/articles/dyno-dns-service-disco....
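(For reference, that's the kind of record a libcluster DNS strategy polls; a rough sketch, with hostnames invented:)

```elixir
# config/runtime.exs — poll DNS for peer nodes and connect to them.
config :libcluster,
  topologies: [
    dns: [
      strategy: Cluster.Strategy.DNSPoll,
      config: [
        polling_interval: 5_000,
        query: "myapp.internal",
        node_basename: "myapp"
      ]
    ]
  ]
```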


The problem was with restrictive connections, not DNS based discovery for clustering. It wasn't possible (as far as I'm aware) to connect directly from one dyno to another through tcp/udp.


That is not an issue when using Private Spaces, which have been available since 2015.


I didn’t realize Oban didn’t use Mnesia (Erlang built-in).


Very, very few applications use Mnesia. There’s absolutely no way I would recommend it over Postgres.


I have heard that Mnesia is very unreliable, which is a damn shame.

I wonder if that is fixable, or just inherent to its design.


My understanding is that mnesia is sort of a relic. Really hard to work with and lots of edge / failure cases.

I'm not sure if it should be salvaged?


I think RabbitMQ still uses it by default for its metadata storage. Is it problematic?


They are in the process of migrating away from it https://www.rabbitmq.com/docs/metadata-store


Can you explain why?


Mnesia along with clustering was a recipe for split-brain disasters a few years ago. I assume that hasn't been addressed.


I have only worked with a product that used it, so no direct experience, but one problem that was often mentioned is split-brains happening very frequently.


Shaun Ryder sets a low bar. Definitely the best maracas player though.


Kurt's our Ryder.


That's the issue with goroutines, threads, or any long running chain of processes. The tasks must be broken up into atomic chunks, and the state has to be serialized in some way. That allows failures to be retried, errors to be examined, results to be referenced later, and the whole thing to be distributed between multiple nodes.

It must, in my view at least, as that's how Oban (https://github.com/oban-bg/oban) in Elixir models this kind of problem. Full disclosure: I'm an author and maintainer of the project.

It's Elixir specific, but this article emphasizes the importance of async task persistence: https://oban.pro/articles/oban-starts-where-tasks-end
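Concretely, each atomic chunk becomes a worker whose args are serialized into the database; a minimal sketch, with the module and helper functions made up:

```elixir
defmodule MyApp.ThumbnailWorker do
  # Args live in the database, so a failure here is retried with
  # backoff, errors are recorded on the job, and the work can run
  # on any node in the cluster.
  use Oban.Worker, queue: :media, max_attempts: 5

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"upload_id" => id}}) do
    with {:ok, upload} <- MyApp.Uploads.fetch(id),
         {:ok, thumb} <- MyApp.Images.thumbnail(upload) do
      MyApp.Uploads.attach_thumbnail(id, thumb)
    end
  end
end

# Enqueue a chunk of work; the job record survives restarts.
%{upload_id: 123}
|> MyApp.ThumbnailWorker.new()
|> Oban.insert()
```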


Absolutely!

Elixir's lightweight processes and distribution story make it ideal for orchestration, and that includes orchestrating LLMs.

Shameless plug, but that's what many people have been using Oban Pro's Workflows for recently, and something we demonstrated in our "Cascading Workflows" article: https://oban.pro/articles/weaving-stories-with-cascading-wor...

Unlike Hatchet, it actually runs locally, in your own application as well.
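For a flavor of what that looks like, a workflow strings jobs together with explicit dependencies; a rough sketch with invented worker modules, and the exact module/function names vary between Pro versions:

```elixir
alias Oban.Pro.Workers.Workflow

# Fan-out/fan-in: fetch runs first, the two analysis jobs run after it,
# and summary waits on both. Intermediate state lives in the database.
Workflow.new()
|> Workflow.add(:fetch, MyApp.FetchWorker.new(%{url: "https://example.com"}))
|> Workflow.add(:entities, MyApp.EntitiesWorker.new(%{}), deps: [:fetch])
|> Workflow.add(:sentiment, MyApp.SentimentWorker.new(%{}), deps: [:fetch])
|> Workflow.add(:summary, MyApp.SummaryWorker.new(%{}), deps: [:entities, :sentiment])
|> Oban.insert_all()
```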

