quaunaut's comments | Hacker News

This reads as if it isn't trivial to have an HTTP API for your public API in Erlang/Elixir, which is weird. Sure, there isn't a built-in HTTP API for Erlang processes, but why exactly would you want one? They're not for the public internet; they're an implementation detail of your system. The majority of what they're capable of just isn't relevant to the public internet.


Unfortunately very little is trivial for me. Personally I have found the real value of Erlang to be internally between trusted nodes of my own physical infrastructure as a high-level distributed "brain" or control plane for health monitoring, config distribution (env vars, static config files, etc.), smart failover decisions, and so on. Keep the "outside view" (HTTP, SMTP, DNS) standards-based, mapped internally to daemons, each of which is individually robust (HAProxy, MySQL Cluster, Apache/Node.js, Postfix, PowerDNS, etc.). Then use an Erlang/Elixir service as a live config and state authority: replicating state across the infrastructure, pushing updates in real time, and letting my legacy PHP/Python/JavaScript/etc. code query this config via a simple HTTP/JSON API into the Erlang service. I'm not all the way there yet, but what works is most encouraging.
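For the last hop, the HTTP/JSON side really can be tiny. A rough sketch of what I mean, assuming Plug/Cowboy and Jason as dependencies, with ConfigStore standing in for whatever process or ETS table actually owns the replicated state:

    # Hypothetical Plug router exposing the config authority as JSON.
    # ConfigStore is a stand-in for whatever holds the replicated state.
    defmodule ConfigAPI do
      use Plug.Router

      plug :match
      plug :dispatch

      get "/config/:key" do
        case ConfigStore.get(key) do
          nil ->
            send_resp(conn, 404, ~s({"error":"not_found"}))

          value ->
            conn
            |> put_resp_content_type("application/json")
            |> send_resp(200, Jason.encode!(%{key => value}))
        end
      end

      match _ do
        send_resp(conn, 404, "not found")
      end
    end

    # Started under the supervision tree with something like:
    #   {Plug.Cowboy, scheme: :http, plug: ConfigAPI, options: [port: 8080]}

The legacy PHP/Python side then just does a GET against /config/<key>.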


This stands to reason. If you need to bridge different languages together, as in your case, they need to speak a common tongue; REST/GraphQL/gRPC solve this problem in different ways. There is no technical limitation keeping you from serving HTTP traffic from Erlang/Elixir, but in my experience it isn't pleasant. JavaScript or Python are dead simple, until you realise that 64-bit integers are not a thing in JS and need to be handled as strings. Similarly, tuples will give you hell in Python.
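On the Elixir side that usually just means stringifying anything that can leave JS's safe-integer range before it hits the JSON encoder. A rough sketch (module name invented):

    defmodule JsonSafe do
      # JS Numbers are IEEE 754 doubles; integers above 2^53 - 1 silently lose
      # precision, so anything larger goes over the wire as a string.
      @max_safe_js_int 9_007_199_254_740_991

      def encode_int(n) when is_integer(n) and abs(n) > @max_safe_js_int,
        do: Integer.to_string(n)

      def encode_int(n), do: n
    end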

On the other hand, if you don't need to cross that boundary, the BEAM will very happily talk to itself and let you send messages between processes without having to even think about serialisation or whether you're even on the same machine. After all, everything is just data with no pointers or cyclic references. That's more than can be said for most other languages (Python's pickle is pretty close), and you can probably even share Erlang's equivalent of file descriptors across servers (haven't tried, correct me if I'm wrong), which is pretty insane when you think about it.
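A minimal sketch of what that looks like between two connected nodes (node names invented; assumes both were started with the same cookie):

    # On node :"a@host" - register a process under a well-known name.
    defmodule Echo do
      def start do
        pid = spawn(fn -> loop() end)
        Process.register(pid, :echo)
      end

      defp loop do
        receive do
          {:ping, from} ->
            # Plain terms cross the dist connection with zero serialisation code.
            send(from, {:pong, node()})
            loop()
        end
      end
    end

    # On node :"b@host":
    #   Node.connect(:"a@host")
    #   send({:echo, :"a@host"}, {:ping, self()})
    #   receive do
    #     {:pong, from_node} -> IO.inspect(from_node)
    #   end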

> I have found the real value of Erlang to be internally between trusted nodes of my own physical infrastructure as a high-level distributed "brain" or control plane

I think this is pretty high praise, considering it's about as old as C++ and was originally designed for real-time telephone switches.


> There is no technical limitation keeping you from serving HTTP traffic from Erlang/Elixir, but in my experience it isn't pleasant.

I would be interested in hearing what was unpleasant. I've run inets httpd servers (which I did feel maybe exposed too much functionality) and yaws servers, and yaws seems just fine. Maybe yaws_api is a bit funky, too. I don't know the status of ACME integration, which I guess could make things unpleasant; when I was using it for work we used a commercial CA, and my current personal work with it doesn't involve TLS, so I don't need a cert.

> you can probably even share Erlang's equivalent of file descriptors across servers (haven't tried, correct me if I'm wrong)

Ports are not network transparent. You can't directly send to a port from a different node. You could probably work with a remote Port with the rpc server, or some other service you write to proxy ports. You can pass ports over dist, and you can call erlang:node(Port) to find the origin node if you don't know it already, but you'd definitely need to write some sort of proxy if you want to receive from the port.
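For the send direction you can get away with hopping to the owning node via rpc; receiving from the port is where you'd still need the proxy process. Untested Elixir sketch:

    defmodule PortProxy do
      # The port only means something on its origin node, so run the call there.
      def command_on_owner(port, data) do
        owner = :erlang.node(port)
        :erpc.call(owner, Port, :command, [port, data])
      end
    end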


Perhaps I was a little harsh; this was a few years back when I was evaluating Elixir for a client but ended up going back to a TS/Node.js stack instead. While the Phoenix documentation is stellar, I found it difficult to find good resources on best practices. I was probably doing something stupid and ran into internal, difficult-to-understand exceptions being raised on the Erlang side, from Cowboy if I recall. In another case I was trying to validate API JSON input, and the advice I got was to use Ecto (which I never really grokked) or to pattern match and fail. In JS, libraries like Zod and Valibot are a dream to work with.
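(For anyone who hits the same wall: the Ecto route people suggested is usually a schemaless changeset rather than a full schema module, roughly like this - untested sketch, field names made up:)

    defmodule SignupParams do
      import Ecto.Changeset

      @types %{email: :string, age: :integer}

      # Validate raw JSON params with a schemaless changeset.
      def validate(params) do
        {%{}, @types}
        |> cast(params, Map.keys(@types))
        |> validate_required([:email])
        |> validate_number(:age, greater_than: 0)
        |> apply_action(:insert)  # {:ok, map} | {:error, changeset}
      end
    end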

The result was a lot of frustration: I had been thoroughly impressed by Elixir and Phoenix in the past, but I already knew how to achieve the same goal in Node.js with less code and could justify that choice to a client. It didn't quite feel "there" to pick up and deploy, whereas SvelteKit with tRPC felt very enabling at the time and was easily picked up by others. Perhaps I need another project to try it out again and convince me otherwise. Funnily enough, a year later I replaced a problematic Node.js server with Phoenix + Nerves running on an RPi Zero (ARM), with flawless cross-compilation and deployment.

> Ports are not network transparent

I stand corrected, thank you for the explanation!


64-bit ints have been a thing in JS for a while now.


No, they aren't. You have to use BigInt, which will throw an error if you try to serialise it to JSON or combine it with ordinary numbers. If you happen to need to deserialise a 64-bit integer from JSON, which I sadly had to do, you need a custom parser to construct the BigInt from a raw string directly.


In case you didn't already know of it: CloudI is a cloud framework built with Erlang that provides many of the features you mention - https://cloudi.org/ See the FAQ for an overview.


> Let's count the number of commercially successful released games that don't use mutability or object orientation...
>
> I can't find any.

They use it, but the best tend to minimize mutability. And as for object orientation: merely 'having classes' is not the same thing.


Unreal, Unity, Godot, Source, virtually every game engine is object oriented.


It's a normal part of scaling, because bringing in the new technology often introduces its own ways of causing the exact same problems. Often they're difficult to integrate into automated tests, so folks mock them out, leading to issues. Or a configuration difference between prod and local introduces a problem.

Your DB, on the other hand, is usually a well-understood part of your system, and while scaling issues like that can cause problems, they're often fairly easy to predict - just unfortunate in their timing. This means that while they'll be disruptive, they're usually solved quickly, which you can't always say for additional systems.


Just for example, the site for the National Archives still shows Article 1 Sections 9 & 10: https://www.archives.gov/founding-docs/constitution-transcri...


Genuine question, as I've not invested much into understanding this. What features of the OS would enable these kinds of network restrictions? Basic googling/asking AI points me in the direction of things that seem a lot more difficult in general, unless using something like AppArmor, at which point it seems like you're not quite in OS land anymore.


The main lib everyone uses, :ex_aws, has been actively maintained for over a decade[1]. Official or not, it's used by practically the entire community, and even non-AWS services often support its API.

1. https://github.com/ex-aws/ex_aws/releases?page=2
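If it helps anyone evaluating it, usage is mostly "build an operation struct, pipe it into a request", and S3-compatible endpoints get pointed at via config. Rough sketch from memory (bucket/host names made up):

    # config/runtime.exs (remember `import Config` at the top of the file)
    config :ex_aws,
      access_key_id: System.get_env("AWS_ACCESS_KEY_ID"),
      secret_access_key: System.get_env("AWS_SECRET_ACCESS_KEY")

    # Pointing the S3 service at a non-AWS, S3-compatible endpoint:
    config :ex_aws, :s3,
      scheme: "https://",
      host: "s3.minio.internal",
      port: 9000

    # Anywhere in app code: build the operation, then execute it.
    csv_binary = "id,name\n1,example\n"

    "my-bucket"
    |> ExAws.S3.put_object("reports/2024.csv", csv_binary)
    |> ExAws.request!()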


I'm not a big fan of Go, and I think you could write this about nearly any language.


Config/settings management is often paired with things that require at the very least an app reboot, whereas feature flags are explicitly something that should be capable of changing at will.

Now, could you have real-time config management that doesn't require a re-deploy/reboot of the app? Sure, but the typical 12-factor app can't really avail itself of that without significant rework.
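Concretely, the gap is something like this: a boot-time setting is read once from the environment, while a flag has to hit something you can flip while the app is running. Sketch of the runtime-togglable version in Elixir, assuming an ETS table created at startup (names invented):

    defmodule Flags do
      # Runtime-togglable feature flags backed by a public ETS table; contrast
      # with a 12-factor env var, which is only re-read on boot/redeploy.
      def init do
        :ets.new(:flags, [:named_table, :set, :public, read_concurrency: true])
      end

      def enable(flag), do: :ets.insert(:flags, {flag, true})
      def disable(flag), do: :ets.insert(:flags, {flag, false})

      def enabled?(flag) do
        case :ets.lookup(:flags, flag) do
          [{^flag, value}] -> value
          [] -> false
        end
      end
    end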


I mean, this mechanism wouldn't make things any harder to maintain for a server operator using entirely non-Google cert authorities. I'm pretty cynical about Google these days, but I don't see how this wouldn't be a boon to everybody pretty equally.


To tl;dr for people:

- As we've known for years, cryptographically-relevant quantum computers (CRQC) likely could wreck digital security pretty massively

- For HTTPS, 2 out of its 3 uses of cryptography are vulnerable to CRQC

- The currently accepted algorithms that fix these vulnerabilities transmit 30+ times the data of current solutions, which under less reliable network conditions (like mobile) can increase latency by as much as 40%

- Because attackers could store data now and decrypt it later with a CRQC, some applications need to deploy a solution now, so Chromium has enabled Kyber (aka ML-KEM) for those willing to accept that cost

- However, other algorithms are being worked on to reduce that data size, and at the moment server operators can generally only deploy one certificate, which older clients like smart TVs, kiosks, etc. are unlikely to support if it uses those newer algorithms

- So they're advocating for "trust anchor negotiation": letting clients and servers negotiate which certificate to use, allowing servers to offer multiple at the same time

Honestly, a really impressively written article. I've understood for years the risk that a cryptographically-relevant quantum computer would pose, but I didn't really know/understand what was being done about it, or the current state of things.

