
HN hug of death. And well deserved, I had a great time with this one.


I was SO CLOSE to finishing the tale about the Officer Spider who ended up freeing his former employee of his biggest constraint, TIME, but... then it crashed. I really want to finish that story!


Damn! Sorry about that. I’m upping the servers right now. Can’t wait to hear the end of this inspirational tale :)


Should've built it in elixir, I heard it's webscale


I’d be curious to hear what went wrong and took the server down. Elixir apps usually handle these traffic spikes just fine.


That sounds interesting. I hope the website is back up soon, otherwise I'll just forget and move on with my day.


Isn't elixir supposed to scale better than this?


Elixir probably isn't the bottleneck. The VPS instance itself, and whatever server they're using there, probably bit the dust.


yeah so I'm seeing a ton of "(Postgrex.Error) FATAL 53300 (too_many_connections) remaining connection slots are reserved for non-replication superuser connections", which I'm guessing means the pool size is too small? Although I've tried upping that, CPU, and memory, and none have seemed to fix it. My devops skills still aren't great so if anyone has any ideas on how to help, let me know!
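For what it's worth, FATAL 53300 is Postgres refusing connections because its own slots ran out, which usually means the pool is too *large* (or there are too many pools), not too small: every app instance opens `pool_size` connections, and the total has to stay under Postgres's `max_connections`. A back-of-envelope budget check, assuming Postgres defaults and hypothetical deployment numbers:

```python
# Rough connection-budget check. max_connections and superuser_reserved
# are Postgres defaults; the instance count and pool size are made up --
# plug in the real values from your deployment.
max_connections = 100      # Postgres default max_connections
superuser_reserved = 3     # default superuser_reserved_connections
app_instances = 4          # hypothetical: BEAM nodes running the app
pool_size = 30             # hypothetical Ecto :pool_size per instance

available = max_connections - superuser_reserved   # slots for normal users
demanded = app_instances * pool_size               # connections requested

# When demanded exceeds available, Postgres raises exactly this error
print(demanded, available, demanded > available)
```

If the demanded total is over budget, upping the pool (or CPU/memory) can't help; you either shrink the pools, raise `max_connections`, or put a pooler in front of Postgres.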

This might be a good opportunity to change hosting and try out fly.io


PgBouncer is your friend for Postgres DBs with lots of client connections.
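A minimal PgBouncer setup in transaction-pooling mode might look like this (hostnames, database names, and sizes are placeholders, not the OP's actual config):

```ini
; minimal pgbouncer.ini sketch -- all values hypothetical
[databases]
myapp = host=127.0.0.1 port=5432 dbname=myapp_prod

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
; transaction pooling: a server connection is held only for the duration
; of each transaction, so many clients share a small set of backends
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
```

One caveat for an Elixir app: transaction pooling doesn't play well with named prepared statements, which Postgrex uses by default; Postgrex has a `prepare: :unnamed` option to work around that.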

A larger pool size only helps if connections are freed up quicker than they are used.


Would you mind elaborating on the second point? Is this unique to larger pools? My inclination is that this would be true for pools of all sizes.


I suspect GP meant: increasing pool size is only prolonging the inevitable

I've also observed on other DBs that you can increase the connection pool but it shifts the problem to other resources.


Thanks for the clarification. Makes sense
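One way to make that point concrete is Little's law: the average number of busy connections equals the request rate times how long each request holds a connection. A toy sketch with made-up numbers:

```python
def connections_needed(requests_per_sec, avg_hold_seconds):
    # Little's law: mean concurrency = arrival rate x time each
    # request holds a connection
    return requests_per_sec * avg_hold_seconds

# 500 req/s, each holding a connection for 20 ms: ~10 connections suffice
print(connections_needed(500, 0.02))

# Same traffic with queries slowed to 200 ms under load needs ~100,
# which is why a bigger pool just moves the wall: the fix is freeing
# connections faster (faster queries, caching, PgBouncer), not more slots
print(connections_needed(500, 0.2))
```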


BEAM is not fast. You're probably going to get better performance with Node.js. It is, however, really, really low-latency, but that doesn't help with handling load. Anyway, even so, I'd guess bandwidth or the database is the issue.


BEAM is not fast when it comes to CPU-intensive workloads. "Better performance" has many dimensions; in many (most?) web-serving scenarios the bottleneck is latency to other services (e.g. Postgres), not the processing in the web server itself, as you suspect in this case as well.

BEAM/Erlang/Elixir's main advantage IMHO is parallelism and "cheap" concurrency that's easy to wrap one's head around. It uses multiple cores, and processes (lightweight threads) are both cheap (small stack size, per-process GC) and easy to get right (due to immutable data structures and supervision). You'll also get less spiky latency, even under high load.

Node.js's main advantages are the breadth of its library ecosystem, industry familiarity, and its JIT compiler (if used correctly). At the cost of more setup, one can also run an instance per core for parallelism.

So I'd suggest that with BEAM he'd get quite good performance, both in terms of developer time and in serving many (network-bound) requests on a multicore machine.


> BEAM is not fast

> It is however really really low latency

I've been interested in Erlang for a while, and reading this is intriguing. Taken literally the above makes no sense :) but it also kind of... does, sort of, and I would like to understand how this works.


There's often a tradeoff between latency and throughput. Think GC algorithms or IO buffers. The Erlang project has historically chosen the far left end of this spectrum. They also haven't cared about numerical performance at all, delegating that to C code.
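The IO-buffer case can be turned into a toy calculation (numbers are made up): batching raises throughput because fixed per-flush costs are amortized over the batch, but the first item in each batch sits waiting for the rest before anything is flushed.

```python
def worst_case_extra_wait_ms(batch_size, arrival_gap_ms):
    # The first item queued waits for the rest of the batch to arrive
    # before the flush, trading latency for fewer (amortized) flushes.
    return (batch_size - 1) * arrival_gap_ms

print(worst_case_extra_wait_ms(1, 2))    # no batching: 0 ms extra latency
print(worst_case_extra_wait_ms(64, 2))   # 64-item batches: 126 ms extra
```

A latency-first runtime like BEAM picks the small-batch end of this spectrum, which is why it can feel "slow" on raw throughput benchmarks yet stay responsive under load.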


Ooooh, burst response as opposed to flat-out super tight hot loops.

That honestly makes a lot of sense. Thanks!


Probably too many queries to the database.



