
Akka checks most of those boxes, AFAIK. And there's really not much you can do in the way of cheating, unless I misunderstand you. Or at least idiomatically you wouldn't cheat in Scala anyway.

Speaking of which, I just realized the AtomicLong I'm using in my IdGenerationActor (it performs an atomic increment of a processId counter in the database, then uses that together with the AtomicLong's value as the input to Hashids; fast, in-process, cluster-safe short-id generation that will leave popular Redis-based solutions in the dust) is completely unnecessary. It was copied and pasted from non-Actor code without consideration.
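For what it's worth, a minimal sketch of the fix, assuming the Java Hashids library (org.hashids.Hashids); the NextId message and the constructor arguments are made up for illustration. Inside an actor a plain var is enough, because messages are handled one at a time:

    import akka.actor.Actor
    import org.hashids.Hashids

    // Hypothetical message for illustration.
    case object NextId

    class IdGenerationActor(hashids: Hashids, processId: Long) extends Actor {
      // A plain var is fine: the actor handles one message at a time,
      // so there is no concurrent access -- no AtomicLong needed.
      private var counter: Long = 0L

      def receive = {
        case NextId =>
          counter += 1
          // processId came from an atomic increment in the database,
          // which is what makes the scheme cluster-safe.
          sender() ! hashids.encode(processId, counter)
      }
    }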

I guess Actors still can't keep you from doing dumb things yet. ;-)



This is just not true. Every other week I have production issues because Akka managed to exhaust the thread pool with long-running/non-responsive actors.


You've done something wrong then? Or maybe you're using experimental features? I might play with them a bit, and it can be frustrating to wait, but I've never launched anything on -experimental before.

I was referring to "cheating", with the idea that you might pollute your Actors with... I dunno. Programmatic connection pooling for your database driver? Passing mutable messages around?

Both of those things would be very unusual in Actors written in Scala since pretty much everything is immutable by default. Seeing that sort of thing should at least raise some eyebrows.

As far as non-responsiveness goes, I've never seen that. But I do take care to use Futures where appropriate, and pipeTo. Maybe it's good habits, or maybe I'm just very lucky.
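The habit, roughly (a sketch; DbLookup, DbResult, the dispatcher name, and blockingDbCall are all invented for illustration): do the slow work in a Future on a dedicated dispatcher and pipe the result back as a message, so the actor itself never sits blocked.

    import akka.actor.Actor
    import akka.pattern.pipe
    import scala.concurrent.{ExecutionContext, Future}

    // Hypothetical messages for illustration.
    case class DbLookup(id: Long)
    case class DbResult(id: Long, payload: String)

    class LookupActor extends Actor {
      // A dedicated dispatcher for blocking work keeps the default
      // dispatcher's threads free for message processing.
      implicit val blockingEc: ExecutionContext =
        context.system.dispatchers.lookup("my-app.blocking-dispatcher")

      def receive = {
        case DbLookup(id) =>
          // Run the slow call off the actor's thread and pipe the
          // result back to the original sender as a plain message.
          Future(blockingDbCall(id)).map(DbResult(id, _)).pipeTo(sender())
      }

      // Placeholder for a slow, blocking call.
      private def blockingDbCall(id: Long): String = ???
    }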

One of my first Akka projects is still running, still processing content it's notified of by Postgres through LISTEN/NOTIFY, still posting that content into Cloudant (basically a managed Lucene deployment in this case).

And it's been running since September 2014 without a restart, AFAIK. That would never have happened with a previous non-Actor solution we might have used, if for no other reason than that a network blip might detach the listener, whereas here the Actor just restarts.
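That restart behaviour is just ordinary supervision; a rough sketch (the actor names here are stand-ins, not the actual project's):

    import akka.actor.{Actor, ActorRef, OneForOneStrategy, Props, SupervisorStrategy}

    // Stand-in for the actor holding the LISTEN/NOTIFY connection;
    // it simply throws when the connection drops.
    class NotifyListenerActor extends Actor {
      def receive = { case _ => () } // react to notifications here
    }

    class ListenerSupervisor extends Actor {
      // Restart the listener whenever it throws, e.g. after a network blip.
      // (The default strategy already does this for plain Exceptions.)
      override val supervisorStrategy: SupervisorStrategy =
        OneForOneStrategy() { case _: Exception => SupervisorStrategy.Restart }

      private val listener: ActorRef =
        context.actorOf(Props[NotifyListenerActor](), "pg-listener")

      def receive = { case msg => listener.forward(msg) }
    }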

My experience has been overwhelmingly positive. Despite doing the wrong thing occasionally.

If you have Actors that are hung, I guess the first thing to try is figuring out which one(s). After that, you could just schedule the supervisor to routinely PoisonPill them, and forcefully stop them after a grace period.
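Roughly like this (a sketch; Reap and ForceStop are made-up messages and the timings are arbitrary):

    import akka.actor.{Actor, ActorRef, PoisonPill}
    import scala.concurrent.duration._

    // Hypothetical messages for illustration.
    case class Reap(hung: ActorRef)
    case class ForceStop(child: ActorRef)

    class ReapingSupervisor extends Actor {
      import context.dispatcher

      def receive = {
        case Reap(hung) =>
          // Polite: PoisonPill is only handled once the mailbox drains.
          hung ! PoisonPill
          // After a grace period, stop it regardless.
          context.system.scheduler.scheduleOnce(30.seconds, self, ForceStop(hung))

        case ForceStop(child) =>
          // Forceful: stops the actor without draining the rest of its mailbox.
          context.stop(child)
      }
    }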

I'm not sure that OTP is going to help with this sort of issue either. You have an apparently blocking process that makes an Actor non-responsive. The fact that it's single-threaded within the Actor and stalls its mailbox is kind of the point of Actor systems, AFAIK.

One thing I feel helps me is to keep my Actors small and doing one thing. It's hard to do too much damage when you only have a dozen lines of actual message handling. ("One thing" is sometimes coordination/aggregation, BTW.) Which means I might have a GetRequestActor, PutRequestActor, DeleteRequestActor, etc., which each do the one thing. But then I also have a DatabaseActor that basically just coordinates/forwards for all of those, so you don't actually have to deal with them yourself.
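Something like this shape, as a sketch (the message types and names are invented):

    import akka.actor.{Actor, ActorRef, Props}

    // Hypothetical request message.
    case class Get(key: String)

    // A worker that does exactly one thing.
    class GetRequestActor extends Actor {
      def receive = {
        case Get(key) => sender() ! s"value-for-$key" // real lookup elided
      }
    }

    // The coordinator callers actually talk to; it just routes.
    class DatabaseActor extends Actor {
      private val getter: ActorRef = context.actorOf(Props[GetRequestActor](), "get")
      // PutRequestActor, DeleteRequestActor, etc. would be created the same way.

      def receive = {
        case g: Get => getter.forward(g) // forward preserves the original sender
        // case p: Put => putter.forward(p), and so on
      }
    }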

And from there, if I decided that no GetRequestActor should take longer than 10 seconds to do its thing, I can easily schedule the DatabaseActor to forcefully stop any task exceeding it (without the need for an actual watch if you choose). You can then log an ERROR for it and work out why. And maybe some requests just take longer and that's OK. So maybe you then write a LongRunningRequestPathExtractor and put that pattern before the normal one. And now you can have multiple timeouts for different paths.

Or maybe you just record the epoch for different requests, and if you've had 1,000 updates since the last View request, you know the next one is going to trigger a reindex. So you give it extra time. Or you set it to allow stale results and reschedule the same call to occur again in 1 minute to minimize the disruption of indexing the changes. Just some thoughts.


>You've done something wrong then?

Well, the idea is that with Erlang you can't.


I don't believe Erlang goes around magically imposing its own timeouts on message handling.


It doesn't, but an Erlang process that is waiting on a message won't ever block other processes from running.


The magic is called "preemptive multitasking".



