They ask for work after they finish the previous job (or jobs; they can ask for more than one). Each worker is a single process built for just one task.
If there's no work for them, they wait out a small timeout and ask again. Simple loop. It's all part of a library we built for building workers. For better or worse, it's all done over HTTP.
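The loop is roughly this shape. A sketch in Python, where `fetch_task` / `report_done` stand in for the HTTP calls to the queue (the names and signatures are mine, not the actual library's):

```python
import time

def worker_loop(fetch_task, handle, report_done, idle_timeout=1.0, max_iterations=None):
    """Poll for work; sleep briefly whenever the queue comes back empty."""
    done = 0
    iterations = 0
    while max_iterations is None or iterations < max_iterations:
        iterations += 1
        task = fetch_task()           # e.g. HTTP GET /tasks/next -> task dict, or None
        if task is None:
            time.sleep(idle_timeout)  # queue empty: back off, then ask again
            continue
        result = handle(task)         # the one task this worker was built for
        report_done(task, result)     # report back to the same queue instance
        done += 1
    return done
```

`max_iterations` is just there to make the loop testable; in production it runs forever.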
You are right, though, it is one XFS volume per queue instance.
We just run multiple instances (EC2) behind a load balancer. Each instance of the queue gets its own set of workers, though, so the workers know the right server to report done to.
We want a way to have a single pool of workers, rather than a pool per queue instance, and have them talk to the load balancer rather than directly, but we haven't come up with a reasonable way to do that.
I like how GCP Cloud Tasks reverses the model. Instead of workers pinging the server asking for work, the queue pings the worker, and the worker is effectively an HTTP endpoint. So you send a message to the server, it queues it, and then it pings a worker with the message.
Ooh, that's kind of interesting. Am I reading this right that it holds the HTTP connection open for up to thirty minutes waiting for the work to complete? That's kind of wild.
Indeed. If you're hitting App Engine or GCP Functions, they auto-scale workers up for you to manage long-running tasks. Ideally, though, you finish as quickly as possible by breaking the work down into more tasks. That way you can parallelize as much as possible.
It is all configurable, but I've scaled up to hundreds of workers at a time to blast through tasks and it wasn't expensive at all.
Workers being an HTTP endpoint makes them super easy to implement and, even better, to write tests for.
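To illustrate why the testing story gets so easy: once the worker is just a request handler, you can boil it down to a pure function (payload in, response out) and test it with no queue, no server, and no mocks. Handler name and payload are hypothetical:

```python
def handle_task(payload):
    """A push-style worker reduced to a function: payload -> (status, body)."""
    if "email" not in payload:
        return 400, {"error": "missing email"}
    # ... the real work would happen here ...
    return 200, {"sent_to": payload["email"]}

# Exercising it is just a function call:
status, body = handle_task({"email": "a@example.com"})
```

The thin HTTP layer on top (Flask route, Cloud Functions entry point, whatever) stays trivial enough that it barely needs tests of its own.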
I love Task Queues. We are using them extensively. They also give you deduplication for free, plus a lot of other nice features: delayed tasks, storing tasks for up to 30 days, extremely detailed rate limits, etc.
Yea, this is the only thing I don't like about them: that I can't test them locally.
More generally, is there something like an "on-prem cloud" which just replicates, say, Cloud Tasks (but also other Cloud APIs) using local compute and, say, a local DB? For testing/development this would be very cool.
Can you elaborate more on this? How do the workers know when they have to process a new job?
Also, am I right in assuming this is typically a single-node setup only, since all the files are mounted on a non-shareable XFS disk?