
I think it's simple: async runtimes/modules in JavaScript/Node, Python (asyncio), and Rust. They basically handle the message queuing for you, transparently, inside a single application. You end up writing "async" and "await" all over the place, but that's all you need to do to get your MVP out, and it will work fine until you really become popular. Even then it can still work without external queues if you can scale horizontally, e.g. by giving each tenant their own container and subdomain.
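As a rough illustration (not anyone's specific stack), here's a minimal Node/TypeScript sketch; fetchFromDb is just a made-up stand-in for any awaited I/O call:

    // The event loop interleaves many in-flight requests on one thread
    // while each handler awaits I/O -- no explicit queue in sight.
    import * as http from "node:http";

    function fetchFromDb(id: string): Promise<string> {
      // Simulated I/O latency; other requests are served while this is pending.
      return new Promise((resolve) => setTimeout(() => resolve(`row-${id}`), 50));
    }

    const server = http.createServer(async (req, res) => {
      const url = new URL(req.url ?? "/", "http://localhost");
      const id = url.searchParams.get("id") ?? "0";
      const row = await fetchFromDb(id); // the runtime parks this handler until the "query" resolves
      res.end(row);
    });

    server.listen(3000);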

There are places where you need a queue just for basic synchronization, but you can use in-process modules for that, which are more convenient than external queues. And you can start testing your program without even doing that.
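For example, a tiny in-process serializer can cover the basic-synchronization case; this is a toy sketch, not any particular library's API:

    // Serializes async jobs so only one touches a shared resource at a time,
    // without standing up an external broker.
    class SerialQueue {
      private tail: Promise<void> = Promise.resolve();

      push<T>(job: () => Promise<T>): Promise<T> {
        const result = this.tail.then(job);
        // Keep the chain alive even if a job rejects.
        this.tail = result.then(() => undefined, () => undefined);
        return result;
      }
    }

    const queue = new SerialQueue();
    // Writes to the same record happen strictly in order:
    queue.push(async () => { /* write A */ });
    queue.push(async () => { /* write B -- runs only after A settles */ });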

Async is also used heavily in Rust, which can stretch a single server even further before you need to scale out.

Without an async runtime (or something like it), you end up either inventing your own internal event loop or reaching for queues, because otherwise every request blocks while waiting on IO.

You may still end up with queues down the line once you have a large number of users, but that complexity is completely unnecessary for getting a system deployed in the early days.



To back up the story regarding async a bit, at least on the front end: a long time ago in the 2000s, we'd have a server farm to handle client connections, since we did all rendering on the server at the time. On those heavyweight front-end servers, we used threading with one TCP connection assigned to each thread. Threading was also less efficient (on Linux, at least) than it is now, so a large number of clients necessitated a large number of servers.

When interfacing with external systems, standard protocols and/or file formats were preferred. Web services of some kind were starting to become popular, usually just for interfacing with external systems, since they used XML (SOAP) at the time and processing XML is computationally expensive. This was before Google's V8 was released, so JavaScript was seen as a (slow) toy language for minor DOM modifications, not for doing significant portions of the rendering. The general guidance was that anything like client-side form validation in JS was only for slight efficiency gains, and all application logic had to be done on the server.

The release of NGINX (to resolve the C10K problem), V8 (to make JS run faster), and Node.js (to let front-end systems hold large numbers of mostly idle TCP connections) changed that paradigm toward the end of the 2000s.

Internally, applications often used proprietary communication protocols, especially when interacting with internal queueing systems. For internal systems, businesses prefer that data be retained and intact. At the time, clients still sometimes preferred that systems be able to participate in distributed two-phase commit (XA), but I think that preference has faded a bit. When writing a program that services queues, you didn't need to worry about having a large number of threads or TCP connections -- you just pulled a request message from the request queue, processed it, pushed a response onto the response queue, and moved on to the next request (roughly the loop sketched below).

I'd argue that the weakening of that strong preference for transactional integrity, the removal of the need for internal services to care about the C10K problem (thanks to async), and the need to retain developers who want to work with the current "cool" technologies all reduced the demand for internal messaging solutions that guarantee durability and integrity of messages.
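For concreteness, here's that service loop against SQS with the AWS SDK v3; the queue URLs and handleRequest are placeholders, and older internal MQ systems had different APIs but the same shape:

    import {
      SQSClient,
      ReceiveMessageCommand,
      SendMessageCommand,
      DeleteMessageCommand,
    } from "@aws-sdk/client-sqs";

    const sqs = new SQSClient({});
    const REQUEST_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/requests";
    const RESPONSE_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/responses";

    async function handleRequest(body: string): Promise<string> {
      return `processed: ${body}`; // stand-in for the real business logic
    }

    async function serviceQueue(): Promise<void> {
      while (true) {
        // Long-poll for the next request message; no thread pool to manage.
        const { Messages } = await sqs.send(new ReceiveMessageCommand({
          QueueUrl: REQUEST_QUEUE,
          MaxNumberOfMessages: 1,
          WaitTimeSeconds: 20,
        }));
        for (const msg of Messages ?? []) {
          if (!msg.Body || !msg.ReceiptHandle) continue;
          const response = await handleRequest(msg.Body);
          await sqs.send(new SendMessageCommand({
            QueueUrl: RESPONSE_QUEUE,
            MessageBody: response,
          }));
          // Delete only after the response is safely on the response queue.
          await sqs.send(new DeleteMessageCommand({
            QueueUrl: REQUEST_QUEUE,
            ReceiptHandle: msg.ReceiptHandle,
          }));
        }
      }
    }

    serviceQueue();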

Also, AWS's certifications try to reflect how their services are used. The AWS Developer - Associate still covers SQS, so people are still using it, even if it isn't cool. At my last job I saw applications using RabbitMQ, too.



