
That's circular. I can't argue anything from this point. "Why should I get an electric car when I have a good old gas car in the driveway?" I don't have an answer to that. I do have answers for why one is better than the other. APIs work over HTTP in spite of its limitations, not because of good synergy. I think gRPC is the most reasonable implementation of such a layer (disclaimer: I work for Google, but not on gRPC, and I used gRPC before I worked at Google), but I still think it is overkill for many people. If you are using HTTP+REST+JSON and it works fine for what you are doing, then fine - there's an ecosystem already built around it. But the kinds of things people do with lighter-weight, more efficient RPC layers literally aren't doable over standard HTTP/1.1 and REST. When you can measure the absolute overhead in bytes, it enables things you wouldn't otherwise think of. (As an example, I'm not aware of anyone actually doing this, but it would almost certainly be possible to forward low-level signals like USB or perhaps even PCI Express packets over a lightweight RPC layer, and get all of the encryption/access control/etc. you already have in your stack.)

Answers for why HTTP/1.1 is a poor fit:

- Text format requires text parsing. How long can a header line get before you reject it? Which transport compression schemes do you support? Text parsing is inefficient compared to binary formats.
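(To make the contrast concrete, here's a minimal sketch - not taken from any real stack - of text header parsing versus the kind of length-prefixed binary framing most binary RPC protocols use:)

```python
import struct

# Text framing: scan for delimiters, then parse key/value text.
# You also have to decide how long a line may get before you reject it.
def parse_text_header(line: bytes):
    name, _, value = line.rstrip(b"\r\n").partition(b": ")
    return name, value

# Binary framing (length-prefixed): read a fixed 4-byte prefix, then
# exactly that many payload bytes. No scanning, no limits to guess at.
def encode_frame(payload: bytes) -> bytes:
    return struct.pack(">I", len(payload)) + payload

def decode_frame(buf: bytes) -> bytes:
    (length,) = struct.unpack(">I", buf[:4])
    return buf[4:4 + length]
```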

- A lot of difficult-to-understand behavior. When do you send 100 Continue, and what do you do when you receive one? What happens when you are on a keep-alive connection and there's no Content-Length? (There's a whole flow chart for something as simple as this.) And so on.
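(The keep-alive/Content-Length case alone is a good illustration. Here's a simplified sketch of the decision a client has to make just to know where a response body ends; the actual rules in RFC 7230 section 3.3.3 have even more cases:)

```python
# Simplified: where does an HTTP/1.1 response body end?
# (RFC 7230 sec. 3.3.3 adds more cases: HEAD, CONNECT, multipart, ...)
def body_delimiter(status: int, headers: dict) -> str:
    if 100 <= status < 200 or status in (204, 304):
        return "no-body"
    if headers.get("Transfer-Encoding", "").lower().endswith("chunked"):
        return "chunked"          # read chunk by chunk
    if "Content-Length" in headers:
        return "content-length"   # read exactly N bytes
    # No length information at all: the body runs until the server
    # closes the connection, so keep-alive is off the table.
    return "read-until-close"
```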

- A lot of cruft. Chunked encoding is weird. Trailers are weird. What happens when a header is specified twice?
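(As a taste of the chunked-encoding weirdness: each chunk is prefixed by its size in hexadecimal text, inside an otherwise text protocol, and trailers ride after the terminating zero-length chunk. A toy encoder, just to show the shape:)

```python
def encode_chunked(chunks):
    # Each chunk: hex size, CRLF, payload, CRLF. A zero-size chunk ends
    # the body; any trailers would go between it and the final CRLF.
    out = b""
    for chunk in chunks:
        out += f"{len(chunk):x}\r\n".encode() + chunk + b"\r\n"
    return out + b"0\r\n\r\n"
```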

Answers for why HTTP/2 is still a poor fit:

- What are the headers even for? You now have an entire section of your request that doesn't matter for RPC, with its own compression scheme called HPACK. Why?

- Server push. It's nice that you have bidirectional streams, but server push is clearly designed for browser user agents. gRPC repurposes HTTP/2 streams for bidirectional streaming, as it should, but...

- ...often, hacks like that lead to the worst problem: you did all of this work to use HTTP as an RPC layer, and you can't even use it in a browser, because the sane things you do for your backend might not be compatible with what browsers allow. gRPC has a special layer (gRPC-Web) for handling this, but it's a lot of additional cruft.

HTTP/REST is great because there's a huge ecosystem, but even that isn't a solid win, due to the complexity. As an example, years ago I ran into huge problems with Amazon ELB because it was buffering my entire request and response payloads and imposing its own timeouts on top. All documented behavior, but you can't just plug in this HTTP thing and hope it works. Basically anything in the middle that also speaks HTTP has to be carefully configured, which again casts doubt on the whole point of using a protocol like HTTP. There are rules for what should be GET, PUT, POST, and DELETE, and yet those all interact strangely: no payload in a GET body, some software gets weird about methods like DELETE, so sometimes you have to support POST for what should be a PUT, and so on.

And at the end of the day, all you really wanted was RPC payloads in both directions, and you have all of this crap around it, and it's largely just because web browsers exist, but none of this stuff even works well together.

It works OK if you don't really care much and just throw a software stack together, but that doesn't mean it will be efficient, doesn't mean you won't run into problems. I definitely prefer to go for simpler, and HTTP is not actually simpler. It just has the benefit of having an existing ecosystem.



I don't think we disagree about anything here. If you want to optimize for maximal machine/network utilization, then optimize for that with gRPC or equivalent; if you want to optimize for a lean stack and have to use HTTP anyway because you're on the web, then use (RPC over) HTTP. Both can be considered more "efficient" depending on the setting and your constraints.

But the point was that contrasting web requests with RPC is a category mistake, and has little to do with the various IO handling and concurrency models the author was discussing.


Well, the thing is, I do agree with the author on their point of not using web requests for RPCs. I think we must be interpreting the author's text differently.


Either that or she's lumping together two separate issues, or both.


The author never conflated the design choices of RPC protocol and threading model. Just the opposite, in fact. She listed them separately.


Except she writes: "Then it does its weird userspace "thread" flip back to the original request's context and reads the response. It chews on this data, because, again, it's terrible JSON stuff and not anything reasonable like a strongly-typed, unambiguous, easily-deserialized message. The clock is ticking and ticking."

If she laments that it is bad design to do deserialization on the IO thread, that's just as true for JSON as it is for protobuf or whatever "true RPC" format she considers worthy.


It is less true for formats that deserialize faster. I still don't see where she is confusing the two. At the very top, she explicitly notes them separately:

"I will not use web requests when the situation calls for RPCs. I will not use 'green' (userspace) 'threads' when there are actual OS-level threads and parallelization is necessary."





