
RPC has its own set of limitations, which if your application qualifies, might be a good fit:

* Coupled server and client. gRPC uses protocol buffers which have zero backwards compatibility.

* Zero discoverability. The client knows in advance what the server can do.

* No standards to follow. You make up your own specs, like Google did.

These constraints are orthogonal to REST, the architectural principles behind the web. What they're doing is tunneling RPC over the web, which is what most HTTP APIs are doing already. There are only superficial differences like the use of protobuf, lack of verbs and URIs, etc.



"protocol buffers which have zero backwards compatibility"

Either I misunderstand you, or this is _remarkably_ wrong: Protocol buffers were designed to make it easy to define protocols which are both backward and forward compatible.
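For context, a minimal sketch of how protobuf field numbering enables this (the `User` message and its fields are made up for illustration):

```proto
// Version 1 of a hypothetical message.
message User {
  string name = 1;
}

// Version 2: a field was renamed and another added. Old clients still
// decode new messages (they skip unknown field 2), and new clients
// decode old messages (field 2 is simply absent). Only the field
// NUMBERS matter on the wire; names can change freely.
message User {
  string display_name = 1;  // renamed: same number, wire-compatible
  string email = 2;         // new field: unknown to v1, safely skipped
}
```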


My understanding is the same. Though I had also come to believe that the primary way to achieve this is via loose constraints, i.e. required fields should be used VERY sparingly.

This compatibility pattern also leads me to conclude that protocol buffers aren't a suitable model for generating a client-side type system. You'll just end up with structures where everything is a Maybe type, so you end up needing tons of bespoke client-side code to handle the possible permutations.

You need a layer on top of them to express the true type system suitable for clients, and I believe GraphQL does a great job of this (though I hasten to add that even GraphQL's type system is relatively limited and isn't a magic bullet).
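As an illustration of the difference (hypothetical types, not from any real API): a GraphQL schema can state non-nullability explicitly, which an all-optional protobuf message cannot.

```graphql
# Non-null (!) fields need no Maybe handling on the client,
# unlike fields of an all-optional protobuf message.
type Flight {
  flightNumber: String!   # guaranteed present
  operator: Airline       # nullable: client must handle absence
}

type Airline {
  name: String!
}
```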


gRPC is based on protobuf3, which doesn't support "required" or "optional" in the first place.


so everything is optional?


Except primitive types. So this is avoided:

> You'll just end up with structures where everything is a Maybe type, so you end up needing tons of bespoke client-side code to handle the possible permutations.

Absent primitive values default to zero, while absent message fields are mapped to what makes most sense in the specific programming language (in most of them, "null" or "nil").
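A rough sketch of that decoding behavior in Python, with plain dataclasses standing in for generated protobuf classes (`User` and `Address` are made up for illustration; this is not the real protobuf API):

```python
from dataclasses import dataclass
from typing import Optional

# Stand-ins for proto3-generated classes: every field carries a
# default, mirroring how absent fields decode in proto3.

@dataclass
class Address:
    city: str = ""                     # absent string -> empty string

@dataclass
class User:
    id: int = 0                        # absent scalar -> zero
    name: str = ""                     # absent string -> empty string
    address: Optional[Address] = None  # absent message -> None/nil

u = User()  # like decoding a message with every field absent
print(u.id, repr(u.name), u.address)  # 0 '' None
```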


yes


I made a small mistake in the first point, got downvoted to oblivion, and people seemed to stop reading there: the binary serialization does indeed have backwards and forwards compatibility; it's the textual serialization that lacks it. Apparently I'm not allowed to misremember a petty detail of a vendor specification. Pedants with encyclopedic knowledge of Google are out in full force today!

I should have stopped at "coupled server and client"; the point is that both rely on an agreed-upon external schema, since the messages are not self-descriptive.


Honestly, the other points don't fare any better, if your intention was to make gRPC look worse than other HTTP-based APIs:

- For all of them, you need to know the remote address (IP/hostname, port) up front, or use an external service-discovery solution.

- For both, you can implement service introspection that returns a list of available services/methods. For gRPC there even exists a standardized introspection mechanism (server reflection). For other HTTP APIs you might download a Swagger/OpenAPI description from a well-known address. Or a WSDL document. Or a GraphQL schema.

- Standards on which layer? On the transport layer you are following the HTTP standard, regardless of whether you are using gRPC, GraphQL, JSON-RPC, or some handmade REST API. On the application layer you are mostly on your own anyway; there aren't many things one could standardize. There are some exceptions, like the standardization of WebDAV on top of HTTP, but most applications have their own specific set of requirements. If we mean standardization without official standardization, then one can argue that gRPC provides a more rigid (standardized) model than the definition of some ad-hoc API: it standardizes how APIs and exchanged data types are defined (.proto files), how they can be accessed, and how data is transferred over the wire (the mapping to HTTP), all without the application developers on either side needing to care about it.
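For instance, the whole contract for a hypothetical service fits in one .proto file (service and message names invented here); stub generation handles the wire format and the HTTP/2 mapping:

```proto
syntax = "proto3";

// Hypothetical API contract: both the methods and the data types
// are defined here, and client/server stubs are generated from it.
service TicketService {
  rpc BookFlight (BookingRequest) returns (BookingReply);
}

message BookingRequest {
  string flight_number = 1;
  string passenger_name = 2;
}

message BookingReply {
  string reservation_id = 1;
}
```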


Actually, gRPC is about the same as other HTTP-based APIs; it is just a more efficient form of RPC. All of them lack what made the web scalable in the first place.

- HTTP APIs are worse than websites of the 90s. At least a single browser could be expected to view any website; HTTP APIs require a custom client for each one.

- Document media types, not APIs. This isn't such a novel concept, browsers (fancy HTTP clients) work because HTML is a standard.

- Standards at the application layer, not specifications. You mentioned specs only.

Let me just clarify that RPC is a great fit if you are constrained to a single vendor and don't care about third-party clients. On the web, every browser is a third party. For HTTP APIs to take off, they need to be built more like websites, or else vendor specs will fill every niche.


I don't understand your point about third-party clients. APIs defined in .proto files can have both clients and servers implemented by anybody.


APIs defined in XMLRPC, CORBA, SOAP, et al, can have clients and servers implemented by anybody.

Programmers don't seem to learn from history, and struggle to think on longer timescales. These formats worked well in a time when a single party (or a closely related second party) controlled both server and client, when services were very consolidated. Now that the web is becoming more and more centralized and closed, it follows that RPC is making a comeback: widespread interoperability is no longer much of a concern.


Sorry but you're not addressing your own point:

> RPC is a great fit if you are constrained to a single vendor and don't care about third-party clients

If I'm not constrained to a single vendor, and care about third-party clients, what makes RPC a bad fit? In specifics, not vague historical comparisons.


Practically speaking, most REST-based servers and clients are tightly coupled anyway.

That is why SDKs are so popular with developers. They just want to call a method and not be concerned with how the bits get across the wire.

And please do not cite the browser as a good example of a REST client. The browser is driven by an advanced AI (namely a human). We are not there yet with machine to machine interactions, and it isn't clear that REST is the magic bullet that will enable this kind of system.


A good rule of thumb is that if a REST API has an SDK, then it's not really RESTful in the first place.

SDKs are an enormous effort to create and maintain for every HTTP API; I think it's a malpractice that is all too common.

Browsers (and by extension, websites) are not good examples of REST in practice? I don't know what world you're in.


> SDKs are an enormous effort to create and maintain for every HTTP API,

Hence the attraction of generating client and server stubs with gRPC.

> Browsers (and by extension, websites) are not good examples of REST in practice?

The "client" of a website is a human being - and we are very good at interpreting dynamic content.

As an example: There are probably 100+ websites out there where you can book a ticket for a flight. It might be painful, but as a human, I can figure out how to navigate and book a flight on any of those systems.

I challenge you to write a REST client that can do the same.


Every single one of those websites uses HTML, a common media type, with semantics defined in that standard. What's missing are the application-level semantics, like what a "flight search" or a "flight booking" is, which can be solved with linked data. In fact, this already exists! [0]

In theory, using a common media type and a linked-data vocabulary, one could make this hypothetical scenario of an automated machine-to-machine flight booking system happen. In practice, it requires changing either how people think or how they build APIs to begin with. Changing how people think is a steep uphill battle; changing how the APIs are built is much easier.

[0]: https://schema.org/FlightReservation
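For example, a booking confirmation could be published as JSON-LD using that vocabulary (all values below are made up for illustration):

```json
{
  "@context": "https://schema.org",
  "@type": "FlightReservation",
  "reservationNumber": "RXJ34P",
  "reservationStatus": "https://schema.org/ReservationConfirmed",
  "underName": { "@type": "Person", "name": "Jane Doe" },
  "reservationFor": {
    "@type": "Flight",
    "flightNumber": "110",
    "provider": { "@type": "Airline", "name": "Example Air", "iataCode": "EX" },
    "departureAirport": { "@type": "Airport", "name": "San Francisco Intl", "iataCode": "SFO" },
    "arrivalAirport": { "@type": "Airport", "name": "John F. Kennedy Intl", "iataCode": "JFK" }
  }
}
```

Any client that understands the schema.org vocabulary can extract the booking, regardless of which vendor produced it.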



