How Did REST Come to Mean the Opposite of REST? (2022) (htmx.org)
76 points by DeusExMachina on March 3, 2024 | hide | past | favorite | 53 comments



This essay is great. I can see from the first comments that its point is entirely lost. Which is too bad; the core idea of Representational State Transfer is interesting and deserves to be respected separate from "HTTP but not SOAP".

I built some of Google's first public APIs and we used SOAP, to match the onion on our belts. It never worked very well. I'm real glad that JSON over HTTP and other forms of RPC succeeded where SOAP failed. But it's a shame that Roy's ideas for REST got lost in all that.


Do you have any resources showing what an API client for a "true REST" system would look like? The #1 objection I've seen to REST as Fielding described it is that it seems to basically assume that the human is the API client.

For example, this passage from Fielding:

> A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations.

The idea of a computer application that has no context beyond an entry URL and a media type (text/html) is pretty hard for me and many like me to wrap our heads around. I can't imagine a system being written that doesn't know what the options will be and what shape the data will come in, and once you've hard coded that it's not obvious what you gain by sending URLs instead of IDs.

I'm totally open to there being something I'm missing here, but every explanation of REST I've seen focuses exclusively on the shape of the server response and doesn't explain what an API client is supposed to do with the response and how REST avoids hard-coding client actions.


I wish I could have you look at a project I worked on in the early 2010s in the security industry. We had an architect who went full ham on HATEOAS and honestly it was great. You could point a client at a single root endpoint and it would self-discover every other URL[path] it needed. We relied on RFCs where possible (sometimes drafts when an accepted RFC wasn't required). Everyone got very familiar with RFC 2616[1], link rels, uri templates[2], media types, &c.

It required architectural dedication because junior devs would complain that it didn't match what they learned about REST from webdev tutorials. Which is to say, the social hurdles to HATEOAS implementations far outweigh the technical hurdles. They'd constantly try to hardcode URLs and have to be reminded not to do that by the backend folks. From the backend, it was great. We had tremendous flexibility when URLs weren't part of the spec, and we were able to tackle hard problems like supporting multiple simultaneous versions quickly and with little lift because of this flexibility.

1. https://www.rfc-editor.org/rfc/rfc2616

2. https://www.rfc-editor.org/rfc/rfc6570
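To make that self-discovery concrete, here is a minimal Python sketch of the pattern described above. The root document shape and the rel names are made up for illustration, and the URI-template expansion handles only the simplest `{var}` case from RFC 6570:

```python
from urllib.parse import quote

def expand(template: str, **vars) -> str:
    """Naive RFC 6570 level-1 expansion: replace {name} with the
    percent-encoded variable value. Real templates support far more."""
    out = template
    for name, value in vars.items():
        out = out.replace("{" + name + "}", quote(str(value), safe=""))
    return out

def find_rel(doc: dict, rel: str) -> str:
    """Look up a link by its rel in a hypothetical root document whose
    'links' member is a list of {'rel': ..., 'href': ...} objects."""
    for link in doc.get("links", []):
        if link["rel"] == rel:
            return link["href"]
    raise KeyError(f"no link with rel {rel!r}")

# Hypothetical service root -- the only URL the client hardcodes.
root = {
    "links": [
        {"rel": "self", "href": "/"},
        {"rel": "urn:example:alerts", "href": "/alerts{?severity}"},
        {"rel": "urn:example:alert", "href": "/alerts/{id}"},
    ]
}

template = find_rel(root, "urn:example:alert")
assert expand(template, id=42) == "/alerts/42"
```

The client's only hardcoded knowledge is the root URL and the rel vocabulary, which is exactly what lets the server move paths around freely.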


> Do you have any resources showing what an API client for a "true REST" system would look like?

You are using one, this very moment, as you read this comment. Fielding's concept of "REST" was not a prescription for a new type of RPC system; it was a description of the architecture of the web as it existed at the time. The "computer application that has no context beyond an entry URL and a media type" is simply a web browser.


Sure, but how is that a helpful concept in any way? If I started calling my operating system an "ALLOCBAG" and wrote a dissertation about "The ALLOCBAG Principle", you can't really deny that it's a thing that exists ("ALLOCBAGs are all around you! You're using one right now", I'd say), but it's not really advancing the state of discussion here.

Similarly, rebadging "a client/server system that does a thing when you interact with the client" as "REST" (yes, I know there's a bit more to the definition) doesn't seem useful or helpful. So out it goes, in favor of the one that's talking about a style of API design.

Roy Fielding designed the Adobe Experience Manager API according to his idea of REST [0]. I don't think the resulting API was remarkably noteworthy. [1]

[0] https://www.slideshare.net/AEMHub2014/rest-in-aem-by-roy-fie... [1] https://experienceleague.adobe.com/docs/experience-manager-6...


Human UIs like HN are the only example I've ever seen that's compelling. But when you refer to it as a RESTful API (TFA uses "APIs" 31 times) you're not talking about a UI, you're talking about an Application Programming Interface, which is explicitly defined as an interface by which computers talk to each other.

If REST just plain shouldn't be used in APIs then that's a fine argument to make, but we should start calling for RESTful UIs and start pointing out that a RESTful API is an oxymoron. We shouldn't be pushing for integrating HATEOAS into APIs.


> Do you have any resources showing what an API client for a "true REST" system would look like? … The idea of a computer application that has no context beyond an entry URL and a media type (text/html) is pretty hard for me and many like me to wrap our heads around.

Here is a very, very simple one. It uses a hypermedia format which uses JSON syntax, but has more requirements in order to be valid:

    {
        "account_number": 12345,
        "balance": {
            "currency": "usd",
            "value": 100.00
        },
        "status": "good",
        "deposits": "/accounts/12345/deposits",
        "withdrawals": "/accounts/12345/withdrawals",
        "transfers": "/accounts/12345/transfers",
        "close-requests": "/accounts/12345/close-requests"
    }
The key is that this is a media type (let’s call it application/example+bank-account) which defines a particular object schema, which includes certain properties whose values must be URL strings (it uses HTML’s relative-URL rules).

> once you've hard coded that it's not obvious what you gain by sending URLs instead of IDs.

Well, for one thing the client does not need to care about constructing URLs because it just follows them. The logic is just ‘grab the value of a key, verify that it is a URL, follow it’ rather than ‘remember an ID, use some implicit knowledge to construct a URL, follow it.’ It means that the server is free to delegate certain accounts to other servers. It makes federation a ton easier. It means that application code is able to live at a higher level.
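A minimal sketch of that follow-the-link logic, reusing the example payload above. The field names come from the hypothetical application/example+bank-account media type; the actual HTTP fetch is omitted:

```python
from urllib.parse import urljoin

# Rels the hypothetical application/example+bank-account media type
# promises will hold relative-URL strings, per its (imagined) spec.
LINK_FIELDS = {"deposits", "withdrawals", "transfers", "close-requests"}

def follow(account: dict, rel: str, base: str) -> str:
    """Return the absolute URL for one of the account's link fields.
    The client never constructs a URL from an ID; it only resolves
    what the server handed it."""
    if rel not in LINK_FIELDS:
        raise ValueError(f"{rel!r} is not a link field in this media type")
    href = account[rel]
    if not isinstance(href, str):
        raise TypeError("link fields must be URL strings")
    return urljoin(base, href)

account = {
    "account_number": 12345,
    "balance": {"currency": "usd", "value": 100.00},
    "status": "good",
    "deposits": "/accounts/12345/deposits",
    "withdrawals": "/accounts/12345/withdrawals",
    "transfers": "/accounts/12345/transfers",
    "close-requests": "/accounts/12345/close-requests",
}

assert follow(account, "deposits", "https://bank.example") == \
    "https://bank.example/accounts/12345/deposits"
```

Note that nothing here knows the server's URL layout: if the server delegates account 12345 to another host, the `deposits` value simply becomes an absolute URL and the same client code keeps working.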


> The #1 objection I've seen to REST as Fielding described it is that it seems to basically assume that the human is the API client.

Reading the essay, it seems the goal is indeed that the client is the human, so it would be weird for that to be an objection. If so, the objection seems rather to be that a RESTful API is not terribly useful for a lot of systems.

Which is fine, not everyone has to love hamburgers either.

But yeah, I'd love to see a proper non-trivial example.


The essay repeatedly uses the phrase "RESTful API", so it's natural for people to get confused when it advocates for things that are fundamentally incompatible with an API client.


Imagine a REST endpoint as a database interface, with the various HTTP operations mapping to record read, create/update, and delete.

The client could dynamically generate forms that match the data types of the records. You've probably used off-the-shelf CRUD software that behaves that way, only hard-coded based on settings.
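As a sketch of that form-generation idea, assuming a made-up schema format in which the server describes each field's type:

```python
# Map a made-up schema's field types to HTML input types.
INPUT_TYPES = {"string": "text", "integer": "number", "date": "date"}

def render_form(action: str, schema: dict) -> str:
    """Emit an HTML form for a record, driven entirely by the schema
    the server sent -- nothing about the record type is hardcoded."""
    rows = []
    for name, ftype in schema.items():
        itype = INPUT_TYPES.get(ftype, "text")
        rows.append(f'<label>{name} <input name="{name}" type="{itype}"></label>')
    body = "\n  ".join(rows)
    return (f'<form method="post" action="{action}">\n'
            f'  {body}\n  <button>Save</button>\n</form>')

html = render_form("/records/7", {"title": "string", "count": "integer"})
assert 'type="number"' in html and 'action="/records/7"' in html
```

The same client binary can then render an editor for any record type the server invents later, which is the generic-CRUD-client promise in miniature.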


Per Fielding's definition [1], the constraints of REST architecture are:

* Client-server

* Stateless (no cookies/sessions)

* Cache

* Uniform Interface (i.e. HTTP verbs)

* Layered System (proxies, load balancers, etc)

* Code-on-Demand (applets, scripts)

To fit the REST definition, it needs to include client-side execution of server applets/scripts.

Right?

This is all such a weird game trying to defend an implausible definition.

Stupid game, stupid prizes.

[1] https://ics.uci.edu/~fielding/pubs/dissertation/rest_arch_st...


> Stateless (no cookies/sessions)

Cookies are stateless though, that's the whole point of cookies.


The very link you provided says that client-side execution of server code is an optional constraint:

>The notion of an optional constraint may seem like an oxymoron. However, it does have a purpose in the architectural design of a system that encompasses multiple organizational boundaries. It means that the architecture only gains the benefit (and suffers the disadvantages) of the optional constraints when they are known to be in effect for some realm of the overall system. For example, if all of the client software within an organization is known to support Java applets [45], then services within that organization can be constructed such that they gain the benefit of enhanced functionality via downloadable Java classes. At the same time, however, the organization's firewall may prevent the transfer of Java applets from external sources, and thus to the rest of the Web it will appear as if those clients do not support code-on-demand. An optional constraint allows us to design an architecture that supports the desired behavior in the general case, but with the understanding that it may be disabled within some contexts.

I think this means you only do it when it makes sense. Perhaps this is too open-ended as you can run basically anything on the client, including a full application. But I think the stateless requirement means you should not require that the server be bothered with the client state, and that the client state should be easily recoverable from the resource URI. But I don't actually know. You don't have to commit to using REST principles if you don't want to. I think it was proposed as a way to reduce server load and bugs, not as a way of life.


The Atlassian APIs have JSON responses with URL links throughout for further navigation. That would seem like it satisfies the stricter definition of REST without being HTML. But self-describing interfaces like that are only useful when interacting with people who are then deciding what next action to take. They are decidedly not what you want when interacting with programs where the semantic interpretation of information is hard-coded into the program.

That's why at the same time, REST was taken as a call to action to formalize the server-side API that clients rely on so that your user-facing UI would use that API to generate the relevant HTML for the browser and the same API could be reused for 3p developers to rely on to build automation / additional products without increasing the maintenance area.


Agree. I've built a bunch of stuff using those APIs, and consuming those links is actually more work than not having them, IMHO. To find them, I have to look at response payloads, identify the field with the link, and then call that link to get additional details. A simple, documented resource-to-URL mapping would be easier to use. It comes at the cost of having to update code when resource locations change, but I've yet to see that happen outside of major API changes that break things in other ways that require code changes on my end.


Yes, the article is a good historical review, but like other “what is REST really” explanations, the example only works when an English-speaking human is the client. It doesn’t work as motivation for building APIs this way.

A computer doesn’t know what the word “deposits” in that link means, unless the programmers on both ends agree on what it means. At that point it doesn’t look that different from agreeing on the possible values of “status” in the JSON.

But wait, maybe we can just put ChatGPT on both ends and hope for the best… :)


Can you elaborate on what you mean by "an English-speaking human is the client"? It seems to me that if the hypertext response in the example referenced "deposits" in Arabic, Swahili, English, or Japanese, it'd all be the same to the client.

(fta) > The client knows nothing about the API end points associated with this data, except via URLs and hypermedia controls (links and forms) discoverable within the HTML itself. If the state of the resource changes such that the allowable actions available on that resource change (for example, if the account goes into overdraft) then the HTML response would change to show the new set of actions available.

> The client would render this new HTML, totally unaware of what “overdraft” means or, indeed, even what a bank account is.

So the client doesn't understand what a deposit is, but it doesn't have to. The response provides actions that the client knows how to incorporate and navigate. The alternative being a JSON response that is completely opaque - requiring additional parsing, explicit interpretation, and templating/interpolation, and DOM insertion before it's useful.

A RESTful web client speaks hypertext. JSON-RPC is useful, but is not RESTful in the original sense of the word.


This is exactly the confusion caused due to mismatched terminology. In the explanation the “client” is a browser. But obviously the browser isn’t an active participant in the exchange. The actual client of the banking service is the human looking at the browser. The human knows how to react because they know the English word “deposits” and can continue this game of adventure by interpreting whatever comes back, again based on English labels.

Now consider how you’d write a script for a computer to use this “API” in automated fashion.

Nobody creates a JSON-over-HTTP protocol because they think it’ll be easier for human clients to use. The area of controversy is how computers should talk to each other.

Another way of saying this is that the concept of “self-describing” data is a handwave that is doing a huge amount of work here.


We're going to disagree, but respectfully, thank you for the discussion.

> This is exactly the confusion caused due to mismatched terminology. In the explanation the "client" is a browser. But obviously the browser isn't an active participant in the exchange.

The browser is also understood as a client in other similar exchanges [0]. There shouldn't be much confusion on the term, at least from a developer perspective.

> The actual client of the banking service is the human looking at the browser.

I don't see it this way. The article is clear about what the term "client" means (e.g., "a proper hypermedia client...", "...the client would render this new HTML...", "...the client must know how to interpret the status field...")

It's also clear from Fielding's thesis itself, referenced in the article, what a "client" represents in its description of REST as a "client-server architecture" with hypermedia as a "client-server constraint" [1].

There is confusion about REST as a term. I don't believe it's due to mismatched terminology or a misunderstanding of the term "client" in the context of REST as a network/web architecture.

> The human knows how to react because they know the English word "deposits" and can continue this game of adventure by interpreting whatever comes back, again based on English labels.

The user of a (web) client isn't relevant in this case, nor is it how the user interprets what comes back. The point is that the (web) client can interpret a hypermedia response without any additional knowledge. The (web) client can make any additional actions available to the user without further interpreting the response as it would with the JSON version.

> Now consider how you'd write a script for a computer to use this "API" in automated fashion.

Now, your script is the client, not the browser, and your script is responsible for parsing the relevant data, following links, etc. How it works with RESTful responses and payloads is entirely up to your script. That said, it seems similar to a JSON response, you'll parse out the bits you care about and proceed accordingly.

> ...The area of controversy is how computers should talk to each other. Another way of saying this is that the concept of "self-describing" data is a handwave that is doing a huge amount of work here.

I disagree, but I can see where you're coming from.

Machine-to-machine communication using hypermedia rather than JSON _still_ strikes me as "self-describing" in the sense that being able to locate, navigate, and act on the resources that hypermedia represents is what a "proper hypermedia client" is designed to do. You can still access the API programmatically, but now you have less parsing to do, and things like navigation and resource manipulation come for free. By that I mean there is no need for a dedicated API client that solely speaks the JSON-RPC dialect of the particular API you are accessing. (I'm thinking of "RESTful" API clients I've recently had to work with in one way or another: Box, BusinessCentral, Azure, DocuSign, etc.)

I don't interpret "self-describing data" in the context of a network architecture to mean that the (web) client has attained enlightenment and suddenly knows what deposits and withdrawals are. There shouldn't be too much confusion on that point, even with the most superficial reading. Fielding describes "self-descriptive messages" as simply a RESTful constraint - not magic. You (through a [web] client) ask for deposits, and you receive a response containing details about a deposit and/or ways to manipulate those data further. How the user interprets the message, what they do with it, isn't interesting. What is interesting is that user will work with the resources through a web client: a RESTful web client can interact with a RESTful resource without additional "out-of-band" knowledge in the form of, say, JSON parsing and buckets of javascript imposing some RPC logic.

[0] https://datatracker.ietf.org/doc/html/rfc6749#section-1.1

[1] https://ics.uci.edu/~fielding/pubs/dissertation/rest_arch_st...


> The (web) client can make any additional actions available to the user without further interpreting the response as it would with the JSON version.

What happens when there is no user and it's fully automated? Everywhere in the post where you say "you", you're referring to an intelligent user interacting with the website, which is the point being made: hypermedia REST works well in an interactive context, but an automated context needs something simpler, more reliable, and more stably parseable than the more mutable UX you'd send the user. This breaks down when automation interacts with a service, because the automation has no concept of agency: it's executing specific sets of functions against endpoints, and discovery is not particularly relevant, as that discovery happens once, when the programmer writes the automation code.


That's the point of the article; when you say 'an automated context needs something simpler, more reliable, & more parseably stable', you are referring to an RPC system like, eg, JSON-RPC. This is what the OP article calls 'the opposite of REST'.


Accept-Language is a thing.

It works just a well with HTML docs as with HTTP APIs.

(i.e. usually not at all, but there you go)


I had to deal with one of those APIs in a project years ago and we did use the URLs that it sent us in the JSON responses. We could have hardcoded them but that would have made our client brittle, because it would fail anytime the devs of the server changed their mind on their endpoints.

Does this mean that an attack on that server could make us POST to or GET from another domain controlled by the attacker? Absolutely yes. I don't remember how we dealt with it. Maybe we fixed the host, or added some sanity checks. Anyway, an attacker with that level of access could probably siphon out data in a less obvious way.


HTTP already has a method for redirecting the client if they use an out of date URL. Three of them, in fact.
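For reference, those redirect statuses differ mainly in whether the retried request keeps its method. A deliberately simplified sketch of the client-side rule (per HTTP semantics; real clients have more nuance around 301/302 and request bodies):

```python
def redirected_request(status: int, method: str, location: str):
    """Return the (method, url) a client should retry with after a
    redirect. Simplified: 303 always switches to GET, 301/302 commonly
    rewrite POST to GET in practice, 307/308 must preserve the method."""
    if status in (301, 302, 303):
        new_method = "GET" if method == "POST" else method
    elif status in (307, 308):
        new_method = method  # method and body are preserved
    else:
        raise ValueError(f"{status} is not a handled redirect status")
    return new_method, location

assert redirected_request(308, "POST", "/v2/widgets") == ("POST", "/v2/widgets")
assert redirected_request(303, "POST", "/v2/widgets") == ("GET", "/v2/widgets")
```

This is why 308 (permanent) and 307 (temporary) exist at all: they let a server move endpoints without silently downgrading a client's POST to a GET.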


I generally like the idea of a resource-oriented API, this seems pretty intuitive and useful. And that part mostly survived even in APIs that are otherwise nothing like the original REST idea. But many other parts of the original idea just don't seem useful enough to be worth the effort. And there are parts where I don't think pure REST has good solutions (or I'm not understanding REST well enough to know them), mostly around bulk actions. The moment you want to act on multiple instances of a resource you have to invent your own stuff, it doesn't fit well with the usual REST layout. And there are very good reasons to need this, just iterating over resources is not a solution in terms of performance and transactional behaviour.


For me, it all comes down to ideological purity vs. what works. Everything exists on that continuum.

REST as defined in Roy's dissertation is pure and beautiful, but the work of programmatically navigating such an API adds burden for no gain in the vast majority of use cases.


I’m a huge fan of (the original meaning of) REST, but… I agree with you!

REST is for building cooperating standards based systems.

Almost nobody these days is particularly invested in building standards to have different companies with different clients and servers interoperate anymore, everyone is focused on customer acquisition and monetization.

So there’s really not much point in creating and deploying solutions aiming at this kind of interoperable standard.

The folks at Fastmail are still trying to do this, more power to them, but they’re swimming upstream, I think.


The answer is simple…

Sometimes you spend more time trying to make an API cleanly fit into a REST scheme than actually making the API.

Sooo you say fuck it.


Let's not throw the baby out with the bathwater. Level 2 REST is a huge step up over RPC without resources when it comes to clear interfaces.


It is frustrating that people don’t know what makes REST RESTful. But what’s weirder is why people seem to want to make things RESTful in the first place. Why has it become a signal of quality to claim something is ‘RESTful’?

I mean, for which problems does a true, Fielding-style RESTful API and universal client model actually make sense?

Fielding himself was describing the World Wide Web. Not a web application - he was talking about the entire ecosystem on which arbitrary web applications can be built.

The web (when driven through a web browser, at least) is RESTful, no matter what you do on top of it.

If you are not building your own universal application ecosystem of services and clients akin to browsers and web servers, you don’t actually need a REST architecture. You probably need an RPC architecture.

But for some reason people have acquired a vague understanding that REST is ‘better’ than RPC.

What they usually mean is that idempotent resource-verb based RPC APIs are a more compatible way of building applications within the REST architecture of the web than SOAP-like RPC APIs are.


I think carefully defining and documenting different complex media types is incredibly valuable, at scales much smaller than “the whole internet”.

For instance, I think IMAP, CalDAV, CardDAV, JMAP, RSS and other similar standards add a ton of value to people’s lives. If you build a true, extensible, REST in its original meaning system, you’re contributing to this grand vision of interoperable utility.

But I also agree with you that building interoperable standards is hard work with negative ROI for investors who aren’t trying to commoditize a complement somewhere. And so the IETF dream isn’t getting a ton of investment these days.


Prescriptivism vs descriptivism. Sorry, descriptivism always wins.


Naive binary


I was aware of the whole REST is not REST but it was fascinating to read the history behind it.

I'm adding this to my list of "misinterpretation of a Fowler article causes nonsense in the tech community", right before microservices


previously:

How did REST come to mean the opposite of REST? - https://news.ycombinator.com/item?id=32141027 - July 2022 (383 comments)


I wonder how many people reading here have taken the time to sit down with Fielding's original 2000 dissertation, analyze what he said, and understand it?


Another essay from the same site helped this make more sense to me. If your immediate reaction was "But my code doesn't know how to handle the links in the response!", check it out:

https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...


HATEOAS as designed for humans makes sense, but then REST advocates should really stop calling what they're after a RESTful API and start calling it a RESTful UI. An API is widely considered to be the opposite of a UI—it's the interface that is directed at computers rather than humans.

If the key principle of REST is one that is incompatible with computer clients, then a good starting point in a constructive dialogue would be to acknowledge that what they're really saying is that a RESTful API is an oxymoron.


This author's definition of HATEOAS as "humans read some text" is bizarre and completely disconnected from what people were talking about at the time.

Here’s how HATEOAS is supposed to be helpful for computers:

1. Out of band, humans create complex, useful, well-defined media types that cover the space of the functionality they're trying to define. These need to be well documented. This is where the whole thing falls down.

2. Once you’ve got useful media types, you build servers that offer clients a “service document”, a collection of interesting resources (with resources potentially exposed with different media type representations, so newer better media types can be chosen when they’re created in the future, without needing to swap out your server)

3. Servers also define a variety of useful verbs they support, which can also expand over time

REST depends heavily on (1). Since almost no one bothers to do (1), the whole project gets tarnished by people trying to do the later stuff and calling it REST.
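Step 2's "newer, better media types can be chosen" is plain content negotiation over the Accept header. Here is a deliberately simplified sketch with made-up media type names (real matching also handles q-values, wildcards, and type parameters):

```python
def pick_representation(accept: str, offered: list[str]):
    """Very simplified Accept matching: honour the client's listed
    order, ignoring the q-values and wildcards a real server handles."""
    wanted = [part.split(";")[0].strip() for part in accept.split(",")]
    for w in wanted:
        if w in offered:
            return w
    return None

# Hypothetical media types for the same resource, newer first.
offered = ["application/vnd.example.account.v2+json",
           "application/vnd.example.account+json"]

accept = "application/vnd.example.account.v2+json, application/json"
assert pick_representation(accept, offered) == "application/vnd.example.account.v2+json"
```

An old client keeps asking for the v1 type and keeps working; a new client asks for v2 and gets it, with no URL or server swap required, which is the versioning story step 2 is describing.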


> If the key principle of REST is one that is incompatible with computer clients

Don’t worry, it’s not. While some hypermedia is intended for humans, there is no reason a hypermedia might not be targeted at computers instead.


I wonder if they simply overplayed their hand pushing for the esoteric http verbs like put.

People then pick and choose what they can ignore.

Should have stuck with get and post IMO.


So call it RPC. The author is being pedantic, which is fine, we all are in our own ways.

But the first example is clearly unusable to anyone with a quarter of a braincell, so who cares?


When I was learning webdev (around the early 2010s) I was very confused about exactly that. I couldn’t even comprehend how and why the API of a web service could or would need to be RESTful in the proper sense of the word. HTML is the REST API of the web. It allows people to dynamically navigate webpages to get what they want. Most APIs don’t need to be REST or to follow HATEOAS: no one is going to explore them on the spot. So it’s just RPC built upon HTTP semantics (and sometimes not even that, when everything is just POST requests with weird URIs).

Many tutorials back then explained what REST is and then explained what a RESTful API is. And they seemed like two totally different things. I thought I wasn't getting something. Thankfully, at some point I realised that all those tutorials and people were just parroting bullshit. At best one could say that JSON over HTTP is REST-like compared, for example, to websockets.


AFAIK, SOAP is a subset of all web services; the rest is "REST" (i.e. XML without SOAP, plus non-XML formats).


Just because the person who coined the term had something else in mind, doesn't mean that it is inherently better than what it became.

The author also misses an important point: most applications nowadays must have multiple UIs, e.g. not only HTML, but also iOS and Android. The current pattern of JSON-RPC-style APIs serves this reality much better than "correct" REST, since the same API can be reused across all of them.


Simply put, REST as a whole was impractical for machines. So folks took the 3/4 that was applicable and ran with it. REST—the good parts.

What I don’t completely understand is why the hand-wringing about it for almost two decades?

I gathered from the discussion that it's because business folks kept using the term in job posts. Other words bastardized by the public, like "agile" and "hacker", say hello. :-D


Wait until I tell you about Java Script.


Because nobody cares what REST meant "originally" or what Fielding had in mind.

What people care about, and what did catch on because of that, is getting rid of XML and SOAP, and using JSON and lightweight web serving concepts for data endpoints.


Yet another retcon of REST that never once mentions MIME types, and thus missed the point.

All this gobbledygook about HTML “being REST” is silly and doesn’t match how things were discussed at the time.

HATEOAS is delivered by encouraging different resources to have different MIME types, and offering a service document that lets you know what all the resources available are with their associated MIME types.

That’s it.

It’s really not very complicated, EXCEPT of course that almost no one not intimately involved with the IETF ever bothers defining new MIME types.

This is unfortunate! It would be great if everyone would go ahead and document how their resources work, and then maybe we really could deliver on the promise of REST.

I do agree that reasonably consistent verbs and paths are pleasant things for folks reading and debugging HTTP traffic, but that’s about all they have to do with REST.

But it’s unfortunate we keep getting these explanations that somehow fail to get at the core of the original meaning.


> REST must be the most broadly misused technical term in computer programming history. I can’t think of anything else that comes close

ahem Agile


Scrum is pretty high on the list too


See also:

* DevOps

* Microservices

* SOA

* ACID

* Observability

Semantic diffusion, dilution, and drift are endemic in tech because terms are so frequently co-opted by opportunists who want to find a way to sell the concept as a product.

What is interesting is that the first sentence is undermined by the second sentence, which is fairly reasonable compared to the first.


I think there's a pedagogical component too. The way that things are taught have an outsized impact in how people think things are done. This extends far beyond concepts.

For example, and I've been guilty of this too, in data science it's common for tutorials and examples to use IPython/Jupyter notebooks. This, however, should not be mistaken for "it's a good idea to use notebooks as part of the code-to-production pipeline." It's an easy mistake to make, and implicitly part of how AI/ML is taught.



