Hacker News

Probably for HTTP Pipelining: https://en.wikipedia.org/wiki/HTTP_pipelining



HTTP/1.1 pipelining is not the same as HTTP/2 multiplexing.

Obviously, one cannot multiplex with HTTP/1.1, but AFAIK there is still a question of whether someone can pipeline (not multiplex) with HTTP/2. HTTP/2, introduced by an advertising company and developed in part by CDN service providers, is designed for web pages that auto-load resources from a variety of hosts, namely advertising servers. It's commercially-focused.

Here is an old example of using HTTP/1.1 pipelining for the basic task of fetching many files from the same host over a single TCP connection:

https://www.daemonology.net/phttpget/

Of course there is much more that one can do with HTTP/1.1 pipelining, such as fetching 100s or 1000s of pages, as "streaming" text/html, from a website in a single TCP connection. It is also possible to pipeline HTTP/1.1 POST requests. IME, HTTP/1.1 pipelining is generally fast and reliable, a very useful and convenient feature for web users, one that I have been using for two decades.
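To illustrate the mechanics, here is a minimal sketch of HTTP/1.1 pipelining in the spirit of phttpget, written against a throwaway local server so it is self-contained. The server, file names, and file contents are all invented for the example; a real pipelining client would of course talk to a remote httpd.

```python
# Minimal sketch of HTTP/1.1 pipelining: both GET requests are written
# up front on one TCP connection, then both responses are read back in
# request order. The local server, file names and bodies are invented
# for the example (Python 3.8+).
import functools
import http.server
import os
import socket
import tempfile
import threading

class Handler(http.server.SimpleHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive is a prerequisite for pipelining
    def log_message(self, *args):  # keep the demo quiet
        pass

# Serve two small files from a temporary directory.
docroot = tempfile.mkdtemp()
for name, body in (("a.txt", b"alpha"), ("b.txt", b"bravo")):
    with open(os.path.join(docroot, name), "wb") as f:
        f.write(body)

server = http.server.ThreadingHTTPServer(
    ("127.0.0.1", 0), functools.partial(Handler, directory=docroot))
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

# Pipeline: send both requests before reading any response.
sock = socket.create_connection((host, port))
sock.sendall(
    b"GET /a.txt HTTP/1.1\r\nHost: localhost\r\nConnection: keep-alive\r\n\r\n"
    b"GET /b.txt HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")

data = b""
while chunk := sock.recv(4096):
    data += chunk
sock.close()
server.shutdown()

# Per RFC 2616 8.1.2.2 the responses arrive in request order,
# so "alpha" precedes "bravo" in the byte stream.
```

The "Connection: close" on the final request is what lets the client read until EOF instead of parsing Content-Length for the last response; a more careful client would parse each response framing.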

HTTP/2 proponents on HN will sometimes insist that HTTP/1.1 pipelining has been entirely supplanted by HTTP/2 multiplexing, ignoring that the web can also be used for non-commercial, non-advertising purposes. They will argue that HTTP/1.1 pipelining is useless because it is not supported by web browsers and cannot support improved e-commerce accompanied by online advertising, data collection and tracking, e.g., websites composed of resources from a variety of hosts, especially advertisers.

This is a mistake. HTTP/1.1 pipelining and HTTP/2 multiplexing are two different things and they can and they will co-exist. The web is not just for Google and other so-called "tech" companies. Nor is it only for their chosen HTTP clients, "Chrome" and what not. The web is not just for commerce. It is for non-commercial web users, too. It's open to all HTTP clients, including this one from Zig. The web is a public resource.


HTTP/2 was based on SPDY which was created by Google. HTTP/2, however, is an IETF standard. The IETF is not an advertising company. The claim that HTTP/2 was introduced by an advertising company is false.

The claim that HTTP/2 is a commercial technology is unsupported.

HTTP/1.1 pipelining is flawed. The problem is that if you pipeline requests for A, B, and C, the responses need to come back for A, B, and C in that order. The server can choose to process pipelined requests either serially or in parallel. If the server processes the requests serially, it more or less defeats the purpose of pipelining. If the server processes the requests in parallel, it has a problem: let's say that A is a slow request while B and C are fast requests that generate a large amount of data. The server has to wait for A to complete before it can respond to B or C, but in the meantime it has to store the responses for B and C somewhere. This is a great way to DoS a server: pipeline a bunch of slow requests followed by large ones and see if the server runs out of memory or disk space.
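That memory-pressure scenario can be made concrete with a toy model (all numbers invented): responses are computed in parallel, but a finished response must sit in a buffer until everything ahead of it in the pipeline has been sent.

```python
def peak_buffered_bytes(finish_times, sizes):
    """Toy model of a server handling pipelined requests in parallel
    while being forced to send responses in request order.

    finish_times[i] is when response i is ready; sizes[i] is its size
    in bytes. A finished response is stuck in a buffer as long as some
    earlier request is still running (transmission time is ignored).
    Returns the peak number of buffered bytes."""
    peak = 0
    for t in sorted(set(finish_times)):
        buffered = sum(
            sizes[i]
            for i in range(len(finish_times))
            if finish_times[i] <= t and max(finish_times[: i + 1]) > t)
        peak = max(peak, buffered)
    return peak

# A is slow (done at t=10); B and C are fast and large. Both large
# responses pile up in memory behind A:
print(peak_buffered_bytes([10, 1, 2], [1_000, 500_000, 500_000]))  # 1000000
```

With serial processing the buffered total stays at zero, which is exactly the trade-off described above: no memory blow-up, but also no parallelism.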

Servers don't support pipelining because it can cause a DoS. Clients don't support pipelining because servers don't, and also because the head-of-line blocking caused by a slow response means they will often get better results by opening more connections than by trying to pipeline.

There isn't a conspiracy. Pipelining just isn't a good technology.


"Pipelining just isn't a good technology."

No, it's a great technology. For my purposes. Yours may be different.

I'm using the web for data retrieval. Text retrieval. I'm not interested in "interactive" web pages.

Every web user is entitled to their own opinion. I'll respect the opinions of those who like HTTP/2 so long as they respect the opinions of someone else who likes HTTP/1.1. From a privacy perspective HTTP/2 is flawed. But for me the more pertinent issue is that it's overkill. I'm using HTTP/1.1 for data retrieval from the same host, where I want responses returned in the order they were requested, with HTTP headers. I'm not retrieving images, CSS and the like. I'm not looking at graphics. I'm retrieving text to be read in textmode. I have no need for "interactivity" and no need for multiplexing. For this purpose, HTTP/1.1 works beautifully. Nothing any HN commenter blurts out will change that fact. This reply was not even honest. "Servers don't support pipelining..." Where is this coming from?

I've been using HTTP/1.1 for over 20 years. I use it on a daily basis. As long as every httpd continues to support it, as they have for decades, I'll continue to use it.

Pipelining was designed to avert DoS by having clients open only one TCP connection. When RFC 2616 came out, servers had problems with clients opening many TCP connections at once. This was considered poor netiquette; the IETF wanted us to limit the number of connections to two.^1 Do so-called "modern" web browsers and other contemporary clients follow the old netiquette? As an HTTP/1.1 pipelining user, I only open a single connection. I'm still following the old netiquette.

What's funny about these replies attacking someone's use of HTTP/1.1 pipelining (which is quite strange if you ask me -- why would anyone care?) is that the people making the replies have never tried to do what this person using HTTP/1.1 pipelining is doing. How could they claim it's "slow"? Trust me, if it was typically slow I would not use it. Very rarely is it slow, and even then it isn't any slower than making sequential TCP connections.

1. Some excerpts:

RFC 2616 HTTP/1.1 June 1999

8 Connections

8.1 Persistent Connections

8.1.1 Purpose

Prior to persistent connections, a separate TCP connection was established to fetch each URL, increasing the load on HTTP servers and causing congestion on the Internet.

Persistent HTTP connections have a number of advantages:

- By opening and closing fewer TCP connections, CPU time is saved in routers and hosts (clients, servers, proxies, gateways, tunnels, or caches), and memory used for TCP protocol control blocks can be saved in hosts.

- HTTP requests and responses can be pipelined on a connection. Pipelining allows a client to make multiple requests without waiting for each response, allowing a single TCP connection to be used much more efficiently, with much lower elapsed time.

- Network congestion is reduced by reducing the number of packets caused by TCP opens, and by allowing TCP sufficient time to determine the congestion state of the network.

- Latency on subsequent requests is reduced since there is no time spent in TCP's connection opening handshake.

8.1.2.2 Pipelining

A client that supports persistent connections MAY "pipeline" its requests (i.e., send multiple requests without waiting for each response). A server MUST send its responses to those requests in the same order that the requests were received.

8.1.4 Practical Considerations

Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy.


Why is HTTP/2 corporate?

Why use HTTP if interoperability with browsers isn't a concern?


HTTP/1.1 is supported by all browsers, right? Using it doesn't give up any interop. At least until said ad company develops enough of a monopoly to move into the Extinguish phase.

I don't know anything about pipelining or multiplexing, but I too have the impression standards bodies are now dominated by large browser vendors. Older standards therefore seem less corporate.


"Using it doesn't give up any interop."

And interop is quite valuable for web users. But for the so-called "tech" companies and CDN service providers authoring the HTTP/2 RFCs, maybe not so much. Who receives the primary benefit of HTTP/2? Certainly not web users. Perhaps they get some small secondary benefits.

What does not make any sense to me is why past/present Googlers and other HTTP/2 proponents voting and replying on HN are offended by someone who likes using HTTP/1.1. For pipelining. The (non-browser) interop is much better than HTTP/2. That is, using 1.1, I can pipeline HTTP to/from almost every httpd on the internet, using a vast array of TCP clients written over a long period. If I want to use HTTP/2, the number of libraries and clients is much smaller and all are recent. Further, AFAIK these clients cannot pipeline the way 1.1 does, retrieving many files from the same host sequentially over a single TCP connection, in the order they were requested, with HTTP headers. Google existed when RFC 2616 came out. If HTTP/1.1 is so flawed, why didn't they try to change it then? It is "flawed" for what Google and other so-called "tech" companies want to do, always with a browser or mobile OS they control, not necessarily for what web users want to do, with whatever clients web users choose, 100% of the time. We've seen the stuff so-called "tech" companies get up to and it usually involves surveillance to support commerce. HTTP/2 isn't going to solve or alleviate any of those ills. To keep the Wall Street analysts happy, Google will not add to its browser, nor inspire and back, standards that translate to less profit for Google. If a new standard decreases the amount of data collection or tracking, that's less profit for Google. HTTP/2 is not such a standard.



