I don't know, arguing that http/2 is safer overall is a... bold claim. It is sufficiently complex that there is no standard implementation in the Python standard library, and even third-party library support is all over the place. requests doesn't support it; httpx has experimental, partial, pre-1.0 support. Python http/2 servers are virtually nonexistent. And it's not just Python - I remember battling memory leaks, catastrophic deadlocks, and more in the grpc-go implementation of http/2, in its early days.
HTTP 1.1 connection reuse is indeed more subtle than it first appears. But http/2 is so hard to get right.
The underlying vulnerability, tracked as CVE-2025-8671, has been found to impact projects and organizations such as AMPHP, Apache Tomcat, the Eclipse Foundation, F5, Fastly, gRPC, Mozilla, Netty, Suse Linux, Varnish Software, Wind River, and Zephyr Project. Firefox is not affected.
These sound to me like they are mostly problems with protocol maturity rather than with its fundamental design. If hypothetically the whole world decided to move to HTTP/2, there'd be bumps in the road, but eventually at steady state there'd be a number of battle-tested implementations available with the defect rates you'd expect of mature widely used open-source protocol implementations. And programming language standard libraries, etc., would include bindings to them.
An HTTP/2 client is pretty easy to implement. Built-in framing removes a lot of complexity, and if you don't need multiple streams, you can simplify the overall state machine.
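To put some weight behind "built-in framing": the frame header is a fixed 9 octets, and parsing it takes a few lines. A sketch in Python (the function name is mine, not from any library):

    # Minimal sketch of parsing the fixed 9-octet HTTP/2 frame header
    # (RFC 9113 section 4.1): 24-bit length, 8-bit type, 8-bit flags,
    # then a reserved bit and a 31-bit stream identifier.
    def parse_frame_header(header: bytes):
        if len(header) != 9:
            raise ValueError("frame header is always exactly 9 octets")
        length = int.from_bytes(header[0:3], "big")
        frame_type = header[3]
        flags = header[4]
        stream_id = int.from_bytes(header[5:9], "big") & 0x7FFFFFFF  # drop reserved bit
        return length, frame_type, flags, stream_id

The payload boundary is then unambiguous: read exactly `length` octets next. To be fair, the hard parts live elsewhere (the connection preface, HPACK, flow control, the stream state machine), which is what the sibling comments are pointing at.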
Perhaps something like an "HTTP/2-Lite" profile is in order? A minimal profile with just one connection, no compression, and so on.
Yes, the corporations and institutions and their economic transactions must be the highest and only priority. I hear that a lot from commercial people with commercial blinders on.
They simply cannot see beyond their context and realize that the web and http/1.1 are used by actual people who don't have the same use cases or incredibly stringent identity-verification needs. Human use cases don't matter to them because they are not profitable.
Also, this "attack" only works on commercial style complex CDN setups. It wouldn't effect human hosted webservers at all. So yeah, commercial companies, abandon HTTP, go to your HTTP/3 with all it's UDP only and CA TLS only and no self signing and no clear text. And leave the actual web on HTTP/1.1 HTTP+HTTPS alone.
Let's get real, online security is mostly a commercial thing. Why do you think Google pushed so hard for HTTPS? Do you really think it is to protect your political opinions? No one cares about them, but a lot of people care about your credit card.
That's where I disagree with the people who made Gemini, a "small web" protocol for people who want to escape the modern web with its ads, tracking, and bloat. They made TLS a requirement. Personally, I would have banned encryption. There is a cost, but it is a good way to keep commercial activity out.
I am not saying that the commercial web is bad; it may be the best thing that has happened in the 21st century so far. But if you want to escape from it for a bit, I'd say plain HTTP is the way to go.
Note: of course, if you need encryption and security in general for non-commercial reasons, use it, and thank the commercial web for helping you with that.
It over-reaches with its argument about disallowing http/1.1.
Parsers should be better.
Moving to another protocol won't solve the issue.
It will be written by the same careless engineers.
So the same companies will have the same issues, or worse...
We just lose readability/debuggability/accessibility.
It's not correct to attribute all bugs to carelessness, and therefore assume that engineer conscientiousness is the only criterion affecting defect rates. Some software architectures, protocol designs, programming languages, etc., are less prone than others to certain kinds of implementation bugs, by leaving less room in the state space for them to hide undetected. Engineers of any skill level will produce far more defects if they write in assembly, than if they write the same code in a modern language with good static analysis and strong runtime-enforced guarantees. Likewise for other foundational decisions affecting how to write a program.
The post makes the case that HTTP/2 is systematically less vulnerable than HTTP/1 to the kinds of vulnerabilities it's talking about.
> Content-Length+Transfer-Encoding should be bad request.
Maybe it should be, but it isn't. In fact, all revisions of the HTTP/1.1 RFC have made it clear that if both headers are present, the receiver must treat it as if the Content-Length header were not present. Not as an error.
RFC 2616 "If the message does include a non-identity transfer-coding, the Content-Length MUST be ignored."
RFC 7230 "If a message is received with both a Transfer-Encoding and a Content-Length header field, the Transfer-Encoding overrides the Content-Length"
RFC 9112 "... Transfer-Encoding is defined as overriding Content-Length, as opposed to them being mutually incompatible."
> RFC is also not respected: "Proxies/gateways MUST remove any transfer-coding prior to forwarding a message via a"
That's a spec mistake in the original RFC, corrected in later revisions. It would be an absurd requirement: there is no reason a chunked input can't simply be forwarded as chunked output. And if the sender is sending gzip and the receiver accepts it, what is gained from the proxy decompressing the stream only to immediately recompress it?
> Content-Lenght: \r\n7 is also a bad request.
... I mean, yes, it would be a bad request. But the example in the article is "Content-Lenght: \r\n 7", which isn't invalid. It's a feature defined in RFC 2616 as line folding. It was a bad idea and was deprecated in later revisions, but that just means a client should not send it. A server or proxy can either reject the message or undo the folding; they're just not allowed to pass it through unmodified.
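For reference, the two permitted behaviours look roughly like this, applied to the raw header block before it's split into individual fields. A sketch (the names are mine):

    import re

    # Minimal sketch of the two options RFC 7230 section 3.2.4 gives a server
    # for obsolete line folding: reject the message, or replace each obs-fold
    # with space(s) before interpreting the field value. Passing it through
    # untouched is the one thing that's not allowed.
    OBS_FOLD = re.compile(rb"\r\n[ \t]+")

    def handle_obs_fold(raw_header_block: bytes, reject: bool = False) -> bytes:
        if OBS_FOLD.search(raw_header_block):
            if reject:
                raise ValueError("400: obs-fold not accepted")
            return OBS_FOLD.sub(b" ", raw_header_block)
        return raw_header_block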
I think the main point is that these sorts of parsing mistakes shouldn't be so easily exploitable. The problem is that the message length is non-trivial to parse, so if you mess up that parsing, it escalates a ton of otherwise minor bugs into security issues.
The section "How secure is HTTP/2 compared to HTTP/1?" (https://portswigger.net/research/http1-must-die#how-secure-i...) responds to this. In short, there's an entire known class of vulnerabilities that affects HTTP/1 but not HTTP/2, and it's not feasible for HTTP/1 to close the entire vulnerability class (rather than playing whack-a-mole with bugs in individual implementations) because of backwards compatibility. The reverse isn't true; most known HTTP/2 vulnerabilities have been the kind of thing that could also have happened to HTTP/1.
The new features and behaviors in the new protocol inherently create new classes of vulnerabilities. The link above relates to an issue with RST_STREAM frames. You can't have issues with frames if you lack frames.
It's quite possible the old issues are worse than the new ones, but it's not obvious that's the case.
My WWW site has been served up by publicfile for many years now, and reading through this I kept having the same reaction, over and over: the assumption that "websites often use reverse proxies" is quietly upgraded in the rest of the article to an assumption that everyone always uses back-ends and proxies. It's as if there is a monocultural world of HTTP/1.1 WWW servers; and not only does the author discount everything else apart from the monoculture, xe even encourages increasing the monoculture as a survival tactic, only then to state that the monoculture must be killed.
The irony that near the foot of the article it encourages people to "Avoid niche webservers" because "Apache and nginx are lower-risk" is quite strong, given that my publicfile logs show that most of the continual barrage of attacks a public WWW server like mine is subject to are query parameter injection attempts, and attacks quite evidently directed against WordPress, Apache, AWS, and these claimed "lower risk" softwares. (There was another lengthy probe to find out where WordPress was installed a couple of minutes ago, as I write this. Moreover, the attacker who has apparently sorted every potentially vulnerable PHP script into alphabetical order and just runs through them must be unwittingly helping security people, I would have thought. (-:)
Switching from my so-called "niche webserver", which does not have these mechanisms to be exploited, to Apache and nginx would be a major retrograde step. Not least because djbwares publicfile nowadays rejects HTTP/0.9 and HTTP/1.0 by default, and I would be going back to accepting them, were I foolish enough to take this paper's advice.
"Reject requests that have a body" might have been the one bit of applicable good advice that the paper has, back in October 1999. But then publicfile came along, in November, whose manual has from the start pointed out (https://cr.yp.to/publicfile/httpd.html) that publicfile httpd rejects requests that have content lengths or transfer encodings. It's a quarter of a century late to be handing out that advice as if it were a new security idea.
And the whole idea that this is "niche webservers" is a bit suspect. I publish a consolidated djbwares that incorporates publicfile. But the world has quite a few other cut down versions (dropping ftpd being a popular choice), homages that are "inspired by publicfile" but not written in C, and outright repackagings of the still-available original. It's perhaps not as niche as one might believe by only looking at a single variant.
I might be in the vanguard in the publicfile universe of making HTTP/0.9 and HTTP/1.0 unavailable in the default configuration, although there is a very quiet avalanche of that happening elsewhere. I'm certainly not persuaded by this paper, though, based as it is entirely upon a worldview that publicfile directly demonstrates is not a universal truth, to consider that I need to do anything at all about HTTP/1.1. I have no back-end servers, no reverse proxies, no CGI, no PHP, no WordPress, no acceptance of requests with bodies, and no vulnerability to these "desync" problems that are purportedly the reason that I should switch over to the monoculture and then switch again because the monoculture "must die".