Nice, but a bit worrisome. Is such a binary protocol a Good Thing?
Ancient people, even though they didn't have much CPU, memory, or network to spare, in all their wisdom made high-level protocols text-oriented. Sure, they didn't consider everything (so we're still haunted by the evil spirits of, for example, UTF-7 and quoted-printable), but the overall design was simple yet perfectly functional. And anyone could use just a telnet client to talk to an HTTP (FTP, SMTP, etc.) server. With SPDY you can't do that anymore; you need specialized software.
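That "just a telnet client" property is easy to demonstrate: the request below is byte-for-byte what you would type into telnet. A self-contained sketch in Python (the throwaway local server only stands in for a real one):

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# A throwaway local server so the example is self-contained.
class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "telnet session": the entire request is human-readable text.
with socket.create_connection(server.server_address) as sock:
    sock.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
    reply = b""
    while chunk := sock.recv(4096):
        reply += chunk

print(reply.decode().splitlines()[0])  # → HTTP/1.0 200 OK
server.shutdown()
```

The same bytes pasted into `telnet host 80` would get the same reply, which is exactly the property a binary framing layer gives up.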
I'm all for something like "Upgrade: SPDY" / "HTTP/1.1 101 Switching Protocols" (after which the binary, framed, message-oriented protocol takes effect), but a binary-from-the-beginning "HTTP 2.0" somehow worries me. Maybe I'm just too conservative?
(Side question: if we want multiplexed streams so badly, why does no one care about SCTP? Is it flawed or somehow inappropriate?)
The reason we didn't like the Upgrade header is that it requires yet another round trip in the protocol. Round trips are expensive, and getting worse both for desktops and for mobile. (The average RTT to Google, which has pretty good global presence, is ~114 ms.)
As for SCTP, I think applications generally drive progress better than OSes do. Certainly the OSes have had enough time to sort out SCTP, but they haven't done it, and the end-game is nowhere in sight. Until there is an app that showcases it, will it ever happen?
My recommendation is to first prove new protocols at the application layer (perhaps run SCTP over UDP just to grease the skids). When people are addicted to how much better the experience is (multihoming + concurrent streams), then the OSes will pull it into the kernel to make it more scalable.
The simplicity of text protocols is illusory and mostly leads to bad implementations that look like they work but don't really. Any protocol that supports multiplexing is fundamentally complex; for example, BEEP is a "text" protocol but I doubt you could speak it with telnet.
SPDY is always an upgrade; AFAIK there's no way to start out talking SPDY (since there's no spdy: scheme). Every SPDY server should still support old-fashioned HTTP.
SCTP gets broken by NATs, so outside of a few telco networks it's dead.
Nobody questions that a secure protocol is not readable text :-)
I fundamentally believe the protocol has to be server-authenticated and encrypted, always. We've seen breach after breach of user privacy, and governments' desire to crack down on their citizens knows no bounds. We simply have to encrypt. Once we do that, text is off the table, and it doesn't really matter whether it is binary or not.
Of course, we need much better tools for managing the encryption, I'm not a believer that the status quo is adequate in any way.
Alright, but that is just theory. The reason you'd want a text protocol is so that humans could read it. Once you secure the protocol, you can't read it without a machine to help you.
Well, by the same reasoning I "can't" easily talk HTTP because I can't talk TCP, IP and 802.3 - they're all binary.
This is exactly why I wrote "high-level protocols". With just a telnet client I can talk HTTP, FTP, SMTP, IMAP, POP3, IRC and so on, and the machine just needs to handle the lower-level details.
Yeah, it's only simple in the sense that you can generate a packet with telnet. It's not so simple when you're trying to write a server that has to parse the stuff in a correct and secure manner.
Disagree. Having written both HTTP clients and servers, the SPDY spec is a breath of fresh air. Every packet has a predefined header length and a clearly defined length for the trailing data, which removes a heck of a lot of edge cases when implementing a parser.
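A sketch of the kind of parsing that comment praises: every frame opens with a fixed 8-byte header that states the payload length up front, so the parser never scans for delimiters. The field layout below is only SPDY-ish (control bit, flags, 24-bit length); it is illustrative, not the exact wire format.

```python
import struct

def parse_frame(buf: bytes):
    """Return ((control, flags, length), payload, rest), or None if
    more bytes are needed."""
    if len(buf) < 8:
        return None                      # header is always exactly 8 bytes
    word, flags_len = struct.unpack(">II", buf[:8])
    control = word >> 31                 # top bit: control vs data frame
    flags = flags_len >> 24              # 8 bits of flags
    length = flags_len & 0xFFFFFF        # 24-bit payload length
    if len(buf) < 8 + length:
        return None                      # payload not fully buffered yet
    return (control, flags, length), buf[8:8 + length], buf[8 + length:]

# A control frame with flags=1 and a 5-byte payload, followed by bytes
# that belong to the next frame:
buf = struct.pack(">II", 0x80030001, (1 << 24) | 5) + b"hello" + b"extra"
header, payload, rest = parse_frame(buf)
print(header, payload, rest)  # → (1, 1, 5) b'hello' b'extra'
```

Compare that with a text protocol, where finding the end of a message means scanning for CRLFs, handling folded headers, chunked encodings, and so on.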
Now if only TCP were message-oriented... For that, I guess we have ZMQ.
I very frequently use netcat (or openssl s_client for HTTPS servers) to connect directly to a server I am debugging.
It could be argued that there is no "need" for this, but I find myself doing it quite frequently all the same. That may have something to do with the fact that I am developing a relatively low-level reverse HTTP proxy tool (PageKite), but I also did it quite a bit when working on more traditional webapps.
Once you speak the protocol it's nice to be able to just connect and chat with the server directly. :-) Humans are really bad at speaking binary.
I don't know. Probably not, but it's sometimes convenient to do so.
Well, I did so once when I had to debug a redirect-caching problem in a "walled garden" setup. It could also have been done with more specialized software; I just happened to use telnet because it was readily available.
I also sometimes check a server's availability (and overall response correctness) by telnetting to it, though mostly because I was too lazy to set up a tunnel to the LAN it resides on.
Whilst you can use more dedicated tools than telnet for debugging HTTP connections, there is still an argument for protocols simple enough that you can use telnet.
Take Arduino, for example: writing a naive but working HTTP client is child's play, while writing a SPDY client on Arduino is well beyond many people's capabilities. Yes, libraries will appear eventually, but we haven't even got a decent open HTTP library yet.
I don't like SPDY. It's trying to solve a transport problem at the application level. Plus it seems to be quite complex.
I'd love to see Google promote a transport protocol like SCTP[1] and do HTTP over SCTP instead. If Google pushed SCTP a little bit, we might see it pop up on Linux and Windows within a few years.
A: "SCTP is an interesting potential alternate transport, which offers multiple streams over a single connection. However, again, it requires changing the transport stack, which will make it very difficult to deploy across existing home routers. Also, SCTP alone isn't the silver bullet; application-layer changes still need to be made to efficiently use the channel between the server and client."
I agree that SPDY would be a "quicker" solution. Just modify the browsers and wait a year or two, and 30%+ of the people browsing the web will have it.
SCTP has the nice side effect of improving things like streaming and games.
As for application-layer changes, I don't think they would be too difficult to make, kind of like IPv6 (I don't have anything to back this up; it's just a hunch).
SCTP deployment will take much longer than SPDY's, but SCTP seems to be the "right thing to do". Not only for the web, but for other things that use the network; the Internet is not only http://.
UPDATE: I just realized that saying that the transition to SCTP will be like IPv6 isn't necessarily a good point for SCTP :-D ... I guess I'm a purist and not a pragmatist.
SCTP addresses some of the same needs (primarily the multiplexing), but the major hurdle there is the fact that we're ditching TCP. As far as an upgrade path goes: it is much harder to get existing servers (think Apache, Nginx, etc.) to support SCTP than to bolt on support for a different application-level protocol (replace/augment HTTP with SPDY).
I disagree. It's much, much easier to support SCTP (since it's already provided by the operating system) than it is to support SPDY with all the server pushes, etc.
But to be honest, I don't know what the source of that comparison was... it's apples to oranges.
SCTP is not available on Windows by default, and you need administrative privileges to deploy it. I'm not trying to start an OS war debate, but this is a very practical problem.
Mobile, oddly enough, may be the best route to bring both IPv6 and SCTP to life.
Well, the primary reason for the existence of SPDY is encryption everywhere plus speed, something that current HTTP/HTTPS doesn't really offer. HTTPS is still slower than HTTP, and HTTP is slower than SPDY.
A lot of the speed increase comes from multiplexing, so without it SPDY wouldn't be able to achieve most of its goals.
SCTP is a good start for sure, and someday may make sense. The problem is a deployment one: it can't pass through NAT, making it off limits to most users today.
As for solving problems from the transport layer: that's not quite true. I assume you're suggesting that streams can only be tackled at the transport layer, but SPDY's compression is clearly an app-level endeavor.
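That point is easy to illustrate: compressing the HTTP headers themselves requires no transport support at all. A simplified sketch (the real SPDY spec uses zlib with a preset shared dictionary; plain zlib is used here, and the header values are made up for illustration):

```python
import zlib

# Typical request headers are repetitive text, so they compress well,
# entirely at the application layer.
headers = (b"Host: example.com\r\n"
           b"User-Agent: Mozilla/5.0 (illustrative)\r\n"
           b"Accept: text/html,application/xhtml+xml\r\n"
           b"Accept-Encoding: gzip,deflate\r\n")
compressed = zlib.compress(headers)
assert zlib.decompress(compressed) == headers  # lossless round trip
print(len(headers), "->", len(compressed))
```

Nothing in TCP (or SCTP) would have to change for this to work, which is the sense in which it's an app-level endeavor.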
Wow! When they removed the "http://" from Chrome, I knew exactly why they were doing it: to hide the SPDY rollout while simultaneously not breaking copy-paste interop with non-Chrome users. However, I had no idea they had already done it! Amazingly clever and effective. I totally saw it coming and then totally missed its arrival. Awesome.
The decision to drop "http://" from the display was a UI decision and had nothing to do with the internals. The idea is that "http://" is just user confusion. Most users can't spell HTTP, much less know why it is there. Why do we subject our poor users to it? Personally, I don't care.
Chrome does not recognize "spdy://" as a protocol scheme, and the UI display changes nothing with respect to how chrome selects protocols.
Hmm, that's a good catch - thanks. HTTP 1.0 didn't explicitly specify anything around persistent connections, so in practice you can add a Keep-Alive header and hope that the server will respect it. By comparison, HTTP 1.1 defaults to persistent connections and requires a "Connection: close" header to indicate otherwise.
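The difference in defaults fits in a few lines (a simplified sketch; real servers also parse token lists, hop-by-hop headers, and so on):

```python
def connection_persists(http_version: str, headers: dict) -> bool:
    """Decide whether the connection stays open after a response,
    following the defaults described above."""
    conn = headers.get("Connection", "").lower()
    if http_version == "HTTP/1.1":
        return conn != "close"     # persistent unless told otherwise
    return conn == "keep-alive"    # HTTP/1.0: opt-in only, best effort

print(connection_persists("HTTP/1.1", {}))                            # → True
print(connection_persists("HTTP/1.0", {}))                            # → False
print(connection_persists("HTTP/1.0", {"Connection": "Keep-Alive"}))  # → True
```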
The great thing is that Google has enough gravitational mass to move the world in that direction without the many-year delays of negotiations inside a protocol-specification committee.
Once everybody is on board, when will the license fee be requested?
I wonder if the better plan would be to insert another layer between TCP and HTTP (or any app layer protocol) that provides for multiple streams between the same points.
That's basically what SPDY is. It has two halves, a framing layer (which is generic), and a definition of how to embed HTTP within that framing layer. In theory, you could use the framing layer for other purposes, but we designed it tailor-made for http.
For the original question: HTTP can only send one request at a time per connection, while SPDY can send many all at the same time, avoiding lots of round trips. As a side effect, it sends fewer bytes and packets too. Look for the IETF slides for the latest data.
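A back-of-the-envelope sketch of why that matters, reusing the ~114 ms average RTT quoted earlier in the thread (the resource count is an assumption):

```python
# Fetching N small resources one-at-a-time costs roughly N round trips;
# multiplexing them over one connection costs roughly one.
rtt_ms = 114              # average RTT quoted earlier in the thread
n_resources = 30          # an assumed page with 30 small resources
sequential_ms = n_resources * rtt_ms
multiplexed_ms = 1 * rtt_ms
print(sequential_ms, multiplexed_ms)  # → 3420 114
```

Real pages fall somewhere in between (browsers open several parallel connections), but the round-trip count is still the dominant term.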
The title seems to imply HTTP 1.1 is already dead/abandoned, when in fact almost nobody except Google supports SPDY.
Firefox doesn't support it, but if it's open enough there's a chance they might add support for it. MSIE support is far away, and I wouldn't put any bets on it being supported any time this year. If SPDY is supposed to be more than an HTTP option (i.e. "beyond HTTP 1.1"), all clients need to support it. That's far from the case today.
Given that the last time I heard about it was quite some time ago and little has changed, I would say a better title would be "HTTP-times: SPDY is still around, still looking for friends".
Just because it comes out of Google's door doesn't make it a guaranteed success, and outside HN's usual Google-praising sphere I haven't heard a single geek mention it. I'm not saying it's dead either, but at this point it seems to have gathered very little interest and momentum.