Life beyond HTTP 1.1: Google’s SPDY (igvita.com)
204 points by igrigorik on April 7, 2011 | 53 comments



Nice, but a bit worrisome. Is such a binary protocol a Good Thing?

Ancient people, even though they didn't have much CPU, memory or network to spare, in all their wisdom made high-level protocols text-oriented. Sure, they didn't anticipate everything (so we're still haunted by the evil spirits of, for example, UTF-7 and quoted-printable), but the overall design was simple yet perfectly functional. And anyone could use just a telnet client to talk to an HTTP (FTP, SMTP, etc.) server. With SPDY you can't do that anymore; you need specialized software.

I'm all for something like "Upgrade: SPDY" / "HTTP/1.1 101 Switching Protocols" (after which a binary, framed, message-oriented protocol takes effect), but I'm somewhat worried by a binary-from-the-beginning "HTTP 2.0". Maybe I'm just too conservative?

(Side question: if we want multiplexed streams so badly, why does no one care about SCTP? Is it flawed or inappropriate?)


The reason we didn't like the Upgrade header is that it requires yet another round trip in the protocol. Round trips are expensive, and getting worse for both desktops and mobile. (Average RTT to Google is ~114 ms, and that's with pretty good global presence.)

As for SCTP, I think applications generally drive progress better than OSes do. Certainly the OSes have had enough time to sort out SCTP, but they haven't done it, and the end-game is nowhere in sight. Until there is an app that showcases it, will it ever happen?

My recommendation is to first prove new protocols at the application layer (perhaps run SCTP over UDP just to grease the skids). When people are addicted to how much better the experience is (multihoming + concurrent streams), then the OSes will pull it into the kernel to make it more scalable.

What do you think?


The simplicity of text protocols is illusory and mostly leads to bad implementations that look like they work but don't really. Any protocol that supports multiplexing is fundamentally complex; for example, BEEP is a "text" protocol but I doubt you could speak it with telnet.

SPDY is always an upgrade; AFAIK there's no way to start out talking SPDY (since there's no spdy: scheme). Every SPDY server should still support old-fashioned HTTP.

SCTP gets broken by NATs, so outside of a few telco networks it's dead.


Nobody disputes that a secure protocol isn't readable text :-)

I fundamentally believe the protocol has to be server-authenticated and encrypted, always. We've seen breach after breach of user privacy, and governments' desire to crack down on their citizens knows no bounds. We simply have to encrypt. Once we do that, text is off the table, and it doesn't really matter if it is binary or not.

Of course, we need much better tools for managing the encryption, I'm not a believer that the status quo is adequate in any way.


TLS is just a layer, which replaces `telnet` with `openssl s_client`.
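
For instance (example.com is just a placeholder host), the only change from plain telnet is the command that opens the connection; the request is still typed the same way:

    openssl s_client -connect example.com:443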


Alright, but that is just theory. The reason you'd want a text protocol is so that humans could read it. Once you secure the protocol, you can't read it without a machine to help you.


Well, by the same reasoning I "can't" easily talk HTTP because I can't talk TCP, IP and 802.3 - they're all binary.

This is exactly why I wrote "high-level protocols". With just a telnet client I can talk HTTP, FTP, SMTP, IMAP, POP3, IRC and so on, and the machine just needs to handle the lower-level details.
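
For example, a complete request can be typed by hand (example.com is just a placeholder), and the response comes back as readable text:

    $ telnet example.com 80
    GET / HTTP/1.1
    Host: example.com
    Connection: close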


Yeah it's only simple in the sense that you can generate a packet with telnet. It's not so simple when you're trying to write a server that has to parse the stuff in a correct and secure manner.


Maybe, but SPDY does nothing in this area. It's just a message-oriented, multiplexed, binary-framed medium for HTTP.


Disagree. Having written both HTTP clients and servers, I find the SPDY spec a breath of fresh air. Every frame has a predefined header length and a clearly defined length for the trailing data, which removes a heck of a lot of edge cases when implementing a parser.
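
Roughly, from the draft-2 framing (a sketch from memory of the spec, not a normative layout): every frame starts with a fixed 8-byte header, e.g. for control frames

    +----------------------------------+
    |C=1| Version (15b) |  Type (16b)  |
    +----------------------------------+
    | Flags (8b) |    Length (24b)     |
    +----------------------------------+
    |     Length bytes of payload      |
    +----------------------------------+

while data frames replace the version/type half with a 31-bit stream ID, so a parser always knows exactly how many bytes to read next.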

Now if only TCP were message-oriented... For that, I guess we have ZMQ.


Is there any need today to talk manually to a web server using telnet?


I very frequently use netcat (or openssl s_client for HTTPS servers) to connect directly to a server I am debugging.

It could be argued that there is no "need" for this, but I find myself doing it quite frequently all the same. That may have something to do with the fact that I am developing a relatively low-level reverse HTTP proxy tool (PageKite), but I also did it quite a bit when working on more traditional webapps.

Once you speak the protocol it's nice to be able to just connect and chat with the server directly. :-) Humans are really bad at speaking binary.


I don't know. Probably not, but it's sometimes convenient to do so.

Well, I did so when I had to debug a redirect-caching problem with a "walled garden" setup. It could also have been done with more specialized software; I just happened to use telnet because it was readily available.

I also sometimes check a server's availability (and overall response correctness) by telnetting to it, though mostly because I was too lazy to set up a tunnel to the LAN those servers reside on.


Whilst you can use more dedicated tools than telnet for debugging HTTP connections, there is still an argument for protocols simple enough that you can use telnet.

Take Arduino, for example: writing a naive but working HTTP client is child's play, while writing a SPDY client on Arduino is well beyond many people's capabilities. Yes, libraries will appear eventually, but we haven't even got a decent open HTTP library yet.


I most commonly do it to read the HTTP headers using HEAD.


    curl -I
is an easy way to do that as well.

It requires fewer keystrokes :P


I don't like SPDY. It's trying to solve a transport problem at the application level. Plus it seems to be quite complex.

I'd love to see Google promote a transport protocol like SCTP[1] and do HTTP over SCTP instead. If Google pushed SCTP a little bit, we might see it pop up on Linux and Windows within a few years.

[1]:http://en.wikipedia.org/wiki/Stream_Control_Transmission_Pro...


"Q: What about SCTP?

A: SCTP is an interesting potential alternate transport, which offers multiple streams over a single connection. However, again, it requires changing the transport stack, which will make it very difficult to deploy across existing home routers. Also, SCTP alone isn't the silver bullet; application-layer changes still need to be made to efficiently use the channel between the server and client."

http://www.chromium.org/spdy/spdy-whitepaper


I agree that SPDY would be a "quicker" solution. Just modify the browsers and wait a year or two, and 30%+ of the people browsing the web will have it.

SCTP has the nice side effect of improving things like streaming and games.

As for application-layer changes, I don't think they would be too difficult to do, kind of like IPv6 (I don't have anything to back this up, it's just a hunch).

SCTP deployment will take much longer than SPDY's, but SCTP seems to be the "right thing to do", not only for the web but for everything else that uses the network. The Internet is not only http://.

UPDATE: I just realized that saying that the transition to SCTP will be like IPv6 isn't necessarily a good point for SCTP :-D ... I guess I'm a purist and not a pragmatist.


Same thoughts.

Present-day GNU/Linux should work with IPPROTO_SCTP; it's only Windows that lacks the implementation out of the box.


Correct me if I'm wrong, but as far as I'm aware, GNU has nothing to do with SCTP. Linux supports SCTP.

My non-GNU Linux system talks SCTP just fine.


SCTP addresses some of the same needs (primarily the multiplexing), but the major hurdle there is the fact that we're ditching TCP. As far as an upgrade path goes, it is much harder to get existing servers to support SCTP (think Apache, Nginx, etc.) than to bolt on support for a different application-level protocol (replace/augment HTTP with SPDY).


I disagree. It's much, much easier to support SCTP (since it's already provided by the operating system) than it is to support SPDY with all the server pushes, etc.

But to be honest, I don't know what the source of such a comparison was... It's apples to oranges.


SCTP is not available on Windows by default, and you need administrative privileges to deploy it. I'm not trying to start an OS war debate, but this is a very practical problem.

Mobile, oddly enough, may be the best route to bring both IPv6 and SCTP to life.


Good point... But we were talking about servers ;)


Wouldn't SCTP need to be installed on both the client and the server?


Yes (for adoption) and no (for having server support).

But those are two very different problems ;)


Heh - I didn't think a server side SCTP implementation was very interesting if you don't have clients to use it.


The source of the comparison is that the main thrust of both SCTP and SPDY is multiplexing multiple independent data streams on a single connection.


Is it, really? For me, multiplexing is just a nice addition to SPDY.

Personally, I consider "server push" the main feature, and "full encryption and compression" nice improvements.


Well, the primary reasons for SPDY's existence are encryption everywhere and speed, neither of which really exists with current HTTP/HTTPS: HTTPS is still slower than HTTP, and HTTP is slower than SPDY.

A lot of the speed increase comes from multiplexing, so without it SPDY wouldn't be able to achieve most of its goals.


AFAIK, nginx does not have support for SPDY.


SCTP is a good start for sure, and someday may make sense. The problem is a deployment one: it can't pass through NAT, making it off limits to most users today.

As for solving problems from the transport, that is not true. I assume you're suggesting that streams can only be tackled at the transport layer, but SPDY's compression is clearly an app-level endeavor.


Whoa, so Google Chrome is now talking SPDY to www.google.com ? I had no idea.

Anyone know a good way of running a packet sniffer on OS X so I can see it in action?


Wireshark works just fine on OSX.

http://wireshark.org or brew install wireshark


chrome://net-internals/#spdy


Wow! When they removed the "http://" from Chrome, I knew exactly why they were doing it: to hide the SPDY rollout while simultaneously not breaking copy/paste interop with non-Chrome users. However, I had no idea they had already done it! Amazingly clever and effective. I totally saw it coming and then totally missed its arrival. Awesome.


No, this is pure conspiracy theory. :-)

The decision to drop "http://" from the display was a UI decision and had nothing to do with the internals. The idea is that "http://" is just user confusion: most users can't spell HTTP, much less know why it's there. Why do we subject our poor users to it? Personally, I don't care.

Chrome does not recognize "spdy://" as a protocol scheme, and the UI display changes nothing with respect to how chrome selects protocols.


In terminal:

sudo tcpdump -s3000 -X -i en1 port 80

Switch to -i en0 if you're using Ethernet, or if you're on a MacBook Air (where Wi-Fi is en0).


The author is wrong about his HTTP history.

Keep-alive was a feature of HTTP/1.0 and was phased out in HTTP/1.1 in favour of "persistent connections."


Hmm, that's a good catch - thanks. HTTP 1.0 didn't explicitly specify anything around persistent connections, so in practice you add a Keep-Alive header and hope that the server will respect it. By comparison, HTTP 1.1 defaults to persistent connections and requires a "Connection: close" header to indicate otherwise.
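
Roughly, the difference on the wire (example.com is just a placeholder):

    GET / HTTP/1.0
    Connection: Keep-Alive      (opt-in; the server may ignore it)

    GET / HTTP/1.1
    Host: example.com
    (persistent by default; add "Connection: close" to opt out)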


So wait, if we somehow ran SPDY on our own servers, would Chrome auto-detect it, or is it hardwired to Google services?


There are several proposed mechanisms to "enable" SPDY. The preferred one is to run over SSL and use the NPN extension to negotiate SPDY support: http://tools.ietf.org/html/draft-agl-tls-nextprotoneg-00.htm...

Alternatively, there is work on sending an "upgrade" header in your regular HTTP response: http://code.google.com/p/chromium/issues/detail?id=69688

The NPN route is obviously the best in terms of performance, since the protocol can be negotiated as part of the TLS handshake.
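
If your OpenSSL build is new enough to know about NPN (older builds won't have the flag), you can see what a server advertises with something like:

    openssl s_client -connect your.spdy.host:443 -nextprotoneg spdy/2,http/1.1

where your.spdy.host is whatever server you want to test.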

Long story short: you can definitely run your own SPDY server and Chrome will auto-detect and use the protocol.


Yes, it's all open, and there are non-Google SPDY servers out there already. My own site (you can probably find it) will speak SPDY for you.

If you have any trouble, please hop on to the spdy-dev@google.com mailing list for help.


The last bit of the article tells you how you can implement SPDY on Apache with either Ruby or Python.

It looks like SPDY-enabled clients check for a SPDY implementation first (a check that fails very quickly if it's absent) and then fall back to HTTP.


Does anyone know of plans to build this into Firefox? Sounds like an interesting project.

Edit: already found it https://bugzilla.mozilla.org/show_bug.cgi?id=spdy


The great thing is that Google has enough gravitational mass to move the world in this direction without the multi-year delay of negotiations inside a protocol specification committee.

Once everybody is on board, when will the license fee be requested?


What I don't understand is why having multiple streams inside a single TCP channel is preferable to having many TCP channels.


Because setting up a TCP connection is costly in terms of round trips/latency, and with SSL inside it, even more so.

Also, when you have only one connection you might utilize your bandwidth better, because of TCP's slow-start algorithm.
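
Rough back-of-the-envelope, using the ~114 ms average RTT cited upthread and assuming the usual 1-RTT TCP handshake plus a full 2-RTT TLS handshake:

    TCP handshake       1 RTT  ~ 114 ms
    TLS handshake       2 RTT  ~ 228 ms
    -----------------------------------
    total               3 RTT  ~ 342 ms  before the first request byte is even sent, per new connection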


I wonder if the better plan would be to insert another layer between TCP and HTTP (or any app layer protocol) that provides for multiple streams between the same points.

(If that's exactly what SPDY is, never mind.)


That's basically what SPDY is. It has two halves: a framing layer (which is generic) and a definition of how to embed HTTP within that framing layer. In theory, you could use the framing layer for other purposes, but we designed it tailor-made for HTTP.

For the original question: HTTP can only send one request at a time, while SPDY can send many at the same time, avoiding lots of round trips. As a side effect, it sends fewer bytes and packets too. Look for the IETF slides for the latest data.


The title seems to imply HTTP 1.1 is already dead/abandoned, when in fact almost nobody except Google is supporting SPDY.

Firefox doesn't support it, but if it's open enough there's a chance they might add support for it. MSIE support is far away, and I wouldn't place any bets on it being supported any time this year. If SPDY is supposed to be more than an HTTP option (i.e. "beyond HTTP 1.1"), all clients need to support it. That's far from the case today.

Given that the last time I heard about it was quite some time ago and little has changed, I would say a better title would be "HTTP times: SPDY is still around, still looking for friends".

Just because it comes out of Google's door doesn't make anything a guaranteed success, and outside HN's usual Google-praising sphere I haven't ever heard a single geek mention it. I'm not saying it's dead either, but at this point it seems to have gathered very little interest and momentum.


Also, SPDY does everything HTTP does; as seen from the 'upper layers' it looks and acts just like HTTP. Maybe more like HTTPS.



