Hacker News
Joe Hewitt - What the web is and is not (joehewitt.com)
72 points by riledhel on Sept 26, 2011 | hide | past | favorite | 32 comments



> Hyperlink traversal matters. The Internet being global and decentralized matters. HTML does not matter.

He gets it.

I think a lot of people misinterpreted his previous blog post as saying that the web is worse as a platform than iOS. No, he's saying that HTML is worse as an API than the iOS frameworks. The two are different, and the difference is crucial.

The web is Wikipedia, Google, Facebook, blogs, and HN --- services that couldn't exist without the concept of the hyperlink. That we currently write web pages in HTML, CSS, JS, and related technologies isn't central to this, and that is what Hewitt doesn't like about the web.

He's not calling for us to stop writing web apps. He's calling on us to put some serious thought into a better markup, presentation, and application language. Now that we have decades of experience writing HTML, we might actually be qualified to do so.


I agree, with one nitpick. The web is Wikipedia, Google, Facebook, blogs, HN, and everything else because of the concept of the hyperlink. It couldn't exist without the hyperlink. That everything is written in HTML, CSS, JS, and other related technologies doesn't matter. The hyperlink does, regardless of how it is implemented.


The inherent tension between the speed and efficiency of closed platforms and the decentralized power of open, standards-based platforms seems insurmountable. Except that's sort of what I find so great and so fascinating about Google, because it represents the one singular, powerful, centralized, commercial force whose incentives align almost exactly with the Web at large.[0] And I think it's really great to have such a power innovating on basically every layer of the Web stack, from protocols, to codecs, to browsers, to operating systems.

[0]: I wrote a bit more about that here: http://blog.byjoemoon.com/post/166900257/why-i-trust-google

And a bit about the problems around the particular relationship Google has with the web here: http://blog.byjoemoon.com/post/7590977101/googles-existentia...


The problem with Google is that Google is pro-web as long as the web is pro-Google.

Take Google+ for instance - they could have at least attempted a decentralized social network, but they didn't. Instead they are requiring real names, because apparently pseudonyms are bad for Google ... and yet being anonymous is something which the web (in general) not only allows, but also encourages.

Yes, you can choose not to use Google+ or Facebook, but how long will that freedom last once absolutely everybody is on one of these social networks? How long until email is no longer the primary means of communicating at work with your customers or colleagues?

It's a pity that the millions of users with accounts on Facebook haven't discovered the advantages of opening an account on Wordpress.com or similar services, with their own domain name; or the advantages of using Google Reader to follow your acquaintances.


Your first sentence is right as far as it goes.

I agree with you that G+ would be better decentralized, and based on open standards. But I think they tried that with Buzz and Wave, both catastrophic failures. And I think the strategy this time is to try to nail down the user experience first. I think the real names policy is a part of that, too (though I agree, too, that this is a bad policy).[0]

Could all be wishful thinking, I suppose.

[0]: Some weak evidence for this being the strategy: http://news.ycombinator.com/item?id=3019052


Actually, Google+ is based on open standards. Here is what Evan Prodromou (the creator of http://status.net, one of the biggest decentralized open source social networks) has to say about the API:

> Activity Streams, PoCo, OAuth 2. I feel like the girl in Jurassic Park. "I know this!"

https://plus.google.com/104323674441008487802/posts/jJNsUDPc...

So, you couldn't get more open and decentralized than that. The G+ API is read-only for now, though the devs have made it clear that it won't stay that way.


I think what many of us really want is federation or some level of interoperability such that we can run our own services that work transparently with G+ as a peer. More like the way email works.

(As a bonus, this would neatly sidestep the ano/pseudonymity issue.)

So, when I can run byjoemoon+ on my own server and interoperate with G+ users with complete feature parity, I'll be really happy.

Meanwhile, I think the API stuff is great, but only a half-measure.


This was the plan with Google Wave; did they not provide people with the ability to set up their own Wave boxes? How'd that turn out?


> The inherent tension between the speed and efficiency of closed platforms and the decentralized power of open standards based platforms seems insurmountable.

Isn't this where open source platforms like Linux come in? Linux has been fighting for desktop adoption for decades, but the surrounding projects (Cairo, Pango, etc) that support that effort have evolved a tremendously powerful platform. An open source project could easily build and popularize an alternative to HTML and the like.

The problem I see is the entrenched HTML output (both static and from dynamic generation) of billions of websites. The longer HTML is entrenched, the bigger the leverage it has. And that entrenchment isn't wholly without merit either. Resolution and viewport independent UIs, especially over such a broad range, are incredibly difficult.

That said, I fully expect things like onmouseover scripts to disappear as we try to learn new UI ideas for multitouch devices.


The existence of the HTTP Accept header shows that HTTP was designed with a variety of incompatible user-agents in mind. If you write new user-agent, use Accept to specify what content types you can render. Intelligent web servers can serve the same content in different formats for the same URLs. Your browser that only handles HTML will browse on unimpeded, my browser that also handles NewLanguage3000 will get the newfangled version. Everybody wins.
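A rough sketch of what that negotiation could look like server-side, in Python. The media type for the imagined "NewLanguage3000" format is made up for illustration, and this is a simplified parser, not a full RFC-compliant one:

```python
def parse_accept(header):
    """Parse an Accept header into (media_type, q) pairs, best first."""
    entries = []
    for part in header.split(","):
        fields = part.strip().split(";")
        media_type = fields[0].strip()
        q = 1.0  # per HTTP, quality defaults to 1.0 when omitted
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                try:
                    q = float(value)
                except ValueError:
                    q = 0.0
        entries.append((media_type, q))
    entries.sort(key=lambda e: e[1], reverse=True)  # stable: ties keep order
    return entries

def negotiate(header, available):
    """Return the first acceptable media type the server can produce."""
    for media_type, q in parse_accept(header):
        if q <= 0:
            continue
        if media_type in available:
            return media_type
        if media_type == "*/*":
            return available[0]
    return None

# An old browser that only speaks HTML gets HTML; a newer client that
# prefers the hypothetical format gets that instead -- same URL.
served = ["text/html", "application/x-newlanguage3000"]
print(negotiate("text/html", served))  # text/html
print(negotiate("application/x-newlanguage3000;q=1.0, text/html;q=0.5", served))
```

The point being: the old client never notices the new format exists, which is exactly the graceful-coexistence story the comment describes.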


> Everybody wins.

You hand out this ribbon far too easily. Parallel offerings[1] are pretty much guaranteed not to be parallel (or resemble it even enough to fool those who want to be fooled).

1. Separate-but-equal, or whatever you want to call it


I'm not sure I get your point, so I may be arguing against something else entirely here.

That said, the point of parallel offerings is not to provide identical representations of a given resource. Even XML+XSLT versus XHTML might have different semantic implications, so the best assumption is that no two ways of representing a given piece of content can ever be identical.

Well, that's a Good Thing. If I'm requesting a given resource and I signal preference for a PDF-encoded response, that implies on some level that I want a print-formatted, print media version of that resource, whereas its HTML counterpart can be hypertext and multimedia.

What you as a server are committing to is that both servings are representations of the same given resource, but without prejudice of putting each format to its best use.

This is also beautiful in that if the HTML stack proves to be a bad idea in the long run, rather than being deprecated, it can fall into disuse slowly alongside the slow rise of its successor(s). In fact, there's absolutely zilch stopping us from having more than one canonical format — any server-side MVC (or MVCish) architecture can probably handle that out of the box!

EDIT: to be sure, the question of how expensive it is to develop for two different representation formats is a whole other story.
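To make the MVC-ish idea concrete, here's a minimal Python sketch: one model, two format-specific views. All the names here (Article, render_html, render_json) are invented for illustration, not taken from any particular framework:

```python
import json
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    body: str

def render_html(article):
    # Hypertext representation of the resource
    return f"<article><h1>{article.title}</h1><p>{article.body}</p></article>"

def render_json(article):
    # Machine-readable representation of the same resource
    return json.dumps({"title": article.title, "body": article.body})

RENDERERS = {"text/html": render_html, "application/json": render_json}

def represent(article, media_type):
    """Both outputs represent the same resource, each in its own idiom."""
    return RENDERERS[media_type](article)

post = Article("What the web is", "Hyperlinks matter; HTML is incidental.")
print(represent(post, "text/html"))
print(represent(post, "application/json"))
```

The development cost the EDIT mentions is real, but it's mostly the cost of writing additional view functions; the model and routing are shared.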


> I'm not sure I get your point

If it helps, it doesn't look like you did. :)

> without prejudice of putting each format to its best use

Doing what you can to the extent of what's possible with multiple mediums (yeah, mediums, I said it) is real pie-in-the-sky, false promise kind of stuff.

People can't manage today to put the web to anything you could mistake for its best use. Why should it inspire confidence when anyone says it's going to happen when their resources are halved?


The power of the web isn't any one technology, but a combination of two very powerful things: the ability to present arbitrary content with an arbitrary look and feel, and the ability to trivially point a user at someone else's content.

We have a (HUGELY successful) first crack at a platform (HTML, CSS, JS) that enables completely custom UI, presentation and generation of content.

Hyperlinks form the other half of this in that my content can send you over to someone else's content (with their own customized UI and presentation) with a single click of a mouse.

The problem is that people are using the first half of the web as a basis for delivering traditional (thick client, local) applications, and that does indeed suck.

We've become focused far more on the experience than the content. I think that focus is misplaced. We've spent a decade now trying and failing to perfectly implement our content for our UI paradigm and now we're paying the price as that paradigm shifts. I expect that paradigm to shift again before we perfect our new implementations.


> We have a (HUGELY successful) first crack at a platform (HTML, CSS, JS) that enables completely custom UI, presentation and generation of content.

Java Swing lets you do this, too. But no hyperlinks.


Swing takes a different approach. It is a pluggable widget toolkit rather than a set of lower level constructs used to build primitives.


[deleted]


That depends entirely on how you think about "turtles all the way down!" Primitives have to be built from something, our CPUs don't speak HTML natively.

We'll use jQuery as an example. People use jQuery bits to build widget sets for their own web UIs. Widget set libraries are built on top of lower-level libraries (primitives) that are themselves built on HTML/CSS/JS. Companies like Microsoft are taking this to a whole new level with their Metro efforts.

These kinds of things are where browsers fall down and go boom. As other people have said here, the world is trying to turn documents (unmarked up content) into applications. It was a good first crack at the idea of marking up documents with application code, but ultimately its warts are starting to show.


I deleted my comment because I realized I'd posted hastily without noticing you were the one saraid216 was replying to. So trying to convince you that HTML, JS, and CSS aren't really those sorts of primitives either would be kind of a weird effort, since that's the position you've already taken.


It's strange; I found little to disagree with in this post, but when I went back to read the original, something about it rubbed me the wrong way.

It seems to me that this response was written because people (including myself) get jumpy when the death of the Web is mentioned -- they conflate the Web with the open principles underlying the Internet, and a BDFL there runs counter to those principles.

I don't really have a dog in the fight as far as HTML etc. are concerned, though I do think the benefits of having an open-standards, (mostly) consistent rendering framework and execution environment across all major platforms and architectures, including mobile devices, far (far!) outweigh the drawbacks of our retrofitting what began as a document markup language. I'll stick with it until we -- or if Joe has his way, a charismatic visionary -- can come up with a more appropriate replacement.


Yup, it's all about DNS and links.

But let's try to look more than a few years out for once? OKAY?

You could make a very decent web that is made out of many clients, that can all link to each other and maybe even all run in separate tabs of the same browser (or whatever 'process' model you want).

But god help you 50 years down the road when you need to download a new client browser for every other web page.

This, among other long-term trade-offs, is one of the very good reasons why the "open web" on its current trajectory is a very good idea.


It seems extreme that the web in 50 years will require a different browser for every site. The way I read this is that HTML/CSS/JS could be replaced over time, bit by bit, by something else. Flash could have done this, though I think its chance has passed. Just like Java applets and Silverlight. Maybe Chrome will add an interpreter for CoffeeScript and in 10 years all new sites will use that instead of JS. Maybe Firefox will introduce something other than HTML to build a page in that works better with CSS and JS, and in 15 years everything new will be in that language.

It isn't that everything will be different, but the implementation of the web is less important than the openness of the web. Fifteen, twenty years ago, the web was made up of newsgroups, BBSes, and email. HTML and browsers were new. Things have changed, but the web still exists. Something new will come along that will slowly replace what we know as the web. What it is and how it happens is unknown, but I expect it to happen.


Thank you for saying it. I have been echoing this sentiment for a long time. Why settle for "good enough" when the root of all progress is "better"? I think one key thing we will need first is a language to challenge JavaScript and HTML. I think C# and Java are good, but they could be a lot better. The language will need native rich-media abilities and enough abstraction to run on any device or operating system.

The problem with HTML (to me) is that it is not really driven by capitalism or competition - say what you will about the two, but they do force companies to strive for the best products possible.


"..most of the positive responses were from people who, like me, have a depth of experience on other client platforms.. Then there are the people who reacted negatively to my post. A common theme throughout these responses is complacency."

So everybody who disagreed with you is complacent. Who's being complacent here?

I enjoyed the original post, but the tone of this one is off-putting.

---

Between the two articles I built my first iPhone app. It felt like a ghetto. Interface Builder, I christen thee 'click and pray'. Perhaps there's a simpler explanation for these phenomena: we tend to discount the drawbacks in what we know.


He doesn't mention 'View source', which I think is pretty important.

As Clay Shirky says: http://www.shirky.com/writings/view_source.html


"HTML" fails at delivering hot new stuff, but native apps on the other hand give too much control to the developer and a lot of developers abuse it. Having browser limitations can sometimes be a good thing.


Lots of words without a message. We don't need someone to tell us HTML sucks...

It's like saying "this API sucks! We'll fix it with a new API"


I used to deride web applications by saying we are trying to make DOCUMENTS behave like APPLICATIONS.

However, we should keep in mind that the explosion on the web also happened because of user generated content, and by that I don't mean just silly facebook comments. I mean blog posts, I mean photos, I mean content where the users are able to control the presentation. Or ideally, where users indicate semantic information.

This is what has made Web 2.0 great, if I may :)

And there is one language for this: HTML. It may not be perfect but what we are really talking about is an XML-like language to indicate semantics. It is ubiquitous.

So it's not just about hyperlinks. It is about structured data where you are able to embed the semantic information directly in the data. Sure, you could have the web based around JSON, for example. But why bother, when we already use brackets everywhere?

And once you start using those brackets, everything else follows.
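One way to see what "everything else follows" from those brackets: once semantics are embedded in the markup, any consumer with a standard parser can recover the structure (here, hyperlinks) without knowing anything about the page's presentation. A small sketch using only the Python stdlib, with an invented document for illustration:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

doc = ('<p>See <a href="http://joehewitt.com">the post</a> and '
       '<a href="http://news.ycombinator.com">the discussion</a>.</p>')
parser = LinkExtractor()
parser.feed(doc)
print(parser.links)
```

A JSON-based web could carry the same information, but it would have to reinvent the convention that says "this field is a link" — which HTML already gives us for free.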

But we would do well if we had this: http://news.ycombinator.com/item?id=2024164


> However, we should keep in mind that the explosion on the web also happened because of user generated content, and by that I don't mean just silly facebook comments.

"Document" in the sense that archivists and other information specialists use also applies equally to things like Facebook comments or random YouTube videos. Any discrete piece of content (and I'm not being rigorous with this definition) is a document and the purpose of the web is basically linking them together.

That these documents are often displayed simultaneously on the same webpage doesn't keep them from being separate documents.


Can I get some credit for inspiring this chain of thought? Haha... http://news.ycombinator.com/item?id=3024558


I suspect that most of the positive responses were from people who, like me, have a depth of experience...

What a condescending asshole.


I know Joe personally and he is not condescending or an asshole. He's an extremely nice, personable guy who happens to have done some amazing things that have changed the web (like starting Firefox, for instance).

Perhaps you should think a little before insulting someone you don't know based on half a sentence they wrote.


... on other client platforms, and are able to judge the Web in comparison.

Finish the sentence please.



