If you want that, you can just fire up links/elinks/lynx; surprisingly, they still work fairly well even today.
But even the writer of the article didn't want to see just the text - she wanted to see the shoes and decide which ones to buy. You don't want to see a table of shoe numbers, descriptions, and prices when you're shopping; visuals are important, too.
That's why no one actually uses text-only browsers.
She wanted to "see the content". The "content" just included images as well as text. What she didn't want was the heavy interface.
Maybe a new content type would work... "text/html" would be the application and "text/static-html" would be "just the content"? Browsers would then just have a toggle option of some sort.
Replying to myself to clarify that I wasn't really being serious. Certainly it would be nice to be able to tell arbitrary web sites "no interface, just human-friendly information". But since they'd all have to implement that, it really wouldn't work better than what is already possible.
This whole thing seems like a Macromedia redux of sorts. Web devs suddenly have a bunch of new tools to play with and some of them go nuts and do stupid things we'll all laugh about in 5-10 years. Of course in 5-10 years there will be something else for short-sighted marketers to abuse...
Or, the designers/developers can make sensible decisions about what to include. The existing tools are fine, and provide plenty of techniques for optimizing to different viewing platforms and environments. The problem is people getting carried away with the whiz-bang stuff that's possible, while ignoring degradation. I can't imagine how XML/XSLT would be any better.
The issue isn't what format the content is in, but that it's retrieved through client-side logic rather than as part of the query response.
Also, what's wrong with HTML/CSS that is in any way resolved by XML/XSLT? I'm genuinely curious why somebody would put their content into a format with no default, well-understood display semantics.
Maybe I've misunderstood how the XML/XSLT paradigm works, but my idea was that you have a server-side API serving up raw data, with the display controlled by code on the client side that can be easily customised. So the server could serve up something like
<shoe>
  <images>
    <image>foo.com/x.jpg</image>
    <image>foo.com/y.jpg</image>
  </images>
  <description>This shoe is very advanced bla bla bla</description>
  <fancyintroanimation>foo.com/intro1.swf</fancyintroanimation>
</shoe>
<shoe>
....
and the client could use the default XSLT if they were happy with it, and a custom one otherwise.
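A default stylesheet for that payload could be a minimal sketch like the following - the element names just mirror the example above, and the template renders each shoe's images and description as plain HTML while deliberately skipping the fancy intro animation:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Match the document root and emit a bare HTML page -->
  <xsl:template match="/">
    <html>
      <body>
        <!-- One block per <shoe> element, wherever it appears -->
        <xsl:for-each select="//shoe">
          <div>
            <!-- Render every image URL as an <img> tag -->
            <xsl:for-each select="images/image">
              <img src="{.}" alt=""/>
            </xsl:for-each>
            <p><xsl:value-of select="description"/></p>
            <!-- <fancyintroanimation> is intentionally ignored -->
          </div>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

A custom client stylesheet would only need to swap out the templates; the server's data format stays the same.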
If you turn off JavaScript and cookies, a ton of sites that serve what at least could be static content, and which you don't need to log into, simply stop working.
What happened to graceful degradation? Is it not agile enough to expect a server to be able to work at least minimally with any browser?
Online stores show so little of what they're actually selling - we wanted to try stripping out everything that isn't the products. (using shoes from Amazon as the example here)