It's more in the category of "the new kid on the block tries to implement to spec and discovers what everyone else already knows: spec's hosed."
I once received a bug report about a site that consistently broke after the computer woke from sleep... but only if the computer was a Macintosh, and only if the browser was Chrome. The root cause turned out to be that when the machine slept and reawoke, XMLHttpRequests attached to timers in an open webpage would all fire at once.
On Windows and Linux, apparently, the network stack would dutifully pause those requests while the radio took a moment to reestablish its connection. Mac OS X, adhering to the spec, did not pause them but instead immediately reported on wake that the network was unavailable.
So the other browsers on Mac OS wisely broke spec and ignored the first couple of network-down events that came in after sleep, quietly retrying the RPCs. Chrome adhered to spec and dutifully reported the dropped network as an error, which failed all those RPCs.
As a result, the client's page was broken, but only on Mac OS, only on Wi-Fi, and only in Chrome. Would you guess that their first solution was to painstakingly rewrite all of their setTimeout logic to move the retries up to the JavaScript layer, or would you guess that it was to report a bug to Google and tell their regular users Chrome was broken?
In any case, it's a moot point now because at some point Chrome changed their network stack implementation to match everybody else's. ;)
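(For the curious: "moving the retries up to the JavaScript layer" looks roughly like the sketch below. It uses fetch rather than raw XMLHttpRequest for brevity, and the URL, retry count, and delays are all invented for illustration, not taken from the actual client code.)

```typescript
// Hypothetical sketch: retry at the JS layer instead of trusting the first
// network error after wake. Every name and number here is illustrative.
async function fetchWithRetry(
  url: string,
  retries = 3,
  delayMs = 1000
): Promise<Response> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fetch(url);
    } catch (err) {
      // fetch rejects on network-level failures (e.g. "network unavailable"
      // right after wake); wait a bit and try again instead of failing the RPC.
      if (attempt === retries) throw err;
      await new Promise((resolve) =>
        setTimeout(resolve, delayMs * (attempt + 1))
      );
    }
  }
  throw new Error("unreachable"); // satisfies the compiler
}

// Usage: a timer-driven poll that tolerates the post-wake error burst.
setInterval(() => {
  fetchWithRetry("https://example.com/api/status")
    .then((res) => res.json())
    .then((data) => console.log(data))
    .catch((err) => console.error("gave up after retries:", err));
}, 60_000);
```

The point is just that the page, not the browser, decides whether the first "network unavailable" after wake is fatal.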
It’s getting closer to the point where we call the entire current “web” a Google-specific network, rename Google Chrome a “Google” browser instead of a web browser, and build a new interlinked network that doesn’t require one specific company’s product to use. (Never mind that the company in question is an advertising company.)
The “Google” network and its sites can be kept around as a necessary-evil proprietary service, the way Facebook is for many, and LinkedIn too.
That sounds like requesting an awful lot of volunteer labor from web developers who don't want to do that.
Web developers, ultimately, have very little vested interest in which browser is winning or who's using what, as long as (a) people can access their site and (b) they don't have to write the site twice. That's their incentive model. Telling them that the spec is X and that Google is wrong for doing Y, when Google has something like 90% market share, is just a funny idea for them to laugh at before going right back to solving the problem in a way that reaches 90% of the possible users (and then maybe, time permitting, writing pieces of the site twice to pick up a fraction of the remaining 10%).
> Web developers, ultimately, have very little vested interest
Yeah, of course. It's only the platform they depend on. Why not cede control of it to Google, right? What's the worst that could happen?
Sometimes I ask myself why people even try. What is the point when people have such an apathetic attitude? What is the point of these web standards? Some huge company comes in, dominates the market and suddenly they're the standard. Nobody cares as long as they're making money, even though the huge company is usurping control of the platform. Not even a year ago I saw a post here about people at Google talking about moving the web away from the previous "owned" model to a "managed" model or something like that. As long as people don't have to work too hard to get paid, who cares, right? This notion of an open platform is just a funny idea to laugh at.
Those people are then, to further the analogy, not “web” developers but “Google network” developers. Therefore, I would not ask them to do anything more than they are doing; what they are doing is irrelevant to the new interlinked network.
Which is fine. I'm sure they will care when the new interlinked network becomes relevant to anyone for anything.
(If one wants to go down that road, one should probably start by reasoning from the "killer app" of a novel network model. The killer app of the web was HTML, and specifically the hyperlink combined with the URL, which allowed for association of information in a way that hadn't been possible before. It'll be hard to one-up that, but if someone could find a way to do it that would be hard for HTML to simply grow to consume, there may be room for a novel information service.)
It's useful to distinguish between them, though. Apps are almost always first-party software that does only what's officially supported. Browsers have a long history of customizability, extensibility, programmability, and adversarial interoperability.
What if instead of browsers and ad blockers we had an extensive collection of web scrapers for every web site out there?
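Concretely, I imagine something like this per site; the URL and the extraction rule are made up, and a real scraper would use a proper HTML parser plus per-site rules rather than a regex:

```typescript
// Hypothetical per-site scraper: fetch a page and keep only the parts the
// user cares about, skipping everything else (ads, trackers, chrome).
async function scrapeHeadlines(url: string): Promise<string[]> {
  const res = await fetch(url);
  const html = await res.text();
  // Naive extraction: grab longer link texts; just enough to show the idea.
  const matches = html.matchAll(/<a[^>]*>([^<]{20,})<\/a>/g);
  return Array.from(matches, (m) => m[1].trim());
}

scrapeHeadlines("https://example.com/news")
  .then((headlines) => headlines.forEach((h) => console.log("-", h)))
  .catch((err) => console.error("scrape failed:", err));
```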
I do believe we have all of those things currently, but with the scrapers working against humans rather than for them. Our only recourse is smart human scrapers, and that job starts to suck really fast.