For several years (2001-2005) we already had a situation where a single browser engine (Trident, in MSIE) held a dominant share of 90% or more. Indeed, this was bad for the Web as a platform, in pretty much the ways the article describes.
However, this time there is a difference: the new dominant browser engine is open source under very liberal licenses (BSD, LGPL). So anyone who is not satisfied with the direction of WebKit's development is welcome to fork it. In fact, Google has already replaced JavaScriptCore (part of WebKit) with V8.
So are there actually any reasons to have competing open-source full browser stacks? I believe the licensing reasons (the ones behind FreeBSD vs. Linux) are not very important in the browser world (Gecko and WebKit have very similar licensing terms). And I don't see many ideological reasons that couldn't be addressed by forking.
What else? Developers usually hate abandoning their work to go and improve a competitor's solution, especially in the open-source world. Developers like the feeling of doing important work, and money. Firefox's market share is still quite high (~25%), and Mozilla still receives money (the organization is non-profit, but I believe the developers are paid). Probably for these reasons the struggle will continue for some time.
However, it is hard to see what the web platform actually gains from this struggle. Every new web standard feature requires independent implementations from two different open-source projects, and the whole platform's adoption process moves only as fast as the slowest team.
It's a very valid point: it doesn't take a genius to figure out that competition drives development, while a market monopoly puts you in a great position to "extort" your customers (developers). If there were only one solution from only one company, it would literally be their way - and that's it.
Then again, this is why we develop web standards and markup languages (e.g. HTML, XHTML, XML) in the first place ;)
If a single company were in control of WebKit then it would be an issue, but thankfully there isn't one. I actually see WebKit's main competition being native apps!
Take a look at http://quetzalcoatal.blogspot.com/2013/02/why-software-monoc.... The author points out how GCC's monopoly, though open source, forces clang to copy its implementation quirks almost exactly, because it has been the de facto standard for years. Open source doesn't automatically make the problem go away.
It was surmountable in gcc's case, but it still took a lot of resources (which clang has). It might not be surmountable in other cases, and even if it is, it raises the bar quite a lot for new competitors.
I feel like we're at an interesting tipping point here: I've seen about an equal number of articles in favour of a Webkit monoculture and opposed to it.
The general pattern seems to be that, if you're interested in building a better X (browser, compiler, operating system), then a monoculture is bad. If you're interested in building on top of X (websites, code, applications), then a monoculture is great, so long as the dominant entity is good enough.
In lots of cases, I think the people building on the platform tend to get their way, both because they're more numerous and because the technology in X eventually stabilises, so fewer people want 'a better X'. If a truly better X does later emerge, it then needs a Herculean effort to break the monoculture (e.g. Firefox in the bad old days, clang, the Linux desktop). Then there's an interesting transition period before, perhaps, a new dominant force emerges.
Maybe Mozilla can maintain a mixture of rendering engines by being the determined underdog. It would probably make it easier for a new rendering engine to emerge, but the open web may seem less inviting than more uniform technologies. For all its faults, one of the reasons Flash did so well for so long was that developers didn't have to deal with multiple competing implementations.
"For several years (2001-2005) we've already had the situation when the single browser engine (Trident, MSIE) had the dominant share of 90% and more. Indeed, this was bad for the Web as a platform, pretty much in ways the article describes.
However this time we've got a difference: new dominant browser engine is open-source with a very liberal license (BSD, LGPL). So anyone who is not satisfied with the development of WebKit is welcome to fork. In fact, Google has replaced JavaScriptCore (part of WebKit) with V8.
So is there actually any reasons for having competitive open-source full browser stacks? I believe licensing reasons (the one behind FreeBSD and Linux) is not very important for browser world (Gecko and WebKit have very similar licensing terms). And I don't see much ideological reasons that can't be fixed by forking.
What else? Usually developers hate abandoning their work and switching to improve competitor's solutions — especially in the open source world. Developers like the feeling of doing important stuff and money. Market share of Firefox is still quite high (~25%) and Mozilla still receives money (company is non-profit, but developers are being paid I believe). Probably for this reasons the struggle will continue for some time.
However it is difficult to assume that there is something useful for web platform in this struggle. Every new web standard feature requires independent implementation from two different open-source projects and the whole platform adoption process goes as fast as the slowest team goes."
Competition in open source is not bad; it is actually needed. People can fork WebKit, but they might not be able to change it dramatically if the change is not in the interest of Google, Apple and Nokia.
I can't think of a case where the WebKit "bosses" rejected a change from another party, but it could happen. We need competition between implementations. What we don't need is double standards for the web.
Microsoft proposed and standardized CSS Grid, which is awesome, but Google and Apple for some reason do not implement it in WebKit, and Microsoft isn't going to do it for them either. Both ends are enjoying the situation. Pulse.com works great in IE10 because of CSS Grid, so Microsoft can put its "works best in IE" logo on its website again. The WebKit folks don't seem to care much about CSS Grid because they think flexbox is the solution.
This is the problem: we have companies creating and implementing new standards for their own use without caring about the rest of the web.
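To make the split concrete, here is a rough sketch (my own illustration, not code from anyone's site) of the detection dance this forces on developers: IE10 exposes the prefixed -ms-grid properties, WebKit exposes (prefixed) flexbox, and neither implements the other's layout model.

    // Sketch: pick a layout strategy based on which engine-specific layout
    // model the browser actually exposes. The property names are the real
    // DOM style names for the prefixed features (msGridColumns for IE10's
    // -ms-grid, webkitFlexWrap for WebKit's prefixed flexbox); the class
    // names are made up for this example.
    var style = document.documentElement.style;
    var hasMsGrid  = 'msGridColumns' in style;
    var hasFlexbox = 'flexWrap' in style || 'webkitFlexWrap' in style;

    if (hasMsGrid) {
      document.documentElement.className += ' layout-grid';      // IE10 path
    } else if (hasFlexbox) {
      document.documentElement.className += ' layout-flexbox';   // WebKit path
    } else {
      document.documentElement.className += ' layout-floats';    // everyone else
    }

Every site that wants to work on both ends carries some variant of this branching, which is exactly the opposite of developing against one standard.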
> I can't think of a case where the WebKit "bosses" rejected a change from another party, but it could happen.
Such things have already happened. For example, Google wanted to add changes to WebKit to support another VM in the browser (for Dart); Apple devs blocked the attempt for technical reasons, but some speculate political ones were relevant as well.
Maybe you can also explain the term hellbanned and its origin? Neither the FAQ nor the guidelines explain anything... Further, some of these hellbanned comments seem benign?
The name hellban is from Something Awful, but it's a much older idea. The basic problem with banning problem users is that they'll often just register a new account and keep making bad posts. Hellbanning tries to solve this problem by hiding from the user that they're banned; they can continue to post and everything appears to work from their end, but their posts are hidden from all other users. The hope is that eventually they'll get bored with getting ignored and just move on.
A lot of people find this distasteful for obvious reasons, but it's fairly effective. The occasional good posts from hellbanned users (which are a small minority; most are terrible, spam, or at best noise) are simply a result of the fact that moderators are not perfect.
The cost of Firefox switching to WebKit would be enormous, likely even infinite. Too much of Firefox and its extensions depend heavily on non-HTML features of Gecko.
While it would indeed be enormous, it wouldn't be infinite - after all, they are writing Servo (see https://github.com/mozilla/servo/wiki/Design for details), which will hopefully be parallelizable. Of course, whatever that ends up being might not be Firefox.
That is a great comment, and apparently he is hellbanned for a single massively downvoted Apple snark. Either heavy downvoting automatically triggers a ban or the moderators are smoking something.
If true, "Works best with Chrome" / "Only works with WebKit" etc would simply show that today's web developers are even bigger bozos than the ones from the 1990s.
So, what we learn from history is that people don't learn from history: they just repeat the same mistakes over and over.
I think today's web developers are just a lot younger. I'm 21 and I started developing when IE6 was still relevant (around age 15). The generation after me came to development in a completely different, post-IE6 world. I don't think most of them realize the historical context or even its relevance. It's a vicious cycle. I'm not sure there will ever be a cure for young naivety.
These literal kids just want to build cool stuff and they have more power/freedom/resources than ever before. Give them time and the good ones will learn from their mistakes.
When I started, there was no web. The historical context is quite a bit broader, and it helps to understand what the state of the technology was.
When the browser war was fought (and won):
- Netscape Navigator was actually a pretty terrible piece of software. I dare say they deserved to lose. Towards the end, most Mac users were running IE5 -- it was a better browser (http://en.wikipedia.org/wiki/Internet_Explorer_for_Mac).
- There was diminishing market interest in supporting alternatives to Windows because non-Windows market share (Mac OS X, UNIX workstations) was plummeting. This allowed Microsoft's embrace-and-extend to succeed.
- The web development field was nascent at best. Similar to how many web developers are moving into the mobile space today (and bringing their ideas of how to write apps with them), you had Windows-centric enterprises migrating towards writing web sites (not apps!) and bringing their ideas of how to do so with them.
I very much doubt that the web browser war would happen again in quite the same way. Between the availability of open-source browser stacks, the genuine viability of multiple platforms and vendors (iOS, Android, Mac OS X, Windows, and even Linux), and the established web development community (which would have to be co-opted), I don't think it would be quite so easy for someone like Google to 'win' a browser war and then stagnate indefinitely.
What bugs me about all these articles is that there is no call to action - anyone who wants to defend against a monoculture (whether or not that is a worthwhile goal) should start contributing to Servo[1], Mozilla's experimental next-generation engine, written in their experimental new language, Rust[2].
Good point, and I should probably update the article to include something. Servo is one way, though I fear it's a bit too far out still to make much of a dent in the mobile web for quite some time. Still, it may be a good choice depending on your skills and inclinations.
There are other options too. My main ask of web devs would be to test on mobile Firefox and/or the Windows Phone browser in addition to whatever WebKit browsers you normally would. Actually, for now, testing on Opera would probably be better than either of those; the general consensus seems to be that they're the best wrt standards.
It won't prevent the monoculture, but it will mitigate its effects and help keep the field open for competition.
I think the OP forgot one: the potential for a zero-day vulnerability that hits all current browsers, because they're all descended from a single ancestral code base, and a C++ one at that.
Doh! You are absolutely right, which is sad since I was using a mental model of invasive species when I wrote the post, and even pondered how some plant species do great, smothering and outcompeting the natives, until a fire comes through during their dry season and wipes them and their seeds out.
I honestly apologize for that. My post grew out of a thought I had: that if someone wrote a sim-type game about the Browser Wars, it would be a lot easier to demonstrate to people the ramifications of the forces at play. The only way to figure out a winning strategy would be to analyze the incentive structure and, from that, predict the other players' moves. Sadly, I omitted that background from the post, leaving only the title behind as a hint. I agree that an actual game would be cooler and relevant to a different audience (namely, you and the others I suckered into reading my post because of the title).
That's some pretty heavy fear mongering to excuse being stubborn and outdated.
The entire article is written as though it's a parody, yet we still have to live with their inferior browser and cumbersome ignorance of what people want.
Vague criticism without detail adds nothing to any discussion. Why not try to articulate what, specifically, bothers you about Firefox? What specific ways is it outdated and ignoring what people want?
For a few years Internet Explorer had a monopoly and lots of web sites became dependent on its bugs, quirks, and extensions. A lot of corporations are still stuck due to that.
It was because IE had a HORRENDOUSLY SLOW development cycle with minimal resources committed. It was impossible to get all of those issues fixed in a timely manner. If they can be fixed within a few months, it won't be 5+ years of entrenching people in egregious bugs before a new version comes out that breaks all of those if/elses in everyone's code.
We develop ONLY for Chrome. It (and WebKit especially) has bugs, too, and we report them regularly. But the development cycle is rapid enough that we can put a TODO in the code, file a bug against it, and a few months later we actually get to fix the code because, oh my gosh, it's fixed on Chrome stable.
IE only had a horrendously slow development cycle with minimal resources committed after it became a monopoly. Up until that point it was regularly updated with features and bug fixes.
I doubt that Chrome/WebKit will stagnate if it becomes the de facto standard but I'd rather it didn't have the chance and I'm constantly amazed that so many comments on HN seem desperate for one browser to rule them all.
IE was always that way, actually. Please check their release dates. They were very slow by standards we expect today. After 6, they did suddenly disappear for years, but again, this is a Microsoft problem.
It was also closed source, so nobody who found a bug could just go fix it. You had to ask Microsoft, nay beg Microsoft, to fix it for you. And they didn't.
These effects compound one another. The result is that MAJOR bugs become features.
You're wrong, but I guess you're too young to remember and can't be bothered to learn anything.
The IE team produced 7 versions in six years and there were some very substantial advances, including a new layout engine (Trident). They also did Mac and Unix versions, a mobile version, and a tabbed version for MSN (before IE had tabs).
IE certainly developed a lot faster than anything else on the market in the 1990s, bearing in mind that Netscape took three years to get from 4.7 to 4.8.
Safari entered the market late (2003) and still took the best part of seven years to make it to version 4.
I would (saying that, I haven't even written about the switch to WebKit), but it would be similarly disappointing (albeit understandable).
I think a lot of people project some war between browser vendors that doesn't reflect reality. Speaking only for myself (as a Mozilla employee), I want what I believe is best for the web, not for Gecko / Mozilla (luckily Mozilla's entire goal is to do what's best for the web, so we have joint interests).
Google wants what's best for Google, Apple wants what's best for Apple and Microsoft wants what's best for Microsoft. Since all three are grasping multi-national capitalist corporations with a fiduciary duty to shareholders, that's to be expected.
Since "Mozilla is a proudly non-profit organization dedicated to keeping the power of the Web in people’s hands" then it is able to do things differently.
However, users are basically self-interested and short sighted (basically "how fast does a tab load") so just being morally superior isn't a win ;-)
Exactly, being morally superior gets you nowhere, and being morally inferior doesn't hurt you either. You said yourself that users only care about "how fast a tab loads." Well, if Google ships a browser that's terrible, people will switch to something else. That's why competition still exists! Firefox was in a position to dominate the desktop browser market, but then Chrome came along and ate their lunch! There's nothing stopping Mozilla from regaining that market share, but they're going to have to ship a better browser!
Not sure if you're being tongue-in-cheek, but Mozilla doesn't have any moral superiority over Google, Apple, or Microsoft. At the end of the day, we're talking about individuals here.
Partly tongue in cheek. However, we know that Google, Apple and Microsoft will do what is best for their companies, however they dress it up. Mozilla, being non-profit, does have a better ideological foundation for doing what is best for the web.
Of course, it does come down to individuals in the end, and open source has pointless turf wars, so nothing is guaranteed....
If everyone were switching to Gecko, that would be horrible for the web - it would be a monoculture.
If just Opera switched to it, that would be bad - loss of one implementation is almost always bad - but it would at least maintain some non-WebKit market share in mobile. So less bad but still quite bad.
So yes, I am guessing that Mozilla folks would say it would be negative (I would).
And that monolithic mono-culture was NOT able to evolve to reduce power consumption in a reasonable timeframe. It wasn't even a big priority, since nobody expected x86 to be in a person's pocket, ever. (AFAIK). It took an outside player, ARM, to drive the debate. So it's pretty much a validation of what Mozilla's saying.
Preventing innovation: a gang of hackers makes a new browser that utilizes the 100 cores in 2018-era laptops perfectly evenly, unlike existing browsers that mostly burn one CPU per tab. It’s a ground-up rewrite, and they do heroic work to support 99% of the websites out there. Make that 98%; webkit just shipped a new feature and everybody immediately started using it in production websites (why not?).
Notwithstanding the other points made, how is rapid adoption of new features, and a competitor's ostensible inability to keep up, preventative of innovation?
The issue here is not rapid adoption; it's an effective monopoly being used to lock competitors out of the marketplace by making fast, arbitrary changes and forcing everyone to keep up. This has already happened in a few scenarios despite WebKit having competition, so it will get worse if all the competition slowly dies off. WebKit-only mobile websites are an obvious example, but Chrome-only audio code in HTML5 games is another great one (Chrome's HTML5 audio stack was so broken it caused crashes, so to have working sound in Chrome you had to use their special API; as a result, there are HTML5 games out there that only support Chrome's API or, if you're lucky, have a fallback based on a Flash plugin).
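For illustration (my own sketch, not code from any particular game), the result was audio code that branches on whichever API the browser happens to expose rather than on a single standard:

    // Sketch: prefer the (then WebKit-prefixed) Web Audio API where it exists,
    // since Chrome's plain <audio> element was unreliable; fall back to
    // <audio> everywhere else.
    var AudioContextCtor = window.AudioContext || window.webkitAudioContext;

    function playSound(url) {
      if (AudioContextCtor) {
        // Chrome path: fetch and decode the clip, then play it via Web Audio.
        var ctx = new AudioContextCtor();
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url, true);
        xhr.responseType = 'arraybuffer';
        xhr.onload = function () {
          ctx.decodeAudioData(xhr.response, function (buffer) {
            var source = ctx.createBufferSource();
            source.buffer = buffer;
            source.connect(ctx.destination);
            // Older WebKit builds only had noteOn(), another quirk to branch on.
            source.start ? source.start(0) : source.noteOn(0);
          });
        };
        xhr.send();
      } else {
        new Audio(url).play();   // plain HTML5 audio for other browsers
      }
    }

Games that skipped the second branch are exactly the "Chrome only" HTML5 games mentioned above.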
EDIT: Another great example is the stunt Intel pulled with AVX to intentionally sabotage AMD's ability to compete in the market, as documented here:
http://www.agner.org/optimize/blog/read.php?i=25
Essentially, Intel published a proposed new instruction format, and AMD said 'that looks great, we'll be compatible with it'. After AMD announced this and had started preparing to ship their new chips, Intel suddenly announced that they had changed their instruction format from what they published - after it was too late for AMD to adapt.
The end result was AMD shipping chips that were incompatible with Intel's despite AMD's best effort. Intel knew that as the majority market share holder, developers would prioritize Intel compatibility over AMD compatibility, and AMD would lose.
Writing a browser engine takes time. While you write it, WebKit will be adding new features. By the time you catch up, it will have advanced further, adding more and more quirks (keep in mind you need to implement the bugs as well as the features). Since WebKit isn't a standard, it moves quickly, so your own team must be better than all the developers working actively on WebKit, and then some.
You're confusing innovation in features with innovation in basic architecture.
But the premise of the article doesn't even require innovation in features. It just requires changes that change behavior that sites then depend on and that you have to reverse-engineer.
And reverse-engineering is very time-consuming and slow. If all possible competitors have to reverse-engineer to become viable, that puts in place a huge barrier to competition.
Note that WebKit already behaves this way in various cases: their transitions draft proposal was very vague (as in, what they described could have been figured out in a few afternoons by someone playing with the functionality and their developer docs) and then the editors (Apple employees, note) did nothing to improve it for a few years, forcing everyone else to reverse-engineer WebKit to implement this "standard"...
I think having more than one engine is important to keep everyone accountable and motivated. To me, the idea of developing for the standard is merely idealism until we have either one engine to rule them all or engines that realize the responsibility of consistently implementing the spec starts with them and not the developer.
In the case of having only one engine, obviously the standard suffers but in the case of multiple engines the developers suffer. Which is worse?
I feel like we as developers have done a good job of mitigating the pain of multi-engine compatibility with frameworks, to the point where it's still better to maintain the current direction of multiple engines and code for the implementation rather than the spec, simply for the sake of accountability.
At the same time it would be nice to fork everything off the best candidate and unify things but then we wouldn't have anything to compare it to in order to know it's STILL the best candidate.
Competing browser engines plus Javascript frameworks to normalise quirky behaviour sounds like the best of both worlds. Any bugs (i.e. deviations from the spec) are quickly patched by the framework until fixes can percolate down to the browser engine. Of course, note that I consider writing raw HTML to be 'low level programming' ;)
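A tiny example of what that normalisation looks like in practice (my own sketch, using the classic event-binding case rather than anything from a specific framework):

    // Feature-detect the standard API and only fall back to the quirky
    // engine-specific one where the standard isn't implemented. Frameworks
    // are full of small wrappers like this.
    function bindEvent(element, eventName, handler) {
      if (element.addEventListener) {
        element.addEventListener(eventName, handler, false);  // per the DOM spec
      } else if (element.attachEvent) {
        element.attachEvent('on' + eventName, handler);       // old-IE deviation
      }
    }

The application code calls bindEvent everywhere and never has to know which engine it is running on.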
But there's still room for competition with the same codebase. WebKit browsers differ significantly, and still compete with one another.
Though this post claims to be talking about WebKit, I see something like this:
> There’s a bug — background SVG images with a prime-numbered width disable transparency. A year later, 7328 web sites have popped up that inadvertently depend on the bug. Somebody fixes it. The websites break with dev builds.
And I have to wonder if they're trying to project IE's issues onto WebKit. I used to have a lot of respect for Mozilla, but with MDN stagnating, Firefox being a much less inviting development environment than Chrome (oh how things have changed), and Mozilla talking shit at every turn, I think I have to revoke all respect. Good luck, guys.
The post is discussing how people develop against a single engine as opposed to a standard; it's not 'projecting' and certainly not 'talking shit' about WebKit.
I disagree entirely. This is an inflammatory piece full of nonsensical claims that don't hold up and haven't held up. It makes HUGE assumptions that are directly contradicted in practice. I don't see how anyone could take it seriously. As for not "talking shit" about WebKit, let's look at that one claim:
> Backwards bug compatibility
This is obviously pointed at what IE did after years of maintaining the same bugs that people relied on. The article actually figured out WHY this happened: taking a year to fix a MAJOR issue results in people relying on it. IE was always updated very, very slowly. Nobody else really had this problem, and IE is the one who instilled major flaws as "features" and refused to correct them later on. This very much describes exactly what Microsoft did with IE.
But then, with major handwaving, they try to ascribe that as a problem that will arise if everyone uses the same code base. This is coming on the heels of Opera deciding to use WebKit. It's very clear they are claiming that WebKit will cause the same problems IE was having. This is called talking shit.
The problem is that these problems were caused by a well known phenomenon: taking years to fix major issues. Google has maintained a PHENOMENAL rate of development on their own browser, and they aren't even competing with Mozilla anymore, they're way out in front. Pushes to stable are slower but Canary is updated almost daily. I've seen major bugs and regressions get fixed in HOURS. And I've seen independent players issue patches to fix problems.
This is a picture that is antithetical to IE. It is the POLAR OPPOSITE of the scenario under which this problem initially became so egregious. There is no basis for this claim. And to the point that we "NEED" multiple implementations of a standard to find where things are ambiguous, please feel free to peruse the discussion groups at your leisure. Not only are these ambiguities discussed without referencing Firefox, IE, or anyone else, but they're often resolved with changes in the implementation or, rarely, in the spec. This would be an impossibility if we all worked on 1 code base, clearly.
Sure there is. Consider https://bugs.webkit.org/show_bug.cgi?id=36084 which is unfixed for many years now because of backwards compat issues with non-Web stuff on Mac that uses WebKit.
I can find you more examples if you'd like.
Sometimes WebKit is willing to break compat to make progress, but very often they are not. And if they were not competing with others, I fully expect them to be less willing to break compat: right now they mostly do it when the standards and other UAs force them to.
Remember, my article was about incentive structure. Webkit's evolution so far has NOT been in the context of a monoculture, so pointing to past behavior as evidence for future actions within a monocultural context doesn't hold up. I would guess that at least part of the reason why webkit has been developed so aggressively and well is because they were competing with the other players. I know that much of the energy behind recent spidermonkey development came from v8 kicking our ass in the Firefox 3.6 era.
Look, I am not dissing webkit devs whatsoever. They're smart people doing a great job, and they're getting strong support from their employers because doing a great job really matters. But if we end up in a webkit monoculture, I imagine much of both the intrinsic and extrinsic motivation will disappear. You're running a for profit corporation. Should you continue pouring resources into a game you've already won, or should you shift them towards something else that's going to impact your bottom line? That, and only that, is the connection I would make with IE6.
If that's talking shit, then fine I'm talking shit. But I don't think it's unfair to expect webkit's caretakers to behave like rational humans.
As for needing multiple implementations to find problems, I won't challenge the assertion that people are uncovering and resolving ambiguities on discussion forums, without needing multiple implementations. But how many are found this way? Surely you would agree that some problems are found by trying something out, having it not work, testing it in a different browser, and seeing it behave differently? I assert that many, many problems are found this way. I further assert that ambiguities don't really matter to people if all browsers behave the same way - until you need to do something different (eg make something faster or add a new feature), at which time those ambiguities suddenly become critically important. Enumeration order of JS properties comes to mind here. The Web came to depend on creation order despite it not being specced. But what about indexes? What if you have some of both?
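A small illustration of the enumeration-order point (my own example, not from the post):

    var obj = {};
    obj.b = 'first';
    obj.a = 'second';
    obj[2] = 'third';
    obj[1] = 'fourth';

    var keys = [];
    for (var k in obj) keys.push(k);

    // for-in order was unspecified, but the web came to depend on it anyway.
    // V8-style engines report "1,2,b,a" (integer-like keys ascending, then
    // string keys in creation order); an engine that used pure creation order
    // would report "b,a,2,1" instead.
    console.log(keys.join(','));

With a single engine, whichever of those behaviors it happens to ship silently becomes the "standard", and nobody notices the ambiguity until it bites them.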
> Google has maintained a PHENOMENAL rate of development on their own browser.
Right, this is Google now, not Google of 2018 or Google in 2030. Nothing prevents Google from saying, oh well this sucks... Let's kill Chrome. Or shift manpower to something else, slowing down rate of development.
Also, I don't understand what your point is; this argument was about WebKit, not Chrome.
I don't want to start a browser flame war, but for me these are clear victories and explanation is unnecessary. Their combination is enough for me to say that Chrome > Firefox, and this is coming from someone who used Firefox religiously until Chrome completely changed the game.
Usability (configuration, ease of migration of configuration, etc): win
Developer tools: win+++++
Extensibility: win
Speed: win (JS + speed of rendering, though this is a lot closer than it used to be)
System footprint: win
Availability of experimental features in beta: win (though from time to time Firefox does some crazy awesome shit in beta builds, Chrome is more consistent)
I've long said the only browser remotely close to Chrome is Firefox, but for me, Chrome is the clear leader. It's unfortunate to see Mozilla behaving this way in public. I take Opera's choice of Chromium as a validation of my assertion, and I really believe that if they had chosen to use Mozilla's software instead of Google's, Mozilla would be fairly silent right now.
You really shouldn't list subjective things like Usability and Extensibility, because for me Usability is the same and Extensibility is laughably better in Firefox. Also, I don't think Developer tools are that much bigger a win.
I understand that you don't want to start a browser flame war, and neither do I, but:
>Configuration
Sometimes Chrome is easier to configure (with Firefox you need to enable click-to-play in about:config, while Chrome has a checkbox somewhere in the "advanced" menu), and sometimes it's harder (try configuring a proxy in Chrome).
>Ease of migration
I haven't tested it myself, but I know that Firefox can import your history/bookmarks from Internet Explorer and Chrome.
If you mean between installs of the same browser, Firefox Sync is on par with Chrome for me.
>Developer tools
Chrome has better tools out of the box, but you can install Firebug in Firefox, which is mostly on par.
>Extensibility
You're talking about addons? Because Firefox's addons can be way more powerful than Chrome's.
>System footprint
Firefox usually uses less memory than Chrome (but it's more prone to memory leaks).
As for CPU, I don't know about Chrome, but my Firefox installation is currently using about 1% of it.
There is some merit to this claim. Firefox on my 4GB, 1GHz laptop is somewhat sluggish (it could be down to several factors), especially when it comes to Flash. Chrome flies, but Nightly is actually on par with Chrome.
If developing to the standard gives you the same result on each engine, then the number of engines does not matter.
What if I develop to standard but only test in Webkit. My page is broken in Gecko though. Whose fault is that, Webkit's, Gecko's or mine? If it's mine then why? I did everything according to standards.
So in reality you don't develop against standards (alas), you develop against their implementations, and having multiple engines doesn't help at all.
In reality, most web developers have no clue about what the standard says in edge cases, which is just fine: neither do most other people.
But the result is that it's very rare to find cases that are really developed "to standard".
Even worse, though, on mobile right now people aren't even trying to develop to standards. UA sniffing, locking out non-WebKit UAs, and use of -webkit-prefixed features even when standard alternatives are available are rampant and purposeful. And when you ask these people to develop to standards, they just laugh at you.
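To be concrete (an illustrative sketch of my own, not a quote from any real site), the difference between that sniffing and developing to the standard looks like this:

    // What too many mobile sites do: gate the whole site on the engine name.
    if (!/WebKit/.test(navigator.userAgent)) {
      document.documentElement.innerHTML =
          'Please use a supported (WebKit-based) browser.';
    }

    // What developing to the standard looks like: detect the capability,
    // accepting either the standard property or a prefixed one.
    var style = document.documentElement.style;
    var supportsTransitions =
        'transition' in style ||
        'webkitTransition' in style ||
        'MozTransition' in style;

The first version locks out any new engine regardless of how capable it is; the second keeps working no matter who implements the feature.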
In reality you develop against multiple implementations that attempt to adhere to the same standard; where they diverge is where the standard or the implementations need to be fixed, differences in assumptions agreed on, and ambiguities clarified. That was the point of the blog post?
What do you mean MDN is stagnating? There's a constant upkeep going on there, which is difficult considering how fast the web moves but it's constantly being updated.
It's becoming less usable (see, for example, JS classes: prototypes were split onto separate pages in some places but not in others), and the information on it isn't keeping up with the standards at all. It might be fine for people targeting browser builds from about 24 months ago, but it is woefully inadequate for anyone who wants to use something newer, whether it is in a draft state or already considered stable.
A quick example: look at document.cookie. They are referring to DOM2 information. It's undergone some (slight) changes in HTML5. Nothing is referenced, even though that section is relatively stable and marked as safe for implementation. That's a 2000 spec vs a 201x spec. Nobody's even gone in and pointed out that this has changed in HTML5.
For some weird reason this was removed from the front page very quickly. I have seen submissions with fewer votes and less discussion stay on the FP much, much longer than this.
Open source counters most of the issues around monoculture. Remember that the first browser wars were fought between two closed-source products.
In fact, I think that a single OSS project is far more efficient than attempts at standardization across competing products when it comes to user-beneficial innovation.
Look at the open-source UNIXes. Nearly all the value-add has come from cross-pollination of "proprietary" and not-yet-standardized enhancements, which are consumed by users and application vendors targeting those platforms.
You're basically saying there should only be one web browser and no one should try alternate approaches to common problems unless they're starting with the same codebase.
Being open source doesn't change the fact that it's the same codebase.
Why should there only be one browser engine? I'm a web developer and I hate cross-browser testing/compatibility; I prefer to use Chrome for its devtools. I would hate for there to be only one browser in the world.
No, I'm saying that open source solves most of the issues with a monoculture, while also being more efficient than vendor standardization when it comes to pushing forward innovation.
How does open-source counter the issue of monoculture? If there is a monoculture and it's open source it still has the same problems. You can fork but there's a monoculture so your fork is irrelevant. You can submit a patch but there's a monoculture so your patch doesn't get accepted.
Open source isn't a magic bullet; try forking Chrome and see how far you get without prominent adverts on the most popular page on the internet and without millions invested into packaging your browser with Flash/Java/etc. updates.