> So instead of [building the browser one feature/spec at a time], we tend to focus on building “vertical slices” of functionality. This means setting practical, cross-cutting goals, such as “let’s get twitter.com/awesomekling to load”, “let’s get login working on discord.com”, and other similar objectives.
Seems similar to how Wine is developed: instead of just going down the list of API functions to implement, the emphasis seems more on "let's get SomeProgram.exe to run" or "let's fix the graphics glitch in SomeGame.exe". Console emulators (especially of the HLE variety) seem to have a similar flow.
I really like this approach because it’s a low-effort way to prioritize development.
I did this with the CPU for my GameBoy emulator: I picked a game I wanted to get working and just kept implementing opcodes each time it crashed with an “opcode not implemented” error.
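For illustration, a rough sketch of that loop (the `Cpu` struct here is hypothetical and the opcode values are chosen for illustration, not copied from a real Game Boy opcode table):

```cpp
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <vector>

struct Cpu {
    uint16_t pc = 0;   // program counter
    uint8_t a = 0;     // accumulator
    std::vector<uint8_t> memory = std::vector<uint8_t>(0x10000, 0);

    void step() {
        uint8_t opcode = memory[pc++];
        switch (opcode) {
        case 0x00: /* NOP */ break;
        case 0x3C: a++; break;  // INC A
        default:
            // The "crash" that tells you which opcode to implement next.
            fprintf(stderr, "opcode not implemented: 0x%02X at 0x%04X\n",
                    static_cast<unsigned>(opcode), static_cast<unsigned>(pc - 1));
            exit(1);
        }
    }
};

int main() {
    Cpu cpu;
    cpu.memory[0] = 0x00;  // NOP
    cpu.memory[1] = 0x3C;  // INC A
    cpu.memory[2] = 0xCB;  // not handled yet: prints the error and exits
    for (;;)
        cpu.step();
}
```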
It's also a great approach for porting games: replace all platform-specific code with assert(false) until it compiles, then fix the asserts as you encounter them until everything works.
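A minimal sketch of what that looks like, assuming hypothetical platform functions rather than any real game's API:

```cpp
#include <cassert>
#include <cstdint>

struct Window { int width = 0; int height = 0; };

Window* platform_create_window(int width, int height) {
    // TODO(port): was a Win32 call in the original codebase.
    assert(false && "platform_create_window not ported yet");
    (void)width; (void)height;
    return nullptr;
}

void platform_play_sound(uint32_t sound_id) {
    // TODO(port): was DirectSound; replace with the new platform's audio API.
    assert(false && "platform_play_sound not ported yet");
    (void)sound_id;
}

int main() {
    // The game compiles and links immediately; each assert you hit at runtime
    // tells you which platform function to port next.
    Window* window = platform_create_window(640, 480);
    (void)window;
}
```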
This approach works better for Wine where the Windows binaries are a fixed target.
On the web, you may get Twitter's feed rendering acceptably, and then two days later they ship an insignificant redesign that happens to use sixteen CSS features you don't have and everything is totally broken again.
The point isn’t that “getting X to work” is a one-and-done job. Rather, you’re using major websites as indicators for what features to target next, because those are largely one-and-done.
Of course you are correct that supporting Twitter or any service is a moving target, so long as it changes. But that doesn't mean specific bugs can't be captured in a test case.
At one point, he deletes half the HTML file to isolate where in the site the problematic code is; in a way, he's doing a kind of binary search. After a few iterations of this, he comes up with a very small case that exhibits the problem he's trying to solve.
It's clear he knows his way around the codebase and where to make changes, but isolating these test cases is probably as important. And presumably if you fixed enough of these issues (while following the specs), 99% of the modern web should work just fine.
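The same idea can be semi-automated. Here's a hypothetical little driver that bisects a file by halves and asks you whether the bug still reproduces; the file name and workflow are assumptions for illustration, not necessarily how it's done in the video:

```cpp
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: bisect <file>\n"; return 1; }

    std::ifstream in(argv[1]);
    std::vector<std::string> lines;
    for (std::string line; std::getline(in, line);)
        lines.push_back(line);

    auto write_candidate = [](std::vector<std::string> const& candidate) {
        std::ofstream out("candidate.html");
        for (auto const& line : candidate)
            out << line << '\n';
    };

    auto still_reproduces = []() {
        std::cout << "Does candidate.html still show the bug? [y/n] ";
        std::string answer;
        std::getline(std::cin, answer);
        return !answer.empty() && answer[0] == 'y';
    };

    // Try throwing away the first half, then the second half; keep whichever
    // still reproduces the bug. Stop when neither half does on its own.
    while (lines.size() > 1) {
        size_t half = lines.size() / 2;
        std::vector<std::string> first(lines.begin(), lines.begin() + half);
        std::vector<std::string> second(lines.begin() + half, lines.end());

        write_candidate(second);
        if (still_reproduces()) { lines = std::move(second); continue; }

        write_candidate(first);
        if (still_reproduces()) { lines = std::move(first); continue; }

        break; // as small as this simple pass gets
    }

    write_candidate(lines);
    std::cout << "Reduced to " << lines.size() << " line(s) in candidate.html\n";
}
```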
> On the web, you may get Twitter's feed rendering acceptably, and then two days later they ship an insignificant redesign that happens to use sixteen CSS features you don't have and everything is totally broken again.
this is not directed at you, but at this attitude which is very common and which I see all the time: everyone is lightning fast to come up with reasons that something won't work.
why?
why do people say things without understanding that almost any given problem has subproblems, and that those can be solved?
in humans, negativity is always just under the surface, and positivity is often buried deeply, and I do not understand this. I don't think I ever will. people just love to be contrarians.
It's really tiring. It's everywhere on HN, I deal with it every day at work, and everywhere else I look.
Instead of it being criticism, the commenter could've seen it as a positive. Every time a site changes, you discover functionality you haven't implemented yet. Over time, you've implemented more and more. It's progress. Progress is good. Choosing the negative interpretation is so endemic and arbitrary and simply unnecessary.
If you have infinite time and resources with perfect communication/understanding you can solve a lot of engineering problems. No one has that. This is where the original quoted claim from the article comes from, "building a web browser is impossible". That's encoding a lot of experience and reality of the Brobdingnagian challenge of building a web browser from scratch on 2023's web.
It's not negativity to point out a downside to an approach to a particular problem. It's potentially useful to get feedback on a development approach. Constructive criticism is very important in engineering projects because it encodes assumptions of limitations we all have.
A positive statement like "oh those sub problems are solvable!" doesn't really provide any help. No shit the problems are solvable in a perfect world. Such statements aren't even necessarily constructive because they don't offer any analysis or advice. It smacks of toxic positivity[0].
not all positivity is toxic positivity, you know. I wasn't even positive, I was just anti-negative. being against negativity is not the same as being positive at all.
spouting out a problem you foresee being revealed after another problem is solved is not constructive criticism, it is reactionary and attention-seeking.
my comment is about comments like yours; unlimited time and energy to mention anything that makes what I say sound bad, improbable or difficult, and zero time or energy to even entertain the idea that my point of view is valid, and worth considering.
It's just an effort to enforce social conformity. A specific case of this type of browbeating may not be helpful, but on the whole it's often positive for the group to be mostly uniform. It's also often negative! More a value-neutral standard human tribal grouping behavior than anything.
With regards to the negative interpretation having positive value: yes, for example when working in a company that values ‘not failing in particular cases’ higher than ‘working in general, with room for improvement’.
For example, in a conservative corporation where a given project requires a ‘go’ from several departments where the success of the project does not give an immediate advantage to those departments, but a failure will require them to explain why they didn’t ‘catch it in review’.
Sure, but Wine is a tool where if 90% of the stuff a user wants to run in it works, it's still great. Say, if you use it for gaming, you can play most of the games and only boot into Windows every few months, once you hit one you can't.
But that's not how you use the web. If 90% of the pages worked in a browser I wouldn't use that browser, ever, because chances are I'd hit one that didn't at least once every few days.
It depends on the types of bugs and inconsistencies that show up in practice. If a page (or worse, the whole browser) crashes, that's a problem. If a column of ads doesn't scale right and gets bumped down to become its own lonely row below the main content, that's kind of ugly but almost a feature.
> But that's not how you use the web. If 90% of the pages worked in a browser I wouldn't use that browser, ever, because chances are I'd hit one that didn't at least once every few days.
This is a fact that Microsoft understood and pushed when they tried to get people to build pages for IE instead of working across both Navigator and IE.
I come across a variety of sites that show unusual behaviour every day. And as others pointed out, the page is mostly degraded, not unusable.
There are many grievances when using the web today; some are down to the lack of a settled CSS spec, others to the complete and utter disregard for browser compatibility. I'm not going to tackle the monopoly of Chrome as a browser. However, there are a number of specific uses of this collection of ever-changing specs and implementations that eventually lead to page breakage in every new browser release.
The web is complex to tackle because everyone seems to think that they've a better idea of what a page is. Some of it is fair, some of it unfair. Nevertheless I would take this approach any day.
I completely agree. The people advocating for this type of development style are in another universe. Web is incredibly fragile.
There's a vast difference between a page being degraded by all browsers in a consistent manner per the W3C specs (especially for the critical parts of a webpage, such as JS execution or malformed HTML) and the damn thing breaking in such a unique way that the web devs will never be able to fix the page for this new browser while keeping it working the same for the others. The worst case is that security is compromised, and that is a very long list of things to implement in both the HTTP layer and browser behavior before you even get started trying to render a page.
Web devs shouldn't be fixing pages for individual browsers but instead using features conservatively and degrading gracefully where features are not available. Browser bugs should be fixed in the browser.
And for 99% of websites security doesn't matter at all because it's just a one-off visit to read an article or look at some funny pictures without any user account.
Web browsers tend to degrade a webpage rather than fail to load it. Given that every day using Chrome I'll come across a website that is having issues with rendering, it should be acceptable.
No wise old pro who was actually worth listening to I ever met in any field ever says things like "my young Padawan". Saying that makes you look like two kids in a trenchcoat trying to pretend to be an adult.
Which in this case is not inconsistent with this apparent misunderstanding of what moving goalposts means.
To use your own silly words, targeting the features that are the most used is literally one way to choose challenges wisely.
In general the Windows binaries are not really a fixed target for Wine either - many applications (e.g. multiplayer games) have online components that require you to run the latest version. And even for others, people will demand that the latest version work. Fixing the things that are actually used first is still a good way to prioritize - it's not like you stop implementing functionality once one target program works, you just move on to another one. And if you run out of interesting programs, then you can look at 100% API coverage.
yep, even better in unmaintained consoles, where the popular demand for concrete binaries is pretty much set in stone and you can even aim to complete the entire library of software binaries over time - as they're usually in the hundreds or low thousands rather than millions or billions
however the aim to build a reasonably sized and not ossified-to-previous-spec web browser is very interesting, especially if it's well engineered and made to be portable
> Console emulators (especially of the HLE variety) seem to have a similar flow.
Good insight.
I was going through this same spiel in my head the other day.
It's a flow that if properly managed can provide a good feedback system. It provides the developer positive feedback and at the same time successful milestones.
Say I'm building an emulator for a simple architecture with a few dozen opcodes...
"Alright. Let's start. Where do I start? How about NOP." So you implement NOP. You write some tests for it. Maybe you build a pretty printer into your opcode and you test it on disassembling a single byte file with a single NOP opcode.
Suddenly you have a working disassembler! It's obviously an artificial toy, but it works.
Maybe next you add an INC instruction. Add some tests. You'll need registers...
Build a simple one-INC-opcode binary file. Maybe add an executor in addition to a disassembler. Suddenly you've got registers working. And if you add another INC opcode byte, you can see your emulator changing behavior based on real external input!
And so on. It's an interesting flow, you're right.
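To make that concrete, here's a minimal sketch of that first milestone: one opcode table shared by a disassembler and an executor, starting with just NOP and INC. The names and byte values are made up for illustration, not taken from any real instruction set:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

enum : uint8_t { OP_NOP = 0x00, OP_INC = 0x01 };

struct Machine {
    uint8_t reg = 0;  // the one register INC needs
};

// "Pretty printer" half: turn bytes back into mnemonics.
void disassemble(std::vector<uint8_t> const& program) {
    for (size_t i = 0; i < program.size(); ++i) {
        switch (program[i]) {
        case OP_NOP: printf("%04zu  NOP\n", i); break;
        case OP_INC: printf("%04zu  INC\n", i); break;
        default:     printf("%04zu  ??? (0x%02X)\n", i, program[i]); break;
        }
    }
}

// Executor half: the same opcodes, but now they change machine state.
void execute(Machine& m, std::vector<uint8_t> const& program) {
    for (uint8_t opcode : program) {
        switch (opcode) {
        case OP_NOP: break;
        case OP_INC: m.reg++; break;
        }
    }
}

int main() {
    std::vector<uint8_t> program = { OP_NOP, OP_INC, OP_INC };  // tiny "binary"
    disassemble(program);

    Machine m;
    execute(m, program);
    printf("reg = %d\n", m.reg);  // two INCs -> 2: behavior driven by the input bytes
}
```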
I don't think that link supports your comment. It says that vertical slices (at least as described by that article) are generally unrealistic in game dev.
You're right! I skimmed the first paragraph, as I was mostly just looking for a description of vertical slices in games. The rest of the content does not in fact validate my statement.
You're right that the "vertical slice" approach has been used very successfully in games development, though. Mark Cerny[0] has been evangelizing[1] the idea that preproduction isn't over until you have a "publishable first playable" (a.k.a., a complete vertical slice) of your game.
Basically, you shouldn't switch from "preproduction" to "production" until you can show (A) here is actual gameplay, (B) it is, in fact, fun, and (C) you know how to actually implement it. Until you can demonstrate those things, how are you supposed to estimate how long it will take to build? Or that, once you're done, whatever gameplay mechanics you dreamed up are actually entertaining to a player?
[0] arcade game programmer, producer / studio exec who got Insomniac and Naughty Dog to scale beyond their founders, & eventual lead architect of the PS4 and PS5
Do we really pretend this is an architecture decision? This is the classic Agile "management wants to show results fast for a reward", which leads to huge tech debt building up, as layers are not properly designed and reusable.
It’s neither. There’s no management or promotions or really any incentive to do this other than it’s fun. It’s also not an architectural decision, just how they as a team decide what to work on.
But how does developing a half-baked browser that targets some websites for fun refute that building a browser is impossible? Doesn’t it provide another example that it is impossible, at least for this team?
Well, it takes time to make something big. And, one way to do it is to choose end-to-end functionality. Idk, it doesn’t seem so controversial to me. I’d wait and see before calling it half-baked.
A vertical slice of the properly integrated features needed by some practical use case is certainly more efficient and effective than implementing the whole of a large API (e.g. some fancy recent, unproven and unstable CSS module) "in a vacuum", getting numerous rare cases wrong, and struggling to test the new features.
This is a common misconception. I practice TDD at work and side-projects and find it very productive. TDD is best done with end-to-end tests (or automated integration tests, whatever you wanna call it). You write an end-to-end test (I give input A to the entire system, expect output B), first the test fails (because it's unimplemented) and then you implement and it passes.
It works because your tests then become the spec of the system as well, whereas if you only write unit tests there is no spec of the system, only of individual modules of your code. That's not as useful, because if you refactor your code and change a module you need to rewrite its test. In TDD your tests should never be rewritten unless the spec changes (new features added, new realizations of bugs, etc.). This way, "refactor == change code + make sure tests still pass".
You're of course free to write unit tests as well, when you see fit, and there is no need to target a religious X% coverage rate at all. I think coverage targets are cargo-culted, unnecessary, time-consuming and religious. The crucial thing is, while you're writing new code (i.e. implementing a new feature or solving a bug), you need to write an automated test that says "if I do X I expect Y", see it fail, then see it pass, such that "if I do X I expect Y" is a generic claim about the system that will not change in the future unless the expectation of the system changes.
In other words, the example in this comment chain: "run a game, see 'opcode X doesn't exist', implement X, rinse repeat" is actually how TDD is supposed to work.
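Here's a sketch of an end-to-end test in that style, using a toy emulator as the "entire system"; the opcode values and function names are invented for illustration:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// The whole system under test: feed it a program, get the final register value.
uint8_t run_program(std::vector<uint8_t> const& program) {
    uint8_t reg = 0;
    for (uint8_t opcode : program) {
        switch (opcode) {
        case 0x00: /* NOP */ break;
        case 0x01: reg++; break;  // INC
        }
    }
    return reg;
}

// End-to-end test written first: "if I give input A, I expect output B".
// It fails while INC is unimplemented, passes once it is, and survives any
// internal refactor because it only talks to the system's boundary.
void test_inc_twice_yields_two() {
    std::vector<uint8_t> program = { 0x00, 0x01, 0x01 };  // NOP, INC, INC
    assert(run_program(program) == 2);
}

int main() {
    test_inc_twice_yields_two();
    return 0;
}
```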
> Console emulators (especially of the HLE variety) seem to have a similar flow.
And emulators that have gone this path have all regretted it, because they end up making hacks to make <popular game> work, because everyone simply wants to play <popular game>. Dolphin is still paying the price of that method years down the line. Project64 took years to unfuck themselves up, ZSNES is forgotten and overtaken by many more that have done the proper thing.
So, sure, you can get some initial usage. But making a browser isn't about being able to open twitter.com.
I feel like you may have misunderstood what kling is saying. He's not saying "we will cut any corner to get Twitter to load", he's saying "we look at what it would take to get Twitter to load, read through the relevant specs, and try our best to implement the required features cleanly and correctly".
Loading Twitter (etc.) is not really the goal, it's more of a prioritisation mechanism for tackling a huge spec. Actually getting Twitter to run is a nice reward for all that hard work though, and a series of such rewards keeps the contributors motivated in the marathon that is building a browser.
No, I understand what Andreas is saying. But the reality is, when you read and implement specs for a specific website, you end up cutting corners, even accidentally. Maybe Twitter relies on a particular behavior of fetch() that was screwed up in Chrome 97 and has had to be kept for backwards compat this entire time. Maybe it uses some CSS that never got properly documented or specified.
By targeting a single website, you end up accidentally writing those site-specific fixes _into_ your implementation. You only realize it's fucked up because you visited Twitter, but maybe it screws up another site. Maybe something else depends on a quarter of that functionality, and you've accidentally broken it.
It seems that GP already addressed your concern: "read through the relevant specs, and try our best to implement the required features cleanly and correctly"
Of course some errors will be made along the way. That's to be expected, regardless of the approach taken.
Maybe so. Hopefully you’d find that out while looking at the spec for that particular function. If not, you may have to rewrite some code as you learn more. Life goes on
> ZSNES is forgotten and overtaken by many more that have done the proper thing.
When ZSNES did its thing, which, just to remind people, was back when Pentium IIs ruled the roost and CPUs topped out at 450 MHz, doing things the proper way was not a real choice, because the proper way needed 3x the CPU power of the ZSNES way that actually worked.
Dolphin became popular because it could actually play games, and I bet if they had instead spent an extra 3 or 4 years working on code that was "correct" without releasing anything, well odds are they wouldn't have such a large following and would not have attracted so many contributors.
Users do not benefit from "perfect code" that they never get to use because it is still in development.