
Chrome used an already existing rendering engine and improved on it.



More specifically, Chrome used Safari's rendering engine, and Safari used Konqueror's rendering engine, because even in 2001, starting a browser engine from scratch seemed like too much work.


I would say that in 2001, starting a browser engine from scratch was more work than today, because (as other commenters have noted) since then the specifications have become more robust and the "tag soup" sites not following the specs have become fewer.


https://meiert.com/en/blog/valid-html-2021/

Most sites still don't have valid HTML.

EDIT: my link is old. 98% of the top 100 sites had invalid HTML in 2021; in 2022 we managed to hit 100%. Great job, everyone!

https://meiert.com/en/blog/valid-html-2022/


Well, the W3C validator also has issues which do not help.

For example, Wikipedia doesn't validate [0] because its CSS uses "aspect-ratio: 1", which looks fine to me according to the spec [1] (see the snippet below).

[0]: https://jigsaw.w3.org/css-validator/validator?profile=css3sv...

[1]: https://w3c.github.io/csswg-drafts/css-values-4/#ratio-value
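
For reference, the declaration in question is just something along these lines (selector made up for illustration, not Wikipedia's actual rule):

    .thumbnail {
      /* a single number is a valid <ratio>, equivalent to 1 / 1
         per css-values-4, so the validator's complaint looks like a bug */
      aspect-ratio: 1;
    }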


“Invalid HTML” is completely irrelevant. HTML parsing is defined exhaustively; “parse errors” are purely “you probably made a mistake, but I’ll keep going” indications, and all browsers will do the same thing.
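
As an illustration, take a fragment like this: every browser repairs it the same way (the unclosed <p> is closed implicitly, and the misnested <b>/<i> pair is fixed up by the adoption agency algorithm, if I'm reading the parsing spec right):

    <!DOCTYPE html>
    <title>tag soup</title>
    <p>first paragraph
    <p>second paragraph (the first one gets closed implicitly)
    <b>bold <i>bold italic</b> italic?</i>

All of those are parse errors, but the resulting DOM is fully specified; the last line comes out as <b>bold <i>bold italic</i></b><i> italic?</i> in Chrome, Firefox and Safari alike.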


And that's why we can't have nice things, er, why we will never have valid HTML on a significant percentage of websites: browsers are historically very lenient with HTML errors (because otherwise they wouldn't be able to show 90% of all sites), and no one uses HTML validators to check if their HTML actually conforms to the spec. It's a chicken and egg problem really: the browsers can't be more strict because there are so many broken sites, and the sites won't be fixed because the browsers aren't strict enough.


I did a quick check, and most of the errors it reports are "unknown attribute" or "element such-and-such not allowed here". Those "errors" would be allowed anyway for forward compatibility, and aren't really a big deal.
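
For example (attribute name made up for illustration), the validator flags the first div, but browsers carry unknown attributes along unchanged either way; the second spelling is the sanctioned one and validates:

    <!-- reported as an error: unknown attribute -->
    <div sidebar-state="collapsed">…</div>

    <!-- the data-* escape hatch, which is valid HTML -->
    <div data-sidebar-state="collapsed">…</div>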

IMO the validator's definition of "invalid HTML" is just too strict; it should only count parse errors and completely non-sensible things. And the specification is also too strict at times; on my own website I have "Element style not allowed as child of element div in this context." This is because on some pages it adds a few rules that apply only to that page and this is easiest with Jekyll. I suppose I could hack around things to "properly" insert it in the head, but this works for all browsers and has for decades and why shouldn't it, so why bother?
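
Concretely, the pattern the validator objects to is roughly this (my approximation of the Jekyll output, not the actual markup):

    <div class="post">
      <style>
        /* page-specific rules; browsers apply them from here just fine,
           but the spec only allows <style> where metadata content is
           expected, i.e. in the <head> */
        .post img { max-width: 40rem; }
      </style>
      ...
    </div>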

If the specification doesn't match reality, then maybe the specification should change...


This does not surprise me at all, with the "everything must be a web app" mindset, many web frameworks still treating HTML mostly as a string, and tags being used in semantically inappropriate ways. It is exactly as I thought: the share of invalid HTML has become even worse. Most web devs these days probably do not even check their websites for HTML validity, because achieving it with the frameworks they chose is hard or impossible.


I think you severely underestimate "frameworks" if you think generating valid HTML is somehow harder with them than without.


Moreover, tools are better too. Even if one is still using only vim on the terminal, plugins work better, screens are bigger, code compiles faster, and the internet has better resources for everything from programming and communicating with your team, to just finding music that helps you stay productive, for example.


The Internet also has more social networks to keep you away from being productive.


In 2001 the entire HTML+CSS spec was probably smaller than some individual CSS modules today, like CSS Color.

Today the complexity lies not in the robustness of the specs, but in the sheer number of them and their many interactions. I mean, just distance units... there are over forty of them.
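
A small and far from exhaustive sample of what counts as a valid <length> today:

    width:   30ch;    /* advance width of the "0" glyph */
    height:  100dvh;  /* dynamic viewport height */
    padding: 2cqi;    /* container-query inline size */
    margin:  1rlh;    /* line-height of the root element */
    gap:     4Q;      /* quarter-millimetres, really */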


The problem was solved by standardizing tag soup. Now just about any tag soup will get displayed the same way according to the new specs.


Today's HTML spec defines precisely what will happen for any input—feed modern browsers all the tag soup you'd like!


And Firefox is going all the way back to Netscape Navigator...


In the end, it all leads back to Konqueror. Pretty sad to see that piece of software go, considering its historical significance.



