Nolan Lawson's recent talk on CSS Runtime Performance[1] shows some pretty amazing styling performance wins for Shadow DOM. It's nice we can do more of this easily!
I think this should all work outside <template> tags. Re-remembering how they worked slowed down my comprehension some. The MDN article[2] was a solid refresher.
Thanks for the shout-out! I think I mention this in the talk, but note that YMMV. I designed that benchmark as a kind of "worst-case scenario" where shadow DOM / scoped styles really show a benefit. Depending on your CSS rules, DOM size, and amount of thrashing, the perf benefit could be small to large.
Also, it's still possible to shoot yourself in the foot, especially if you have a large/complex stylesheet repeated across multiple shadow roots. (Not because of the repetition – that's optimized in browsers [1] – but rather because of the number of DOM nodes affected.)
That said, I still think the perf benefits of shadow DOM have been undersung. And Declarative Shadow DOM makes it way more useful.
Does this mean we'll finally get Template-Instantiation[0], providing a native way to do HTML templates in the browser?
There's been a bunch of hand waving that the reason we still don't have this is because they needed a solution for encapsulation and a solution for SSR. Now we have both. Can we please finally deliver on something people have been asking for since the mid 2000s?
Declarative shadow roots and Template Instantiation are very different proposals with no overlap.
The only relation here is that with critical features like this and shadow selection done, there will be more time to work on Template Instantiation.
I know; as I highlighted, it appears to me that all the hurdles for Template Instantiation have been cleared, and it's about time we started to see actual movement and traction on this.
It's been in the pipeline since 2017 with little major movement, while a bunch of things developers didn't actually ask for have managed to take precedence. Whenever I've pressed for reasons, it gets hand-waved away with the claim that they need solutions for SSR and encapsulation. We have both now.
Honestly should have shipped day one with Web Components, would have made it far more compelling. It has always felt like such a big miss to me by the standards bodies that this didn't get attention.
I was always really bullish on Web Components since the first whispers started...
Then I wanted to start building non-JS websites. And suddenly "Web" Components just weren't an option. Now I've dug even further into SSR, and the shortcoming is painful. I don't want to use Next.js, I want to use HTML. I want standards.
Maybe in 4-5 years this will finally be a solved problem...
> Maybe in 4-5 years this will finally be a solved problem...
Of course it won't. And of course it won't be HTML.
Thing is, WebKit wanted to start with HTML and with all things declarative [1], while Google wanted to "move fast".
Result? They need a few dozen more standards to patch deficiencies in the WC design, and most of those standards involve piling more and more JavaScript on top. See this 2022 status report: https://w3c.github.io/webcomponents-cg/2022.html
So now we are in year 12 of this catastrophe, and you are expecting it to end in another 4-5?
If you want to build non-JS sites, web components won't save you. They require JS. They cannot work without JS.
You're much better off using Next, Nuxt, SvelteKit, SolidStart, Marko... Anything that doesn't use them.
But won't support for declarative shadow DOM solve the web-components-without-JS problem? I mean, maybe I am wrong, but with this and CSS, it looks as if we can have custom elements in our HTML without any JS!
There's still no agreed-upon "template language" that is markup only in HTML. (That's what the above comments refer to as "Template Instantiation".) So while that template tag looks juicy, it still doesn't actually do much of anything without JS. Except possibly for the small set of web components where the internal markup is exactly the same for every instance of the web component, maybe.
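To make that static case concrete, here is a minimal sketch of a declarative shadow root that works with zero JS. The tag name my-card is made up; the standardized attribute is shadowrootmode (early Chromium builds shipped it as shadowroot).

```html
<!-- No JavaScript involved: the parser attaches the shadow root itself. -->
<my-card>
  <template shadowrootmode="open">
    <style>
      /* These selectors match only inside this shadow root. */
      h2 { margin: 0; font-size: 1.2rem; }
      ::slotted(p) { color: #555; }
    </style>
    <h2>Card title</h2>
    <slot></slot>
  </template>
  <p>Light-DOM content projected into the slot above.</p>
</my-card>
```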
Almost all examples of Shadow DOM use Custom Elements, but that’s not required. I think this conflation harms adoption of Shadow DOM.
As an example of what you can use the Shadow DOM for - it works fine as the node to render a React app/component in. So if you need a couple of React components on a page to not affect each other’s style, you can do ReactDOM.createRoot(myShadowRoot) and they’re fully encapsulated.
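A rough sketch of that pattern, assuming React 18+; MyWidget and the widget-host element are hypothetical:

```js
import React from "react";
import { createRoot } from "react-dom/client";

// Hypothetical component.
function MyWidget() {
  return React.createElement("button", null, "Click me");
}

// Hypothetical host element already on the page.
const host = document.getElementById("widget-host");
const shadow = host.attachShadow({ mode: "open" });

// Styles appended to the shadow root affect only this widget.
const style = document.createElement("style");
style.textContent = "button { padding: 8px 12px; }";
shadow.appendChild(style);

// createRoot accepts a DocumentFragment, which a ShadowRoot is.
createRoot(shadow).render(React.createElement(MyWidget));
```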
Honestly, I had it the other way for quite a while: Shadow DOM harmed my adoption/excitement of Custom Elements for a long time. :) I spent quite a while back in the day with x-tags and other Web Component things, trying to avoid Shadow DOM, because I wanted everything on the page & not nested in subdocuments.
These days I'm more open to anything. It's definitely a bit exciting every time I need to parse something with Shadow DOM; it's a lot more work to really understand what the page is. The last thing I had to parse was chrome://gpu, and it took 3 hours instead of 1 hour to do the job, but it could have been worse. At least it was possible. Sometimes though, with a closed shadow root, the page has gained a resistance to userscripts (as this commenter reports[1]), and this seems purely evil/bad/totalitarian, in a distinctly un-web way!
Weirdly, Web Components and Shadow DOM are both tools in the same effort, but Web Components is trying to get everything clearly onto the page, while Shadow DOM has always felt like trying to hide a lot of the page. As someone who loves the web as an interchange of information, Web Components seemed interesting & enriching & useful & value-add, and Shadow DOM seemed like a tightening/winnowing/control thing that didn't express my values. I'm more neutral today, and seeing things like the CSS styling performance wins, it's clear there are solid technical wins for the platform; but closed Shadow DOM especially feels so overtly control-oriented & totalistic in design, in direct counter to the openness / togetherness / intertwingularity that made the web so interesting & rich & unique a computing space. (It still remains unclear to me that a closed shadow root disappearing into your own pocket universe ought to be permitted, but I'm not as scared as I was.)
> Shadow DOM seemed like a tightening/winnowing/control thing
This is useful when you need to apply constraints on teammates, same as applying a linter to your project's CSS. You need to guarantee your internal components work well together.
If your style is more "make a component and throw it over the wall for people to use," making everything "public" may be a better approach.
But most web components will never be public or shareable, and that's OK.
Custom Element Registries can become a problem for organizations with multiple teams working on the same page, as they may inadvertently register elements with the same name, or ship bug fixes or new features that cause conflicts and prevent individual teams from deploying updates independently.
The result is that teams need to coordinate their updates and deploy them simultaneously, slowing down the development process. This problem has been recognized in the Web Components community, and some initiatives are underway to address it, such as Scoped Custom Element Registries.
Scoped Custom Element Registries provide a way to isolate the scope of custom elements, preventing name collisions and enabling individual teams to work independently on their components without affecting the rest of the page. This approach has the potential to improve the development process for organizations with multiple teams and streamline the deployment of updates.
However, this solution is not widely adopted yet, and it may take some time to gain traction. In the meantime, organizations may need to develop their own internal guidelines and procedures to manage custom element name collisions and coordinate their updates.
I hear you. I think Shadow DOM is very useful for a handful of roots on your page, encapsulating third-party widgets or DOM-heavy components like a nested navigation.
Custom Elements on the other hand are useful for attaching behavior to HTML, sort of like jQuery plugins.
Not at all sold on the frameworks promoting using Shadow DOM for every button or whatnot.
When would somebody want to use Shadow DOM instead of just the regular DOM? Is shadow DOM the next incarnation of the previous fad before it (virtual DOM)?
The “Virtual DOM” isn’t a real thing from the browser’s perspective: it’s just a coding style for generating updates to the real DOM, and the browser doesn’t care about anything until that point, just as it didn’t care about jQuery’s internals. The promise was that a vDOM would be faster by avoiding unnecessary updates, but that never panned out and the web moved away from IE6, so the React team removed that claim from their marketing materials to focus on developer ease of use.
In contrast, the Shadow DOM is a real browser concept and that affects everything the browser does. Until that arrived, embedding was challenging because anything you added could affect the rest of the document in some way. Now we have a way to put something into an arbitrary location and guarantee that it won’t leak out, and that frees browser developers to make some performance optimizations.
As a practical example, think about social media embedding where people need to write code which is safe to put on millions of pages. Obviously that was possible but it was tedious, and browser developers identified numerous performance hotspots around it over the years. This allows that to be simpler and safer, which is always a great combination.
> Now we have a way to put something into an arbitrary location and guarantee that it won’t leak out,
Except that it does leak in practice. My company uses shadow DOM, but then to style stuff they use a lot of `--var`s, and apparently those penetrate the shadow DOM, so you can still break other components inadvertently. (Admittedly, when I saw that cluster* I retreated to the backend, so my experience is limited.)
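A small sketch of the leak being described: ordinary rules stop at the boundary, but custom properties inherit straight through it. The names --accent and fancy-button are illustrative.

```html
<style>
  /* Ordinary rule: does NOT apply inside the shadow tree below. */
  button { background: red; }

  /* Custom property: inherits into the shadow tree and does change it. */
  :root { --accent: red; }
</style>

<fancy-button>
  <template shadowrootmode="open">
    <style>
      button { background: var(--accent, steelblue); } /* ends up red */
    </style>
    <button>Inside the shadow root</button>
  </template>
</fancy-button>
```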
It's basically to encapsulate things. You can define a custom element and style it how you always want it to look, and then when you set general style rules for the site you don't have to worry about them messing up something in the shadow elements. It's Google's replacement for `scoped` CSS.
The utility of the shadow DOM is in controlled isolation for both CSS and JS, because neither can arbitrarily reach into a shadow DOM's tree via the parent DOM. Although they can be made to intentionally affect it through the root element in a useful but controlled way.
A virtual DOM is just an abstraction in JS, whereas the shadow DOM is a feature of the browser engine. They are not really comparable; a shadow DOM _is_ the DOM. The confusion may come from the fact that it can be used in custom elements where MVCs might be involved again.
For example, encapsulating a third party widget, cookie banner or your main navigation. That way any CSS added inside the shadow DOM only affects those elements - you can safely write selectors like `h1` or `button` and they’ll only match what’s inside that same shadow root boundary.
When you have an extension that modifies a given page/injects them with some elements and you don't want the class names or styles of those elements to interfere with the class names and styles of the parent page. There are lots of extensions like this.
I first encountered shadow roots while trying to use tampermonkey to work around a problem with the latest release of Gerrit. What I thought would be a relatively trivial effort to find the right element and modify its style turned into a lot more effort.
While I get that the purpose of the shadow roots is to provide isolation, is there a simple way to break through that isolation without manually walking the DOM tree and traversing all of the .shadowRoot elements?
It seems Gerrit wraps most elements with a shadow root. For someone that doesn't do front-end development, is there an explanation for how this is helpful?
One nice way of handling this is to use the CSSStyleSheet object. In a nutshell, you can create a module like shadowCssRules.js and in it create a CSSStyleSheet object and set your global rules (e.g. `* { box-sizing: border-box; }`), then in your WebComponent constructor you call `this.shadowRoot.adoptedStyleSheets = [shadowCssSheet]`. This MDN page has an example at the bottom https://developer.mozilla.org/en-US/docs/Web/API/CSSStyleShe...
Also allows you to update the stylesheet from anywhere in your JS, and it’ll affect all the shadow DOMs that adopted it.
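A sketch of the approach described above, keeping the shadowCssSheet name from the comment; the module boundaries are collapsed into one file, and my-panel is an illustrative component:

```js
// One shared constructable stylesheet, adopted by every shadow root.
const shadowCssSheet = new CSSStyleSheet();
shadowCssSheet.replaceSync(`* { box-sizing: border-box; }`);

class MyPanel extends HTMLElement {
  constructor() {
    super();
    const root = this.attachShadow({ mode: "open" });
    // Adopted by reference: the sheet is not copied per instance.
    root.adoptedStyleSheets = [shadowCssSheet];
    root.innerHTML = `<div class="panel"><slot></slot></div>`;
  }
}
customElements.define("my-panel", MyPanel);

// Later, from anywhere in your JS: updating the shared sheet restyles
// every shadow root that adopted it.
shadowCssSheet.insertRule(`.panel { padding: 1rem; }`);
```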
CSS variables help here quite a bit. I typically will add in component-specific tokens as CSS variables; that way I have simple penetration into the shadow DOM.
If you own the browser, you can just nuke the `mode` option of the `attachShadow` method before the page itself runs. And bang, you can access whatever you want now. So it isn't really a big deal if you are scraping or doing UI testing, because whatever runs first has the power to do whatever it wants in JS (except for some primitives specifically protected by the browser).
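A minimal sketch of that trick, patching Element.prototype.attachShadow before any page script executes (how you inject it first — a userscript at document-start, a devtools snippet, Puppeteer — is up to your tooling):

```js
// Every subsequent attachShadow call is forced to mode: "open",
// so the resulting roots stay reachable via element.shadowRoot.
const nativeAttachShadow = Element.prototype.attachShadow;
Element.prototype.attachShadow = function (init) {
  return nativeAttachShadow.call(this, { ...init, mode: "open" });
};
```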
Very good! I can already see Lit-Element[0], one of the beneficiaries, leveraging this!
When I first started writing/exploring Shadow DOM API I felt so weird writing inline `class`, but then again, it's just a syntactic sugar for prototype functions.
> I felt so weird writing inline `class`, but then again, it's just a syntactic sugar for prototype functions
You felt weird using standard ES6 syntax? Did you prefer writing Thing.prototype.method = function(){}... ? Or Derived.prototype = Object.create(Base.prototype)?
The second one is mine; it's a massive hack and definitely shouldn't be done, as it opens a new element every second and never closes them, but it was fun to see if it was possible.
When people start claiming that Safari is the new IE, dragging its feet, etc, I just point them to the WebKit blog. Apple just has different priorities and visions for the Web than Google.
It isn't like IE was completely lacking in innovation. To wit: div/span/XMLHttpRequest/border-box. Lots of things did come from there. I remember they used fieldset and legend in a much cleaner way, too.
The problem with IE was that they basically stopped after IE 6. The innovation was from before. After the release of IE 6, the IE team was basically disbanded, because MS thought they had won the browser wars (which at the time, they had). That was in 2001. IE 7 was only released in 2006, over five years later. And in 2008 Google released the first version of Chrome.
Well, that and it was Microsoft. They had an earned reputation for extend/extinguish. They have mostly shed it with VSCode, but I honestly can't see how. It seems to follow a very similar runbook to how they used to operate?
They aren't led by exactly the same people as 20-30 years ago, so things do change a bit. Microsoft also isn't homogenous. They certainly haven't become Samaritans, however.
I took a quick look to see if I was correct on the div/span claim. I'm actually not finding anything to corroborate it.
I also want to add that I have little love for IE. Same for some of the other things that Microsoft have done. I just find the juxtaposition odd: some of that behavior is apparently fine today, as long as it is someone else doing it.
The defining characteristics of IE in the dark ages were that 1) it was ubiquitous 2) web sites developed to it and, instead of fixing sites in other browsers, they recommended its use over other browsers. You often couldn’t even log into your $WHATEVER from Mozilla on Linux, for example.
The modern analog to that is certainly not Safari.
Apple's vision is for the web to remain page-centric so it can't intrude on their walled garden.
But really, diverting this discussion to their technical progress (spotty as it is, they do have some there), misses the larger point: by banning non-webkit browsers from their platform, they're engaging in precisely the sort of anti-competitive behavior that Microsoft demonstrated.
Microsoft faced an antitrust lawsuit (United States v. Microsoft Corp) for embedding IE with Windows, which was seen as an abuse of their position in the OS/PC market to further their own stake in the web browser space.
Apple, in this case, is similarly seen as abusing its position in the US mobile market by not merely bundling Safari with iOS, but going a step further, and preventing any competition to its own browser by banning competing browser engines.
> Microsoft faced an antitrust lawsuit (United States v. Microsoft Corp) for embedding IE with Windows
For a whole range of processes leveraging and protecting the existing Windows monopoly against the threat posed by browsers, Java, and other applications, not just (or even primarily) for “embedding IE with Windows.”
The findings of fact regarding the MS antitrust violations are available here [0]; bundling IE with Windows is part of bullet E (of A-I) under “Microsoft’s Response to the Browser Threat”.
So, your defense against it being accused of being like IE is to say it's exactly like IE, but in the period of Netscape dominance, not the period of IE dominance?
There are plenty of great responses already, but to add a perspective I think might benefit folks who don’t especially care about the style and Web Component isolation aspects: Shadow DOM standardized those aspects of encapsulation not just for user-authored stuff, but also for encapsulated behavior of browser-native functionality. So for a very silly example, you can now inspect the shadow DOM of an image that failed to load and see the resulting DOM representation that the browser uses in place of the intended image. You can do the same for form field elements and a lot of other elements which used to be totally opaque.
This is great not just to understand more about what’s happening when browser-specific stuff goes sideways, it’s also a really good place to take design inspiration for encapsulating your own stuff. Besides scoping styles and fully isolating children as implementation details, the native implementations often make judicious use of `part` and `slot` aspects of the relevant/related APIs to handle interop with the non-encapsulated parts of their interface. In a lot of ways this allows elements (both native and custom) to provide much stronger contracts than the mostly tag soup that HTML tends to be by default. And it’s especially great that the interface for this is largely (if not totally now?) how built in behavior works, so writing code that targets the browser has the same privileges and the same limitations as the equivalent code the browser provides.
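A tiny sketch of the `part` mechanism mentioned above: the outer page can restyle exactly what the component exposes, and nothing else. The fancy-input name and the field part name are made up.

```html
<fancy-input>
  <template shadowrootmode="open">
    <input part="field" placeholder="type here">
    <span>internal detail, unreachable from outside CSS</span>
  </template>
</fancy-input>

<style>
  /* The page can style the exposed part, and only the exposed part. */
  fancy-input::part(field) {
    border: 1px solid rebeccapurple;
    border-radius: 4px;
  }
</style>
```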
The isolation was primarily designed with Web Components in mind, but it's popular as well in browser extensions. Content scripts might want to inject elements into a host page without being subject to that page's styles or JavaScript (e.g. the host page calling normalize() on their DOM).
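A content-script sketch of that use, with illustrative names; note that an open shadow root isolates styles, while page scripts can still reach in if they go looking:

```js
// Mount extension UI in its own shadow root so the host page's
// stylesheets don't restyle it.
const host = document.createElement("div");
host.id = "my-extension-root"; // illustrative id
document.documentElement.appendChild(host);

const shadow = host.attachShadow({ mode: "open" });
shadow.innerHTML = `
  <style>
    /* Applies only inside this shadow root. */
    .toolbar { all: initial; font: 14px sans-serif; background: #fff; padding: 8px; }
  </style>
  <div class="toolbar">Injected by the extension</div>
`;
```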
There was a thing called "scoped CSS" back in 2012 that never went anywhere, which seems similar. But I think Shadow DOM also mixes in some notion of DOM access protection, IIRC. Also interested in a description of what shadow DOM is and what it can be used for.
Indeed, Shadow DOM is a great way to isolate styles when working with browser extensions. In fact I’ve made a tutorial and it shows how CSS is scoped to the extension without interfering with any website style:
https://www.freecodecamp.org/news/chrome-extension-with-parc...
It exists to help you define components without running into naming conflicts. For example, if I want to make a "my-button" component that has a <button class="button"> and some CSS that targets `.button { ... }`, I won't end up styling a bunch of other things that have `class=button`. Pretty sure it also protects against other things doing `document.getElementById("button")` and getting the element whose ID is `button` from your component instead of whatever the caller was expecting. Effectively, shadow DOM provides encapsulation.
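A small sketch of that encapsulation from the JS side; my-button and the IDs are illustrative:

```js
class MyButton extends HTMLElement {
  connectedCallback() {
    const root = this.attachShadow({ mode: "open" });
    root.innerHTML = `
      <style>.button { padding: 6px 10px; }</style>
      <button id="button" class="button"><slot></slot></button>
    `;
  }
}
customElements.define("my-button", MyButton);

// The page's lookups don't see into the shadow tree...
document.getElementById("button");     // null (unless the page has its own #button)
document.querySelectorAll(".button");  // does not include the shadow <button>

// ...only an explicit walk through the shadow root does.
document.querySelector("my-button").shadowRoot.getElementById("button");
```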
That's true, but they haven't yet ratified a standard on how to handle web component naming collisions, so what if you create `my-button` and another developer registers `my-button`... Or if you want to use two different versions of your own `my-button`...
The namespaces are URLs, so DNS ensures the domain names are unique globally, the owner of each domain name ensures the paths are unique for that domain, and the author of the XML document ensures the namespace aliases are unique for the specific namespaces used in that specific document.
I’m probably the wrong person to ask, but my mental model is that it scopes/isolates the CSS inside the component so it doesn’t affect elements elsewhere in the document, and it also prevents CSS elsewhere in the document from affecting the elements in the component.
My outstanding question here is: how do people reuse components across sites with different themes? Do they have to pass styles across the shadow boundary via JS? Or do they just give up on cross-project reuse or theming?
Shadow DOM isn't a totally sandboxed construct, though it mostly is. You can't load @font-face rules within shadow DOMs. They have to be defined in the light DOM, and then they are available in the shadow DOM. Most other CSS styling seems to be totally isolated, this exception aside.
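A sketch of the @font-face caveat being described; the font URL, family name, and my-widget tag are placeholders:

```html
<!-- Declared in the light DOM (document styles). -->
<style>
  @font-face {
    font-family: "BrandFont";
    src: url("/fonts/brand.woff2") format("woff2");
  }
</style>

<my-widget>
  <template shadowrootmode="open">
    <style>
      /* An @font-face declared here typically won't load, but the
         family declared above is usable. */
      p { font-family: "BrandFont", sans-serif; }
    </style>
    <p>Rendered with the light-DOM-declared font.</p>
  </template>
</my-widget>
```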
> This meant that this feature was not available when JavaScript is disabled such as in email clients
This reason for adding it seems really out of touch with reality.
I mean, most of HTML is not available, or is horrendously borked, in email clients. Hence the continuation of table within table within table to make anything other than plain text.
I believe Microsoft’s email clients are all still using the MSO renderer, which is basically a very incomplete and quite buggy implementation of HTML 3.2, considerably worse than IE 5.0. As far as I can tell, the only thing that has changed in the last 25 years is the addition of high-DPI support (which is mildly imperfect, and still requires a conditional comment shibboleth in many cases).
Desktop Outlook (the "flagship" to many, because of its prominence in Big Enterprise) is the only one left using something related to the MSO renderer, and even it seems to have some sort of heuristics now that it switches to what seems to be WebView 2 (embedded Edge Chromium) for some emails (I don't know what those heuristics are or if non-Microsoft originating emails can opt-in to that behavior). The rest of their email clients including "Mobile" Outlook seem to all use some version of WebView 2 (or are possibly entirely inside WebView 2/React Native).
The "One Outlook" convergence allegedly draws nigh and that will be really interesting to see how much of the "MSO-style" rendering survives.
Windows 10’s Mail app used MSO when I looked at it a few years back, and I don’t expect it’s changed. Therefore it wouldn’t surprise me if their new Outlook app (currently https://apps.microsoft.com/store/detail/outlook-for-windows/...) also used MSO, though I hope not.
Not sure about what you say on traditional Outlook supporting WebView2 rendering; searching isn’t finding me even a single mention of it, although I’d expect it to be very big news in the email industry. https://www.litmus.com/blog/a-guide-to-rendering-differences... was posted six months ago and is basically unaltered from what one would have said in 2007 (and not much from what one would have said in 2000).
The Mail app in Windows 10 has gone through a lot of revisions over the years, so the answer is that it has probably changed a great deal. So far as I'm aware the Windows 10 app has never used MSO, but my impression was that it sometimes acts like it for compatibility reasons (i.e., fake user-agent IDs, things like that). I think it was some sort of Spartan renderer for ages. I don't know if it made the move to WebView 2 or not, or when it did, if it did.
Part of why I'm mostly certain it never used MSO is that it really does seem to share a lot of code with its iOS and Android counterparts and there's no way Microsoft bothered to port MSO to iOS and Android.
Strangely-Aged Enterprise Desktop Outlook noticeably switches to WebView 2 when opening emails from Microsoft's (stupidly named) Viva Insights service (formerly Enterprise Cortana emails) and Yammer emails (now sometimes stupidly called Viva Engage in Teams). (It's mostly noticeable as a "hiccup" in email loading because the MSO view loads first and faster and then the much slower to initialize Chromium-based Web View does its slow pop-over replacement with an extra bonus loading spinner; it is not the best user experience.)
Again, I don't know if it is just hardcoded whitelisted domains from Microsoft's dumb "Viva" brands or if there is an opt-in switch more generally useful. I just know that the renderer switch is annoyingly obvious when I see it and I'm seeing it surprisingly often lately.
It was certainly using MSO when I looked at it, which I think was probably early 2018, but could be up to six months in either direction. I have a vague feeling there was a characteristically-named DLL (fitting in with the “they treat it as a black box never to be changed” narrative I’ve crafted), and it certainly handled MSO conditional comments, had the same incompletenesses in HTML 3.2, and the same wonky DPI handling resolved by the o:PixelsPerInch pragma. I vaguely recall failing to get VML working.
Outlook for Android, iOS and Mac OS X (now for macOS) have never used MSO.
—⁂—
It could be interesting to look at the headers of your emails (including true MIME headers and <meta http-equiv> tags in an HTML part) that trigger WebView2 rendition. I can imagine something like the old `X-UA-Compatible: IE=edge` way of avoiding compatibility mode and deliberately opting into the most recent rendering engine on IE.
The only interesting header in a Yammer email that seems plausibly relevant is X-MS-Outlook-YammerExtensibleContentData. It claims to be a list of thread IDs, but the contents are at least base64-encoded, seem likely to be signed by some key somewhere, and my interest waned in seeing if I could decode it further. If that's the trigger, it's definitely not generally useful if it only exchanges Yammer thread IDs.
I haven't seen a Viva Insights email in a few days, but I was reminded it does have a full add-on installed and I wouldn't be surprised if it was much more its add-on doing the work and it was very specific to those emails as well. (Possibly it is even the same add-on managing both.)
Can’t wait for this to be everywhere. React was amazing for getting us where we are today, but shadow DOM without JS will allow server-side rendering of components without React.
You don't need React to server-side render anything, including components.
Besides, if you need components, web components are probably the last thing you want to reach for.
Edit: Also, none of the major frameworks, and very few of the new frameworks, target web components, for many reasons fully ignored by browser vendors. They can "consume" them (that is, embed them in code), but at most compile to web components, reluctantly. And they still provide components and SSR, and have for years.
> Neutral stance: We're not convinced that the complexity this feature introduces upon the HTML parser carries its weight in terms of usefulness for web developers. There's also a risk that the processing model is not compatible with a future declarative custom elements feature as it was developed in isolation. Having said that, the proposal is a reasonable approach for this functionality that takes into account the various constraints and security considerations that come with changing the HTML parser.
> Positive Stance: This is a reasonable proposal which takes into account the various constraints and security considerations that come with changing the HTML parser.
Hopefully that's a sign of things to come! (Fingers crossed)
So as someone who works only with server-side rendered stuff, you know, no JS in the browser and CSS loaded as one big blob, is there something useful for me here? I feel I am missing the point completely...
I really feel like HTML has been totally forgotten by the powers that be that work on CSS...
> So as someone who works only with server-side rendered stuff, you know, no JS in the browser and CSS loaded as one big blob, is there something useful for me here?
Yes, this is exactly about being able to use shadow DOM (for its performance and modularity benefits) without any JavaScript running on the client side.
Isn't it similar to a generic idea like Object Oriented programming? It's a pattern that can work on a server or a client. Unless your server isn't using a DOM of course.
Web Components are Custom Elements + Shadow DOM, and this makes half of that equation server-side. The other half works as a client-side hydration strategy until they churn out whatever Declarative Custom Elements would look like.
I mean, AFAICT right now the perf improvement needs me to inject the style into the shadow DOM element. So: tons of small CSS, possibly with duplication and worse compression, or CSS in style elements.
As a huge fan of Web Components, I'm sort of strangely on the other side of shadow dom.
I've been doing 100% of my UI work using vanilla Web Components for years now, and what I generally recommend is to avoid the use of shadow dom unless there's a good reason for it.
The CSS parts API is difficult to work with, and if you want to use a CSS framework like bootstrap etc... you end up importing all the CSS into the component's shadow context.
Web Components with the light dom work great, they're an effective encapsulation mechanism. I'm not sure why most examples you see with WC go straight to shadow DOM.
It isn't necessary for most components and adds a lot of styling complexity.
Interesting. What is the added styling complexity? I am interested in using Shadow DOM for the next version of my component DataGridXL (https://www.datagridxl.com/) to prevent outside CSS from affecting the component. Why do you say it is not worth it?
So the components you are talking about are simpler UI components for which Shadow DOM is overkill? For advanced components like a grid, you would still recommend going with Shadow Dom?
Is the repeated inclusion of the <style> tag into every copy of a declaratively rendered component — with the same css strings repeated every time for every instance of a component — a concern at all?
I'm super excited to test the performance of this, but I'm also very curious about adoption. This approach to templating seems great for simple elements, but oftentimes when instantiating a custom element we also want to bind functionality into the elements of the DOM. The @ decorator is very common in many web component libraries' (e.g. Lit's) html() function. I wonder if the tradeoff of a more declarative approach is worth it for those relying on that existing functionality.
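For reference, a minimal Lit sketch of the kind of binding the comment refers to; the @click syntax only exists once JS runs, so it has no declarative equivalent today (counter-button is an illustrative name):

```js
import { LitElement, html } from "lit";

class CounterButton extends LitElement {
  static properties = { count: { type: Number } };

  constructor() {
    super();
    this.count = 0;
  }

  render() {
    // The @click binding attaches a listener when the template renders.
    return html`
      <button @click=${() => this.count++}>
        Clicked ${this.count} times
      </button>
    `;
  }
}
customElements.define("counter-button", CounterButton);
```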
>In summary, declarative shadow DOM introduces an exciting new way of defining a shadow tree in HTML, which will be useful ...as well as in... email clients.
I guess "..." isn't really the thing to do there, as the redacted parts are not only important but seem to me much more likely to be supported. Although maybe I am overly pessimistic about email clients?
> many modern websites and web-based applications deploy a technique called “server-side rendering” whereby programs running on a web server generate HTML markup with the initial content for web browsers to consume, instead of fetching content over the network once scripts are loaded
I chuckled reading this. It’s like someone landed in 2020 and learned web development while being completely unaware of events in the preceding decade.
It's like Isaac Asimov's short story "The Feeling of Power", about a far future where humanity had forgotten how to do math manually and relied entirely on expensive computers to do it. But then a technician rediscovered the rules of arithmetic on paper, and the revolutionary new human-powered "graphitics" allowed them to win "the war".
"Client-side rendering" was coined because server-side rendering was the default.
Now that client-side rendering is ubiquitous, being explicit about server-side rendering became necessary, and somewhere along the way, lost in translation, it became a "technique" - is what I think happened here.
Probably too late to change now, but I wish the term ‘rendering’ wasn’t used to describe completely different things.
Server side rendering is about dynamically generating HTML code. Client side rendering generates DOM objects in memory, and only rarely generates HTML strings. And then there’s actual rendering, which generates pixels on a display.
People who aren’t familiar with web development often conflate these concepts.
> It’s like someone landed in 2020 and learned web development while being completely unaware of events in the preceding decade.
To quote Rich Harris, the author of Svelte, on the whole web component saga: "It's almost as if congealing 2010-era best practices in the platform before we'd finished exploring this territory was a mistake" [1]
> I chuckled reading this. It’s like someone landed in 2020 and learned web development while being completely unaware of events in the preceding decade.
Note that they mean SSR specifically here vs. just server-generated web pages. Server-rendering of component updates is a post-SPA innovation (2018?), so it still seems worth explaining for most web developers.
Agreed, it was super common. I'm attempting to say something slightly different, which is that SSR — as a technique that was codified, named, and supported in modern frameworks — first appeared around 2018 by my recollection, and so may be a new-ish concept for many web developers.
My assumption is that the WebKit team understands their average web developer (which may be notably different than an average HN developer) enough to know whether an explanation is worthwhile.
If you’re talking about something like just React (the view layer), you can certainly go back well beyond 2010.
If you’re talking about something more full-featured, like Next.js today, the oldest I can think of is Ember with its FastBoot, which was becoming usable by the end of 2014 and well-established by mid-2015 (even if 1.0 took until mid-2017—and there was a major internal restructuring to how building worked). https://blog.emberjs.com/inside-fastboot-the-road-to-server-... is good background on that, showing also the attitudes of the time. https://blog.emberjs.com/tag/fastboot/ for the other couple of posts about it.
You're forgetting the UX designer, the UI designer, the UX researcher, the UI translator, the database administrator, the system administrator…
We have always been able to break complex tasks down into multiple, simpler job descriptions. That doesn't mean each person is limited to doing at most one of them.
I've seen candidates describe themselves as "Front of the Front-End" in the context of job interviews where the web UI stack included a BFF (Backend-for-Frontend, e.g. Next.js or Remix). It actually makes sense. The corresponding "Back of the FE" skillset encompasses proxy servers, API gateways, routing, observability, browser networking, server APIs, deployment infra, etc., on top of pure UI per se. There's a breadth of knowledge required that puts many backend-oriented "full-stack" devs to shame.
[1] https://nolanlawson.com/2023/01/17/my-talk-on-css-runtime-pe...
[2] https://developer.mozilla.org/en-US/docs/Web/Web_Components/...