Wouldn't this make users pay for every possible feature they could ever use on a given site? For instance, in Google Maps I might use Street View 1% of the time, and the script for it is pretty bulky. In your ideal world, would I have to preload the Street View handling scripts whenever I loaded up Google Maps at all?
If you’re asking whether it would incentivize us to be more careful about introducing additional interactive functionality on a page, and about how that functionality impacts performance and page speed, I expect it would.
How the web was designed to work today isn’t necessarily a good guide to how it could work best tomorrow.
> If you’re asking whether it would incentivize us to be more careful about introducing additional interactive functionality on a page, and about how that functionality impacts performance and page speed, I expect it would.
Not quite. I wasn't trying to make a bigger point about is/ought dynamics here; I was curious specifically about the Google Maps example, and other instances like it, from a technical perspective.
Currently on the web, it's very easy to design a web page where you only pay for what you use -- if I start up a feature, it loads the script that runs that feature; if I don't start it up, it never loads.
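For concreteness, here's a minimal sketch of that pay-for-what-you-use pattern using dynamic `import()`. The `./street-view.js` module, its `init()` export, and the element IDs are hypothetical stand-ins:

```js
// Nothing Street View related is fetched at page load; the module comes
// down from the network only the first time the feature is actually used.
document.querySelector("#street-view-button")?.addEventListener(
  "click",
  async () => {
    const { init } = await import("./street-view.js");
    init(document.querySelector("#map"));
  },
  { once: true } // fetch and initialize the feature a single time
);
```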
It sounds like in the model proposed above, where all scripts are loaded on page load, I as a user face a clearly worse experience: either by A) losing useful features such as Street View, or B) paying to load the scripts for those features even when I don't use them.
“Worse” here is relative to how we have designed sites such as Google Maps today. The current web would fundamentally break if we stopped supporting scripts after page load, so moving away would be painful. However, we build these lazy, bloated, monolithic SPAs and Electron apps because we can, not because we have to. Other, more efficient and lightweight patterns exist; some of us even use them today.
If you can exchange static content, you need very little scripting to be able to pull down new interactive pieces of functionality onto a page, especially given that HTML and CSS are capable of so much more today. You see a lot of frameworks moving in this direction, such as React Server Components (RSCs), where we now transmit components in a serializable format.
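As a rough sketch of what that could look like: one small, preloaded handler swaps in server-rendered markup on demand, so only static content crosses the wire after page load. The `/fragments/` endpoint and `data-fragment` attribute here are hypothetical:

```js
// A single delegated handler is the only script the page ever needs.
document.addEventListener("click", async (event) => {
  const trigger = event.target.closest("[data-fragment]");
  if (!trigger) return;
  // Pull down a server-rendered HTML fragment for the requested feature.
  const response = await fetch(`/fragments/${trigger.dataset.fragment}`);
  // Swap the static markup into place; no new scripts are executed.
  trigger.outerHTML = await response.text();
});
```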
Trade-offs would have to be made during development, and with a complex enough application there would be moments where it may be tough to support everything on a single page. However, I don’t think single-page support is necessarily the goal, or even the spirit, of the web. HTML Imports, for example, would have avoided the creation of a lot of unnecessary compilers, build tools, and runtime JS.
How are you going to stop it, when you're already running JS? I can write a VM in JS that I load up front, then load static assets after the page has loaded and execute them in the VM. How would you block that?
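For example, a minimal sketch of the workaround; the `/assets/feature.txt` path is hypothetical:

```js
// Runs in a <script type="module">, where top-level await is allowed.
// Fetch a "static" text asset well after the page is ready...
const source = await fetch("/assets/feature.txt").then((r) => r.text());
// ...and execute it as code. A hand-rolled interpreter over a custom
// bytecode format would work just as well, which is why any restriction
// on post-load script delivery is hard to enforce.
new Function(source)();
```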
I am thinking about a different time, when JS did less and these decisions were still being made.
Today, what you are saying is definitely a concern, but all APIs are abused beyond their intended uses. That isn’t to say we shouldn’t continue to design good ones that lead users in the intended direction.
I am saying that allowing JavaScript to be dynamically downloaded and executed after the page is ready was a mistake.
You can build your Google Docs, your Maps, and your Figmas. You don’t need JS to be sent after the page is ready to do so.