If web sites have to be so dynamic, I much prefer that the computation involved be done on their machines rather than on mine. I simply don't trust random web sites enough to let them run code on my machines.
What is it you don't trust? This Fear, Uncertainty & Doubt clashes heavily with the excellent security sandbox that the web browser provides. What is the harm you are afraid of? What are you supposing the risk is -- what's in jeopardy here?
Relying on sandboxes seems unwise to me. They're a useful backstop, but shouldn't be the primary defense. The primary defense is to minimize the exposure to risk in the first place.
As for what harm I'm avoiding, it's mostly tracking -- something that browsers have a very difficult time preventing, especially if sites are allowed to run code in them.
Well, I wouldn't use such a website anyway (especially a document converter -- that is better done with a real application), regardless of where the processing was done, unless I was very certain that the website was trustworthy. For one thing, even if the website claims not to send my data to their servers, how do I know they're being truthful without going to extremes such as sniffing traffic?
There have been plenty of sites that have lied about such things.
> What I have for native applications that I don't for the web is the ability to firewall off the native applications.
There you're placing trust in the firewall. Are you sure the application can't communicate with the outside at all? DNS exfiltration, for example?
A firewall is not a sandbox, but yes, I am sure that the applications can't communicate with the outside at all. My logs would show if they were. Any and all packets that originate from them are dropped, including DNS lookups and the like.
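As a concrete sketch of that kind of setup (assuming a Linux host with iptables, and that the untrusted applications run under a dedicated user account -- the account name `sandboxed` is hypothetical, not something stated in the thread), per-application egress blocking might look like:

```shell
# Assumption: untrusted apps run as a dedicated user "sandboxed"
# (hypothetical name) on a Linux host using iptables.
# First log, then drop, every outbound packet originating from that
# user -- DNS lookups (port 53) are matched like any other traffic.
iptables -A OUTPUT -m owner --uid-owner sandboxed -j LOG --log-prefix "blocked-egress: "
iptables -A OUTPUT -m owner --uid-owner sandboxed -j DROP
```

The LOG rule before the DROP is what makes attempts visible in the system log, matching the point above that the logs would show any communication attempts. An application would then be started under that account, e.g. `sudo -u sandboxed ./app`.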
Actually, most people will miss out on most of the usable internet without JavaScript. Not everyone visits the same sites as you or has the same browsing patterns.