Back in the day, we had a nice boundary between the document and the "app". Then for some reason we decided that Flash doesn't need to be a thing any more and erased that boundary by building the app functionality into browsers themselves, making the app and the document inseparable. We should have invested that effort into building an open source Flash player instead.
One of the nicest things about Flash was that you could set your browser to only load and run Flash content after you click it.
Java applets were worse, though: every time I got a virus of any sort from merely browsing generic sites, it was due to Java in the browser. I finally stopped installing Java for the web and my security problems went away.
Flash had some security nightmares all the time too, if I remember correctly, but I don't think it ever screwed me over like Java did.
I think unless we lock down new APIs that aid in fingerprinting so they're only accessible from WebAssembly, and let people block or enable Wasm, there's not much else we can do. It would be nice to be able to block web APIs selectively to limit what a JS script can do.
> Flash had some security nightmares all the time too, if I remember correctly, but I don't think it ever screwed me over like Java did.
Those incessant RCEs were due only to the sloppy way the Adobe Flash player was written. There is nothing inherently insecure about the SWF format itself.
Ruffle is an open source Flash player in Rust, currently under active development. I'm sure it won't have such problems because 1) it's open source and 2) it's in Rust, and I was told that anything written in Rust can't possibly have any memory-related vulnerabilities; we'll wait and see whether that still holds if/when they implement JIT compilation for AS3.
> I think unless we lock down new APIs that aid in fingerprinting so they're only accessible from WebAssembly, and let people block or enable Wasm, there's not much else we can do.
IMO, it should be enough if incognito mode presents an identical fingerprint on everyone's browser.
It's not that easy to "present a fingerprint" without compromising the user experience. Sure, you could remove all those PWA and pretend-OS APIs and hardly anyone would notice, but what about things like viewport size and font rendering? You can't exactly hide them from a website.
> what about things like viewport size and font rendering? You can't exactly hide them from a website.
Of course you can. Viewport? Just return fake viewport data reporting the most statistically common display properties. Website renders incorrectly? They have only themselves to blame; they shouldn't have abused that data for hostile purposes. Data is a privilege, and we can and should take it away. Fonts? Just force everything to use Noto Sans or Noto Mono. Everything will still render correctly; maybe the designer's vision won't be fully realized, but that's not a problem.
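A minimal sketch of what "just return fake data" could look like. The property names mirror the real `window.screen`, but `spoofScreen` and the "most common" values are illustrative assumptions; a real implementation would live in the browser or an extension and target `window.screen` itself, so a plain object stands in here:

```javascript
// Sketch: make "screen" properties lie, returning statistically common
// desktop values regardless of the real hardware.
// COMMON_PROFILE is an assumed example profile, not real-world statistics.
const COMMON_PROFILE = {
  width: 1920,
  height: 1080,
  colorDepth: 24,
  pixelDepth: 24,
};

function spoofScreen(screenLike) {
  for (const [prop, fakeValue] of Object.entries(COMMON_PROFILE)) {
    Object.defineProperty(screenLike, prop, {
      get: () => fakeValue,  // every read returns the fake value
      configurable: false,   // pages can't redefine it back
    });
  }
  return screenLike;
}

// Example: a "real" 2560x1440 screen now reports the common profile.
const fakeScreen = spoofScreen({ width: 2560, height: 1440 });
console.log(fakeScreen.width, fakeScreen.height); // 1920 1080
```

The `configurable: false` part matters: otherwise the page could call `Object.defineProperty` itself and restore the honest getters.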
“Font rendering” is a different thing than “what fonts you have.” Font rendering is about how fonts are drawn to the screen. The trick is to draw some words to a <canvas> and then pixel-peep the result. Different OSes and browsers use different font renderers and font hinting logic; fonts will even render differently on a different-DPI screen.
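The pixel-peep trick described above can be sketched roughly like this. The canvas drawing part is browser-only and the function name is mine; the hash is a standard FNV-1a, chosen only because any stable hash of the pixel bytes would do:

```javascript
// Sketch of the canvas "pixel-peep" fingerprint: draw text, read the raw
// pixels back, and hash them. Antialiasing and hinting differences between
// OSes, renderers, and DPI settings change the bytes, so the hash differs
// per machine. (canvasFontFingerprint is browser-only; fnv1a is plain math.)

// FNV-1a over a byte array.
function fnv1a(bytes) {
  let h = 0x811c9dc5; // FNV offset basis
  for (const b of bytes) {
    h ^= b;
    h = Math.imul(h, 0x01000193) >>> 0; // FNV prime, kept in 32 bits
  }
  return h >>> 0;
}

function canvasFontFingerprint(text = "mmmMMM!|lij 123") {
  const canvas = document.createElement("canvas");
  canvas.width = 200;
  canvas.height = 40;
  const ctx = canvas.getContext("2d");
  ctx.font = "16px sans-serif"; // resolved differently per OS/browser
  ctx.fillText(text, 2, 20);
  const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
  return fnv1a(data); // hash of the rendered pixels
}
```

Two machines with identical browsers but different font stacks will draw slightly different antialiased edges, and the hash diverges, which is exactly why a privacy-minded browser might want to normalize or noise this path.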
Different browsers are always distinguishable, but a single browser could choose to always use the same font rendering code and settings, at least for private browsing.
Not just the <canvas>. The font rendering of the underlying platform also influences the width of strings. So if you create a <span> with some text, its width will differ by several pixels depending on the host OS.
Is that a standard — that all browsers are forced to use the FreeType library, or to be bug-for-bug compatible with its glyph+hint parsing semantics? I've never heard of anything like that.
But also, even if they did, AFAIK browsers still mostly lean on OS text-drawing APIs for font rendering. Text in Chrome on Windows looks different than text in Chrome on macOS, etc. The same pile of beziers, and the same pile of hints, converts into a different set of hinted pixels (and sub-pixels!) when fed to each OS text-drawing API. Especially when those APIs are configured by user settings around subpixel hinting / "font smoothing", and when those APIs are aware of the device being rendered to and so render subpixels differently for high-DPI vs low-DPI screens, RGB vs BGR displays, etc.
For viewport, you can limit the size presented to the page to a few sizes with different aspect ratios. Browsers can simply rescale the page to the actual window size for display on the screen. That also works for font rendering.
If users decide they want pixel-perfect display, they can either resize the window to one of the allowed sizes or disable this feature for a specific page.
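The snap-to-allowed-sizes idea could be sketched like this. The bucket list and scoring are my own illustrative assumptions, not any standard; a real browser would then scale the layout back up to the true window size:

```javascript
// Sketch: snap the real window size to the nearest "allowed" size with a
// similar aspect ratio. The page only ever sees the bucket.
// ALLOWED_SIZES and the scoring weights are illustrative assumptions.
const ALLOWED_SIZES = [
  [1280, 720], [1366, 768], [1536, 864], [1920, 1080], // 16:9
  [1280, 800], [1440, 900],                            // 16:10
  [390, 844], [810, 1080],                             // portrait/mobile
];

function reportedViewport(realWidth, realHeight) {
  let best = ALLOWED_SIZES[0];
  let bestScore = Infinity;
  for (const [w, h] of ALLOWED_SIZES) {
    // Penalize both size distance and aspect-ratio mismatch.
    const sizeGap = Math.abs(w - realWidth) + Math.abs(h - realHeight);
    const ratioGap = Math.abs(w / h - realWidth / realHeight) * 1000;
    const score = sizeGap + ratioGap;
    if (score < bestScore) {
      bestScore = score;
      best = [w, h];
    }
  }
  return best;
}

console.log(reportedViewport(1903, 1092)); // → [ 1920, 1080 ]
```

A window that already matches an allowed size maps to itself exactly, so the "resize to an allowed size for pixel-perfect display" escape hatch falls out for free.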