They've released two versions of Firefox in the span of two days: first 67.0.3 on June 18 [1], then 67.0.4 on June 20 [2]. Each release fixes a separate component of what seems to be the same exploit chain: the June 18 release fixes the RCE, and the June 20 release the sandbox escape. Link to the June 18 discussion [3].
Interesting to see a sandbox escape that doesn't involve a kernel exploit (unlike the attack against Chrome users, which leveraged a Windows kernel vuln). I wonder if seccomp has pushed things to the point where the boundary between broker and child is the softer spot. More evidence for that: the attack on Chrome didn't work on Win10, which has a somewhat similar mitigation.
Perhaps in particular for Firefox, which has had the sandbox for less time?
Would love to hear some expert opinions. I can't derive much from this since I don't personally see any trends.
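To make the "softer spot" idea concrete, here's a minimal libseccomp sketch of the kind of syscall filtering a sandboxed child ends up under. This is illustrative only, not Firefox's or Chromium's actual policy (those are far more elaborate), and the function name is made up:

    // Illustrative only: not Firefox's (or Chromium's) real sandbox policy.
    // After the filter loads, the child can only use descriptors it already
    // holds and exit; open/connect/exec/etc. kill the process, so an attacker
    // inside the child is mostly left with the IPC channel to the parent.
    #include <seccomp.h>   // libseccomp; link with -lseccomp
    #include <cstdlib>

    void drop_to_ipc_only() {
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);  // default: kill the process
        if (ctx == nullptr) std::abort();

        const int allowed[] = {
            SCMP_SYS(read),    SCMP_SYS(write),
            SCMP_SYS(recvmsg), SCMP_SYS(sendmsg),
            SCMP_SYS(futex),   SCMP_SYS(exit_group),
        };
        for (int sys : allowed) {
            if (seccomp_rule_add(ctx, SCMP_ACT_ALLOW, sys, 0) != 0) std::abort();
        }

        if (seccomp_load(ctx) != 0) std::abort();
        seccomp_release(ctx);
    }

The point of a kill-by-default filter plus a tiny whitelist is that the kernel stops being an easy escape route; the parent/broker has to do any privileged work on the child's behalf, so its message handlers become the real security boundary.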
You're not wrong. Driving is usually by far the most dangerous activity most first-world people partake in, and individuals' driving habits are horrifying.
Anyone who drives should spend 5 minutes a week watching dashcam videos.
Only a few components of Firefox are written in Rust. Servo has no privileged JavaScript by design, though it does have some components written in C/C++ (mostly taken from Firefox).
Description
Insufficient vetting of parameters passed with the Prompt:Open IPC message between child and parent processes can result in the non-sandboxed parent process opening web content chosen by a compromised child process. When combined with additional vulnerabilities this could result in executing arbitrary code on the user's computer.
I guess that's why the JavaScript type confusion fixed in 67.0.3 was "exploited in the wild"? Because on its own it only compromises the rendering/JS process (which of course is bad), but that process should still be sandboxed from the rest of the system, assuming no sandbox escapes (like this one) are known?
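To illustrate the class of bug the advisory describes (names here are hypothetical, not Mozilla's actual code): the parent-process handler for a child-to-parent message has to treat every parameter as attacker-controlled, since a child with an RCE can send any message it likes, not just what legitimate front-end code would send. A minimal sketch of the parent-side vetting:

    // Hypothetical parent-side handler for a "Prompt:Open"-style IPC message.
    // Names and policy are illustrative, not Mozilla's real implementation.
    #include <string>

    // The parent must re-validate the URL itself; the child is untrusted.
    static bool IsAllowedPromptUrl(const std::string& url) {
        // Example policy: only the browser's own internal pages may be opened
        // outside the sandbox. A real check would parse and canonicalize first.
        return url.rfind("about:", 0) == 0 || url.rfind("chrome://", 0) == 0;
    }

    void OnPromptOpen(const std::string& url_from_child) {
        if (!IsAllowedPromptUrl(url_from_child)) {
            // Hostile or malformed request from a possibly compromised child:
            // drop it (a real browser might also kill the child process).
            return;
        }
        // OpenPromptWindowUnsandboxed(url_from_child);  // privileged action
    }

That matches the advisory's framing: the type confusion gets the attacker into the child, and the insufficient vetting lets them get the unsandboxed parent to do the dangerous part for them.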
Looks like the Firefox-based Tor Browser will again lag a day behind on an update for a critical vulnerability that's effectively disclosed by the Mozilla update, as it did last time. (Does that lag leave a substantial window for second-tier actors to compromise some security-sensitive dissidents and journalists?)
Also, a bit less concerning to me, the Debian package of `firefox-esr` still hasn't been released, as I type this, hours later.
Of course this is a tricky problem, but should there be more coordination on such updates, in the spirit of responsible disclosure?
I suspect that the ~1 day delay is just the time needed to run automated tests, rebuild packages, retest them, upload them to the various channels, turn updates on, etc.
I've worked with people who work on release engineering. Many things can go wrong by accident and silently, so you take your time to avoid distributing a broken binary that you could not upgrade.
Yes. In practice, users will selectively enable JS, both intentionally (because so many sites require JS to be minimally usable) and accidentally, because the particular JS-disabling add-on that TB currently uses still has a complicated UI (though it's much improved from earlier).
(I made one of the early tracker-blocking rulesets, and, for the last few years, as a side project, have been building a new practical dataset of specific sites' JS dependencies and third-party requests. I'm up to over 10,000 necessary whitelist rules, which I add to throughout each day in my own normal use. That 10,000 doesn't include the blanket whitelisting of many popular JS CDN URLs, for all domains, which I eventually had to do because of current tool support and because most sites needed them; there's something privacy- and security-friendly that could be done in the browser about those URLs in particular.)
Is there any reliable source that says which browser is the most secure for a regular user?
Pwn2Own: One change in the 2016 event is that the Mozilla Firefox Web browser is no longer part of the contest. "We wanted to focus on the browsers that have made serious security improvements in the last year," Gorenc said.
The implication is that Firefox just wasn't as secure as other browsers.
My gut feeling is that Chrome is far more secure than Firefox, but I would like an expert opinion.
"For a regular user", Chrome can only be considered reasonably secure if extensions are disabled. While doing regular support for ordinary users, I usually have to purge Chrome of piles of malware that have read/write access to all websites users are visiting, and they were distributed straight from the Chrome Web Store.
Firefox is susceptible to the same problem, but I rarely see malicious Firefox extensions in the wild; I assume Mozilla does a much better job of policing its store.
Chrome may be better at fighting arcane sandbox escape exploits, but that's not what gets the regular user into trouble.
"Secure" is obviously a very broad term, but as an approximation you can look at how much exploit vendors will pay for an exploit, assuming higher price = harder to exploit = more "secure".
Similarly if you could buy insurance against your browser getting hacked, then the different premiums per browser would reflect risk?
Or what are the black market prices for browser exploits? Although that would mostly depend on the value of an exploit per browser, not the cost of creating an exploit.
The more code is inspected, the more zero-days are identified. Every large code base has security issues. The scary ones are those that only the wrong people know about.
Link to the first fix: https://hg.mozilla.org/releases/mozilla-release/rev/99a829d2...
Link to the second fix: https://hg.mozilla.org/releases/mozilla-release/rev/ea5154be...
[1]: https://www.mozilla.org/en-US/firefox/67.0.3/releasenotes/
[2]: https://www.mozilla.org/en-US/firefox/67.0.4/releasenotes/
[3]: https://news.ycombinator.com/item?id=20218560