Page 19 is interesting: it explains how this is possible without everything that does logging needing socket access, because on OpenBSD syslog(3) doesn't use sockets and doesn't need a file descriptor.
I wonder if he realises that browsing over Tor with IceCat gives him quite a distinctive browser fingerprint. He should just use Tor Browser.
There is a lot more than just the user agent when it comes to Tor Browser versus any other browser. Tor Browser makes many changes to try to make every instance look identical, everything from the HTTP headers to the window size to timing functions being rounded uniformly. Although I'm guessing he browses with JavaScript turned off, which does defeat most of the fingerprinting techniques.
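The idea can be sketched in a few lines of Python. The attribute names and values below are illustrative, not the real signals trackers use: a fingerprint is just a hash over whatever the page can observe, so any attribute that varies between users distinguishes them, and normalising those attributes makes every instance hash alike.

```python
import hashlib

def fingerprint(attrs):
    """Hash a browser's observable attributes into a short fingerprint."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# A stock browser over Tor still leaks distinguishing values...
icecat_over_tor = fingerprint({
    "user_agent": "Mozilla/5.0 ... IceCat/38.0",
    "window_size": "1366x741",       # real screen geometry
    "timer_resolution_us": "1",      # high-resolution timers
})

# ...while Tor Browser normalises them, so two separate instances
# produce the same fingerprint and blend into one anonymity set.
tor_browser_a = fingerprint({
    "user_agent": "Mozilla/5.0 ... Firefox/38.0",  # shared UA string
    "window_size": "1000x900",       # letterboxed to a common size
    "timer_resolution_us": "100",    # timers coarsened uniformly
})
tor_browser_b = fingerprint({
    "user_agent": "Mozilla/5.0 ... Firefox/38.0",
    "window_size": "1000x900",
    "timer_resolution_us": "100",
})

assert tor_browser_a == tor_browser_b    # instances indistinguishable
assert icecat_over_tor != tor_browser_a  # IceCat stands out
```

Disabling JavaScript removes most of the dynamic signals (timers, canvas, window probing), which is why it defeats the majority of these techniques even without normalisation.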
> It's also unique in that when an authority has a hole, breach or is an incompetent actor, it's very difficult to remove them from authority.
There is no proof of this. There are lots of systems in place to deal with mistakes and trust breaches. If it gets to the point that a root or CA needs to be removed from trust stores, it is removed.
Convergence was the perfect replacement, but it never gained any traction.
Moxie's talk at BlackHat[0] introducing it is a good watch for those unfamiliar with the idea, and if you want to be wistfully frustrated at what could have been.
We still don't know how that would have worked in practice. Even skilled people have trouble making trust decisions reliably for everything, and while we'd avoid the compromised-CA threat, I'm certain we'd start seeing equivalents: dishonest or incompetent notaries. Those might last longer, because fewer people would see the dodgy results when not everyone is using the same set of notaries.
If it became popular, it's really easy to imagine something like the Great Firewall being configured to block outside notaries to encourage people to use local notaries which are still under the control of the local authorities.
That's not to say it's not interesting work or potentially a solid improvement, only that I would be extremely hesitant to make absolute statements about an untested internet-scale security protocol. The approaches we're seeing work now do so because they're adding to well-understood protocols (e.g. HSTS, key-pinning, etc.) or don't change the trust model (if Google goes rogue, Chrome users are already screwed).
I have no doubt that there would be incompetent or dishonest notaries. The difference is that in an alternative universe where Convergence is used, a rogue notary doesn't destroy the trust of the entire system. When Symantec is a rogue notary: oh well, Mozilla and Google push out an update, no one uses Symantec anymore, and their notary just becomes irrelevant. In this reality, the darkest timeline, deciding to stop trusting Symantec immediately breaks 30% of HTTPS websites on the internet, so even though Symantec has given everyone plenty of reasons to stop trusting them, we have no choice. Same for Comodo: their notary would have stopped being used in 2011 (after the compromise that let attackers issue fraudulent certificates).
Instead, with Comodo and Symantec combined, we now have over 60% of HTTPS websites secured by authorities who are incompetent and/or dishonest.
That sounds like it was written by someone who doesn't completely understand Convergence, and also has an alternative agenda (they want their own solution adopted).
> It is not very user friendly. Users are asked to manage a list of notaries. This list of notaries is stored locally on the computer, or even the browser. Managing this list is not feasible for most users.
Browsers can replace the CA root certs with a notary list and pick notaries at random from it. This isn't the single point of failure that CAs are: multiple notaries have to collude to forge a consensus (whereas one rogue CA suffices), and rogue notaries can be removed on a whim, unlike CA roots, which are entrenched (removing a CA breaks every site that uses it).
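The collusion point can be sketched concretely. This is a toy majority-vote check, not the actual Convergence protocol, and the notary hostnames and fingerprints below are made up: each notary independently reports the certificate fingerprint it sees for a host, and the client accepts only a fingerprint that a majority agree on, so a single rogue notary can't forge a result.

```python
from collections import Counter

def consensus(notary_reports, threshold=0.5):
    """Return the certificate fingerprint a strict majority of the
    queried notaries agree on, or None if there is no such majority."""
    if not notary_reports:
        return None
    tally = Counter(notary_reports.values())
    fingerprint, votes = tally.most_common(1)[0]
    return fingerprint if votes / len(notary_reports) > threshold else None

# Three honest notaries agree; one rogue notary can't forge consensus.
reports = {
    "notary-a.example": "ab:cd:ef",
    "notary-b.example": "ab:cd:ef",
    "rogue.example":    "de:ad:be",  # colluding with a MITM
    "notary-c.example": "ab:cd:ef",
}
assert consensus(reports) == "ab:cd:ef"
```

Contrast with the CA model, where the equivalent check is "did any one trusted authority sign this?", so one compromise is enough.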
> It's not clear how well it protects (or can protect) if some notaries haven't yet cached the latest SSL certificate for a particular website.
This doesn't matter at all. The notary fetches the cert itself, checks the signature, and tells you whether it matches what you're seeing.
> It does not provide MITM protection on first visit.
Yes it does. If your connection is MITM'd, the certificate the notaries see won't match your perspective.
> Waiting for group consensus means all connections have higher latency (slower page loads).
Only on the first visit, before the notaries confirm the certificate you're seeing; after that you cache it and only need to check again if it changes.
> Both Convergence and Perspectives (see below) results in you sharing every website you visit with random third-parties.
Bounce notaries exist for this reason.
> With DNSChain, if privacy is a concern, you can run your own server and only rely on it
Thank you glass-! The information you've provided here I did not find in the Convergence documentation. I've updated the document to be accurate with your reply and added a new, rather significant critique that I somehow missed the first time around. Please feel free to re-review:
> It does not protect you if the MITM is sitting in front of the server you are visiting. Notaries would see exactly the same key that you see (the one that belongs to the MITM).
It makes me uncomfortable that the CA system is set up in a way that makes this necessary. If an alternative, such as Convergence, had taken off we wouldn't be in this situation.
If this is your argument, it also requires assuming that Convergence's trade-offs are identical in both scale and type to the current system's, at least with respect to this specific problem.
LibreSSL is mostly a drop-in replacement for OpenSSL, while BoringSSL has removed things that some applications depend on. OSes shouldn't/couldn't replace OpenSSL with BoringSSL (the article says as much) but could replace OpenSSL with LibreSSL (some already have).
LibreSSL has had roughly half as many vulnerabilities as OpenSSL since the fork (22 to OpenSSL's 43) and, before this, 0 sev:high compared to OpenSSL's 5.
Would you really disregard all that because of a 1-byte buffer overflow and a memory leak?
CVE-2015-0204 affected LibreSSL, but they thought it was a low-priority vulnerability when it is actually high priority. They fixed it, didn't notify upstream AFAICT, and just issued a new release.
LibreSSL isn't a panacea, and based on that, they can't even classify vulnerabilities correctly.
Most of the vulnerabilities in OpenSSL are in parts (e.g. DTLS) which are disabled in lots of builds.