[dupe] Mozilla moves to distrust the TrustCor CA (groups.google.com)
198 points by jamespwilliams on Dec 1, 2022 | 64 comments



Earlier today: https://news.ycombinator.com/item?id=33810755 (40 comments at the moment)


Right, in some sense m.d.s.policy is the more authoritative source, but unless the discussion ends up being distorted because of reporting elsewhere that's inaccurate the existing thread ought to "win". Doubtless HN has policy about this that I have not read.


The thread is hilarious.

The accused being passive-aggressive to, essentially, the judges is mind-boggling.

I also like how the TrustCor person in the discussion claims the spyware was the work of a rogue developer, that they can't do anything about that, and then gets this reply from the initial poster:

"This same rogue developer set up a proxy to receive data sent by the SDK and then forward it on somewhere else. This involved compromising one or more machines owned by TrustCor. This compromise went undetected by TrustCor/MsgSafe for 3+ years."

This compromise was undetected by the CA for 3+ years. Q.E.D.

And from Google [Edit: was Mozilla, thanks] "I tend to agree at this point that discussing the merits of the claims might be superfluous, because the conduct of the CA's representative is a more urgent issue [...]"


> And from Mozilla "I tend to agree at this point that discussing the merits of the claims might be superfluous, because the conduct of the CA's representative is a more urgent issue [...]"

This comment was made by Filippo Valsorda, previously engineer at Google now independent, not Mozilla.

edit: my bad, I didn't know Filippo had left.


Filippo Valsorda is now an independent consultant https://filippo.io/


Filippo actually left Google a few months ago and is now an independent security researcher.


The earlier implication was that the only unobfuscated sample of the malware SDK had MsgSafe URLs hardcoded in it.

TrustCor's response was to claim that the rogue dev edited the SDK and that the server at the URL endpoint was actually just a proxy.

The initial poster is just recapping/reframing to match her claims.

(Her explanation of how she knows it was a proxy is a fantastical story about finding a Docker image in an old backup. Nothing said before or after gives the impression she would know how to do that.)


The management class is unaccustomed to this sort of questioning.


>The accused being passive-aggressive to, essentially, the judges is mind-boggling.

They probably knew they'd end up distrusted anyway. I wouldn't be bowing to my executioner either.


It really didn't come off to me as the poor conduct people painted it as, at all. Slightly annoying, sure. Partially written with input from a lawyer, likely. But all I saw was a person rightfully defending against rather unsubstantiated attacks on the integrity of her business.

The whole thing is guilt by association. Everyone agrees that no evidence of mal-issued certificates exists. But some other company that uses your CA product had a rogue dev, and because they are financially related through their history, all of a sudden the industry is in a panic?

If the integrity of the people funding the operations of CAs is important, as is suggested towards the end (and with which I happen to agree), then we should create policy and scrutinize every CA equally. I don’t like mob rule.


The “manager” they name, whom the contractor reported to, is the CTO of the CA, and has been since 2014.

On Reddit they were begging for community developer help during that time, and talking about how their team was only a few people. They allude to this small team notion in the thread as well, where they also suggest most testing was done internally (though archive.org catches them out on this - the advertised links to the android app still existed until after the article).

Putting the above pieces together strongly suggests that the CTO was likely testing this unauthorized, unchecked malware, which was reporting back to a VM running a proxy that passed the data on to an undisclosed location.

During this era, VirusTotal has behavior captures of several APKs, all phoning home to this rogue server after broad attempts at capturing extensive information about the system, including contacts, location, interface identifiers, and root-access checks. There is some variation in behaviors during that time as well. Further study could reveal worse behaviors than have yet been reported by the AppCensus folks.

The way the information was presented further reinforces that the CA appears to have unfettered access to the systems, code, logs, and backups of the mail hosting business. Tied to the CA's CTO being named as the contractor's manager, this all strongly suggests there is likely no significant separation of things like phones, workstations, and access controls among those who seemingly cross between these companies on an ongoing basis.


I see it differently.

The person tries, with weasel words, to explain away technical problems to engineers, which does not work. Especially all this "but it was a beta and never released", which is completely beside the point (which makes you wonder: did the person not understand the point at all, or are they trying to wiggle out again?).

The passive-aggressive tone towards some of the security researchers, e.g. "Who are you? Why are you here?", does not show good intent. The wiggling out with "that's not important, that happened some years ago" is not convincing.

"But some other company that uses your CA product"

Another company that you own had an "alleged" rogue dev, but the person says "no one can know, that's the way of life, so we should move on". It feels like an SNL police sketch where the police ask a question and the suspect answers "I don't know what happened last week, and can we really know what is going on, does anyone know? And it was last week, we should just move on, goodbye officer".

"If the integrity of the people funding the operations"

It's not about the funding but the two companies had (have?) the same officers.

The only way forward that could have been successful IMHO:

1. I bought the operation (Trustcor) 1 year ago and have no documents about prior company development or involvements because I didn't get any when I bought the company - and the people I bought the company from don't answer my emails.

2. But I did switch the auditor (has not happened) to make sure everything is ok

3. I will do my best to find out what has happened back then.


I really don’t understand the claims of passive-aggressive tone or of explaining away technical problems to engineers. She was defensive, no doubt, but that’s to be expected. She was trying to establish fair ground for responding to what probably seemed to her like absurd, unfounded accusations. It sounds like she was in the middle of investigating things herself. She probably had legal limits on what could and couldn’t be discussed publicly and was trying to communicate that to people screaming at her.

None of the behavior was unexpected given the situation. That's really what I’m hung up on. There were two instances where people asked a bunch of questions, and she responded to the 15 different ones being asked, and then immediately there were like 2 responses from bystanders to the tune of “that response doesn’t engender confidence because TL;DR”, when in fact, if you cared to read it, it directly answered like 13 of 15 questions, and for the other two said essentially that she was investigating the issue and didn’t immediately know (which you say is the correct response). If I faced a wall of questions, my natural thought would be to be thorough and respond with a carefully thought-out wall of answers…


I guess assessing the answers is subjective; I've read several of her long replies, and they are either weak or do not contain an answer to the question. E.g.:

"In Response to "How was an unobfuscated version of the Measurement Systems SDK incorporated into MsgSafe?":

Our company never published a production or supported version of the MsgSafe mobile app containing the Measurement Systems SD [...]"

This does not answer the question. It's irrelevant whether the software was published or not; the question is: how did MsgSafe get an unobfuscated version of a piece of software where everyone else got an obfuscated version? Why does MsgSafe include the only known unobfuscated version, if they are not the primary authors?

Answers are intertwined with marketing, e.g. "We have innovated and lead the market in the adoption of TLS server certificate issuance for one of the longest-running and most respected dynamic DNS services worldwide and the positive impact this move has made cannot be overstated." - which has nothing to do with the issues at hand.

About security, the most concerning answer:

(Their website right now states: "Private, end-to-end encrypted")

In Response to: "[...] Nevertheless, I think it is reasonable expectation that a root certificate authority can get the crypto right, and so I'm concern regardless of the reason why.”:

[...] As far as you not believing the product is offering adequate encryption capabilities, let me first say that I do not want to drag the names of any other encryption products or services through the mud. To address your concerns, based on our teams exhausted research into many other providers offering similar services, one basic rule applies; whether the encryption or decryption functions are occurring on the client (often in javascript) or on the server, the server is still storing and handling the key material in the process. [...] If encryption occurs on the client then the key material is passed from the server to the browser over TLS. [...] As the MsgSafe website explains, our team has found that implementing the key material and encryption/decryption processing on the server provides security without the additional processing requirement on the client."

Either this is snake oil ("Private, end-to-end encrypted") or they don't know what they are doing.

There are many more of those answers in the thread, but dinner's ready and I can't https://xkcd.com/386/


I agree there was a lot of mudslinging in that thread, but this is the key bit from Mozilla's response, supported by statements which TrustCor hasn't disputed:

> Certificate Authorities have highly trusted roles in the internet ecosystem and it is unacceptable for a CA to be closely tied, through ownership and operation, to a company engaged in the distribution of malware. Trustcor’s responses via their Vice President of CA operations further substantiates the factual basis for Mozilla’s concerns.

It's not some other company; it's the same owners and operators doing malware under one name and running a CA under another.


> It's not some other company, its the same owners and operators doing malware under one name and running a CA under another.

Right! That’s insane.

Even if they’re innocent, which they may be, it’s too close of a connection: I can’t bet on a parent company remaining ethical when they’re in a position to decrypt all the traffic they handle.

CAs need to be trusted absolutely. Given the many well-documented instances of unethical corporate behavior, I won’t wait for specific evidence of malconduct. This isn’t criminal justice, this is risk assessment 101. A CA whose parent company owns a company that produces malware presents a significantly higher risk of abuse than a CA without a sister company developing malware. Even if they don’t deliberately manufacture malware, the sister company demonstrated operational incompetence that’s ripe for abuse.


Was that true? I believe that amounts to speculation by the security researchers. Rachel said that at most there were shared incorporation services / early investment, but that the CA has no legal relationship with the other company doing malware, that any similarity of names on founding documents is purely speculation, and that it is furthermore no longer relevant since TrustCor executives hold all authority.


These are the references that Mozilla listed:

"[6] The identical corporate officers were acknowledged in Rachel McPherson’s initial response and confirmed in a company document submitted privately by Rachel to Mozilla.

[7] Ian Abramowitz is described as the CFO of TrustCor on their website and Rachel McPherson’s initial response notes “They are strictly passive investors, with the exception of Ian Abramowitz”. In a company document submitted privately by Rachel to Mozilla, Ian Abramowitz signs an agreement with TrustCor on behalf of both CHIVALRIC HOLDING COMPANY LLC and FRIGATE BAY HOLDINGS LLC."


I already responded to your other comment here[1], however any lawyer would advise against making condescending statements to cops, judges or anyone else for that matter.

Further, no lawyer would advise making a statement such as "we've been asked to avoid discussing our legal structure because we may be punished by tax authorities", because the statement itself can be taken as an admission of wrongdoing.

[1] https://news.ycombinator.com/item?id=33814291


> Everyone agrees that no evidence of mal-issued certificates exists.

This is one thing CT logs are useful for. As soon as logging certificates to a public log becomes commonplace, then misissued certificates can't be used in private without a high chance of the world being told about what's going on.
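The append-only property comes from CT logs being Merkle hash trees (RFC 6962): once a certificate is a leaf, it can't be quietly removed or altered without changing the signed tree head. A minimal sketch of inclusion-proof verification (simplified padding, not the exact RFC 6962 encoding):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _leaf_level(leaves):
    # 0x00 / 0x01 prefixes separate leaf and interior hashes, as in RFC 6962.
    return [h(b"\x00" + leaf) for leaf in leaves]

def merkle_root(leaves):
    level = _leaf_level(leaves)
    while len(level) > 1:
        if len(level) % 2:               # simplification: duplicate last node
            level.append(level[-1])
        level = [h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes (with their side) proving leaves[index] is in the tree."""
    level, proof = _leaf_level(leaves), []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))  # True = sibling on left
        level = [h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf, proof, root):
    node = h(b"\x00" + leaf)
    for sib, sib_is_left in proof:
        node = h(b"\x01" + sib + node) if sib_is_left else h(b"\x01" + node + sib)
    return node == root
```

A monitor who keeps only the signed root can then check any certificate's membership from a logarithmic-size proof, which is what makes covert misissuance hard to hide.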


Don’t you have to trust the CA to actually log all the certs they issue? What’s to stop a rogue CA from logging all but a few key certs?


Anyone else can log a cert too. There was talk of Chromium logging any cert that chains to a public root that they find un-logged.


Nice. I had no idea such a thing existed, tbh.


> The whole thing is guilt by association. Everyone agrees that no evidence of mal-issued certificates exists. But some other company that uses your CA product had a rogue dev, and because they are financially related through their history, all of a sudden the industry is in a panic?

No. The problem is that the entity in question is a root CA, and there is an expectation that root CAs demonstrate behavior befitting the trust given to them.

> But all I saw was a person rightfully defending rather unsubstantiated attacks on the integrity of her business.

If all you do is skim the thread and avoid actually reading the walls of text, sure.

But if you look more closely at the TrustCor replies... wow.

This response to TrustCor sums it up pretty well: https://groups.google.com/a/mozilla.org/g/dev-security-polic...

> It has never been the case that compliance with a narrow set of rules creates trust in a human endeavor. The decision to trust a CA is an ongoing one, and the behavior of its representatives is evaluated in that light, as representative of the attitude taken by the organization to its responsibilities. Your aggressive bloviation and evasion contrasts quite negatively to the openness with which other CAs have addressed issues before, and is most certainly affecting the trust that I would consider reasonable to place in TrustCor.

Here are some choice highlights from TrustCor's responses:

1- starting their email response with an (unjustified) ad hominem attack on the researchers

> Interesting that this is the first time you or anyone else in your research group has reached out to us, except if you count the Washington Post journalist who claims in his article that we did not respond, which is one of the many false claims made in the article since we responded very quickly to his contact. And before I begin, you should probably clarify if your views are representing The University of Calgary’s views, The University of California at Berkeley’s views, or your commercial endeavor AppCensus’s views, or your views representing any customer, agency, etc…? If in fact these views are completely independent and personal, that is also helpful to note.

2- suggesting that TrustCor is a more reliable CA than Google is, based on Gmail being a "high volume spam sending system"

> In Response to Ryan’s (Google) Additional Observations [...] Unfortunately, [MsgSafe.io] and frankly all free or low-cost email service providers, are often used by ransomware developers because of how they lend to privacy and anonymity, and how easily they can be obtained. (examples of gmail being the most popular across ransomware attacks [1], [2]). [...] we took an extra step to check constantly for receive-rate abuses when spammers send mail through another high volume spam sending system such as Gmail,

3- going on a long rant about being singled out, when everyone is primarily repeating "please clarify your corporate structure and stakeholders to us, because you keep dodging the question"

> In reading related reporting and blogging off-list, I need to address an elephant in the room. Apparently it may also come as a surprise to some readers and the researchers themselves that other root program members are in fact international governments, and some are also defense companies, or companies who are wholly-owned by defense companies and/or state-owned enterprises, meaning "businesses" that are completely owned or controlled by governments. Further, some of those governments are not free/democratic and in fact some have tragic modern histories of basic human rights violations. We are none of those things and our company does not identify with those values. Given this point above, why of all potential targets are these researchers interested in TrustCor? They could go after countries with human rights violations that have placed a CA in the program. They could go after countries that suppress free speech that have placed a CA in the program. They could go after companies that are smaller CA/issuers than us, or much larger ones. They could go after CAs that are actually state-owned enterprises (owned by governments). But they aren’t. So why? Why choose to spend their time on this and on us in particular? We’ve been asking ourselves this since it began. We’ve only come up with 2 possible answers. (1) They saw that single domain name in an old registrar account and simply fell into recursive confirmation bias to assume everything stemmed from that... or (2) They make money in their for-profit enterprise if they can find any American nexus, so they can involve the American government and the FTC and create pressure with American journalists. Well, this mystery solves itself. They do get paid by FTC in their own web of companies. They do tip American journalists using their university affiliation and then plugging their company in the articles. And their American customers apparently don’t reward them to go after foreign companies. 
So this represented a great opportunity for them if they could prove the American companies they saw had anything to do with us — unfortunately for them, they don’t. We are not an American company or a company owned by Americans. If they’d known beforehand, they’d have probably paid no attention just like they’re not paying attention to other program members who literally are governments or state-owned/defense companies. I think this is all about self-aggrandizing: getting themselves and their company known, and about making money. These guys are in business, and they’re bullies. They wear the hat and shirt of university researchers from different multinational universities and yet they’re involved in the same startup company/business and other related businesses that benefit financially from the exposure and follow-on work, and they’re misusing this platform and betraying the purpose of this mailing list. It’s also worth noting: the researchers followed no semblance of responsible disclosure processes which are well established in the industry. They never attempted to work with our product team or management to express their concern, or suggest improvements, or discuss potential vulnerabilities. Instead they opted for maximum public impact and attempted to pressure this industry body with journalism following their sensational false narrative. They were even able to get an American journalist to publish a story without proper fact checking, and without speaking to any representative of our company even though two of us responded to the journalist immediately. Our CTO provided proof of this in writing in his letter with screenshots.


> 1- starting their email response with an (unjustified) ad hominem attack on the researchers

I do not read this as an attack. I think it’s reasonable to ask if their views are representative of their employer when they have not stated. They claim that proper disclosure wasn’t followed, and that the WaPo journalist was untruthful in their reporting. They’re defending themselves, not attacking their accusers.

I find point two the most concerning. It feels like a case of deflection or whataboutism. Had they not called out Google specifically, it would feel less aggressive.

Point three is … bizarre. The responses seem to come from someone incapable of handling stressful situations (not something I want in a CA), or someone trying to DARVO. The inability to properly refute claims that should be easily disproved doesn’t logically imply a potentially global conspiracy.

I expect a CA’s representatives to be calm and measured in their responses.


Of course, some people see things as attacks and some people see things as questions.

"And before I begin, you should probably clarify if your views are representing The University of Calgary’s views, The University of California at Berkeley’s views, or your commercial endeavor AppCensus’s views, or your views representing any customer, agency, etc…? If in fact these views are completely independent and personal, that is also helpful to note."

when indeed he starts by introducing himself as

"I'm Joel Reardon, a professor at the University of Calgary, who researches privacy in the mobile space." and that's it.

If there was anything else there, she should just have said "It would be good if you disclosed that you are paid by a competitor/..../have a financial interest", but insinuating that the poster has an agenda without providing any evidence (when the poster himself presented 34 sources for his post) looks like an ad hominem attack.

When, indeed, she doesn't seem to introduce herself or explain her own affiliations or shareholdings.


Why's it "unjustified"? Surely "trust" is the key issue here, so clearing up what may or may not have been written and what happened with correspondence and views would be key I would have thought.


I think the “attack” in point 1 wasn’t an attack, and that the questions about representation were not merely justifiable, they’re completely reasonable. It makes sense to ask if this is the opinion of a single researcher, or if the department and employer share the opinions. I imagine it changes their legal stance if they’re proven to be innocent of the claims against them (i.e. they’d sue the university for reputational damage stemming from improperly conducted research/improper notification processes).

However, I think excerpt 1 paints a different picture in light of excerpt 2, and especially excerpt 3. 3 feels a lot like DARVO, with their rumination on being a hapless victim of some conspiracy. It comes across as, “We can’t disprove the claims against us, even in the private documents we’ve sent to Mozilla, so we’re going to claim they’re being unfair and abusive, and they don’t understand corporate structures.” They’re suggesting that the people responding from Mozilla and Google don’t understand corporate culture, or that they’re too incompetent to use the vast resources at their disposal to verify their beliefs about TrustCor’s corporate structure. This is a big deal; surely the people responding from Mozilla and Google could have the business documents reviewed by a legal expert at their respective companies, assuming they don’t have direct knowledge.


"It makes sense to ask if this is the opinion of a single researcher, or if the department and employer share the opinions."

1. No, it doesn't make sense. He said:

"I'm Joel Reardon, a professor at the University of Calgary, who researches privacy in the mobile space." and not

"I'm Joel Reardon, a security researcher" - so it's clear it's his professional opinion as a professor, not a private or commercial opinion.

2. No university shares the opinions of its professors; that's kind of the point of a university.


Choice highlights, indeed. Those were reactionary responses. She didn't cast the first stone. Read up the thread.


Seems reasonable to me. Although it's not ideal to distrust without a "smoking gun", it is (as pointed out) unacceptable for these kinds of ties to exist between a CA and a malware company.

Seeing how a closer look by Mozilla, Google and Apple into publicly available data quickly turned up more points of suspicion, I wonder how much scrutiny is put into CAs in general, and whether it's enough. Mozilla currently lists 148 trusted certificates [0] (soon to be 145, with TrustCor's departure).

[0] https://ccadb-public.secure.force.com/mozilla/CACertificates...


Certificates are broken anyhow; we might as well do away with them altogether. How am I ever able to research, verify, and in the end trust all the hundreds of certificate providers out there? Answer: I don't, nobody does, and that's why it will never work. What's wrong with SSH's encryption, btw? Can't we put that in a browser?


SSH's encryption isn't much different from TLS. The big difference is in how the authentication works. The way most people use SSH, all host keys are effectively self-signed, and the only way it knows which servers to trust is by storing a list of keys it has seen before, so you're just hoping that you don't get MITM'd the first time you connect. That doesn't scale to a web browser, where you're connecting to thousands of hosts you've never seen before.
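That trust-on-first-use ("TOFU") model is easy to sketch. This is a toy in-memory version of what `~/.ssh/known_hosts` accomplishes; the names and return values are illustrative, not OpenSSH's actual format:

```python
import hashlib

# Hypothetical in-memory "known_hosts": host name -> key fingerprint.
known_hosts = {}

def fingerprint(public_key: bytes) -> str:
    return hashlib.sha256(public_key).hexdigest()

def check_host_key(host: str, public_key: bytes) -> str:
    fp = fingerprint(public_key)
    if host not in known_hosts:
        known_hosts[host] = fp        # first contact: trust and remember
        return "trusted-on-first-use"
    if known_hosts[host] == fp:
        return "match"                # same key as every previous connection
    return "MITM-warning"             # key changed: possible attack (or rotation)
```

Note the weak spot: the first connection is taken on faith, and a legitimate key rotation is indistinguishable from an attack without some out-of-band channel.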


I personally don't visit thousands of hosts I've never seen before on a reasonably short timescale. I do visit some hosts/websites very often. Wouldn't SSH be ideal for those? See it like this: I connect to yc for the first time, hope I don't get MITM'd or have some web-thingy to verify manually, and after that, voila, forever good encryption and verification. This way I don't have to trust some random CA from a long list of CA's I know nothing about.

And even for every new site you visit, sure, you must hope you don't get MITM'd. Is that worse than hoping the random CA the site uses isn't compromised and a hacker uses that to MITM you? How does it compare to risk of the site being hacked already?

My point is, alternatives exist, good ones, but website security feels like a business that's captured by a bunch of CA's and browser manufacturers that don't want change for selfish reasons.


What you're proposing doesn't sound that different from public key pinning (HPKP), where the web server tells the browser to distrust any certificates other than the pinned ones (or certificates from any other CAs). HPKP is deprecated now, though.
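For reference, an HPKP response header (RFC 7469) looked roughly like this, wrapped here for readability; the base64 values are placeholders standing in for SHA-256 hashes of the pinned public keys:

```
Public-Key-Pins: pin-sha256="d6qzRu9zOECb90Uez27xWltNsj0e1Md7GkYYkVoZWmM=";
                 pin-sha256="E9CZ9INDbd+2eRQozYqqbQ2yXLVKB9+xcprMF+44U1g=";
                 max-age=5184000; includeSubDomains
```

A big reason browsers dropped it: a site that pinned the wrong key (or lost it) bricked itself for every returning visitor until max-age expired, and the mechanism could be abused for ransom by anyone who briefly controlled the server.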


Why aren’t we using SSH for everything?

https://medium.com/@shazow/ssh-how-does-it-even-9e43586e4ffc


The point of certificates is not to encrypt the traffic, but rather to verify that the server you are talking to is who they claim they are. The server showing you their certificate is like you logging into an SSH session, which I've been doing for a long time with a certificate as well, actually.


In my browser it is either/or, though. I can have encryption and verification, or neither. Technically it would be feasible to have encryption without verification, and thus without CAs. Why isn't that an option?


You can do what you want by creating your own self-signed certificates. It's not that hard, just a couple of openssl commands. Browsers will throw up a big scary warning that the certificate can't be verified (as you'd expect), but most browsers let you click through that warning, and you get encrypted but unverified traffic.
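A minimal sketch of those couple of commands, assuming a stock `openssl` CLI (file names and the CN are arbitrary):

```shell
# Generate a private key and a self-signed certificate valid for one year,
# with no CA involved at any point.
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout key.pem -out cert.pem -subj "/CN=localhost"

# "Self-signed" means issuer and subject are the same entity.
openssl x509 -in cert.pem -noout -subject -issuer
```

Point the web server at `cert.pem`/`key.pem` and you get exactly the trade described: encrypted traffic, a scary browser warning, and no third-party verification.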


Because encryption without verification is practically useless.


More specifically, because encryption without verification allows for MITM and other chosen-ciphertext attacks which trivially break the confidentiality provided by the encryption.

Encryption needs entity authentication (verifying who you're talking to), data authentication (verifying that the ciphertext has been created by one of the parties in the communication), and a cipher to provide confidentiality in practice.
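A toy illustration of why entity authentication matters: in an unauthenticated Diffie-Hellman exchange, nothing stops a middleman from substituting her own public key in both directions (tiny parameters, illustration only, not a real group choice):

```python
import random

random.seed(7)                  # deterministic toy run
p, g = 4294967291, 5            # small prime (2**32 - 5) and a toy generator

def keypair():
    priv = random.randrange(2, p - 1)
    return priv, pow(g, priv, p)

a_priv, a_pub = keypair()       # Alice
b_priv, b_pub = keypair()       # Bob
m_priv, m_pub = keypair()       # Mallory, sitting on the wire between them

# With no authentication, Mallory replaces each party's public key with hers.
alice_secret = pow(m_pub, a_priv, p)   # Alice believes she shares this with Bob
bob_secret   = pow(m_pub, b_priv, p)   # Bob believes he shares this with Alice

# Mallory derives both "shared" secrets, so she can transparently decrypt,
# read, and re-encrypt every message between them.
mallory_with_alice = pow(a_pub, m_priv, p)
mallory_with_bob   = pow(b_pub, m_priv, p)
```

Both endpoints see a perfectly functional encrypted channel; only the identity check (a certificate, a pinned key, a known_hosts entry) would have exposed the swap.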


You can set up your clients and servers to prefer and/or allow the NULL cipher.


So how many of the other CAs work with spyware / NSA / MI5 etc? Or corporate espionage? I doubt that these are the only bad eggs.


If you have evidence, I'm sure you can bring it to the attention of the Mozilla root store inclusion program and the CA/Browser Forum. Moreover, "there are other criminals who have gotten away with much more" is not an argument.


I think the steel-man version of the argument is: "You showed that there is no effective monitoring or transparency available to me as a user, so where does the trust come from? What fundamentally causes Mozilla (or anyone) to trust CAs more than randomly distributed certificates?"


The CA/Browser Forum's requirements and enforcement actions such as this one (and against DarkMatter, CNNIC, and the like) give me the required confidence to trust them, even though I'd agree it's not a perfect process.

Your average user is unlikely to begin to understand why a CA would be trustworthy, and a web of trust model only works for social situations but not for certificate distribution.


"A moose once bit my sister. Therefore all meese must be sacked".

I have trouble seeing this as a steel-manned version of anything. "People have uncovered a flaw in the system, therefore the entire system is unfit for purpose" does not make a compelling argument. It displays selection bias, hasty generalization, the nirvana fallacy, and something about babies and bathwater.


If people have "uncovered a flaw", but there is reasonable expectation that the flaw is very widespread and broadly ignored, then there is a reasonable suspicion that the flaw is being weaponized in this case. This is why in many legal systems it is in fact a valid defense to note that the law you are being prosecuted under is not being broadly applied, implying state caprice and corruption.


> uncovered a flaw in the system

I will argue it is not a flaw in some random aspect of the system, but in the main purpose of the system, which is to vet the companies trusted with CA signing.

Do you think I will buy from a restaurant after finding they had expired food? Good food is the reason I'm there in the first place.


CAA records and CT logs work, do browsers check them?

I know nobody likes DNSSEC, but DANE works too :)
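On the first question: browsers don't check CAA themselves; CAs are required to check it at issuance time (RFC 8659). A hypothetical zone snippet restricting who may issue for a domain:

```
; Only the named CA may issue certificates for example.com (RFC 8659).
example.com.   IN CAA 0 issue "letsencrypt.org"
; Forbid wildcard issuance entirely.
example.com.   IN CAA 0 issuewild ";"
; Where a CA should report policy-violating issuance requests it sees.
example.com.   IN CAA 0 iodef "mailto:security@example.com"
```

Since the check happens inside the CA, CAA constrains compliant CAs; a fully rogue CA would simply ignore it, which is why it pairs with CT monitoring rather than replacing it.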


DANE works on the assumption that DNSSEC is secure, but DNSSEC is just another PKI that's way worse and less transparent.


How so?

DNSSEC is a PKI that follows DNS delegation, and no CA can issue certificates out of scope by definition.

That alone should be enough to consider it a strictly better subset of the browser CA PKI model.
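Concretely, DANE expresses that scoping as TLSA records published under the service's own name; a hypothetical record (hash value is a placeholder):

```
; DANE-EE (usage 3, selector 1 = SubjectPublicKeyInfo, matching 1 = SHA-256):
; the TLS server's public key must hash to this value; no CA is consulted.
_443._tcp.www.example.com. IN TLSA 3 1 1 8cb0fc6c527506a053f4f14c8464bebbd6dede2738d11468dd953d7d6a3021f1
```

Only the zones on the delegation path from the root to example.com can vouch for (or forge) that record, which is the scoping property being claimed here.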


> DNSSEC is a PKI that follows DNS delegation, and no CA can issue certificates out of scope by definition.

Sure, and with that you are forced to trust your name servers (and/or the registry's) and your TLD's and the roots'.

All that with little choice in the matter, and little to no transparency into the process.

Just one example - if your TLD leaks their keys, that's sufficient to forge all the replies a middleman would need and nobody would really notice.

With WebPKI you can use CAA records and Certificate Transparency logs, plus you can get some extra assurance from the fact that they have to comply with the policies set by independent trust stores.

> That alone should be enough to consider it a strictly better subset of the browser CA PKI model.

It's a subset that leaves out the parts that would make it better than WebPKI. Right now it just complements WebPKI, at best.
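For what it's worth, the CAA check a CA performs before issuing can be sketched briefly. This is a simplified toy version of RFC 8659 processing (real processing also handles "issuewild", critical flags, and DNS failures); the records and names are made up:

```python
# A CA looks up CAA records, climbing toward the root until it finds any,
# and may issue only if its identifier appears in an "issue" tag (or no
# CAA records exist anywhere on the path).
def caa_permits(zone_records, domain, ca_id):
    """zone_records: dict mapping names to lists of (tag, value) CAA pairs."""
    labels = domain.split(".")
    for i in range(len(labels)):
        name = ".".join(labels[i:])
        records = zone_records.get(name)
        if records:  # the first non-empty CAA set found is authoritative
            return any(tag == "issue" and value == ca_id
                       for tag, value in records)
    return True  # no CAA records anywhere: any CA may issue

records = {"example.com": [("issue", "letsencrypt.org")]}
print(caa_permits(records, "www.example.com", "letsencrypt.org"))  # True
print(caa_permits(records, "www.example.com", "evil-ca.example"))  # False
print(caa_permits(records, "other.org", "evil-ca.example"))        # True
```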


If the TLD leaks their keys, and an attacker can impersonate a registrar, you are screwed today. That's on top of the possibility that the CA -- no, any CA -- leaks their keys.

CAA and CT are absolutely wonderful initiatives and do a lot to keep the creaky CA PKI usable. But that's on top of the domain registry which underpins everything.

The registry controls ownership of domains. With that comes an indirect power to control who can get domain-validated certificates issued. Then on top of that we also have to trust the CA, which only does the actual issuing.

That's just strictly worse for no upside other than historical reasons.

Look at the most popular protocols for domain issuance. They're variants of a simple theme: store-and-forward signed ASCII messages, with crypto every step of the way. Yet most of the large TLDs manage with fewer screwups than many of the CAs.

In my anecdotal experience both types of institutions are staffed with very competent people, but if given the choice between trusting the ccTLD and the CA, I would not hesitate to pick the ccTLD.


> But that's on top of the domain registry which underpins everything.

Everything but trust. A registry lying to issue certificates for its domains will become visible real quick. If CT makes "creaky WebPKI usable" then DNSSEC is just unusable.

> Yet most of the large TLDs manage with less screwups than many of the CAs.

Hard to screw up what you don't have. Even if a bunch do implement DNSSEC, nobody has really trusted them with the task in a way that it'd actually matter.

TLD operators can't even mandate the use of DNSSEC by registrars, requiring audits is lightyears away in comparison. WebPKI at least does that.

Nobody in their right mind would claim an opaque system with zero oversight is somehow better for trust than the alternative.


The above comment is not right. How does the registrar come into this?

I don't know what you base your experience on, but it is not representative of the better ccTLDs. The oversight there is beyond what you have in any CA. That much is a fact.

If you have specific criticism, feel free to raise it with any of the people concerned, for example at the next IETF. In my experience criticism is welcomed and listened to. That is, indeed, what builds trust.


> The above comment is not right. How does the registrar come into this?

People often use their name servers (and possibly delegate a zone further) instead of adding their keys directly. Or at least people use their registrar's interface for managing those keys.

> The oversight there is beyond what you have in any CA. That much is a fact.

Absolutely not. There's no system for monitoring key (mis)use at all, and there isn't a way to distrust anyone who does violate an agreement.

Maybe you mean oversight internally by some ccTLDs, but that does not build trust externally.

> If you have specific criticism, feel free to raise it with any of the people concerned, for example at the next IETF. In my experience criticism is welcomed and listened to. That is, indeed, what builds trust.

These issues have been described in detail, but you've skipped over them a few times now. Plus they are not really for the IETF to solve, as they mostly relate to the human concept of trust (or the lack of it), not the raw technical cryptographic aspects.

What indeed would build trust would be adopting public audits, transparency and revocation methods from WebPKI.

Let's start by logging all signed zone files. The fact that this doesn't already exist shows how much worse DNSSEC is. Don't skip this point this time.


It's cryptographically signed. It can be validated. Browsers can implement it if they wish.

I mean, it's not like you need it for every HTTP request, and it's not like DNS is slow.

Yes, there are potential risks. Keys can be leaked, just as in any other scenario.


> It's cryptographically signed. it can be validated. browsers can implement it if they wish.

Valid != trustworthy.


No I don't have evidence - however logical deduction shows that the probability of this happening is high. Any system involving humans is fallible, so it would be naïve to think that it doesn't happen.

Or put another way: if I was the NSA or MI5 this is exactly how I would attack the problem of traffic interception or targeted black ops. Get a puppet CA via hook or crook.

Totally agree that "there are other criminals who have gotten away with much more" is not an argument; I'm not sure what that has to do with my comment? I'm certainly not suggesting that. If anything I suggest that such systems are pretty much broken by design (at least if you care about state actors / extremely well funded actors).


My assumption is that most CAs have someone working for them who is also employed by an intelligence agency, possibly more than one from any given agency and more than one agency per CA (and more than one national government, e.g. both Russia and the USA have intelligence interests in Russia); this may be any combination of actively inserting malware, passively watching to get forewarning of zero-days before the CAs themselves know about them, and actively advising the CAs about exploits the agencies know about that aren't public yet.

Most of the agencies are likely to be more subtle than this, given it took Snowden's whistleblowing for us to learn about much of what they actually got up to.

But not all of them will be super-competent, and some of them will be spotted from the outside in much the way this was.


Certificate Transparency should be able to detect that


I would assume that any decent spying agency can obtain any certificate they want from some CA trusted by Chrome, Firefox and Safari.

And not just the NSA / Mossad, but also the spying agencies of Estonia, Slovenia and Mexico.

- Or is there reason to expect that is not happening?

- And anyway, that is not a good reason to also let known malware companies do the same.


The Certificate Transparency system requires certificates issued by CAs to be publicly reported; otherwise web browsers will reject them. Publicly reported certificates can then be monitored by anyone, and anything suspicious can be reported.
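A CT monitor's core job can be sketched like this. It's a toy example with made-up log entries; real monitors stream entries from the CT logs themselves or query services such as crt.sh:

```python
# Scan publicly logged certificates for your domains and flag any
# issued by a CA you did not authorize.
def find_suspicious(logged_certs, my_domains, trusted_issuers):
    return [c for c in logged_certs
            if c["domain"] in my_domains
            and c["issuer"] not in trusted_issuers]

log_entries = [
    {"domain": "example.com", "issuer": "Let's Encrypt"},
    {"domain": "example.com", "issuer": "Shady CA"},   # possible mis-issuance
    {"domain": "unrelated.net", "issuer": "Shady CA"}, # not our domain
]
alerts = find_suspicious(log_entries, {"example.com"}, {"Let's Encrypt"})
print(alerts)  # [{'domain': 'example.com', 'issuer': 'Shady CA'}]
```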



