Kaspersky: SSL interception differentiates certificates with a 32bit hash (chromium.org)
254 points by ivank on Jan 3, 2017 | 144 comments



I don't understand how these antivirus vendors are still in business. Even when they do have marginally better detection rates than the built-in solution, the licensing, annoying popups, system slowdown, and security-defeating 'features' make them a losing proposition.

Pre-bundled and even purchased AV is so dramatically deleterious to PC performance that MS should kill it as a public service. It gives the entire ecosystem a bad reputation and nowadays is completely unnecessary.


That's part of why I keep pointing new PC buyers to the "Signature Edition" program [1] where Microsoft does forbid pre-bundled AV on PCs sold through that program (at Microsoft Stores and participating Best Buys and other stores if you know to ask).

[1] https://www.microsoftstore.com/store/msusa/en_US/cat/categor...


I can't wait until retail stores start trying to charge more for signature edition. Pay more for less!


To be fair, many internet business models operate on the principle of paying to get less garbage (such as ads). And it makes sense from an economic perspective that if you're not being bribed by crapware vendors you'd want to recoup the opportunity cost.


It's hard to compare Signature Edition models directly to OEM direct models (because OEMs don't want you comparing store-to-store in most cases), but from what research I attempted, the Signature Edition builds I examined seemed to cost roughly $50-$100 more than their direct equivalents the last time I tried to do that math, which is about what I'd expect crapware vendors are throwing at the PC OEMs to get their stuff installed. (It says a lot about the long-term revenue expectations of the crapware vendors if the potential value of a single "trial install" might be worth as much as $100.)


They've done a wonderful job convincing the majority of people that anti-virus is something that you absolutely need, full stop. Once you have that ingrained in someone, they'll put up with all the annoyances.


> They've done a wonderful job convincing the majority of people that anti-virus is something that you absolutely need, full stop.

I think Microsoft did that for them by being so terrible at security for so long. Windows Defender came out 5 years after XP shipped and several years after average XP systems were riddled with malware.


Microsoft was effectively barred from shipping anti-virus with the operating system because it was deemed "anti-competitive" under the EU/DOJ anti-monopoly decisions. Anti-virus wasn't seen as a core security need for an operating system at the time, because the third-party AV vendors fought hard for that and used the "Browser Wars" as justification.


Then they should have made a much more secure operating system. You only need an AV if your OS has already failed.


So literally every OS of any notable use in the world has failed then? Because I'm not aware of any widely deployed OS that hasn't had a successful exploitation written for it.


Most OSes have some level of penetration, but it is clearly not the same with Windows. Even with the latest exploits, freely spreading viruses on Android are rare, but hook a Windows machine up to the Internet without a router or firewall and you'll get a virus in seconds.

Android easily outnumbers Windows, by some counts 5 to 1, yet it has a smaller percentage and absolute level of malware installation.

The old truism of the computer-security arms race is bullshit. Making computers secure from general mass-spreading infections is not even all that hard (securing them from governments is much harder). Simply ignore all incoming connections and don't execute downloaded data (or give it to untrusted executables). The only hard part about this is making sure that the network card drivers can be trusted; this isn't hard on most operating systems, but most Windows users are totally at the mercy of some low-budget buggy vendor.

Windows is well positioned to fail at computer security and they have done a great job at leveraging their position.


> Even with the latest exploits, freely spreading viruses on Android are rare, but hook a Windows machine up to the Internet without a router or firewall and you'll get a virus in seconds.

I don't buy this at all. Unless you are comparing Android with Windows XP?

First of all, you'd have to go out of your way to disable the firewall on a modern Windows install.

Secondly, there have been tons of Android infections: http://arstechnica.com/security/2016/07/virulent-auto-rootin...

I'm sure the percentage is lower than "all infected Windows machines" but I very much doubt it's lower than "all Windows 10 installations" or even "all Windows XP SP2 + Windows 7 + Windows 10 installations with updates enabled".

Nonetheless, it's definitely the case that an app sandboxing model provides some defence in depth, at the cost of reduced flexibility w.r.t. e.g. a real generic filesystem, ability to distribute/install without an app store, etc.

The Windows Security Development Lifecycle material is worth reading: https://en.wikipedia.org/wiki/Microsoft_Security_Development... https://www.microsoft.com/en-us/sdl/

Ultimately, if you were to allow Android devices to, by default, open and run arbitrary native code that users download from the internet, you'd see the same host of problems (probably much worse for the average Android device, given the fragmented OS patching nightmare).


> First of all, you'd have to go out of your way to disable the firewall on a modern Windows install.

There is a screen that pops up every time you plug in a network cable or connect to a wifi network for the first time. It asks what kind of network you are on: personal, work, or public. Depending on what you answer here, it does stuff to the firewall.

Then there are the multitude of AV products that replace the firewall with their own suboptimal garbage. These wouldn't have been made if they weren't once necessary. Windows should have had a firewall when it got its own networking stack, but it went a couple of decades without one, so people got used to needing to add one.

Even if that article is correct, Android is still better off than Windows by an order of magnitude.

> Ultimately, if you were to allow Android devices to by default, open and run arbitrary native code that users download from the internet

But they aren't set up that way. That is part of why they are more secure.


No OS security can beat user stup^W cluelessness or accidental mistakes. Also, sometimes trusted providers get hacked and malware is distributed in disguise.


If only!


Depending on your definition of "AV", of course, but: intrusion/threat detection is a need for every modern OS in some capacity regardless of how "secure" the OS is or not.

Even if you have the most secure kernel and no threat might ever escape user space, you'll still have pissed off users when they find out their stuff is all up for ransom or deletion.

"AV" is a component of a secure operating system, not an "add on" you only need for a "bad operating system".


> I think Microsoft did that for them

I think you'll find nearly everyone in IT, and certainly many public bodies, banks, utility companies, and $deity knows who else, does that: recommend AV to all users.

It's pretty much a mantra for IT "cognoscenti".


Antivirus are just another attack vector - and due to their often escalated privileges running on a machine, they are a very attractive attack vector.

Not to mention the pitiful detection rate (<50% for nearly any antivirus software) makes them nearly useless: once you're pwned, you're pwned.


I agree with the anti-prebundled-AV comment. I for one have even disabled Windows Defender on Windows 10. It seems to have become just another way for Microsoft to track you these days. Plus, it's not even that effective as an anti-virus, and anti-virus software is mostly useless as it is (for more tech-savvy users).

It would probably be even more practical to kill Windows Defender on Windows 10 considering Microsoft makes updates mandatory now - if only it weren't for the fact that Microsoft gutted its QA team and now Windows 10 updates often break your computer, forcing you to wait on the updates until you're sure your notebook won't be bricked or something.

It also annoys me that Microsoft is tying its App Guard sandboxing technique to Windows Defender, when it's completely unnecessary, and it will only make it harder to separate the two in the future, if nothing else for branding's sake.


FWIW, I use Kaspersky in a corporate environment with > 250 Macs and I haven't seen any issue thus far. I've seen plenty of HDD failures, fan problems, Chrome hangups, general system slowness due to problems with corrupted preferences and other benign issues, but never ever seen a system slowed down by the AV.


I suspect the "corporate" version of a lot of these AVs are pretty different from the consumer versions.


> Pre-bundled and even purchased AV is so dramatically deleterious to PC performance that MS should kill it as a public service.

Ah, so Microsoft would then take the responsibility for preventing malware, and by virtue of preventing the use of third-party solutions make it more likely that they'd be assigned responsibility if sued over losses?


"Microsoft takes its users' security seriously, and it recognizes that third-party antivirus vendors can exacerbate the problem by providing additional vulnerabilities that can be exploited by attackers. Ironically, non-Microsoft software you buy to keep you safe can make you even more vulnerable. To mitigate this, Microsoft is launching a site for white hat penetration testers to report vulnerabilities in antivirus software to a Windows security team, as well as the antivirus vendor itself. If no fix is pushed within 10 business days by the vendor, or a shorter period (at the discretion of the security team) for high-impact vulnerabilities, a subsequent update to Windows Defender will mark that antivirus vendor as insecure, and will provide users the option to disable or uninstall the antivirus software. Vendors requiring additional time to push fixes (for vulnerabilities which have not been exploited in the wild) may purchase a limited reprieve from Microsoft, revenue from which will be used to justify and support this service, as well as to incentivize vulnerability reporters."

Bam.


This is not an actual quote. Using quotation marks for something you are proposing is highly misleading.


It's also a highly effective way to present something as coming from a third party, for comedic or other effect.


But muh antitrust!


I don't know why you're being downvoted; this is a serious concern other vendors bring up when discussing windows defender (or whatever the free solution is called). At first blush the situation is very similar to I.E. shoving out other competing browsers.


People with actual knowledge in security, having observed the absurd designs and subsequent holes in 3rd-party AV products, are secretly wishing MS would do that :p


Speaking of...

I started my Surface 4 running Windows 10 last night. The battery applet in the task tray reported, once I started up Chrome:

____________________

"Chrome is draining your battery faster

Switch to Microsoft Edge for up to %%% more browsing time"

____________________

(Evidently, this was first reported in July 2016 http://www.nextpowerup.com/news/29282/windows-10-warns-about... )


There's no OS feature you can't say this about.


While I agree, that's the absurdity of arbitrary monopoly law, especially in the EU. (IANAL.)


Ah, but that's only a concern if Microsoft has serious market share in the antivirus market. Do they?


If people follow the advice of security practitioners, Microsoft will end up with the majority of the market, yes.


Besides the obvious badness of the overall system described, things like

> The cache is a binary tree, and as new leaf certificates and keys are generated, they're inserted using the first 32 bits of MD5(serialNumber||issuer) as the key.

You know. That's not a mistake. That's what a consciously designed-in vulnerability to enable taking over the system looks like.
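To put numbers on that, here's a quick sketch (hypothetical code, not Kaspersky's): with only a 32-bit key, the birthday bound means two distinct inputs collide after roughly 2^16 random attempts, i.e. well under a second of MD5 hashing:

```python
import hashlib

def key32(data: bytes) -> int:
    # The reported cache key: first 32 bits of MD5(serialNumber||issuer)
    return int.from_bytes(hashlib.md5(data).digest()[:4], "big")

def find_birthday_collision():
    """Hash synthetic inputs until two distinct ones share a 32-bit key.
    The birthday bound puts the expected work at roughly 2**16 hashes."""
    seen = {}
    i = 0
    while True:
        data = b"fake-cert-%d" % i
        k = key32(data)
        if k in seen:
            return seen[k], data  # two distinct inputs, same 32-bit key
        seen[k] = data
        i += 1

a, b = find_birthday_collision()
assert a != b and key32(a) == key32(b)
```

A targeted collision against one specific victim's key takes more work (about 2^31 hashes on average), but even that is only seconds on modern hardware with fast MD5 code.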


I've met plenty of programmers who are stupid enough to do this kind of thing. Don't attribute to malice what could easily be attributed to stupidity.


You should treat stupidity the same way you treat malice, because it's just as dangerous. The fire doesn't care who set it or why.


Don't attribute to malice what could easily be attributed to stupidity.

I'm no fan of pitchforks, and I don't doubt plenty of people might make this mistake. But lots of malicious actors hide behind this response.


Perhaps we should just avoid drawing any conclusions about intent from a single line of code.


Yes. Although I suppose the probability that you're a malicious actor, given that you've produced a vulnerability, rises substantially (though it still remains quite small).


> I suppose the probability that you're a malicious actor given that you've produced a vulnerability rises substantially

That's ridiculous on its face - all software has vulnerabilities, and not all malicious actors produce vulnerabilities. Heck, the vast majority of malicious actors don't even discover vulnerabilities; they simply exploit them.


You omitted the parenthetical "(though still remains quite small)," which was important. I think the point was: if you were (miraculously, in your view) to produce a piece of software that was free of vulnerabilities, I could safely conclude that you weren't trying to maliciously produce vulnerable software. If you instead produced a piece of software that contained vulnerabilities, it would at least be possible that you were such a malicious actor, so the probability would be higher, if still very small (small-but-nonzero vs. zero).


I'm not a fan of conspiracy theories, but this vuln was not patched in 90 days. Kaspersky frequently patches things faster than that, and this sure sounds like it's a pretty trivial fix.


Sounds like it was patched:

> This issue was fixed on the 28th, there was some delay unrestricting this bug due to the holidays.

And it's been less than 90 days since the issue was reported, so they wouldn't have disclosed yet if it wasn't.


Ah, sorry, Nov 1 -> Dec 28 is 58 days.


It's kernel TLS interception code. Not only is it not a trivial fix, but Tavis Ormandy had to revise his exploit model to account for functionality he hadn't noticed. It's not a conspiracy.


Fixing that kind of stupid hashing scheme and its use of only 32 bits of the result? It might not be trivial, but, kernel or not, it SHOULD be easy enough to fix quickly. Even more so given how bad that hole is.

So, were they taking too much time to fix it? Either way, it's one more reason to stay far, far away from their products.


I'm not saying you should use Kaspersky AV. Don't do that.


Plausible deniability?


It's not a mistake or a vulnerability either.

Incorrect cache hits are fine, as long as there is some checking following the cache hit

    hash(A) == hash(B) && A == B

> I wasn't aware they were checking the commonName sent with SNI.

If I'm not mistaken, it does sound like they were doing some checking after they pulled the generated cert from the cache, but it obviously wasn't enough checking.


> Incorrect cache hits are fine, as long as there is some checking following the cache hit

Exactly. That's why hashmaps don't store a bare object in each bucket, but an actual linked list (or similar chained structure) holding all the elements that hash into that bucket.

This is CS101. If you build a hashmap yourself, at least get this right.
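A minimal sketch of getting it right (hypothetical code, not Kaspersky's): keep a chain of (full key, value) pairs per bucket, and only report a hit when the full key matches:

```python
import hashlib

def weak_hash(data: bytes) -> int:
    # The weak bucket key from the report: first 32 bits of MD5
    return int.from_bytes(hashlib.md5(data).digest()[:4], "big")

class CertCache:
    """Separate-chaining cache: a hash collision only costs an extra
    comparison, never a wrong answer, because get() checks the full key."""

    def __init__(self):
        self.buckets = {}

    def put(self, full_key: bytes, value) -> None:
        self.buckets.setdefault(weak_hash(full_key), []).append((full_key, value))

    def get(self, full_key: bytes):
        # hash(A) == hash(B) alone is never enough; also require A == B
        for stored_key, value in self.buckets.get(weak_hash(full_key), ()):
            if stored_key == full_key:
                return value
        return None

cache = CertCache()
cache.put(b"serial-1||issuer-A", "cert-for-A")
cache.put(b"serial-2||issuer-B", "cert-for-B")
assert cache.get(b"serial-1||issuer-A") == "cert-for-A"
assert cache.get(b"serial-9||issuer-Z") is None  # no hit without a full match
```

With the full-key comparison in place, a 32-bit collision is harmless; without it, a collision silently hands back the wrong certificate.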


Does Kaspersky AV also keep a DNS cache? That would be convenient.


Almost certainly. Kaspersky are a global cybersecurity provider.

There is no room for Hanlon's razor here. This was malice, not stupidity.


Nobody who has done software security work for the security product industry agrees with you. For a variety of reasons, these products are typically several degrees worse than the median technology product.


A briefcase full of cash could easily get a "bug" like this introduced into a security package.

I'm not saying this is the case, but given the current climate, it's hard to say it isn't.


You can say that about any kind of product that everyone deploys. All software has bugs, most bugs are exploitable.

So, no, it's easy to say that it isn't.


So, no, it's easy to say that it isn't.

It is easy to say whatever you want, but impossible to prove either way.

Kaspersky comes from Russia, and has high-level connections within the Russian government. Past actions (e.g. their analysis of Stuxnet) have been in line with Russian foreign policy goals.

Everything that they do is easily explained by their being a normal company. But multiple things that they have done fit very well with working for Russian policy goals in a smart way. Which theory you find more plausible depends on your level of paranoia about Russia, and how much pressure you think Putin would bring to bear behind the scenes to force cooperation. And even if they are subject to state backdoors, any particular bug might or might not be one.

I personally am inclined towards paranoia. Russia using Kaspersky would be perfectly in line with standard fare in lots of countries. For example here in the USA look at how the NSA created the FREAK attack. Or look at Lenovo's use of Superfish in their BIOS. Why would we expect Russia to be more restrained than other state actors with as tempting a channel as Kaspersky? I see no reason to expect that. No reason at all.


Because the simpler explanation (that this is a stupid bug) is borne out by the uniformly horrible state of AV software.


And also the relatively limited capability it offers to attackers, compared to other bugs in AV software.


Now THAT is a good reason to think that this particular item is not intentional.

But I would still bet money that there is at least one intentional hole, somewhere.


You can probably identify it by looking at parts of the code that are unusually free of bugs.


I would not take the other side of that bet.


This bug is so shockingly stupid you have to wonder if it's malice or staggering incompetence, so either we must admit that Kaspersky has no business staying in business, or that they're willing to overlook little "mistakes" like this.


This is exactly what people said about the Apple "goto fail" bug. The conspiracy theories were silly then and they're silly now.


It used to be a "conspiracy theory" that the government was reading all your email, but it's turned out to be a fact.

It's worth noting the "goto fail" bug was likely exploited, so it had value to someone. As has been shown through various leaks, the NSA and other entities like it have a war-chest of these toys. Things that used to be "you're just being paranoid" type concerns are now "we may never know" ones.

Regardless of the origin or intent, it still paints Kaspersky in a very, very bad light. There's no escaping that.


It's still a conspiracy theory that the government is reading all your email!


Wasn't it proven they tap most if not all internet backbones? If so, they (like all ISPs together) can read all unencrypted email, and most email is still sent unencrypted.

I don't really care about the difference between their definition of "collecting" (= processed by a human) and a regular human's definition (getting it onto government-controlled storage media); the fact is they have built the infrastructure to do just what that theory claims, have incentives to do it, and face no disincentives for doing it.

You call it a conspiracy theory, I call it a "very likely to be true" theory.

It is also a likely target of media manipulation. It's not quite the same as saying "evolution is just a theory", but maybe that's just because secularism is so popular now.


I'm not sure you can call government itself a conspiracy.


There is in fact no evidence that the government (any government) is "reading all our emails", or has had the access to do that. There is, on the other hand, a virulent conspiracy theory that the PRISM disclosures from Snowden revealed NSA's ability to do that; that claim was so false that Greenwald now (somewhat deceptively) denies he even acknowledged it.


Unless I'm missing something, the national security letters allow just that. My point stands.


That would be targeted at individual(s), not "all email from everyone". NSLs target a service, an individual, or a corporation. Bad as that is, it's not intercepting, storing, and scanning every email.


I suppose it's how you choose to interpret other people. They may not read everyone's email, but they certainly can read anyone's. This was always the scary part to me.


National security letters target individuals and aren't sent in enough volume to cover more than 0.01% of "everyone's emails".


Who are you quoting?


Apple's "goto fail" bug is completely different: it was not a design flaw, but a merge error. Hanlon's razor applies sufficiently.

But anyone with a basic understanding of information security would know that keying by truncated MD5 is certainly a bad idea. This is not just stupidity. This is not accidental. This design not only flies in the face of everything you want from a cert store, but is so blatantly bad that surely more than one person at Kaspersky was in on it.


I don't know why you're operating from the presumption that most security people know anything at all about basic cryptography, because practically none of them do. Cryptography seems like it should be a general topic in information security, but it isn't; it's very much a specialist topic. This is far, far from the dumbest crypto bug I've seen in a security product. You're simply wrong about this.

If you'll take a second to look at the "goto fail" threads on HN, by the way, you'll see people every bit as committed to the idea that "goto fail was an inside job" as you are to this bug being enemy action.

Have you taken a second to even think about how this precise bug even makes sense as an implant? Hint: it does not make sense as an implant.


> I don't know why you're operating from the presumption that most security people know anything at all about basic cryptography

I don't know why you're operating from the presumption I'm talking about security people in general. I'm talking about the people at Kaspersky who were tasked with writing a SSL certificate store. I will also quote the bug report: "You don't have to be a cryptographer to understand a 32bit key is not enough to prevent brute-forcing a collision in seconds. In fact, producing a collision with any other certificate is trivial."

> Have you taken a second to even think about how this precise bug even makes sense as an implant? Hint: it does not make sense as an implant.

Hint: Pair this with a DNS spoof and you can pretend to be whatever website you want.


First: generalist software developers muck with TLS all the time. There's a whole huge class of enterprise automation and management software that also does custom TLS. How good do you think that stuff is? Do you think the companies that churn this stuff out hire TLS experts to build those components? Of course they don't.

Second: https://news.ycombinator.com/item?id=13313794


Concocting theories of intended malice based on this bug alone would be ridiculous. But it isn't difficult to imagine malice given the context.

Particularly when they have a history of remote-access "bugs" [1], insecure broadcasting of user-agent info [2], and a cozy relationship with the Kremlin [3].

[1] http://www.zdnet.com/article/flaw-found-in-kaspersky-antivir...

[2] https://theintercept.com/2015/06/22/nsa-gchq-targeted-kasper...

[3] https://www.wired.com/2012/07/ff_kaspersky/



If you are knowledgable enough to MITM SSL then you should definitely be knowledgable enough to know how stupid this is, and - frankly - knowledgable enough not to fucking MITM SSL. I would attribute it to malice, assume it's malice, fire the developer, and contact the authorities. Of course, that's moot if the authorities are the ones who asked for the bug.


Let's try a little exercise here. Stipulate that it's malice, and then generate the narrative in which malice makes sense.

Specifically:

You are whatever entity (I assume the FSB?) that can demand Kaspersky implant backdoors into its desktop antivirus software.

You decide to target that backdoor at the component of Kaspersky's software that gets, by design, unrestricted access to the plaintext of all TLS sessions on the machine, in addition to practically unrestricted access to every file on the machine itself, and to the machine's memory in cpl0 and cpl3.

From that vantage point, you decide to...

... collide certificates?

Remember, your evil backdoor only impacts machines running your software already. From this thread, I'm wondering if that's maybe unclear to people.


Let's continue your exercise. Your backdoor gets detected by some security researcher. You're busted, your business is nearly destroyed, you cause diplomatic stress between nations, and you face criminal charges. The stakes are way too high.

So instead you consider ways to introduce vulnerabilities that leave plausible deniability.

Indeed, from that vantage point, you decide to...

... collide certificates.


So you propose that they created a backdoor that regularly and sporadically breaks TLS for normal users because they are evil geniuses who needed a plausibly stupid backdoor in case they got caught.

Not only are there better plausibly deniable backdoors they could use from this vantage point, but there are NOBUS backdoors they could use from this vantage point. You propose an attacker so clever that they've carefully calibrated the sophistication of their backdoors, but not clever enough to know how to backdoor a TLS MITM (a TLS MITM installed by design) without leaving the absurd tracks this one does.

No. It's not a conspiracy. As usual: it's just a bug.


> Let's try a little exercise here. Stipulate that it's malice, and then generate the narrative in which malice makes sense.

Sure. Kaspersky creates security software. The FSB benefits from vulnerabilities in antivirus software. Kaspersky knows this and puts company resources into particular areas, i.e. how to spot the next Stuxnet, rather than fixing bugs like this.

The FSB does not need to write the source code (for Kaspersky Anti-virus) to benefit from vulnerabilities. In fact NSA, GCHQ, and FSB all benefit from subversion of https.


You can't stop at that point in my comment. You have to read the whole thing first. I'm not asking "does it ever make sense to backdoor Kaspersky AV". Clearly it does.


I did. You created a ridiculous narrative, that seems to amount to a strawman. No one ever said that FSB wrote the antivirus program.


My narrative has in fact nothing whatsoever to do with the FSB. That's why I said "whatever entity".


"Your evil backdoor only impacts machines running your software already" seems to assume that "whatever entity" can freely put in whatever code they want.

I agree that cert collision is a strange way to backdoor, but dismissing any questions as to possible malice as "conspiracy theories" seems to ignore many recent events (such as the Juniper Networks backdoor).


Once again, you're responding to an argument I did not make. I'm not saying this bug doesn't make sense because nobody would want to backdoor Kaspersky AV. Clearly they would. I'm saying it doesn't make sense because it doesn't make sense as a backdoor. It provides the attacker with far less access than they already have, and does so in a way that leaves tracks all over the Internet even when the "backdoor" isn't "in use".


Know what else you said doesn't make sense as a backdoor, times a million?

And this makes way more sense than that one did. Not that I'm convinced that it is one. But if it is, it's a pretty good one IMO.


Did you read the link? It describes an attack where you only need to have access to the same network as the target. Perhaps through that smart lightbulb they just installed?


Yes, I did read the link. Do you understand the thread? The proposal is that this bug in Kaspersky's kernel code is a backdoor in Kaspersky's kernel code --- that is: an attacker that already owns your OS kernel is backdooring it to come up with an extremely convoluted and noisy way to get access to your TLS sessions --- by backdooring the part of their kernel code that already has access to every TLS session on the machine.


The described attack doesn't sound convoluted at all, it seems very practical.


Once again: the attacker here already controls TLS on the machines that are affected. They don't need to collide certificates. By design, they not only have the plaintext of every TLS session on the machine, but they are also the client side of every TLS session originating from the machine (they terminate TLS locally and proxy it over a new connection they control).

I feel like the people claiming this is a smoking gun backdoor don't really understand what it means that Kaspersky's software is designed to proxy TLS. Yes: it is batshit that they proxy TLS. It's so batshit that this particular bug makes absolutely no sense as a backdoor. That idea is Xzibit-level crazy.


I see what you mean now. If you wanted to make a backdoor and have plausible deniability, you'd make it a bug.


Yes, but you would not make it this bug, which (a) doesn't get you any access you don't already have, and (b) leaves a trail of sporadically broken TLS connections so obvious that they show up in CT logs.


No, you don't understand. They already had that access, but in order to exploit it they (hypothetically) need a sneaky way to do so. Hence the bug.


From the vantage point they have now they have approximately 18 bazillion ways to generate a backdoor that yields control of the entire machine to an arbitrary person on the network. The fact that virtually every backdoor ever created works this way, and not by creating an elaborate set of circumstances through which you can get access to specific TLS sessions, is not an accident.

But even in the bizarro world --- which I do not concede that we live in but will briefly stipulate to --- where you would backdoor kernel AV software that looks at TLS plaintext solely to give people the ability to get access to TLS plaintext, this still doesn't make sense. They terminate TLS locally. They are the client to every TLS session originating from the machine. They don't need to create a noisy 32 bit certificate collision, so noticeable to the Internet it shows up in CT logs, to accomplish this task. They can backdoor TLS itself in a zillion plausibly deniable ways that would give network observers access to plaintext.


What if they don't need control of the entire machine? Such a backdoor could be much less conspicuous. What if the NSL they were handed only called for intercepting TLS traffic as a means of surveillance?


Ironically, the backdoor that provided access to the whole machine would be far less conspicuous than this one. That's the thing about the human cognitive bias towards narratives: RCE vulnerabilities in AV are found routinely, and don't land on the top of HN. This bug did because it was so cosmically stupid, it's surprising nobody noticed it before --- something Ormandy repeatedly states in his writeup.

Which is all the more reason not to believe that it's enemy action.


Well, I don't know much about Windows and I don't know much about the design of this software, but let's speculate some more. I imagine that on Windows, you have to have escalated permissions to modify the certificate store. I also imagine that if you were writing a piece of software that could do this (i.e. AV), you wouldn't always be running everything as root (so to speak). I would guess that most RCEs wouldn't lead to being able to modify the certificate store.


It's kernel code.

I don't know if comparing "bugs" in AV software and OSes is apples to apples. Also, a goto followed by an erroneous goto looks like an error (I know), but

"keys are generated, they're inserted using the first 32 bits of MD5(serialNumber||issuer) as the key. If a match is found for a key, they just pull the previously generated certificate and key out of the binary tree"

doesn't
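To make the objection concrete: keying a certificate cache on only the first 32 bits of an MD5 digest puts collisions well within birthday-bound reach (roughly 40% odds after just 65,536 distinct certificates). A minimal Python sketch of the scheme described in the bug report (the function names and issuer string are invented for illustration, not Kaspersky's actual code):

```python
import hashlib

ISSUER = b"CN=Example CA"  # hypothetical issuer name

def cache_key(serial: bytes, issuer: bytes) -> int:
    """First 32 bits of MD5(serial || issuer) -- the scheme from the bug report."""
    return int.from_bytes(hashlib.md5(serial + issuer).digest()[:4], "big")

def find_collision(issuer: bytes, max_certs: int = 500_000):
    """Feed distinct serial numbers in until two map to the same 32-bit key."""
    seen = {}
    for i in range(max_certs):
        serial = str(i).encode()
        k = cache_key(serial, issuer)
        if k in seen:
            # Two distinct "certificates" would share one cache slot,
            # so the second one would be served the first one's cert and key.
            return seen[k], serial, k
        seen[k] = serial
    return None

print(find_collision(ISSUER))
```

With 500,000 distinct serials the expected number of collisions in a 32-bit space is about 29, so the search above essentially always succeeds, which is what makes the truncation indefensible for this use.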


Why not? Caching certificate chains makes sense. Bad hash functions are the norm in systems code, not the exception.


I guess I'm reading these two examples as: 1. an extra goto and 2. a strategy for building an SSL certificate store that uses a 32(!)-bit key. I'm not implying malice, but they seem like fundamentally different types of error.


Wouldn't it be fair to say that a vulnerability in software you willingly bless to install a MITM, is worthy of more distrust (as to whether it's a deliberate vulnerability) than a similar error in an office productivity program?


I don't know. Aren't you asking whether I should be more worried about Microsoft Office or AV software? That's a tough choice to have to make. Productivity software file-based vulnerabilities compromise orders of magnitude more computers than MITM vulnerabilities do.


This is true of virtually any exploit.


AV = Trashfire. If there were actual repercussions for writing code irresponsibly, even their billions in revenue wouldn't keep them from bankruptcy.

Time and time again we see these ridiculous vulnerabilities but nothing changes. AV insists on massively increasing system attack surface under the guise of security.


It's security theatre for your computer. It's truly astonishing how little these programs do and how much they do wrong.


So what should a non-technical Windows user use on a daily basis to protect their PCs?


If you want an AV, MSE has the lowest attack surface impact. It has always been good at implementing security mitigation techniques (whereas other vendors are atrocious here) and it's not very invasive.

For security as a whole, I would suggest a good adblocker, keeping your system up to date, keeping your browser and plugins up to date, and I'd suggest Chrome although maybe that's controversial.


Whenever I set up a Windows PC for a family member or non-technical friend, I just make sure Windows Defender is running, set everything that is installed to auto-update and install an adblocker. Also, I check every once in a while if the auto-updates are working correctly because in my experience they will inevitably and inexplicably stop working at some point :)


> I check every once in a while if the auto-updates are working correctly because in my experience they will inevitably and inexplicably stop working at some point

I think Windows 10 tries REALLY hard to make sure that doesn't happen.


Microsoft's own antivirus.


To be fair, Kaspersky has always been less bad than the others.


I've found this too. I've been using Kaspersky for a few years (most of the contracts I've had for contract work have specified they want an AV program running), and I have to say it's been much less painful than the McAfee and Norton shit I used as a FTE.

If anything, it seems to go nuts a bit less often than the Windows 10 AV, which will quite often sit there occupying 100% of 1 core for a while... though Kaspersky is so invasive that (even ignoring issues like the one the link...) I'm still happier without it.


This puts Eugene Kaspersky's recent rant about Microsoft making AV vendors' lives harder in perspective: https://eugene.kaspersky.com/2016/11/10/thats-it-ive-had-eno... Maybe Microsoft is actually improving the Windows user experience.

Discussion on HN: https://news.ycombinator.com/item?id=12929230


There's no "maybe" about it: there are a lot of security professionals, including ones with no great love for Microsoft and their past anticompetitive practices, who are longing for the day when MS announces that third-party AV software is malware and takes action accordingly. I for one would be spitting on their grave if and when it happens.


An even dumber TLS bug in the same software:

https://bugs.chromium.org/p/project-zero/issues/detail?id=98...


I wonder why nobody assumes Kaspersky might be working at the direction of Putin.


Kaspersky's connections with the Russian government are discussed all the time in the industry.


There would seem to be significant benefits to maintaining a white hat presence while gaining access to thousands of machines and maintaining a body of knowledge about the latest threats/techniques.

Do you have any thoughts on the validity of the speculation?


No idea if Kaspersky is a black hat in white haberdashery, but the behavior you describe is 100% Kremlin modus operandi. That's how they do propaganda too -- a bunch of real news with some fake news mixed in. It's a good idea for any propaganda outfit.


People have been assuming this for years.


Not sure if this is sarcasm, but it's not adding much to the discussion, IMHO.


It wasn't, but I am apparently naive.


Contrary to what the Western media wants you to think, not every Russian in the world is working for Putin to fulfill his evil master plan.


In an age where an overwhelmingly large chunk of valuable data is stored digitally, though, speculation about connections between the Kremlin and Kaspersky isn't unfounded given Kaspersky's line of work, especially since the Russian government has been known to act in bad faith in the past.


Do you wonder why nobody assumes that Symantec, McAfee, and half of the other AV companies are working at the direction of Obama?


Trying to understand this: is it that Kaspersky thinks its certs are more secure? Which I suppose would be the whole point of this WFP driver?


I'm not familiar with WFP (or Windows in general, really), so I can't comment on the use of that particular technology, but the fundamental idea here is to man-in-the-middle the user's TLS connections so that the product can inspect or manipulate the data within. Presumably in this case it's for anti-virus purposes, but the same technology is used by schools and businesses to filter pornography, piracy, gaming, phishing, and other 'undesirable' Web sites, to shape bandwidth, and to log Web activity.

The firewall/software generates a root certificate so that it can sign and serve 'fake' leaf (site) certificates on the fly. Because the root certificate is self-signed, however, most browsers/whatever won't automatically trust it, which is why the user/admin has to add the root certificate to each affected machine's browser or OS certificate trust store.
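The two-step mechanism described above (self-signed interception root, plus leaf certificates minted per site) can be sketched with Python's `cryptography` package. All names, lifetimes, and key sizes here are invented for illustration; this is the general MITM-proxy pattern, not any vendor's actual implementation:

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

now = datetime.datetime.utcnow()

# 1. The interception root: self-signed, CA=true. This is the certificate
#    the product installs into the OS/browser trust store.
root_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
root_name = x509.Name(
    [x509.NameAttribute(NameOID.COMMON_NAME, "Example Interception CA")]
)
root_cert = (
    x509.CertificateBuilder()
    .subject_name(root_name)
    .issuer_name(root_name)  # issuer == subject: self-signed
    .public_key(root_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(root_key, hashes.SHA256())
)

# 2. When the client connects to example.com, mint a fake leaf on the fly,
#    signed by the interception root rather than a public CA.
leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
leaf_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
    .issuer_name(root_name)
    .public_key(leaf_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=30))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("example.com")]), critical=False
    )
    .sign(root_key, hashes.SHA256())  # signed by the interception root's key
)
```

Because the client trusts the installed root, it accepts the minted leaf; the proxy then holds the leaf's private key and can read the plaintext of every session it terminates.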


No, my understanding was in order to inspect network traffic for malware, it's essentially doing a man-in-the-middle using its own certificates.


It wants to look at the contents of every connection in order to hunt for malware.


Sort of like following one's own footprints.


Just yesterday I posted a similar thread where an AV was MITMing traffic between the browser and the server: https://news.ycombinator.com/item?id=13307096. I'm still curious why this practice is allowed and why these CAs are still trusted by browsers and such.


Because the AV is doing it to check for malicious content being downloaded via a secure connection - content that may be exploiting security holes that have not been patched yet, or which have been patched but the user hasn't updated. It's not at all uncommon for me to be asked to look at a PC and find it has an old version of Firefox and the only user is running as a non-admin, or even more commonly to find all those nice little buttons in the system tray advising about updates to Adobe Reader, Java, and maybe a few drivers.

Many things that would take advantage of those would need to be written to disk and the AV could presumably catch them then, but how about JavaScript exploits? And how does the browser respond to writing things to disk (presumably having exclusive write access) then losing access to them?

There's a juggling act going on there, and I suspect it's a lot easier for an AV vendor to capture and review the network traffic as a single point of contact rather than trying to work with multiple browser and other software vendors to make sure that their software interoperates correctly with the AV.


Is there a convenient way to check whether your Windows 10 and Firefox certificate stores contain anything that Microsoft and Mozilla didn't intend to be there?


RCC from https://www.trustprobe.com/fs1/apps.html does, IIRC. It's been a while since I've used the tool and I'm not on my PC to check.


That's exactly what I had been looking for. Thank you so much!


Something similar is done by my organisation with its firewall (Fortinet/FortiGuard). We are forced to install a certificate as a trusted authority in the system store, so, as the link says, if we check the issuer it shows "issued by FortiGate CA". This is very annoying and needs to stop.



