Still Got Your Crypto: In Response to Wallet.fail’s Presentation (ledger.fr)
49 points by asymmetric on Jan 1, 2019 | 24 comments



This seems a little disingenuous to me. The implication of the first attack is not that someone might sneak into your house and modify your Ledger hardware; it's that the hardware could come to you with modified firmware from the get-go.


This is largely the issue with 'hardware wallets': the security models aren't particularly well defined, which leads to ambiguity like this. You're absolutely correct that the concern is the device not being genuine, which is what attestation of the firmware is supposed to prevent. In the case of the Ledger, the attestation unfortunately doesn't prove anything about the safety of the device, for a number of reasons (this attack, and other logical ones).

The obvious one is that the security domains in the device are idiotic. There's a "secure" processor with almost no processing power or IO, and an "insecure" one which handles the screen, buttons and IO. Both of them handle secrets (for example, the seed shown on the screen), which leaves you with essentially no gain whatsoever.

A more logical hardware implant than the one shown at CCC is a Bluetooth module that simply reads the I2C lines going to the screen and transmits the seed as a beacon whenever the device is plugged in. This has the advantage of not requiring the attacker to be present at the moment of use, as their demonstration does, and with assistance it doesn't need any physical presence at all.

I described this as a concept for a security review of a cold storage setup which was "unbreakable". Is this sort of thing realistic? Perhaps. Is a $5 wrench attack more sensible? Probably. It's worth considering what supply chain attacks are possible though.


> The obvious one is that the security domains in the device are idiotic. There's a "secure" processor with almost no processing power or IO, and an "insecure" one which handles the screen, buttons and IO. Both of them handle secrets (for example, the seed shown on the screen), which leaves you with essentially no gain whatsoever.

I think the idea is that the secure processor will verify the insecure processor's firmware (the "MCU check"), making such attacks impractical.

Of course, the design is broken and it can be bypassed by emulation, but security isn't all or nothing - "no gain whatsoever" is not true.
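
Roughly the shape of the limitation, as a toy sketch (not Ledger's actual check or its actual bypass, and all the names are made up): a hash-what-the-other-chip-reports check only sees the bytes the MCU chooses to hand over, so an MCU with room to keep a pristine copy of the genuine image around can pass it while running something else.

    # Toy model only: a compromised MCU replays a genuine image during
    # attestation while actually running different firmware.
    import hashlib

    GENUINE_IMAGE = b"\x00" * 1024 + b"genuine MCU firmware"        # placeholder
    MALICIOUS_IMAGE = b"\x00" * 1024 + b"firmware that leaks the seed"

    ALLOWED_HASHES = {hashlib.sha256(GENUINE_IMAGE).hexdigest()}    # what the SE trusts

    class HonestMCU:
        running = GENUINE_IMAGE
        def dump_flash(self):           # reports what it actually runs
            return self.running

    class BackdooredMCU:
        running = MALICIOUS_IMAGE
        stored_genuine = GENUINE_IMAGE  # kept around purely to pass the check
        def dump_flash(self):           # replays the pristine image instead
            return self.stored_genuine

    def mcu_check(mcu):
        return hashlib.sha256(mcu.dump_flash()).hexdigest() in ALLOWED_HASHES

    print(mcu_check(HonestMCU()))       # True
    print(mcu_check(BackdooredMCU()))   # True as well

The real check and the real bypass shown at 35c3 are more involved than this, but the core limitation is the same: the secure element can only judge what the other chip reports.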


Even if the firmware on both processors is completely virgin, that doesn't say much about the safety of the device. I agree that security is not all or nothing; protection in layers is always the goal of secure products. I do however caution that it can cause complacency if things are presented as bulletproof; you need to be up front about what tools such as attestation afford you. In this case it cannot tell you whether the device is safe or untampered with.


I have occasionally heard stories of people losing funds from hardware wallets. Usually it's after buying the device on Amazon.


It's usually because the buyer was duped into following fake setup instructions. The scammer ships an official-looking physical 'card' of recovery words with the wallet, tricking the user into putting money into accounts that the scammer also has access to.


I didn't watch the 35c3 presentation, but it certainly looks like it's an absurd attack. Kudos to Ledger people for constructively replying to it.

Some talks in 35c3, defcon, etc remind me of the rubber hose security (https://xkcd.com/538/).

On the other hand, the www.ledger.fr website does not properly redirect to HTTPS (e.g. http://www.ledger.fr/bounty-program/), and that would've been a more practical attack to point out.


Ledger has for a long time been grossly inept at security; there's really nothing absurd about this attack at all. In the Bitcoin industry we frequently see very detailed setups for long-timeframe attacks, and substantial effort going into identity theft and physical compromise. Worse, for these devices in particular, a backdoored device is almost undetectable due to the way ECDSA can be used to transmit encrypted data in its signatures.
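
For the curious, the simplest variant of that covert channel fits in a screenful of Python. This is a toy sketch with made-up names and a bare-bones secp256k1 implementation (real kleptographic constructions typically use ECDH against an attacker public key so no symmetric secret sits in the firmware, but the effect is the same): the backdoored signer derives its nonces from data the attacker can recompute, the signatures verify and look completely ordinary, and anyone holding the attacker key recovers the private key from a single signature.

    # Toy sketch of an ECDSA nonce covert channel (secp256k1, no external deps).
    import hashlib, hmac, secrets

    p  = 2**256 - 2**32 - 977
    n  = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
    G  = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
          0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

    def inv(a, m):
        return pow(a, -1, m)                       # Python 3.8+

    def ec_add(P, Q):
        if P is None: return Q
        if Q is None: return P
        if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
            return None                            # point at infinity
        if P == Q:
            lam = 3 * P[0] * P[0] * inv(2 * P[1], p) % p
        else:
            lam = (Q[1] - P[1]) * inv(Q[0] - P[0], p) % p
        x = (lam * lam - P[0] - Q[0]) % p
        return (x, (lam * (P[0] - x) - P[1]) % p)

    def ec_mul(k, P):
        R = None
        while k:
            if k & 1:
                R = ec_add(R, P)
            P = ec_add(P, P)
            k >>= 1
        return R

    def ecdsa_sign(d, z, k):
        r = ec_mul(k, G)[0] % n
        s = inv(k, n) * (z + r * d) % n
        return r, s

    # --- backdoored device --------------------------------------------
    ATTACKER_KEY = b"baked into the malicious firmware"   # hypothetical

    def covert_nonce(z):
        # Nonce derived from data the attacker can reconstruct from the
        # (public) transaction hash alone, instead of from randomness.
        return int.from_bytes(hmac.new(ATTACKER_KEY, z.to_bytes(32, "big"),
                                       hashlib.sha256).digest(), "big") % n

    d = secrets.randbelow(n - 1) + 1                      # victim's private key
    z = int.from_bytes(hashlib.sha256(b"some transaction").digest(), "big") % n
    r, s = ecdsa_sign(d, z, covert_nonce(z))              # looks like any other sig

    # --- attacker, watching the public signatures ----------------------
    # Knows ATTACKER_KEY, recomputes k, and solves s = k^-1 (z + r*d) for d.
    k = covert_nonce(z)
    recovered = (s * k - z) * inv(r, n) % n
    print(recovered == d)                                 # True

Nothing about the resulting signatures is distinguishable from honest ones without the attacker key, which is why a backdoored signing device is so hard to detect after the fact.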

Ledger was recently either compromised, or showed that they have no release process (both equally bad), by releasing a version of their application which stole user funds. Their claim is that they released a development build from a dirty git clone that contained "testing" code which happened to hardcode an address that every transaction was sent to.

https://www.ledger.fr/2018/08/03/important-message-concernin...


That's indeed impressively bad. The full address seems to be https://etherscan.io/address/0xC33B16198DD9FB3bB342d8119694f...

There don't seem to be any outbound transactions from that address, so Ledger refunded the victims separately instead of sending the funds back. That means they likely don't control the key. OTOH, the funds (worth about $40k for the Ether + another $20k for the tokens) haven't moved at all, so "test key that was lost long ago" does seem plausible. (Especially since it also was used on the testnet before https://ropsten.etherscan.io/address/0xC33B16198DD9FB3bB342d...)

Could of course also be an attacker who was hoping for a bigger loot and didn't want to risk getting caught over $60k, but as you said, not sure what's worse - incompetence or compromise.


Yep.

Either they are so incompetent that they released software out of someone's working git tree and had absolutely no process to catch a ridiculous and obvious failure, or they got popped and are lying about it. Neither is anything but a disaster.


What bothers me is that they did not responsibly disclose the vulnerabilities to the manufacturers ahead of time. This is not moral, and I'm not sure what one gains by not doing that. I think that conference organizers should pressure presenters to do that before talks.

Either that or attendees should apply bottom-up pressure and ask live questions like "what did you do to responsibly disclose this issue?". I think I'll do that at future security conferences I attend.


I wish everyone would stop using the term "responsible" to describe "coordinated" disclosure. Researchers do not owe vendors any cooperation at all. It is perfectly moral to present factual information without any notice whatsoever. I think there's often something to gain through coordinated disclosure, but not always, and it's not your choice to make unless we're talking about your own findings.


Agreed. Researchers owe the companies absolutely nothing.


It's not about the companies. I do not care much about them.

It's about people that may be hacked between someone's 0day disclosure and manufacturer's response. And if the manufacturer doesn't care to fix the bug - roast them about that. It's their fault.

It's not moral because people (not companies) may suffer. Your actions have consequences.


The vulnerability doesn't pop into existence the second it is publicly announced. It was already there. Everybody was already vulnerable.

At least if it's publicly announced people can take steps to defend against it.


Yes but why not send a single email to the manufacturer before making it public? Does it really hurt so much?

From a "cyberpunk hacker" mentality this only gives you an opportunity to roast the manufacturer if they do nothing. Perhaps even bankrupt them, I don't care. Competition will take their places and hopefully be better.


> Does it really hurt so much?

Potentially yes. The manufacturer may attempt to prevent publication through legal threats or action, which can be annoying and expensive even if you ultimately win. The incentive to be annoying goes down significantly once the disclosure cannot be prevented (because it's already public) and the public is watching (i.e. any action against the researcher has a higher likelihood of public backlash).

It also allows the manufacturer, who is likely more experienced and has more resources, to start PR to downplay the attack.

I generally default to responsible/coordinated disclosure, but I also do my research first. If the company has previously shown undesirable behavior (like the stuff I've described), or I've reported to them previously and didn't like the experience, they'll learn about the disclosure from the news.


That doesn't mean hackers knew about it.

It's like finding out my neighbor doesn't lock his front door at night and announcing it on twitter. I didn't create the vulnerability but I'm helping criminals take advantage of it.


>It's like finding out my neighbor doesn't lock his front door at night and announcing it on twitter.

No, it's like finding out your neighbor sold a bunch of faulty locks to a bunch of other people. There's a difference between information that would benefit only one person (the neighbor in your analogy) and information that would benefit many people (the neighbor's customers in my analogy).


In that case it would be better to inform future customers so they don't buy the faulty lock, rather than throwing together an in-depth tutorial on how to take advantage of the lock. Especially since, unlike a lock, software can be updated to fix the problem.

"There's a known exploit that has yet to be fixed"

But then there's an issue of trust. Without documenting the exploit to the public I suppose no one would believe you.

Nevertheless the consequence of releasing an exploit to the public is that you've also informed nefarious players. Actually it's worse than that. Likely the nefarious players are the only ones paying any attention to stuff like this.

Perhaps what's needed is a trusted third party middleman who can verify an exploit exists without releasing it to the general public?


It's not the researcher's responsibility that a vendor is incompetent, frankly. The vendor released something broken; that's their burden to bear. It's not wise to assume that you're the first to find a bug, and with that in mind, expedient full disclosure is acting in the customers' best interests.


I've been doing security research in this industry for a while, and it's effectively not worth anybody's time to attempt to report hardware wallet or software bugs 'responsibly'. I've found that by far the most common responses are being told that the attack doesn't fit into their design (as here), that it's not reasonable to expect, or that it's out of scope for some other reason. For a long time the Trezor wallet's custom crypto library was simply Python transliterated into C, and it had a side-channel so large that you could measure the EC multiply operation with an SDR from across a room. This was supposedly out of their scope for some reason (it has since been somewhat, though not entirely, fixed), despite being easily addressed with industry-standard constant-time operations.

The responses you do get tend to be about protecting the fact that a lot of the bugs are burned into hardware and can't be fixed by anything short of re-issuing it. It's not in their interest to ever acknowledge issues.
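
On the "industry standard constant time operations" point above, the fix is structural rather than exotic: don't let the secret scalar decide how much work gets done. A toy sketch of the shape of it (Python, so not actually constant time, and the plain integer group below is only a stand-in for the curve group; the real thing is careful C with masked swaps):

    # Naive double-and-add: the work depends on the secret bits, so the
    # timing/power trace leaks the scalar.
    def naive_scalar_mult(k, P, add, double, identity):
        R = identity
        while k:
            if k & 1:
                R = add(R, P)     # only happens for 1-bits -> leaks
            P = double(P)
            k >>= 1
        return R

    # Montgomery-ladder shape: exactly one add and one double per bit,
    # every bit, selected by index rather than by a branch on the secret.
    def ladder_scalar_mult(k, P, add, double, identity, bits=256):
        R = [identity, P]                    # invariant: R[1] = R[0] + P
        for i in reversed(range(bits)):
            bit = (k >> i) & 1
            R[1 - bit] = add(R[0], R[1])
            R[bit] = double(R[bit])
        return R[0]

    # Demo with plain integers standing in for curve points.
    iadd = lambda a, b: a + b
    idbl = lambda a: 2 * a
    k, P = 0xC0FFEE, 7
    print(naive_scalar_mult(k, P, iadd, idbl, 0) == k * P)   # True
    print(ladder_scalar_mult(k, P, iadd, idbl, 0) == k * P)  # True

The Python version still isn't constant time (big-int arithmetic, list indexing), but it shows the property being described: the sequence of group operations no longer depends on the key.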


Power analysis is a lot harder to fix than you think. Constant time isn't enough: you probably need blinding and some hardware assistance with features that are inevitably under NDA.


This is true. Many of these devices are using effectively joke crypto libraries, however; there's a world of difference between something that needs a sophisticated setup and the Python crypto library that was transliterated into C and makes absolutely no effort at being constant time. The original Trezor software took on the order of 300 ms to produce a single signature. You could probably have done recovery of the nonce from the EC multiply with a stopwatch.

If your security appliance is using an ECDSA library for Arduino that has absolutely zero tests or review, you just outright lost. Some of the more well known products in the space do exactly this.

https://github.com/kmackay/micro-ecc/blob/master/test/test_e...



