
It is fascinating (to me at least) that almost all RSA implementations rely on a probabilistic primality test, even though the probability of picking a pseudoprime is extremely small.
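For reference, here is a minimal sketch of the Miller-Rabin core that most implementations are some variant of (illustrative Python only, not any particular library's code; the function name and the default round count are my own placeholders):

    import random

    def is_probable_prime(n, rounds=5):
        # Dispose of small and even numbers cheaply.
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        # Write n - 1 as d * 2^s with d odd.
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # found a witness: n is definitely composite
        return True  # "probably prime"; each round misses a composite with probability <= 1/4

Each round that fails to find a witness shrinks the worst-case error bound by a factor of 4, which is where the 4^-T figures in the discussion below come from.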



There's extremely small (like 2⁻²⁰, the chance of winning $50,000 with a $2 Powerball ticket [1]), and then there's cryptographically negligible (like 2⁻¹²⁰, the false positive chance of our primality test). The chance of something cryptographically negligible happening is about the same as the chance of the attacker guessing your key on the first try, or of a cosmic ray flipping a CPU flag and inverting the result of "if !signature.verified { return err }".

[1]: https://www.powerball.com/powerball-prize-chart


I happened to look at this recently, and while I understand the argument (but not the math) for doing fewer Miller-Rabin rounds, why would you do so in PRACTICAL settings? Unlike ECC, you're likely only generating long-term keys, so a shorter key generation time seems like a bad tradeoff. Composite candidates are going to be rejected early, so you're (with high probability) not doing expensive calculations for most candidates. My reading of [BSI B.5.2](https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/Publicat...) confirms this.
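To put a rough number on "rejected early": most random odd candidates fall to trial division or the very first modular exponentiation, so the later rounds mostly run on numbers that are almost certainly prime. A quick sketch to see this (Python; the bit length, sieve bound, and trial count are arbitrary choices of mine):

    import random

    SMALL_PRIMES = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59]

    def survives_cheap_checks(n):
        # Trial division first, then a single Fermat base-2 check
        # (one modular exponentiation, roughly the cost of one Miller-Rabin round).
        if any(n % p == 0 for p in SMALL_PRIMES):
            return False
        return pow(2, n - 1, n) == 1

    trials = 2000
    survivors = 0
    for _ in range(trials):
        n = random.getrandbits(1024) | (1 << 1023) | 1  # random odd 1024-bit candidate
        survivors += survives_cheap_checks(n)
    print(f"{survivors}/{trials} candidates would reach the expensive multi-round stage")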

Of course random bit flips could interfere, but other measures should thwart this in high-stakes environments (at least to some degree).


The number of Miller-Rabin rounds has to be bounded, so if you're not going to base your bound on reaching a cryptographically negligible chance of false positives, what are you going to base it on? Should we do 10? 15?

The problem with "x should be enough, but why not do more?" arguments is that they can be applied recursively, and never answer the question "ok so when should we stop?"


The BSI recommendation I linked says 60 rounds here (considering the worst case and the security level they're targeting). Just wondering why we'd want to do less for a (presumably rare) one-time operation.


The performance of this can matter in some scenarios. In embedded systems, smart cards etc., generating the primes can take a significant amount of time (typically 15-20 seconds) even with the 'low' number of iterations. More iterations mean the user has to wait even longer. In fact, in such systems it is not unusual to see occasional time-out errors when the smart card is unlucky with finding the primes, and the timeout may be a fixed, general value in a part of the stack that is difficult to change for this specific call.

Another case is short-term keys/certificates, where a fresh key pair is generated, and a certificate issued, for each and every signature. This setup makes revocation easier to handle (the certificate typically has a lifetime of a few minutes, so it is 'revoked' almost immediately).

There are also scenarios where the keys are generated on a central system (HSMs etc.) to be injected into a production line or similar; performance can matter there as well. I worked on a system where HSMs were used for this. They could typically generate an RSA key in 1-2 seconds, so 12 HSMs were needed to keep up with demand - and that was again with the reduced number of rounds.


Thanks for the reply (and it definitely answers my original question). For both short-lived keys and cases where production time matters, it makes perfect sense. I was mostly thinking about the scenario where you're generating a key for e.g. a long-lived certificate (maybe even a CA) or high-stakes PGP stuff. It just seems like you'd want to spend a few more seconds in that case.


Why not do 120 then? We can show that the false positive chance of 5 rounds is cryptographically negligible, so 5, 60, and 120 are all the same. If the only argument for 60 is that it's more and this is a really important, rare operation, doesn't it also apply to 120?

I'm not trying to be glib; there is no rational stopping point if we reject the objective threshold.


I don't pretend to understand all the involved math, but what I'm trying to say is that, as far as I understand, T rounds give a 4^-T worst-case probability that we've accepted a "bad" prime (really a composite) with plain Miller-Rabin. Doing ~5 rounds has been shown to be enough to choose a good candidate under most (very probable!) conditions when we pick it at random, and thus the argument is that ~5 rounds is fine. We agree so far?

I'm just asking: why not run the conservative 60-round test, rather than ~5, when you're doing a very rare, one-time key generation? I understand that the extra rounds are very unlikely to reject any numbers, but at least BSI thinks it's worth it for important keys.

If I understand the recommendation right, you wouldn't do 60 rounds for a 2048-bit key and then 120 for 4096; rather, 61 rounds would be enough for 4096 if 120 is alright for 2048.
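To spell out the arithmetic I'm working from (worst-case bound only, quick Python sketch):

    # Worst-case Miller-Rabin bound: a composite survives one round with
    # probability at most 1/4, so T rounds give at most 4^-T = 2^-2T.
    for rounds in (5, 60, 61):
        print(f"T={rounds}: 4^-T = 2^-{2 * rounds} ~ {4.0 ** -rounds:.3e}")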


You're asking why not apply the formula for adversarially selected candidates even though we are randomly selecting candidates. There is simply no reason to, except "maybe we made a mistake" - but then why wouldn't we think we also made a mistake in calculating the 1/4 value, or in any other part of the code?

Phrased another way, do you have an argument for why we should run the conservative 60-round test, rather than asking for an argument for why we shouldn't?

Again, you are "very unlikely" to win the Powerball jackpot. Rounds 6-60 have a cryptographically negligible chance of rejecting a composite that rounds 1-5 missed. That's a different kind of unlikely; otherwise we'd have to worry about the "very unlikely" chance of the attacker guessing an AES-128 key on the first try.

(I don't follow you on the key sizes; if you apply the 1/4 per-round probability, the candidate size is irrelevant.)


Thanks, I understand what you mean. I probably botched the 1/4 probability thing - I was thinking 4^-60 gives 2^-120 assurance (roughly, a security margin in my mind) and that an extra round would quadruple it, but I realize it doesn't work that way.


As I understand it, the number of rounds needed (for a given acceptable failure probability) goes down as the number being tested gets larger. Note (very importantly) that this assumes you are testing a RANDOM integer for primality. If you are given an integer from a potentially malicious source, you need to do the full number of rounds for the given level.



