I'm no cryptographer, so it's good news you don't have any cryptographic questions ; )
I basically used my everyday workstation. There were 79 trillion operations to be made, and I'd save the intermediate result every 1 billion operations (so approximately every 22 minutes I'd save a result).
The computation cannot be parallelized, and the i7 I was using has four cores. So basically, instead of turning it off every night, I'd simply leave it running 24/7.
I'd simply back up the files with the intermediate results once in a while. I rebooted the computer about 50 times in 4 years, and for about 3 and a half of those years it was computing towards the solution.
But I coded (totally unrelated things), compiled things from source, browsed the web, did my email, etc. on the very workstation that computed the solution.
The computation is easy to relaunch from any intermediate step.
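For the curious, here's roughly what such a checkpointed squaring loop can look like, as a minimal Python sketch. All the values below (modulus, step count, file name) are made-up toy stand-ins for the real puzzle parameters; the point is just the resume-from-checkpoint shape:

```python
import os

# Illustrative demo values only: the real modulus is ~2048 bits and
# the real run needed ~79 trillion squarings.
n = 101 * 103                 # toy semiprime standing in for the modulus
t = 1_000_000                 # demo step count
CHECKPOINT = 100_000          # save the running value every so often
STATE_FILE = "squaring_state.txt"

def load_state():
    # Resume from the last saved (step, value) pair, if any.
    if os.path.exists(STATE_FILE):
        step, value = open(STATE_FILE).read().split()
        return int(step), int(value)
    return 0, 2               # start from the base g = 2

def save_state(step, value):
    # Write to a temp file first so a crash mid-write can't corrupt
    # the checkpoint; os.replace swaps it in atomically.
    tmp = STATE_FILE + ".tmp"
    with open(tmp, "w") as f:
        f.write(f"{step} {value}")
    os.replace(tmp, STATE_FILE)

step, value = load_state()
while step < t:
    value = (value * value) % n   # the one step that can't be parallelized
    step += 1
    if step % CHECKPOINT == 0:
        save_state(step, value)
print(value)                      # 2^(2^t) mod n
```

Kill the process at any point and rerun it: it picks up from the last checkpoint, which is exactly what makes surviving ~50 reboots over 4 years a non-issue.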
This reminds me of a time in college when a professor gave us an extra-credit assignment to essentially factor large numbers into primes (a classic problem). I had come up with a pretty bad solution, but thought it might just finish in time for the deadline if I left it running all week. Of course, a transformer blew up the day before the deadline, and I never found out... chances are my program would never have terminated :P
No, it really didn't. I think I mentioned it elsewhere in a comment, but I use an Intel Core i7-6700 "non-K": the version that is not meant to be overclocked and which Intel rates as having a max TDP of 65W, with all four cores running. I don't know the TDP / power consumption when it's turbo-boosting at 3.9 or 4.0 GHz (when one or two cores are at full speed on the 6700, those cores go from 3.4 to 3.9 or 4.0 GHz).
Basically, I like my workstation to be quiet. Really very quiet: as in, I can only hear it if I put my ear on the tower. So I never have ultra power-hungry CPUs. I don't game, so no GPU besides the integrated one. An M.2 SSD that consumes next to nothing.
The power bill probably did increase a bit, but honestly it was lost in the noise: I definitely didn't notice anything : )
> Each step depends on the result of the previous step.
But this doesn't mean anything. If I said to take a number and add 1 eighty trillion times, each step would depend on the result of the previous step, but obviously it would be unnecessary to actually compute each step.
Doesn't this suggest that computing 2^i is just as hard?
You can compute g^(2^i) by just multiplying g's in the same way that you can compute 2^i by adding 1s, and multiplication is as associative as addition is. Specifying that you want g^(2^i) rather than g^k for some arbitrary k looks like a way of ensuring that a partial product computed in parallel (say, g^3) will be of nearly no value, because every partial product that helps compute the total will already be produced by the single agent doing squares.
So why compute g^(2^i) rather than just 2^i? Is 2^i in fact hard to compute? (In binary representation, it certainly isn't, but maybe 3^i is?)
Using the doubling trick you can compute `2^i` in `log i` time, so you can compute `2^(2^i)` in `log 2^i = i` time. Let `i` be big enough, and you've got a "difficult" problem.
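To make the asymmetry concrete, here's a small Python sketch (toy values, not real puzzle parameters): the exponent `2^i` is cheap to compute, but `g^(2^i) mod n` still seems to need `i` sequential modular squarings unless you can factor `n`:

```python
# Doubling trick: compute 2^i with ~log2(i) squarings instead of
# i doublings. Python's pow() already does this internally.
def pow2(i):
    result, base = 1, 2
    while i:
        if i & 1:          # include this power if the bit is set
            result *= base
        base *= base       # square
        i >>= 1
    return result

assert pow2(1000) == 1 << 1000   # 2^1000 in ~10 squarings

# But g^(2^i) mod n by the same trick needs log2(2^i) = i modular
# squarings, and no shortcut is known without the factors of n:
n, g, i = 101 * 103, 3, 1000     # toy semiprime, not a real modulus
v = g
for _ in range(i):               # i sequential squarings
    v = (v * v) % n
# (If we *did* know the factors, phi(n) = 100 * 102 would collapse
# this chain; see the factoring discussion further down.)
```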
> Read the paper. It tells you why they think it's better than that.
OK, I read the paper.
This is the entire discussion of the inherent difficulty of repeated squaring:
> More importantly, repeated squaring seems to be an "intrinsically sequential" process. We know of no obvious way to parallelize it to any large degree. (A small amount of parallelization may be possible within each squaring.) Having many computers is no better than having one. (But having one fast computer is better than one slow one.) The degree of variation in how long it might take to solve the puzzle depends on the variation in the speed of single computers, and not on one's total budget.
I see two problems here:
(1) This is a purely conclusory argument, suggesting that a better response would have been a simple "no, there's no proof".
(2) This does not even attempt to address the question I posed, which was whether it is necessary to compute the intermediate results in order to compute the final result.
In fact, we know that the intermediate results are not necessary to compute the final result, because that is how the message gets enciphered in the first place. It can be done by factoring the modulus, and obviously there is no proof that that is difficult to do.
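To make that concrete: with the factors in hand, Euler's theorem lets you reduce the exponent modulo φ(n), and the whole squaring chain collapses to two fast modular exponentiations. A toy Python sketch (illustrative factors, not the real secret ones):

```python
# With the factors of n known, 2^(2^t) mod n == 2^(2^t mod phi(n)) mod n
# (valid because gcd(2, n) = 1), so no long squaring chain is needed.
p, q = 101, 103          # toy factors; the real ones are kept secret
n = p * q
phi = (p - 1) * (q - 1)
t = 79_000_000_000_000   # ~79 trillion, the scale of the actual puzzle

e = pow(2, t, phi)       # reduce the tower exponent mod phi(n): fast
shortcut = pow(2, e, n)  # then one ordinary modular exponentiation

# Sanity check with a small t, against brute-force sequential squaring:
t_small, v = 20, 2
for _ in range(t_small):
    v = (v * v) % n
assert v == pow(2, pow(2, t_small, phi), n)
```

That shortcut is how the puzzle setter enciphers the message in the first place; the open question is whether any route to the final value exists that avoids both the factorization and the full squaring chain.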
But beyond that, it also isn't immediately obvious that it must necessarily be impossible to compute the final result directly by means that don't require factoring the modulus. It is possible that somewhere in the literature on Blum Blum Shub this question has been analyzed, but that goes a bit beyond your suggestion to "read the paper".
So, I'm stumped. Tell me what you saw in the paper that you thought would answer my question?
> In fact, we know that the intermediate results are not necessary to compute the final result, because that is how the message gets enciphered in the first place. It can be done by factoring the modulus, and obviously there is no proof that that is difficult to do.
Well, there's no mathematical proof. But physicists and judges would accept that there's lots of money and fame to be made breaking, e.g., RSA, so countless people have tried and keep trying, but haven't come up with much so far.
I hate it when people think they know better than the experts. Do you really think that if they could have made it run in parallel they wouldn't have? Get off your high horse and solve it yourself if you're so smart.
Thanks for the congrats!