Fun fact: 2^133477987019 is the smallest power of two that ends with 40 even digits. In fact, it ends with 46 even digits - which is surprising, given that it is significantly smaller than 2^(2^46). The last 50 digits of this number are ...32070644226208822284248862288402404246620406284288. This number has over 40 billion digits, though, so it seems kind of unlikely that we will ever find another number where all the digits are even. The relevant OEIS sequence is here: https://oeis.org/A096549
Context: I wrote a search program that is substantially faster - it takes just a few minutes to get up to 2^(10^13), although my laptop's limited memory is starting to be a problem (my intermediate result file is already nearly 1GB in size). Unfortunately, it seems there are no results up to 2^15258789062500, which is a 4.5-trillion digit number.
I am already doing that (thanks to `pow(x, y, z)` in Python). The numbers I'm working with would have trillions of digits were it not for this trick - way more than 1GB. 1GB is what I use to store all of the candidates, in an inefficient JSON format.
I literally just invented it, so it's not published yet, although I'll probably throw it up on GitHub at some point. It matches all published results up to an exponent of 3 billion or so, so I'm quite confident it's correct.
The short explanation is that 2^x mod 10^k will repeat with a cycle length of 5^(k-1)*4. This follows from Euler's theorem applied to 2^x mod 5^k (phi(5^k) = 5^(k-1)*4), plus the fact that 2^x ≡ 0 mod 2^k for all x >= k. So, after the k'th term, the rest will repeat with that cycle period. We'll manually test all of the 2^x for x < k (there aren't many) and then rely on the cycles to test all of the larger powers.
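That cycle claim is easy to sanity-check numerically; here's a minimal sketch (my code, not the author's) that verifies 2^x mod 10^k repeats with period 5^(k-1)*4 once x >= k:

```python
def cycle_length(k):
    # claimed period of 2^x mod 10^k for x >= k
    return 4 * 5 ** (k - 1)

def check_cycle(k, samples=100):
    # verify 2^x mod 10^k == 2^(x + period) mod 10^k for a range of x >= k
    mod, period = 10 ** k, cycle_length(k)
    return all(pow(2, x, mod) == pow(2, x + period, mod)
               for x in range(k, k + samples))
```

Three-argument `pow` does the modular exponentiation without ever materializing the full power.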
The algorithm itself is a kind of sieving + lifting procedure: it inductively identifies all of the candidate exponents mod 5^(k-1)*4 which yield numbers with k trailing even digits (i.e. all even digits mod 10^k). Each such exponent lifts to 5 possible exponents mod 5^k*4 (the cycle length mod 10^(k+1)), which we can test; on average, half of these will have an even (k+1)'th digit and therefore remain candidates at the next power of 10. Therefore, on average, we grow our candidate list by a factor of 2.5 per added digit - thus, for each 10^k, we test O(2.5^k) candidates (approximately 1.6*2.5^k, experimentally). This isn't too bad - at mod 10^19, we test only 55097940 candidates, representing every exponent below 5^18*4 = 15258789062500.
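A minimal sketch of this sieve-and-lift loop (my own code and naming, not the author's program; small exponents x < k are assumed to be checked separately, as described above):

```python
def all_even(n):
    # True if every decimal digit of n is even
    return all(d in "02468" for d in str(n))

def sieve(max_k):
    # cands: exponent residues mod `cycle` whose powers end in k all-even digits
    cycle = 4                                   # period of 2^e mod 10 (e >= 1)
    cands = [e for e in range(1, 1 + cycle) if all_even(pow(2, e, 10))]
    for k in range(2, max_k + 1):
        mod, new_cycle = 10 ** k, cycle * 5     # new period: 4 * 5^(k-1)
        lifted = []
        for r in cands:
            for j in range(5):                  # each residue lifts 5 ways
                e = r + j * cycle
                if e < k:                       # pick a representative in the
                    e += new_cycle              # periodic regime (e >= k)
                if all_even(pow(2, e, mod)):
                    lifted.append(e % new_cycle)
        cands, cycle = sorted(lifted), new_cycle
    return cands, cycle
```

The survivor counts per level (4, 10, 25, ...) match the cycle statistics reported elsewhere in this thread.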
My rather hasty prototype is actually implemented in Python - not a language you want to do tons of arithmetic in - but it's still fast enough to chew through all those candidates in ~20 minutes on a single core. Obviously, there's ample room to make this faster; I figure a good, parallelized native (C/C++/Rust etc.) implementation could easily be 100x faster.
> My rather hasty prototype is actually implemented in Python - not a language you want to do tons of arithmetic in [...]
It might actually not be too bad? The actual big integer arithmetic is written in some fast language, and Python is just the glue holding everything together.
Additionally, Python allocates memory for every bigint operation. With a mutable bigint type, or fixed-size large integers (e.g. uint256), we could skip a lot of allocation overhead. I have previously prototyped big integer algorithms in Python, and when rewriting in Rust I have gotten massive speedups - there are just so many more opportunities to optimize in Rust/C/C++ than in Python.
Well, it's still better than trying to implement bignums in Python itself on top of limited precision integers.
I'm not even sure whether the prototype in question here uses bignums. Perhaps they use numpy in some clever way (which would probably be a better illustration of my thesis). Or perhaps it's fast enough as it is?
That is basically the same algorithm I used. The sieve I used was only mod 10^15 but it sufficed to give the results at 10^13 in a few seconds and 10^15 in a few hours.
I worked on this once after an argument with my boyfriend.
The original argument was "the ones digit has a permanent pattern in 2^n: {2, 4, 8, 6, 2, ...}".
We made a system to generate digits for powers of two - eventually generalizing it to arbitrary bases - and found that you can decompose digit frequency and find a variety of NMR-like resonances that vary based on where you terminate data collection.
It was really fun and this makes me want to get back into this so I could check the properties of those resonances across bases and stopping points for data collection.
> The original argument was "the ones digit has a permanent pattern in 2^n: {2, 4, 8, 6, 2, ...}".
Isn’t that obviously the case (for n >= 1 anyway)? If each successive power of two is just the previous number times two, then it would always have to follow that pattern.
Any integer >= 10 can be expressed as the sum of a multiple of 10 plus a single digit number, for example 32 = 30 + 2. So 32 * 2 can be written as 2 * (30 + 2). And since any integer ending in zero multiplied by any integer must also end in zero, you only need to look at the single digit part of the number to see that a pattern must immediately emerge for powers of two, or of any number for that matter.
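That argument in miniature (helper name is mine): only the ones digit of the previous power matters, so the cycle (2, 4, 8, 6) is forced.

```python
def next_ones(d):
    # (10*a + d) * 2 = 10*(2*a) + 2*d, so the next ones digit is (2*d) % 10
    return (2 * d) % 10

# ones digit of 2^n for n = 1, 2, 3, ...
ones = [pow(2, n, 10) for n in range(1, 13)]
```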
As noted in 3) in the Shepherd's comment, 2^k has no odd digits only if 2^k mod 10^n has no odd digits for every integer n. So many values of k can be filtered out by checking whether 2^k mod 100 has an odd digit; another portion of the remainder gets filtered with 2^k mod 1000, then 2^k mod 10000, and so on. (EDITED: Thanks to andrewla!) All of these are periodic, so the first few steps can be made into a lookup table that filters out almost every k.
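A sketch of that lookup-table idea for the first stage (names are mine): the residues mod 20 that survive the "last two digits all even" test form the table, and everything else is rejected immediately.

```python
CYCLE = 20   # period of 2^e mod 100 for e >= 2
SURVIVORS = {e % CYCLE for e in range(2, 2 + CYCLE)
             if all(c in "02468" for c in str(pow(2, e, 100)))}

def passes_mod_100(e):
    # necessary condition only: the last two digits of 2^e are both even
    return e < 2 or (e % CYCLE) in SURVIVORS
```

Only half the residues survive this first table, and each later stage (mod 1000, mod 10000, ...) halves the remainder again on average.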
2^k mod 10 is never odd; it's the cycle (2, 4, 8, 6).
Related here is the length of the cycles mod 10^k, https://oeis.org/A005054. Interestingly, the number of all-even-digit elements in those cycles does not appear to be in the OEIS; I get 4, 10, 25, 60, 150 as the first five terms.
This does appear to get more efficient as k gets higher; for k=11 I get a cycle length of 39,062,500 with an even subset of 36,105, meaning only 0.09% of the cycle is all-even.
This is all brute force; there's probably a more elegant way of computing this.
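The brute force fits in a few lines (my sketch); it reproduces the counts above:

```python
def even_count(k):
    # count exponents in one cycle (e >= k) whose last k digits are all even
    cycle, mod = 4 * 5 ** (k - 1), 10 ** k
    return sum(all(c in "02468" for c in str(pow(2, e, mod)))
               for e in range(k, k + cycle))
```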
I see my error here -- you can in fact eliminate half even for 2^k mod 10, because both 6 and 8 force a carry; so ending in a 2 or a 6 means that the next higher digit must be odd.
The length of the cycles mod 10^k is the multiplicative order of 2 mod 5^k, which divides Euler's phi function of 5^k: 5^(k-1) * 4. In fact, 2 is a primitive root mod 5 and mod 25, which makes it a primitive root mod 5^k for every k, so the cycle length is exactly 5^(k-1) * 4.
The length of the even subset grows roughly as 2.5^k * 1.6. To see why, consider that the length of the cycle grows by a factor of 5 when incrementing k. Each all-even-digit power mod 10^k leads to 5 numbers mod 10^(k+1) which all share the same last k digits - i.e. their last k digits are even. We can model the (k+1)'th digit as being random, in which case we expect half of all those new numbers to consist entirely of even digits (one new digit, which is either odd or even, and k digits from the previous round that are all even). Thus, when incrementing k, the number of all-even-digit powers in the cycle will grow by approximately a factor of 2.5.
You only need to check enough digits to find an odd one. An odd digit appears among the low-order digits (within the last 46, up to a very high level) for the first quadrillion cases, so you only need to compute 2^n mod 10^d where d is big enough to be safe. I used d=60 in my computations to take this to 10^15 candidates (with no additional terms found).
To prove that there is no value of k between 12 and 10^10 such that 2^k has all even digits, you only have to prove that there is an odd digit among the lowest X decimal digits for all 12 ≤ k ≤ 10^10.
The value of X necessary to prove this grows rather slowly compared to k. For example, the smallest power of 2 that doesn't have an odd digit in its last 16 digits is 2^12106. The smallest power of 2 that doesn't have an odd digit in its last 32 digits is 2^3789535319. So it makes sense to try increasingly large values of X until you are able to rule out all values of 2^k for k up to 10^10.
Here's a C++ program you can run to replicate this proof. It takes around 20 minutes to run, and can probably be optimized further, but it shows the principle: https://pastebin.com/DVK2JKdq
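For a smaller illustration of the same staged idea in Python (my sketch, not the linked program), one can reproduce the 16-digit figure quoted above. `start=12` skips the known all-even power 2048 and everything below it:

```python
def first_exponent_with_even_tail(digits, start=12):
    # smallest exponent >= start whose power of two has no odd digit
    # among its last `digits` decimal digits
    mod = 10 ** digits
    e, r = start, pow(2, start, mod)
    while any(c in "13579" for c in str(r)):
        r = (r * 2) % mod   # step to the next power, keeping only the tail
        e += 1
    return e
```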
This optimization is important because you can then discard most of the number in question, bounding the integer size required for computation.
For instance you could store the number in question in a 128-bit integer, shift left (double), check for odd digits (a series of modulo and divide operations), and then truncate using a modulo and subtract. You can repeat this process as long as you like. If you find an all-evens number, then you can do a more expensive in-depth check.
There likely is a trick but the above is also technically feasible as-is.
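In Python the same truncate-and-double loop looks like this (a sketch; a native uint128 version would follow the same shape):

```python
def candidates(limit, d=30):
    # keep only the last d digits, double each step, and record exponents
    # whose kept suffix is all even - candidates for a full in-depth check
    mod, r, hits = 10 ** d, 1, []
    for n in range(1, limit + 1):
        r = (r * 2) % mod
        if all(c in "02468" for c in str(r)):
            hits.append(n)
    return hits
```

For exponents up to 100 the surviving candidates are exactly the known terms of the sequence.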
You have to do this for 10^10 (ten billion) powers. Each operation needs to check ~4.3 billion decimal digits at worst (half that on average). It's highly parallelizable, since each power is easy to compute in binary, and you can do a binary->decimal conversion without relying on previous results - one operation per decimal digit.
All up: 10^10 powers * ((10^4.3)/2) decimal digits to calculate and check for each of those powers - around 200 trillion operations in human terms. It's still hard enough that you'd want a lot of compute. Getting each operation down to a nanosecond still means you're waiting 2.3 days for a result. But it's also fair to say it's feasible.
You forget that the number of decimal digits grows linearly with the exponent. To generate the first n numbers of the form 2^k, you need O(n^2) time.
For example, 2^(10^10) is 10^10 bits and about 3 billion decimal digits.
So for n up to 10^10, you need to do about (10^10)^2/2 = 5×10^19 elemental operations. At one operation per nanosecond, that takes 1584 years of CPU time. Not at all easy to brute force!
First of all, 1584 years of CPU time is not that bad... if your university has a lab of 200 computers, each with 64 cores, that's already 45 days. If there were a SETI-like system letting researchers run their code on idle PCs, a calculation like this might get finished in a few months. Don't underestimate the amount of idle compute sitting around in large organizations.
Second, while you can use the naive algorithm (generate the number, use something like GMP to convert to decimal, find an odd digit), there are some pretty trivial optimizations. The OEIS comments mention most numbers have odd values in their last few digits, so in most cases, all you need to do is calculate (2^n mod 100000000) and check that there is an odd digit there. Only if there is not (which should be pretty rare) do you pull out GMP and do the full check.
But wait, there is more! 2^(10^10) is a single binary 1 followed by 10^10 binary zeros, so it seems wasteful to spend gigabytes of memory bandwidth storing all those zeros, and you don't need the full result either. Implementing your own custom division algorithm specialized for these numbers will let you have a tight loop with almost no memory accesses - something that modern CPUs do very fast. I would not be surprised if you could even get a GPU to do it for you.
There could be more opportunities for improvement... For example, I suspect the internal state of that division algorithm might end up being periodic, in which case you'd be able to quickly come up with an answer without having to go through every digit. But even if that's not possible, these optimizations make the problem pretty tractable.
To prove a number is on the list, you need to calculate all its digits. But to prove it's not on the list, you only need to calculate its digits up to the first odd one. It looks like the number of digits until the first odd one grows very slowly; per the comments there, up to n=50000 it has a maximum of 18.
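Measured concretely (my sketch, using a 60-digit window, which other comments here suggest is safely deep):

```python
def trailing_even_run(n, window=60):
    # number of trailing even decimal digits of 2^n before the first odd one,
    # looked at through a window of the last `window` digits
    run = 0
    for c in reversed(str(pow(2, n, 10 ** window))):
        if c in "13579":
            break
        run += 1
    return run
```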
There’s probably a smart way to rule out a lot of cases so you only have to check a relatively small number of candidates. It would be good to know what it is.
    for i in range(1, 10**10):
        for k in range(1, 5):
            s = str(pow(2, i, 10**(10**k)))
            if '1' in s or '3' in s or '5' in s or '7' in s or '9' in s:
                break
        else:
            print(2**i)
It's really easy to parallelize; I was able to run it up to 10**8 in about 15 minutes, so you would be able to run it up to 10**10 in a few hours with parallelization.
Checking about 10^10 of them is just about doable as vhcr correctly showed. (I mean it wasn't optimal, but 'leave this running for 400 hours' is far from impossible)
It is 10^10 cases, checking numbers up to 2^(10^10). The numbers themselves are pretty big (10^10 bits, or ~1.2 gigabytes each, if you want to write the full binary representation), but nothing that modern computers can't handle.
I'm not a number theorist, but I note that 16 is 2^4 and 8 is 2^3 (both powers of 2). Maybe there is a provable statement about whether these lists are finite in bases that are not 2^k, and maybe there is a bound on the length of the list by the value of log_2(base).
I'm not going to write it out, but there is certainly a proof that the list is infinite in base 2^k (for integer k >= 2). I'm more wondering about how hard it is to prove that the list is finite in a different base.
This is remarkable! I always find it fascinating when simple-to-express properties lack a proof. This is a very simple thing to evaluate, and it seems like it should be straightforward to establish that 2048 is the highest such power.
Base-10 is just our chosen way of writing numbers; it doesn't need to have any deep relationship with the arithmetic properties of sequences like the powers of 2. For most sequences (Fibonacci numbers, factorials, etc.), the digits of large members are essentially random and obey no pattern - the two things are just unconnected. It seems extremely likely that 2048 is the highest, but there might not be a good reason that could lead to a proof - it's just that larger and larger random numbers have less and less chance of satisfying the condition (with a tiny probability that they do, meaning we can't prove it).
Interestingly, there are results in the other direction. Fields medalist James Maynard had an amazing result that there are infinitely many primes that have no 7s (or any other chosen digit) in their decimal expansion. This actually _exploits_ the fact that there is no strong interaction between digits and primes - to show that such primes must exist with some density. That kind of approach can't work for finiteness, though.
Yes, I find math problems that depend on base 10 to be unsatisfying because they rely on arbitrary cultural factors of how we represent numbers. "Real" mathematics should be universal, rather than just solving a puzzle.
Of course, such a problem could yield deep insight into number theory blah blah blah, but it's unlikely.
Everything about this seems so arbitrary. You look at the powers of an arbitrary number (here, 2), you pick an arbitrary base (here, 10) in which to express those powers, and ask for a random property of its digits (whether they belong to the set {0,2,4,6,8}).
Nothing about this question feels natural. I've noticed that random facts often don't have simple proofs.
In this case, it doesn't even help to downsize the problem. Erdős once asked the same question, but with powers of 2, base 3, and the set {0,1}. (If you want to, you can disguise that version as something more natural-looking like "Which powers of 2 can be expressed as the sum of distinct powers of 3?") But we're still nowhere close to solving it.
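That base-3 version is easy to play with numerically (my sketch): a power of 2 is a sum of distinct powers of 3 exactly when its ternary expansion uses only the digits 0 and 1.

```python
def ternary(n):
    # base-3 expansion of n as a string
    s = ""
    while n:
        s = str(n % 3) + s
        n //= 3
    return s or "0"

# exponents k <= 1000 for which 2^k is a sum of distinct powers of 3
hits = [k for k in range(1001) if "2" not in ternary(2 ** k)]
```

Only 1, 4, and 256 show up, and (per the Erdős conjecture discussed in this thread) no further hits are expected.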
You can generalize it if you want: given powers of p in base b, what is the largest n = p^i such that each digit is divisible by k? Here we have: if p=2, b=10, k=2, then n=2048 and i=11. Why? Maybe there is a deeper reason that applies to all values of p, b, k.
I mean, clearly it isn't in this case. But given that the digits of 2^n are cyclical at each decimal position, it does feel like this should fall out of some sort of Chinese remainder theorem manipulation.
True. It might also just be that the question hasn't attracted the attention of number theorists, and finding a proof wouldn't be unreasonably difficult to an expert in the field.
Nope, it's not that easy in this case. E.g., Erdős conjectured in 1979 that every power of 2 greater than 256 has a digit '2' in its ternary expansion [0]. This makes sense heuristically, but no methods since then have come close to proving it.
Digits of numbers are a wild beast, and they're tough to pin down for a specific sequence. At best, we get statistical results like "almost all sequences of this form have this property", without actually being able to prove it for any one of them. (Except sometimes for artificially-constructed examples and counterexamples, or special classes like Pisot numbers.)
I thought that at first as well. Then I read the notes, which made me reframe it as 'odds your digit sequence won't ever include a six', and note that checking up to 2^50000 found only two candidates with the first 15 digits even, and I came down on 'shrinking so quickly it's super unlikely'. No proof here due to HN's comment limits, of course..
I wonder if we can get a sense of how fast it would grow if we hypothesize it is an infinite sequence.
And if it is a finite sequence, one could define f(p, n) as the sequence of successive exponents of 2 such that the ratio of even digits over its total number of digits is greater than p. This could be an interesting way of describing a set of fast growing functions from exponential growth (p=0) to arbitrarily fast growth as p grows closer to 1 (or P where P is the smallest number such that f(P, n) is a finite sequence).
I might have a proof that this list is complete (I am very tired though and should be sleeping instead of doing this, so my apologies if I'm wrong):
Because a carry can only add 1, each digit of 2^(k-1) must be at most 4 - otherwise, the carry would make the next digit of 2^k odd.
Assume this list is complete up to 10^n. We find the biggest l such that 2^(5^(l - 1)*4) < 10^n. Let us consider the 10^(n+1) > 2^k > 10^n such that 2^k has all even digits.
By cyclicity of powers of 2 mod 10^l (that's why we chose this l), this means that 2^(k - 1) = a*10^l + b, where a is some integer and b is 1, 2, 4, 32 or 1024 (because those are the only options with digits less than 5 mod 10^l). If l > 10, that means that we can divide by b to get 2^(k-1)/b = c*10^d + 1 where c and d are nonzero integers. But this is a contradiction.
Now we only need to check up to 2^(5^10 * 4) to allow l > 10, which has already been done by other comments.
> By cyclicity of powers of 2 mod 10^l (that's why we chose this l), this means that 2^(k - 1) = a*10^l + b, where a is some integer and b is 1, 2, 4, 32 or 1024 (because those are the only options with digits less than 5 mod 10^l).
I'm pretty sure this is the part where the argument breaks down. Just because 2^(k-1) mod 10^l only has small digits doesn't mean that it corresponds to a lesser power of 2 with small digits. E.g., 2^18 ends in 2144, which is not one of 1, 2, 4, 32, or 1024. (And for that matter, 1024 ends in 24.)
The hard part is showing that eventually you must hit a digit greater than 4 if you look at a long-enough suffix.
This uses much higher-order sieves, so it runs about 32000 times faster than the naive algorithm and was able to search to this point on a single core. It is also relatively easy to thread this algorithm.
For those curious, one relevant field of mathematics that could be used to prove properties of this sequence would be Sieve theory: https://en.m.wikipedia.org/wiki/Sieve_theory
I'd love to see a numberphile episode on this, for two reasons:
1: it's been too long since we've had a Neil Sloane episode, which are always a highlight
2: it sounds like the kind of thing where just a little bit more attention from maths enthusiasts will result in a proof of the sequence being finite (or not) very quickly
Not all even digits, but I'm still mindblown that 33554432 is a power of 2 (2^25). It makes a nice little song on one of those singing calculators from the 80s that play a little tune with a different note for each digit.
A puzzle you might appreciate: 2^29 is a nine-digit number. All nine digits are different. Which of the ten digits is missing? Figure it out without computing 2^29 explicitly.
for 2^n only zeroes are shifted in, to all eternity. thus the lowest digits go through a fixed cycle.
as the top bit is shifted to the left in each shift+add-threes-where-needed cycle, and leaves its bcd digit after four such cycles, i intuit the next bcd byte will also settle into some cycle, as its 'input' is boringly deterministic: all zeroes for the lowest digit, leading to 1, 2, 4, 8, (1)6, (1)2, 4, 8, (1)6, (1)2, ... so 0000(1100)* is shifted in to the tens digit.
that gives 0,0,0,0,
0+1, 2+1, 6, (1)2,
4+1, (1)0+1, 2, 4,
8+1, (1)8+1, (1)8, (1)6,
(1)2+1, 6+1, (1)4, 8,
(1)6+1, (1)4+1, (1)0, 0,
0+1, 2+1, ... for the tens digit. which has a period of 20 ... with a shift to hundreds pattern of 0000(00010100011110101110)* and an odd odd even even rhythm on the tens digit.
noice.
some number nerds will for sure figure or know ways to spin this on for the hundreds digit. and determine the periodicity of having all the lowest n digits even. or the loss of that periodicity... because maybe just maybe this spins into some wheel where one of the digits foo to bar always is odd. and then you can stop searching...
but what do I know.
I just Dunning-Kruger an intuition that the "double dabble" bin2bcd _may_ be useful in this :-D
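The period-20 tens digit and its odd/odd/even/even rhythm worked out above check out numerically (a quick sketch; the rhythm's phase depends on where you start):

```python
# the tens digit of 2^n cycles with period 20 for n >= 2
tens = [pow(2, n, 100) // 10 for n in range(2, 42)]
# its parity follows a repeating even/even/odd/odd rhythm
parity = "".join("eo"[d % 2] for d in tens)
```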
Using "powers of two" to mean squared is much, much more common than you think. I'm part-time substituting as a grade-school teacher. Ask any of them whether they make this distinction.
"power of two" with "power" singular perhaps but "powers of two" with the word "powers" plural is really not common as a synonym of squared (excluding people who don't know what they mean at all in the first place)
This does not clarify -- your initial post made a claim about 0^2, which (correctly) does not appear in this list.
Moreover it is trivial that there are no negative powers of 2 that have all even digits, since the trailing digit will always be 5. So the question reduces to whether there are powers of 2 greater than 2048 that have all even digits.
Somehow I missed the title and wondered what the fuck was going on...
2, 4, 8, 64, 2048 are powers of 2 (i.e. 2^n), and they don't contain any odd digits (e.g. 16, 128, 1024 contain a 1 and so are not in this list; same with 4096, which contains a 9).
I'm confused by your comment. First, powers of two are 2^n not n^2. But what do you mean you missed the title and wondered what was going on? How could you expect to understand the contents without reading the title? Surely I'm missing something.