
This is remarkable! I always find it fascinating when simple-to-express properties lack a proof. This one is very easy to check, and it seems like it should be straightforward to establish that 2048 is the highest such power.



Base-10 is just our chosen way of writing numbers; it doesn't need to have any deep relationship with the arithmetic properties of sequences like the powers of 2. For most sequences (Fibonacci numbers, factorials, etc.), the digits of large members are essentially random and obey no pattern: the sequence and the base are just two unconnected things. It seems extremely likely that 2048 is the highest, but there may be no good reason that could lead to a proof. It's just that larger and larger "random" numbers have a smaller and smaller chance of satisfying the condition (with a tiny probability that one does, which is why we can't prove it).
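To make the heuristic concrete, here's a quick brute-force search (my own sketch, not from the thread) for powers of 2 whose decimal digits are all even:

```python
def all_digits_even(n: int) -> bool:
    """True if every decimal digit of n is in {0, 2, 4, 6, 8}."""
    return all(int(d) % 2 == 0 for d in str(n))

# Exponents n for which 2**n has only even digits.
hits = [n for n in range(1, 10_000) if all_digits_even(2 ** n)]
print(hits)  # → [1, 2, 3, 6, 11], i.e. 2, 4, 8, 64, 2048
```

A power of 2 with d digits has roughly a (1/2)^d chance of passing if its digits behaved randomly, and the sum of those probabilities over all larger powers is tiny, which matches finding nothing past 2^11 in the search.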

Interestingly, there are results in the other direction. Fields medalist James Maynard proved the amazing result that there are infinitely many primes with no 7s (or any other chosen digit) in their decimal expansion. This actually _exploits_ the fact that there is no strong interaction between digits and primes, to show that such primes must exist with some density. That kind of approach can't work for proving finiteness, though.


Yes, I find math problems that depend on base 10 to be unsatisfying because they rely on arbitrary cultural factors of how we represent numbers. "Real" mathematics should be universal, rather than just solving a puzzle.

Of course, such a problem could yield deep insight into number theory blah blah blah, but it's unlikely.


Everything about this seems so arbitrary. You look at the powers of an arbitrary number (here, 2), you pick an arbitrary base (here, 10) in which to express those powers, and ask for a random property of its digits (whether they belong to the set {0,2,4,6,8}).

Nothing about this question feels natural. I've noticed that random facts often don't have simple proofs.


In this case, it doesn't even help to downsize the problem. Erdős once asked the same question, but with powers of 2, base 3, and the set {0,1}. (If you want to, you can disguise that version as something more natural-looking like "Which powers of 2 can be expressed as the sum of distinct powers of 3?") But we're still nowhere close to solving it.
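The equivalence in that disguised version is easy to test by hand. A short search (my own sketch) for powers of 2 whose ternary digits are all 0 or 1, i.e. powers of 2 that are sums of distinct powers of 3:

```python
def ternary_digits(n: int) -> list[int]:
    """Base-3 digits of n, least significant first."""
    digits = []
    while n:
        n, d = divmod(n, 3)
        digits.append(d)
    return digits

# Exponents k for which 2**k is a sum of distinct powers of 3.
hits = [k for k in range(1, 2_000) if 2 not in ternary_digits(2 ** k)]
print(hits)  # → [2, 8]: 4 = 3 + 1 and 256 = 243 + 9 + 3 + 1
```

Nothing turns up past 2^8 = 256 in any search range people have tried, yet no proof that the list stops there is known.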


You can generalize it if you want: given the powers of p written in base b, what is the largest n = p^i such that every digit of n is divisible by k? Here p = 2, b = 10, k = 2 gives n = 2048 and i = 11. Why? Maybe there is a deeper reason that applies to all values of p, b, and k.


Why should it be straightforward to establish that?


Proofs of non-existence aren't usually straightforward.


I mean, clearly it isn't in this case. But given that the digits of 2^n are eventually periodic at each decimal position, it does feel like this should fall out of some sort of Chinese remainder theorem manipulation.
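The periodicity claim itself is easy to verify: for n ≥ k, the last k digits of 2^n repeat with period 4·5^(k-1), because 2^n mod 2^k is 0 while 2 has multiplicative order 4·5^(k-1) mod 5^k, and the CRT glues the two. A small demonstration (my own sketch):

```python
def last_k_period(k: int) -> int:
    """Period of 2**n mod 10**k for n >= k, found by direct cycle search."""
    mod = 10 ** k
    start = pow(2, k, mod)  # purely periodic from n = k onward
    x, n = start, 0
    while True:
        x = x * 2 % mod
        n += 1
        if x == start:
            return n

for k in range(1, 5):
    print(k, last_k_period(k))  # periods 4, 20, 100, 500
```

The catch is that these cycles only control the low-order digits; the all-even-digits condition constrains every digit at once, and the number of digits grows with n, so no fixed modulus pins the whole thing down.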


True. It might also just be that the question hasn't attracted the attention of number theorists, and finding a proof wouldn't be unreasonably difficult for an expert in the field.


Nope, it's not that easy in this case. E.g., Erdős conjectured in 1979 that every power of 2 greater than 256 has a digit '2' in its ternary expansion [0]. This makes sense heuristically, but no methods since then have come close to proving it.

Digits of numbers are a wild beast, and they're tough to pin down for a specific sequence. At best, we get statistical results like "almost all sequences of this form have this property", without actually being able to prove it for any one of them. (Except sometimes for artificially-constructed examples and counterexamples, or special classes like Pisot numbers.)

[0] https://arxiv.org/abs/math/0512006


Thanks!



