Somewhat interesting, 123456789 * 8 is 987654312 (the last two digits are swapped). This holds for other bases as well: 0x123456789ABCDEF * 14 is 0xFEDCBA987654312.
Also, adding 123456789 to itself eight times on an abacus is a nice exercise, and it's easy to check the end result visually.
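Both multiplications are easy to sanity-check; a minimal sketch in Python (my own check, not something from the thread):

```python
# Decimal case: the product is the descending number with its last two digits swapped.
assert 123456789 * 8 == 987654312

# Hexadecimal case: same pattern, with the multiplier 14 (0xE) playing the role of 8.
assert 0x123456789ABCDEF * 14 == 0xFEDCBA987654312
```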
I also went about looking at the difference rather than the order. In the hexadecimal case, the difference is 15 (0x...21 vs 0x...12). I thought, then, that for any base B with ascending digits A and descending digits D, (D-(B-1))/A=B-2.
For binary, it looks like (1-(b-1))/1=b-10 or (1-(2-1))/1=2-2=0 in decimal.
For trinary, it looks like (21-(b-1))/12=b-2 or (7-(3-1))/5=5/5=1 in decimal.
For quaternary, it looks like (321-(b-1))/123=b-2 or (57-(4-1))/27=54/27=2 in decimal.
Essentially, and perhaps unsurprisingly, the size of the slices in the number pie gets smaller the bigger the pie gets. In binary, the slice is the pie, which is why the division comes out to zero there.
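That identity is quick to brute-force across bases; a small sketch (the digits_value helper is my own naming, not from the thread):

```python
# Check (D - (B-1)) / A == B - 2, where A is the ascending-digit number and
# D the descending-digit number written in base B.
def digits_value(digits, base):
    value = 0
    for d in digits:
        value = value * base + d
    return value

for base in range(2, 17):
    A = digits_value(range(1, base), base)          # 1 2 3 ... (B-1)
    D = digits_value(range(base - 1, 0, -1), base)  # (B-1) ... 3 2 1
    assert (D - (base - 1)) % A == 0
    assert (D - (base - 1)) // A == base - 2
```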
This was by far the most interesting part to me. I've never considered that code and proofs can be so complementary. It would be great if someone did this for all math proofs!
"Why include a script rather than a proof? One reason is that the proof is straight-forward but tedious and the script is compact.
A more general reason that I give computational demonstrations of theorems is that programs are complementary to proofs. Programs and proofs are both subject to bugs, but they’re not likely to have the same bugs. And because programs make details explicit by necessity, a program might fill in gaps that aren’t sufficiently spelled out in a proof."
As a kid, I was marginally decent at competitive math. Not good like you think of kids who dominate those types of competitions at a high level, but "I could qualify for the state competition" type good.
What I was actually good at, or at least fast at, was TI-Basic, which was allowed in a lot of cases (though not all). Usually the problems were set up so you couldn’t find the solution using just the calculator, but if you had a couple of ideas and needed to choose between them, you could sometimes cross off the wrong ones with a program.
The script the author gives isn’t a proof itself, unless the proposition is false, in which case a counterexample always makes a great proof :p
I used to do the same thing. I'd scan for problems on the test amenable to computational approaches and either pull up one of my custom made programs or write one on the spot and let it churn in the background for a bit while I worked on other stuff without the calculator.
This is misleading in that the (Curry–Howard) correspondence is between proofs and the static typing of programs. A bug in a proof therefore corresponds to a bug in the static typing of a program (or to the type system of the programming language being unsound), not to any other program bug.
The point is not to lean so tightly on the correspondence. The fact that you’re coming at the problem differently (even that it’s a different problem, “for some” versus “for all”) is actually helpful. You’re less likely to make the same mistake in both.
There’s a technique for unit testing where you write the code in two languages. If you just used a compiler and were more confident about the correspondence, that would miss the point. The point is to be in a different frame of mind and to use different tools.
Code is proof that the operation embodied by the code works. I don't understand how it proves anything more generally than that, apart from code using exotic languages or techniques intended for just that purpose.
Well, in theory (and I guess more generally philosophy) land, sure, you can't really "prove absoluteness" outside of your axioms and assumptions. You need to have a notion of true and false, and then implications, for example, to do logic, then whatever leap it takes from there to do set theory, then go up from there, etc. It's turtles all the way down.
In practice land (real theorem provers), I guess the idea is that the prover should theoretically be a perfect logic engine. Two issues:
1. What if there's a compiler bug?
2. How do I "know" that I actually compiled "what I meant" to this logic engine?
(which are restatements of what I said in theory land). You are given that, supposedly, within your internal logic engine, you have a proof, and you want to translate it to a "universal" one.
I guess the idea is that, in practice, you just hope that slight perturbations to your mental model, the translation, or even the compiler itself just "hard fail". Just hope it's a very non-continuous space and that violating boundaries fails the self-consistency check.
(As opposed to, for example, physical engineering, which generally doesn't allow hard failure, has a bunch of controls and guards in mind, and is very much a continuum.)
A trivial example is how easy it is to typo a constant or a variable name in a normal programming language and have the program still compile fine (this is why we have tests!). The idea is that, from trivial errors like that all the way up to fundamental misconceptions, you can catch perturbations to the ideal, be they small or large. I think what makes one of these theorem provers minimally good is that you can't easily, accidentally encode a concept wrong (from high-level model A to low-level theorem-proving model B), for a variety of reasons. Then, of course, runtime efficiency, ergonomics, etc. come later.
Of course, this raises the question of just how "powerful" certain models are - my friend is doing a research project with these, and something as simple as "proving a DFS works to solve a problem" is apparently horrible.
The other replies are good, but let's add another one anyway.
0.987654321/0.123456789 = (1.11111111-x)/x = 1.11111111/x - 1 where x = 0.123456789
You can approximate 1.11111111 by 10/9 and approximate x = 0.123456789 using y = 0.123456789ABCD... = 0.123456789(10)(11)(12)(13)..., that is, a number in base 10 that is not written correctly and has digits that are greater than 9. I.e., y = sum_i>0 i/10^i
Now you can consider the function f(t) = t + 2 t^2 + 3 t^3 + 4 t^4 + ... = sum_i>0 i*t^i and y is just y=f(0.1).
And also consider an auxiliary function g(t) = t + t^2 + t^3 + t^4 + ... = sum_i>0 1*t^i . A nice property is that g(t) = t/(1-t) = 1/(1-t) - 1 when -1<t<1.
The problem with g is that it lacks the coefficients, but that can be solved by taking the derivative: g'(t) = 1 + 2 t + 3 t^2 + 4 t^3 + ... Now the coefficients are shifted, but that can be fixed by multiplying by t. So f(t)=t*g'(t).
So f(t) = t * (1/(1-t))' = t * (1/(1-t)^2) = t/(1-t)^2
Now add some error bounds using the Taylor method to get the difference between x and y, and also a bound for the difference between 1.11111111 and 10/9. It should take like 15 minutes to get all the details right, but I'm too lazy.
(As I said in another comment, all these series have good convergence for |z|<1, so by standard methods of complex analysis all the series tricks are correct.)
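For anyone who wants the numbers without the Taylor bookkeeping, here is a small check with exact fractions (my own sketch, not the parent's derivation):

```python
from fractions import Fraction

# My own numeric check of the argument above.
t = Fraction(1, 10)
y = t / (1 - t) ** 2           # f(1/10)
assert y == Fraction(10, 81)   # y = 0.123456790123... in decimal

# In the limit, 1.111... / y - 1 = (10/9) / (10/81) - 1 = 8 exactly.
assert Fraction(10, 9) / y - 1 == 8

# With the truncated decimals the ratio is only approximately 8.
x = Fraction(123456789, 10**9)
ratio = (Fraction(111111111, 10**8) - x) / x
print(float(ratio))  # ≈ 8.0000000729
```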
An easier way to evaluate sum i/10^i is by squaring sum 1/10^i
If you multiply term by term, every term has coefficient 1, of course. There are n terms with exponent n+1, made from the n ways the first exponent and the second exponent can sum to n+1.
E.g., for exponent 6: 1+5, 2+4, 3+3, 4+2, 5+1.
So (1/9)^2 = (sum 1/10^i)^2 = 1/10 sum i/10^i
The derivative trick is more useful generally, but this method gets you the solution for 0.12345678.. in a quick way that's also easier to justify.
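A minimal sketch of the squaring identity, with exact fractions standing in for the infinite sums (my own check, not the commenter's):

```python
from fractions import Fraction

# (sum 1/10^i)^2 = (1/10) * sum i/10^i, i.e. (1/9)^2 = (1/10) * (10/81).
assert Fraction(1, 9) ** 2 == Fraction(1, 10) * Fraction(10, 81)

# The same identity seen through partial sums, which converge toward it.
N = 30
g = sum(Fraction(1, 10**i) for i in range(1, N + 1))   # partial sum of 1/10^i
f = sum(Fraction(i, 10**i) for i in range(1, N + 1))   # partial sum of i/10^i
assert abs(g**2 - f / 10) < Fraction(1, 10**25)
```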
I remember seeing that (14787 + 36989) / 2 would produce 25888, in that the geometric shapes traced by the two sequences would average out to the shape in the middle like that.
That would work in any base. I even think we would find way more interesting coincidences in base 12 (as the Sumerians preferred), because it's divisible by 2, 3, 4, and 6.
I have always counted to 20 on one hand, even as a kid: base, lower joint, upper joint, top - times 5, including the thumb. My motor memory is trained so that I switch seamlessly from keeping the cursor on top of the finger using my thumb to, once I cross 16, using the index finger to "cursor" the thumb.
Same here. I have always counted 20 on one hand, so 40 with both. That's how my parents taught me to count when I was little. I used this method so often as a kid that, even though I don't count like this anymore, every number up to 40 still has its own place on my fingers.
It was only as an adult that I realised nobody around me counted this way. You are the first person I have found who talked about this method, so I am glad to find this comment of yours.
I am French, and we count by extending our fingers from a closed fist, typically to 2x5=10.
When I was a kid I realized that I could count the fives on the right hand (1 finger for each 5 on the left), which brought me to 25.
It was only when I was traveling in Asia and watched people at markets that I realized I could use my thumb to count the 12 phalanges of my other fingers, which brought the total to 144. You just need to know your multiplication table of 12 :)
the design of a keypad... it unintentionally contains these elegant mathematical relationships.
I call this the phenomenon where outcomes of human creations can be "funny and odd", and everybody understands that eventually there will always be something unpredictable.
Great, now I'm getting Carrot Top flashbacks. "Dial right down the center of the phone!"
For non-Americans and/or those too young to remember when landline service was still dominant, in the 90s and early 2000s AT&T ran a collect-call service accessible through the number 1-800-CALL-ATT (1-800-225-5288) and promoted it with ads featuring comedian Carrot Top. And if you don't know who Carrot Top is, maybe that's for the best.
Interesting how it works out, but I don't think it is anywhere close to as intuitive as the parent comment implies. The way it's phrased made me feel a bit dumb because I didn't get it right away, but in retrospect I don't think anyone would reasonably get it without context.
It actually skips the 8 in its repeating decimal. It’s better to think of 1/9^2 as the infinite sum of k * 10^-k for all positive integers k. The 8 gets skipped because you have something like ...789(10)(11)..., where the 1s from the “10” and “11” digits carry over and increment the 9 digit, causing another carry, so the 8 becomes a 9.
The reason you don't see two zeroes is as follows: you have
.123456789
then add 10 on the end, as the tenth digit after the decimal point, to get
.123456789(10)
where the parentheses denote a "digit" that's 10 or larger, which we'll have to deal with by carrying to get a well-formed decimal. Then carry twice to get
.12345678(10)0
.1234567900
So for a moment we have two zeroes, but now we need to add 11 to the 11th digit after the decimal point to get
.1234567900(11)
which carries to give
.12345679011
so the second zero immediately becomes a 1 and only a single zero survives.
.123456... = x + 2 x^2 + 3 x^3 + ... with x = 1/10.
Then you have
(x + 2 x^2 + 3 x^3 + ...) = (x + x^2 + x^3 + x^4 + ...) + (x^2 + x^3 + x^4 + x^5 + ...) + (x^3 + x^4 + x^5 + x^6 + ...)
(count the number of occurrences of each power of x^n on the right-hand side)
and from the sum of a geometric series the RHS is x/(1-x) + x^2/(1-x) + x^3/(1-x) + ..., which itself is a geometric series and works out to x/(1-x)^2. Then put in x = 1/10 to get 10/81.
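A small numeric sketch of that regrouping (my own check, not the commenter's): each row on the right-hand side is a geometric tail x^k/(1-x), and stacking enough of them lands on x/(1-x)^2 = 10/81 up to truncation error.

```python
from fractions import Fraction

x = Fraction(1, 10)
rows = sum(x**k / (1 - x) for k in range(1, 60))  # first 59 geometric tails
target = x / (1 - x) ** 2                         # 10/81
assert abs(rows - target) < Fraction(1, 10**50)
print(float(target))                              # 0.12345679012345678
```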
Isn't it essentially the same thing, but less formal?
0.1111... is just a notation for (x + x^2 + x^3 + x^4 + ...) with x = 1/10
1/9 = 0.1111... is a direct application of the x/(1-x) formula
The sum of 0.0111... + 0.00111... ... = 0.012345... part is the same as the "(x + 2 x^2 + 3 x^3 + ...) = (x + x^2 + x^3 + x^4 + ...) + (x^2 + x^3 + x^4 + x^5 + ...)" part (but divided by 10)
And 1/81 = 1/9 * 1/9 ... part is the x/(1-x)^2 result
I don't know who downvoted this, but it's correct.
The use of series is a little "sloppy", but x + 2 x^2 + 3 x^3 + ... has absolute uniform convergence when |x|<r<1, and, even more importantly, the same is true for complex numbers |z|<r<1.
The super nice property of complex analysis is that you can be almost ridiculously "sloppy" inside that open circle and the Conway book will tell you everything is ok.
[I'll post a similar proof, but mine uses -1/10 and rounding, so mine is probably worse.]
If you set x = 0.123456..., then multiplying it by (10 - 1) gives 9x = 1.111111..., and multiplying it by (10 - 1) again gives 81x = 10, or x = 10/81. I’m not writing things formally here but that’s the rough idea, and you can do the same procedure with 0.987654... to get 80/81.
So 987,654,321 + 2 × 123,456,789 ≈ 10 × 123,456,789
Thus 987,654,321 / 123,456,789 ≈ 8.
If you squint you can see how it would work similarly in other bases. Add the 123... equivalent once to get the base-independent series of 1's, add a second time to get the base-independent 123...0.
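Here is that squinting done in code, a rough sketch for a few bases (the asc/desc helpers are my own, not the commenter's):

```python
def asc(base):   # the numeral 1 2 3 ... (base-1) read in that base
    value = 0
    for d in range(1, base):
        value = value * base + d
    return value

def desc(base):  # the numeral (base-1) ... 3 2 1 read in that base
    value = 0
    for d in range(base - 1, 0, -1):
        value = value * base + d
    return value

for b in (8, 10, 16):
    a, d = asc(b), desc(b)
    # Adding the ascending number once gives the all-ones numeral followed by a 0.
    assert d + a == sum(b**i for i in range(1, b))
    # Adding it a second time overshoots base * ascending by exactly base - 1.
    assert d + 2 * a == b * a + (b - 1)
```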
Why the b > 2 condition? In the b=2 case, all three formulas also work perfectly, giving a ratio of 1. And this is an interesting case where the error term is an integer, and the only case where that error term (1) is dominant (b-2=0), while the b-2 part dominates for larger bases.
In base 2 (and only base 2), denom(b) >= b-1, so the "fractional part" (b-1)/denom(b) carries into the 1's (units) place, which then carries into the 2's (b's) place, flipping both bits.
> Why include a script rather than a proof? One reason is that the proof is straight-forward but tedious and the script is compact.
Yes the script lets you check that the result is correct, but a proof lets you see why it's correct. A good proof might even give you a sense of how you could have discovered the result yourself, or how you might generalize it.
As a young child, half a century ago, when I received an electronic pocket calculator (with 8-digit numbers and without transcendental functions), I was taught that I could do a quick check of whether it functioned correctly by multiplying 12345679 by 8 (thus using all non-null digits); the result must be 98765432. Obviously, an additional check is the corresponding division that reverses this operation.
Obviously, that was not intended to be a full-functionality test, but it would detect any frequently-encountered display defect (or keyboard defect).
Calculator displays are multiplexed, so the usual defects are either one digit that never displays anything, or one segment that stays blank on all digits.
The defect mentioned by you is frequent only on displays with independent digits (like some digital clocks), not on calculators.
I do not know whether on calculator LCD displays it is common for a single segment to become defective.
At the time I am talking about, calculators had either green vacuum fluorescent displays (like mine) or red LED displays. With such displays, the normal defects were either in the driving circuits or in the connections to the multiplexed display, so they affected either all segments of a digit or the same segment in all digits. I have never seen a case where the actual light-emitting segment of a digit of a VFD or LED display was defective.
Shows that the denominator requires 57 bits, which is more than the 52 mantissa bits (53 with the implicit leading bit) of a 64-bit floating point number, so the result gets rounded to 14.0 due to limited precision.
You can use special floating-point libraries that use more mantissa bits.
In most sciences, numbers are never integers anyway, so you have error intervals in the numerator and denominator and you get an error interval for the result.
You can do symbolic calculations carrying precisely defined numbers (e.g. pi, 3/7, ...), you can use tools which allow arbitrary precision (it's only slower by several orders of magnitude, so not too bad if you don't need millions of calculations: this includes Python if you use Decimal objects), or you can use error calculus to decide if the final error is acceptable.
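As a small illustration of the precision point (my own sketch): in 64-bit floats the hexadecimal ratio collapses to exactly 14.0, while exact rational arithmetic keeps the tiny fractional part.

```python
from fractions import Fraction

num = 0xFEDCBA987654321
den = 0x123456789ABCDEF

print(num / den)          # 14.0 -- the true quotient is within half an ulp of 14
exact = Fraction(num, den)
print(exact - 14)         # 1/5465701947765793, i.e. 15/0x123456789ABCDEF reduced
print(float(exact - 14))  # ≈ 1.83e-16
```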
Not really in a similar vein, because there's actually a good reason for this to be very close to an integer whereas there is no such reason for e^pi - pi.
This is a fantastic observation, and yes, this pattern not only continues for larger bases, but the approximation to an integer becomes dramatically better.
The general pattern you've found is that for a number base $b$, the ratio of the number formed by digits $(b-1)...321$ to the number formed by digits $123...(b-1)$ is extremely close to $b-2$.
### The General Formula
Let's call your ascending number $N_{asc}(b)$ and your descending number $N_{desc}(b)$.
The exact ratio $R(b) = N_{desc}(b) / N_{asc}(b)$ can be shown to be:

$$R(b) = (b-2) + \frac{(b-1)^3}{b^b - b^2 + b - 1} \approx (b-2) + \frac{(b-1)^3}{b^b}$$
The "error" or the fractional part is that second term. As you can see, the numerator $(b-1)^3$ is roughly $b^3$, while the denominator $b^b$ grows much faster.
Sign:
The approximation with denominator b^b underestimates the exact value.
Digit picture in base b:
(b - 1)^3 has base-b digits (b - 3), 2, (b - 1).
Dividing by b^b places those three digits in positions b-2, b-1, and b after the radix point.
Examples:
base 10: 8 + 9^3 / 10^10 = 8.0000000729
base 9: 7 + 8^3 / 9^9 = 7.000000628 in base 9
base 8: 6 + 7^3 / 8^8 = 6.00000527 in base 8
num(b) / denom(b) equals (b - 2) + (b - 1)^3 / (b^b - b^2 + b - 1) exactly.
Replacing the denominator by b^b gives a simple approximation with relative error exactly (b^2 - b + 1) / b^b.
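Both statements are easy to verify with exact fractions; a sketch (the helper naming is mine, not the commenter's):

```python
from fractions import Fraction

def digits_value(digits, base):
    value = 0
    for d in digits:
        value = value * base + d
    return value

for b in range(2, 13):
    num = digits_value(range(b - 1, 0, -1), b)   # (b-1) ... 2 1
    den = digits_value(range(1, b), b)           # 1 2 ... (b-1)

    tail = Fraction(num, den) - (b - 2)          # fractional part of the ratio
    assert tail == Fraction((b - 1) ** 3, b**b - b**2 + b - 1)

    approx = Fraction((b - 1) ** 3, b**b)        # replace the denominator by b^b
    assert approx < tail                         # the approximation underestimates
    assert (tail - approx) / tail == Fraction(b**2 - b + 1, b**b)
```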