When I was a graduate student, in the late 80s, I was waiting for a flight to Los Alamos in Albuquerque. There was only one other person in the lounge, obviously also going to Los Alamos. I started chatting and he introduced himself as Martin Kruskal. Idiot me had no idea whom I was speaking to. I said “The only Kruskal I know invented this amazing card trick having to do with dynamical systems.” He stared at me, obviously pleased. “Not many people know my brother and I are into magic.” He did not want to take credit for the trick and attributed it to his brother. He would come often to Los Alamos and I miss chatting with him (got lots of advice on bringing up girls).
One of the conversations was about reading. Let the kids read anything: comic books, fun novels, fantasy picture books. He reassured me that they would eventually expand into other things. Martin was right.
this is super interesting! The deeper explanation[0] really gets to the essence of what's happening here.
An interesting corollary: the deeper into the deck you go, the fewer paths remain available. For example, if you have 10 cards in the first row, then there are at most 10 paths to the end, but those can merge. So if you have many cards, paths will merge and you might end up with just a single path.
So most cards toward the end of the deck are totally unreachable! For example, the next-to-last card is most likely not reachable by this walking algorithm in most shuffles.
I'm not sure of the implications but it sure feels important for things like randomly generated worlds or whatnot.
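A quick way to check the reachability claim, as a sketch (assuming the convention used elsewhere in this thread: A=1, number cards at face value, J/Q/K=5): shuffle many decks, run the walk from each of the first 10 cards, and count how often the next-to-last card is ever visited.

```python
import random

# Deck of step values: ranks 1-10 count as themselves, J/Q/K count as 5.
deck = [r if r <= 10 else 5 for r in range(1, 14) for _ in range(4)]

def visited_positions(deck):
    """All positions touched by walks starting at each of the first 10 cards."""
    seen = set()
    for start in range(10):
        i = start
        seen.add(i)
        while i + deck[i] < len(deck):
            i += deck[i]
            seen.add(i)
    return seen

rng = random.Random(0)
trials = 2000
hits = 0
for _ in range(trials):
    rng.shuffle(deck)
    if 50 in visited_positions(deck):  # index 50 = next-to-last card
        hits += 1
print(f"next-to-last card reachable in {hits / trials:.0%} of shuffles")
```

In my runs the next-to-last card is indeed missed by every path in a large fraction of shuffles, consistent with the parent's intuition.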
> For example, if you have 10 cards on the first row, then there are at most 10 paths to the end, but those can merge
In fact they must converge. Suppose the first card in the first row is less than 10: then it merges with another path on the first move (because it lands within the first row), so it must be a 10 to avoid merging. The next card merges with another path in the first row if it's less than 9, but if it's exactly 9 it merges with the first path's landing spot, so it too must be a 10. The same reasoning applies to the third, fourth, and fifth cards; however, there are only 4 cards with value 10, so at least two of the paths from the first row must merge within one move.
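That pigeonhole argument can be checked mechanically. A sketch (assuming A=1 and J/Q/K=5, so no step exceeds 10): take one step from each of the first 10 positions and verify that some path lands on another path's start, or that two paths land on the same card.

```python
import random

# Step values: ranks 1-10 at face value, J/Q/K counted as 5.
deck = [r if r <= 10 else 5 for r in range(1, 14) for _ in range(4)]

rng = random.Random(0)
for _ in range(1000):
    rng.shuffle(deck)
    # Path i starts at position i; after one step it sits at i + deck[i].
    landings = [i + deck[i] for i in range(10)]
    merged = (
        any(p < 10 for p in landings)          # landed on another path's start
        or len(set(landings)) < len(landings)  # two paths landed on the same card
    )
    # Avoiding a merge would require five 10s among the first row,
    # but the deck contains only four.
    assert merged
print("in every shuffle, at least two first-row paths merged within one move")
```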
The fact that they must merge really depends on the distribution of cards within the deck, though. For example, imagine your deck were filled with 9s: you would get 2 paths merging, but not much else.
Of course with the standard 52-pack you'll end up with a high degree of merging. I just think it's important to say that the necessity of merging is really dependent on the instruction set, so to speak.
I think the point is that the deck doesn't have many repeats of 9s. It has lots of repeats of 5s (face cards count as 5), which causes merges, and the other numbers will eventually merge too. You have 4 each of 6, 7, 8, 9: if a path hits a 9 four times, there are no more 9s left and it must merge.
Yes, the claim is 'at least two of the paths taken from the 10 cards on the first row must converge', not 'there are no paths taken from the 10 cards on the first row which do not converge'.
I don't really see an explanation for the crucial fact that walks merge, which, together with the fact that merged walks can't diverge, means the initial 1/10 chance of picking the same card only increases towards the end.
> Their walker is equally likely to be on any type of card from the deck (it was shuffled), so there is at least a 1/13 chance of stepping to the spot your walk visited. Since you both use the same rule for taking steps, they will subsequently continue along the same route as your walker and so end at the same card
I wonder if this is related -- the description immediately made me think of the "avalanche" effect in hash functions. A hash function avalanches if a single bit change in the input produces a wildly different output. So avalanche would be the exact opposite of the funneling effect of this card counting trick. In hash functions, the funneling effect or "trap card" would be a sign that your counting function isn't good enough. (https://en.wikipedia.org/wiki/Avalanche_effect)
It's really unrelated - it's more about the hash domain being rather small, i.e. 52. I would think that if you substituted the counting step (go to card N+value) with proper hashing (go to card md5(value)%52), you would get the same, if not a stronger, result.
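Out of curiosity, here's a sketch of that substitution. One caveat: taken literally, an absolute jump to md5(value)%52 could move backwards and loop, so this version instead derives a fixed forward step in 1..10 from each rank's MD5 digest; that reinterpretation is mine, not the comment's.

```python
import hashlib
import random

def md5_step(rank):
    # Map each rank to a fixed pseudo-random step in 1..10 via its MD5 digest.
    digest = hashlib.md5(str(rank).encode()).hexdigest()
    return 1 + int(digest, 16) % 10

# 52 "cards": four copies of each rank, each contributing its hashed step size.
deck = [md5_step(r) for r in range(1, 14) for _ in range(4)]

def walk(deck, start):
    # Kruskal count: jump forward by each card's step until you run off the end.
    i = start
    while i + deck[i] < len(deck):
        i += deck[i]
    return i

rng = random.Random(0)
trials = 4000
wins = 0
for _ in range(trials):
    rng.shuffle(deck)
    # Two walkers pick independent secret starts among the first 10 cards.
    wins += walk(deck, rng.randrange(10)) == walk(deck, rng.randrange(10))
print(f"agreement rate with hashed steps: {wins / trials:.0%}")
```

The merging effect survives: what matters is that the steps are short relative to the deck, not how they're derived.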
I put together a small simulation, and I think you have an 82% chance of winning with the setup described (though I welcome attempts to reproduce it). Replacing the J/Q/K with a score of 10 drops the win rate to ~68%.
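For anyone who wants to try reproducing this, here's a minimal version of such a simulation. It's my own sketch, and details are assumptions (both players pick uniformly among the first 10 cards, and picking the same start counts as a win), so the exact percentages may differ from the parent's.

```python
import random

def make_deck(face_value):
    # Ranks 1-10 count as themselves; J/Q/K count as face_value.
    return [r if r <= 10 else face_value for r in range(1, 14) for _ in range(4)]

def walk(deck, start):
    # Kruskal count: from each card, jump forward by its value until you run out.
    i = start
    while i + deck[i] < len(deck):
        i += deck[i]
    return i

def win_rate(face_value, trials=4000, seed=0):
    rng = random.Random(seed)
    deck = make_deck(face_value)
    wins = 0
    for _ in range(trials):
        rng.shuffle(deck)
        # Magician and spectator each secretly pick a start in the first 10 cards.
        wins += walk(deck, rng.randrange(10)) == walk(deck, rng.randrange(10))
    return wins / trials

print(f"face cards = 5:  {win_rate(5):.0%}")
print(f"face cards = 10: {win_rate(10):.0%}")
```

The gap between the two settings shows up clearly: smaller face-card values mean shorter average steps, hence more chances to merge.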
Intuitively anything that makes you take shorter steps should increase the chance of merging with another path.
Try running it again with them set to 2. My bet would be that's also less good than 5 (but not as bad as set to 10). I'd be very interested to know if that's right or not.
The procedure I'm using is to follow the jumps starting at each top card until you can't go any further, and count how often each subsequent card is landed on. Then I take the max count over the last 7 cards and divide by 9 (the max you can get to). That should be the chance you both land on the same card in the final row.
Here's a table of the results with different face card values:
Decrease. Face cards are overrepresented, so the smaller the value of the face cards, the more likely they are to cause a "path" to "merge" with another nearby number's path.
There is a strong connection with rainbow tables. A common misconception is that rainbow tables are a complete set of hash values. What they actually are is a method of saving storage space at the cost of making searches slower.
The idea is to make a function that maps hash values back to password-like strings. With that function chains like this are built:
I=initial password guess
P=new password-like string
V=hash value
H()=hash function
U()=unhash function (converts V to a new P)
I -> V0=H(I) -> P0=U(V0) -> V1=H(P0) -> P1=U(V1) -> ... -> Vfinal
Make many chains with different initial guesses, and store only the first and last values [I, Vfinal] in a database. When you find a password hash, you repeat the H(U(H(U(...)))) process until it matches one of your saved Vfinal values. This tells us that the chain from the password hash we're trying to break and the chain from the known initial guess merged, just like in this card trick.
Re-creating the chain from our known initial value, we discover a string that might not be the actual password, but is a string that hashes to the same value. With modern cryptographic hashes collisions are unlikely, but rainbow tables were invented when passwords used very poor hashes.
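A toy version of those chains, with everything deliberately made up for illustration: a weak 3-hex-digit hash, a tiny password alphabet, and an arbitrary unhash (reduce) function. Real tables use far longer chains, and (as noted below) true rainbow tables vary the reduce function per step.

```python
import hashlib
from itertools import product

ALPHABET = "abcd"   # toy password alphabet; passwords are 3 letters
CHAIN_LEN = 8

def weak_hash(p):
    # Stand-in for a poor password hash: MD5 truncated to 3 hex digits.
    return hashlib.md5(p.encode()).hexdigest()[:3]

def unhash(v):
    # "Reduce" function: map a hash value back to some password-like string.
    n = int(v, 16)
    chars = []
    for _ in range(3):
        chars.append(ALPHABET[n % len(ALPHABET)])
        n //= len(ALPHABET)
    return "".join(chars)

def chain_end(start):
    # I -> V0=H(I) -> P0=U(V0) -> V1=H(P0) -> ... -> Vfinal
    p = start
    for _ in range(CHAIN_LEN):
        v = weak_hash(p)
        p = unhash(v)
    return v

# Store only (Vfinal -> I); the first chain to claim an endpoint wins.
table = {}
for start in ("".join(t) for t in product(ALPHABET, repeat=3)):
    table.setdefault(chain_end(start), start)

def lookup(target_v):
    v = target_v
    for _ in range(CHAIN_LEN):
        if v in table:
            # Re-walk that chain from its start, looking for a preimage of target_v.
            p = table[v]
            for _ in range(CHAIN_LEN):
                if weak_hash(p) == target_v:
                    return p
                p = unhash(weak_hash(p))
            # False alarm: our value merged into this chain past the point we need.
        v = weak_hash(unhash(v))  # keep extending our own chain
    return None

found = lookup(weak_hash("aaa"))
print(found, weak_hash(found) == weak_hash("aaa"))
```

The false-alarm branch in `lookup` is exactly the merging problem described in the next comment: reaching a stored endpoint doesn't guarantee your hash sits on the stored chain.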
Rainbow tables are actually a slight revision of this, because the merging of paths is a bad thing here: it leads to false positives. Specifically, I can detect that my hash eventually leads to the end of a chain, but it may very well be that it merged into that chain at some point and so is not actually contained in my table.
Rainbow tables minimize the chance of merges by using a sequence of different unhash functions within each chain, so a merge can only occur if two chains collide at the same point in the sequence of unhash functions.
I learned about this from the "Metamagical Themas" column in a Scientific American issue from the 80s, good times. Apparently Uri Geller did it on TV and it didn't work.
Same here... tried it twice... did not end up on the target card.
Just clarifying whether the way I counted was correct:
5 x x x x 9
Above is a crude illustration. If I first pick the card with the "5", I start counting from 1 at the next card, up to 5. Then I land on whatever that card is (the 9 in this example), and repeat.
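That matches how the count is usually defined. As a tiny sanity check (the placeholder values for the x's are made up):

```python
cards = [5, 2, 7, 3, 8, 9]  # the "5 x x x x 9" layout, with arbitrary x values

def step(cards, i):
    # Count 1 starting at the next card, up to the current card's value.
    return i + cards[i]

j = step(cards, 0)  # start on the 5 at index 0
print(cards[j])     # prints 9: counting five cards from the 5 lands on the 9
```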
I also did two passes through the game and did not end up on the trap card. Worried that I was not following the rules correctly I came to the comments to check.
On the "how it works"[0] page, they explain that they choose a card in the first row and perform the walk. The point of the trick is that it's extremely likely that, regardless of the initial card chosen, the walks will merge (and thus step onto the trap card you chose when setting up).