I got a 5x5 cube a while ago and was reading about how to solve it, and came across a nice, concise description of how one might invent moves which can be combined into algorithms which let you make incremental progress on a cube. I can't find it again, now!
Some of it was along the lines of (1) Figure out how to do something A to a single layer (while messing up the rest of the cube). (2) Observe that if you do A, then rotate the layer R, then undo A (A'), you have an operation B = A R A' which does something to only one layer of the cube. I assume that most moves in most common algorithms can be expressed in terms of a couple of fundamental techniques like this (probably using the words "commutator" and "conjugate"). Does someone have a link or reference that gives you the general meta-technique (even if it involves incompletely-specified things like "figure out an interesting sequence of moves in terms of their effect on a single layer")?
I'm interested in this because I'm not really interested in learning (again) and forgetting (again) any particular existing method for solving the cube, but it would be fun to be able to fiddle around with it in a less-than-random way and eventually arrive at a method, based on some higher-level principles.
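For concreteness, the general fact behind the A R A' trick above is that conjugation preserves cycle structure: A R A' always "does what R does", just to whichever pieces A moved into R's place. A minimal Python sketch of that fact, with made-up permutations standing in for move sequences (these are not real cube moves):

    def compose(p, q):
        """Permutation meaning: apply q first, then p, so (p*q)[i] = p[q[i]]."""
        return tuple(p[q[i]] for i in range(len(p)))

    def inverse(p):
        inv = [0] * len(p)
        for i, j in enumerate(p):
            inv[j] = i
        return tuple(inv)

    def cycle_structure(p):
        """Sorted lengths of the non-trivial cycles of p."""
        seen, lengths = set(), []
        for start in range(len(p)):
            if start in seen:
                continue
            n, i = 0, start
            while i not in seen:
                seen.add(i)
                i = p[i]
                n += 1
            if n > 1:
                lengths.append(n)
        return sorted(lengths)

    # Made-up permutations of 8 "pieces": R is a 3-cycle, A is some setup sequence.
    R = (1, 2, 0, 3, 4, 5, 6, 7)
    A = (0, 3, 4, 1, 2, 5, 7, 6)

    B = compose(compose(A, R), inverse(A))          # B = A R A'
    print(cycle_structure(R), cycle_structure(B))   # both [3]: same effect, different pieces

The commutator A R A' R' is the other standard building block: if A and R overlap on only a few pieces, the commutator affects only pieces near that overlap, which is roughly how most piece-cycling algorithms are derived.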
I spent an unenjoyable 45 minutes one afternoon trying to learn the basics of solving; somehow, I could never get the cube "solved". Eventually, I discovered my daughter had rearranged a few of the stickers, resulting in an unsolvable cube.
There's actually a parity to the positions of the pieces that most people don't seem to know about. If you flip an edge or twist a corner, the whole cube becomes unsolvable.
You can also make it unsolvable by swapping stickers because each piece is unique. There is only one red/white/blue corner and only one green/yellow edge. Assuming standard color arrangement, there is no piece with both yellow and white, or both blue and green, or both red and orange.
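If you want to check that mechanically, the standard solvability conditions are: corner twists sum to 0 mod 3, edge flips sum to 0 mod 2, and the corner and edge permutations have the same parity. A rough sketch, assuming a piece-level representation chosen just for illustration:

    def permutation_parity(perm):
        """0 for even, 1 for odd, counted via inversions."""
        inversions = sum(1 for i in range(len(perm))
                           for j in range(i + 1, len(perm))
                           if perm[i] > perm[j])
        return inversions % 2

    def solvable(corner_perm, corner_ori, edge_perm, edge_ori):
        return (sum(corner_ori) % 3 == 0 and      # can't twist a single corner
                sum(edge_ori) % 2 == 0 and        # can't flip a single edge
                permutation_parity(corner_perm) == permutation_parity(edge_perm))

    # Solved cube: identity permutations, all orientations zero.
    print(solvable(list(range(8)), [0] * 8, list(range(12)), [0] * 12))        # True

    # Twist a single corner in place: the "unsolvable" cube from the stories above.
    print(solvable(list(range(8)), [1] + [0] * 7, list(range(12)), [0] * 12))  # False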
The guy I learned to speedcube from was so good that we would secretly flip a corner or edge on his cube, challenge him to a race, and he would solve it to the point where only one piece was flipped, curse at us, fix it himself, and still beat us all by at least 10 seconds.
Yep, it would be readily apparent by that point. You wouldn't even need to "solve" it first; you would see pretty much instantly that the last layer was in an impossible position.
Source: I got into speedcubing for a couple years in high school and averaged in the 30s, with a record of 26 seconds.
A friend of mine got into the Rubik’s Cube in college. As a prank, we swapped the stickers on his cube and were amazed when he noticed within a few seconds and correctly identified what had been swapped. Familiarity with the invariants pays off.
The Heise method might be a great fit for you. Unlike most methods, it requires no memorization. And, unlike most methods that require no algs, it can be quite fast. AND it forces you to learn about how the cube actually works and to think in commutators and conjugates: https://www.ryanheise.com/cube/heise_method.html
Do you mean the short-term memory needed to apply the commutators or the conjugates? Otherwise you only really need to "memorize" the steps of the method.
I learned this method around 10 years ago, and I can still solve the cube without applying any memorized algorithms because of it. It really is the only fun way for me to solve the cube after all this time.
This. I used to forget the algos whenever I didn't touch the cube for 6 months. Then learnt Ryan's method. Picked up the cube 4 months later and found I can still solve it.
There's a method based entirely on group theory, a version of the original computer algorithm created to solve the cube (Thistlethwaite's algorithm) simplified for use by a human [1]. It iteratively moves the cube into simpler subgroups that need a smaller moveset to solve (the last step uses only half turns on each face, no quarter turns). It requires only a bit of memorization, and you can understand exactly how the algorithm works. It's not a great speedsolving method, but you can get relatively fast with it: in [2] there's a solve below 30 seconds.
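For a flavour of it, the nested group chain behind the Thistlethwaite approach can be written down as data. This uses one common convention, and the stage goals are paraphrased from memory, so treat the details as approximate:

    # Each stage restricts the allowed moveset; once the cube is inside a stage's
    # group, only that stage's moves are needed to finish the solve.
    STAGES = [
        ("U D L R F B",        "G0: any scrambled position"),
        ("U2 D2 L R F B",      "G1: all edges oriented"),
        ("U2 D2 L R F2 B2",    "G2: corners oriented, one slice's edges in their slice"),
        ("U2 D2 L2 R2 F2 B2",  "G3: reachable from solved using half turns only"),
        ("(solved)",           "G4: solved"),
    ]

    for moves, goal in STAGES:
        print(f"{goal:<55} moves still needed: {moves}")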
Choosing a speedcubing method is an interesting algorithm design challenge. It's basically a tradeoff between speed and how many sequences ("algorithms") you need to learn by heart. At the simplicity extreme there's the corner-3-cycle method, where you can solve the cube using zero hard-coded sequences (pure reasoning). At the speed extreme there's Fridrich or ZZ, which most top-level speedcubers use but which require memorizing hundreds of sequences.
Are there statistics on how many moves people used for record runs? I've only seen them measure the time but from the video recording you could probably get a rough count of moves too.
I would love to see the trend lines for those. Are the world records getting faster due to quicker execution of same number of moves, or using fewer moves thanks to better algorithms?
There are two main 3x3x3 speedsolving methods.
CFOP and Roux.
90% of world-class solvers use CFOP, which has a significantly higher move count (55 vs 45).
CFOP has characteristics that make it considerably more finger-friendly, allow for fewer regrips, and make look-ahead easier, which makes up for the extra 10 moves.
The world record progression is mostly due to top solvers being able to turn faster with fewer pauses.
E.g. the fastest cuber ever, Max Park, averages less than 6 seconds, but his solves are not more sophisticated than those of someone averaging 10s.
I chose Roux specifically so I didn't have to learn a bunch of algorithms. You obviously have to know a few, but at a couple of points, where you could use a specific algorithm, you can also basically do the same thing X number of times and it works out and gets you to the next step.
CFOP is definitely more finger-friendly (no middle-layer moves), but I think Roux has fewer regrips, at least at non-speedcube levels (maybe I'm wrong, but I never change the orientation). Roux is also considered more "intuitive": there are points where you can look at the cube and (after a while) just see what needs to be done.
I don't do it for speed, so I haven't bothered learning all the algorithms. I'm "fast enough", but I probably only do 10ish solves a day, and Roux makes it really easy to remember the 3-4 algorithms needed to solve it. It still amazes people when you can do it; even taking 90 seconds, with hardly any pauses, makes people think you're a wizard or something.
>> It still amazes people when you can do it; even taking 90 seconds, with hardly any pauses, makes people think you're a wizard or something.
I find this to be quite true. I learned a fairly slow method back in the '80s and can't remember all of the details, so I have to figure it out. I don't run across a cube more than, say, once a year, so it may take 2-3 minutes to solve. But part of the time I'm doing memorized sequences, which allows looking up at the person while solving. You have a little conversation while doing it, and at the end you just put it down like it's not a big deal. The person looks at you, not quite sure if there's any significance to what just happened, and then you move on as if you were trimming your nails or something. It's just a fun little thing to do from time to time.
ZZ is actually rather unpopular and seems to have decisively lost its spot as a top-tier method a few years back. The tradeoffs vs CFOP are generally not considered worthwhile, because an entirely rotationless solve doesn't make up for a dramatically more complex EOLine and harder xcross. Only ZBLL survives as an extant remnant of ZZ.
Q: Wasn’t ZZ an attempt to avoid the crazy ZBF2L algorithms, and didn't it come after ZB? People understood the value of ZBLL (hell, even I wrote out all the speed-oriented algorithms for it), but everyone doubted the viability of ZBF2L, so they tried to find other ways of orienting edges. Or was ZZ around before, and people just realized they could apply ZBLL to it?
ZZ is definitely younger than ZB, and I think that story checks out. ZBF2L just tries to orient the LL edges with the last pair, while ZZ moves that work into the EOLine/EOCross. In both methods, ZBLL can be used to do the LL.
At a high enough level it becomes a numbers game. In CFOP you will occasionally skip whole sequences due to a favourable permutation; a lot of high-level cubing is just solving enough cubes that you hit these rare and extremely fast permutations in a competition setting.
Particularly good solvers are able to see potential skips or favorable arrangements and alter their solve to "force" a particular arrangement in a later stage.
For example, preserving an F2L pair during the cross, or choosing a better OLL algorithm during F2L.
Most records have "reconstructed solutions", where people recreate the solution from start to end. You could get the number of moves that way, though I couldn't find a dataset of them.
> Are the world records getting faster due to quicker execution of same number of moves, or using fewer moves thanks to better algorithms?
My novice understanding is that it's the former. The algorithms haven't changed very much in the past 20 years or so, maybe aside from a few variations that are more ergonomic. After learning all the algs, it's a matter of improving look-ahead (making sure you know which alg to perform next while the current one is wrapping up) and otherwise increasing your turns per second (TPS).
The records dropped quite drastically a few years back, which is likely coming from advancements in cubing hardware. Adding magnets to prevent overshoot while turning helped people speed up quite a bit.
There is also a difference between knowing a sequence and being able to maintain it and execute it fast. Two fast sequences can be faster than one slow one.
Wow, this looks cool. I learned CFOP as a teenager, and from showing it to people, the number of algos (150-200+) is definitely the barrier to entry for most.
Being able to solve the cube with 3 algos means most people could probably learn it in an hour or two.
If by "algorithm" you mean a fixed sequence of moves, then the (rusty math alert) I'd think the answer is "no", because then that algorithm would be an element of the Rubik's cube group, which you could apply compose repeatedly with any other element to get the identity. This would imply that the Rubik's cube group is cyclic, and I don't think it is (because it's non-Abelian).
Correct, the Rubik's cube group is neither cyclic nor abelian. It is generated by 6 elements, which correspond to quarter turns of each of the 6 faces.
It has to exist: there's a mind-bogglingly large but finite number of possible permutations, and going from any one permutation to any other takes a finite number of moves. So if you enumerate all possible permutations in some order (any arbitrary order will do), you get a Devil's algorithm by going through them in that order. The question is not whether a Devil's algorithm exists but how short the shortest one is.
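Here's a toy version of that construction in Python, on a 6-element permutation group instead of the cube (the generators and the enumeration are arbitrary, chosen just for illustration): chain together words that step through some enumeration of all elements, and the single fixed word that results passes through the solved state no matter where you start.

    from itertools import permutations
    from collections import deque

    def compose(p, q):                       # apply q first, then p
        return tuple(p[q[i]] for i in range(len(p)))

    GENS = {"a": (1, 0, 2), "b": (0, 2, 1)}  # two swaps generate all 6 permutations of 3 labels

    def word_between(src, dst):
        """Breadth-first search for a generator word taking src to dst."""
        queue, seen = deque([(tuple(src), [])]), {tuple(src)}
        while queue:
            state, word = queue.popleft()
            if state == tuple(dst):
                return word
            for name, g in GENS.items():
                nxt = compose(g, state)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, word + [name]))

    # Enumerate every element once (any order works) and chain the connecting words.
    elements = list(permutations(range(3)))
    devil = []
    for a, b in zip(elements, elements[1:]):
        devil += word_between(a, b)

    # From any starting element, applying the fixed word visits the identity at some point.
    identity = (0, 1, 2)
    for start in elements:
        state, visited = start, {start}
        for name in devil:
            state = compose(GENS[name], state)
            visited.add(state)
        assert identity in visited
    print("one fixed word of length", len(devil), "works from every start")

The same argument scales to the cube; it just says nothing about how long the resulting word is, which is the interesting question.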
Worth noting that the Devil's algorithm is not a sequence of turns you repeat over and over, but one that takes you through every possible cube state. You "abort" it partway through when you reach your desired state.
It is both: it is a sequence of turns that, if you repeat it over and over, will take you through every possible cube state. But it is possible that the shortest such sequence is trillions of moves long.
There are 43 quintillion cube states, so "trillions" is understating by at least a factor of 10^6.
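For reference, the 43 quintillion figure comes from the standard counting argument, using the orientation and parity constraints mentioned upthread; a quick check:

    from math import factorial

    corner_positions = factorial(8)   # arrangements of the 8 corner pieces
    corner_twists    = 3 ** 7         # the last corner's twist is forced
    edge_positions   = factorial(12)  # arrangements of the 12 edge pieces
    edge_flips       = 2 ** 11        # the last edge's flip is forced
    parity           = 2              # corner and edge permutation parities must match

    states = corner_positions * corner_twists * edge_positions * edge_flips // parity
    print(states)          # 43252003274489856000, about 4.3e19
    print(states / 1e12)   # about 4.3e7, so "trillions" is off by a factor of ~10^7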
I'm not sure if the 43 quintillion are reachable in a loop of that many moves, or if you need to backtrack through some states to reach others and thus need more moves than that.
Somewhat related: I was playing with a cube recently and, starting from the solved state, started repeating a two-move sequence that came to mind: rotate the right side down, then the bottom clockwise. After maybe 100-150 repetitions (I did not count, I just did the moves mindlessly for a couple of minutes) it came back to the solved state. I wonder if there are more such sequences, and how long they take to get back to the original state. The obvious difference from a Devil's algorithm was that the 2x2 sub-cube remained untouched the whole time; only the edges were messed up.
Every sequence behaves this way: for every sequence S there is some n_S such that S^(n_S) is the identity, i.e. applying S that many times returns the cube to its starting state. Proving this is an introductory problem in Rubik's cube theory. Try it!
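Hint, viewing a sequence as a permutation of the cube's stickers: the number of repetitions needed is the least common multiple of the permutation's cycle lengths (the classic real-cube example is R U, which returns to solved after 105 repetitions). A tiny sketch with a made-up permutation rather than a real cube move:

    from math import lcm

    def order(perm):
        """Smallest n > 0 such that applying perm n times is the identity."""
        seen, n = set(), 1
        for start in range(len(perm)):
            if start in seen:
                continue
            length, i = 0, start
            while i not in seen:
                seen.add(i)
                i = perm[i]
                length += 1
            n = lcm(n, length)          # order = lcm of cycle lengths
        return n

    # Made-up "sequence" acting on 9 labels: a 4-cycle and a 3-cycle -> order 12.
    seq = (1, 2, 3, 0, 5, 6, 4, 7, 8)
    print(order(seq))   # 12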
There is the up-up-down-down method that basically uses only one algo over and over but you still have to rotate the cube after each up-up-down-down before applying the next up-up-down-down.
I am sure that is not quite what you meant, but this is what is usually referred to as "using only one algo to solve the Rubik's cube". It also only works (AFAIR) on the 3x3x3.