Since tromp has a meta-circular implementation of lambda calculus, maybe that Turing Machine could be implemented in lambda calculus, and then we'd have an objective measure of which one is "simplest"?
It takes 829 bits of binary lambda calculus to interpret BrainFuck, which is a very simple programming language modeled after Turing Machines. A self-interpreter in BrainFuck takes well over a thousand bits, though [2].
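For context on how such bit counts are measured, here is a sketch of the standard binary lambda calculus encoding (00 for abstraction, 01 for application, and a de Bruijn index n written as n ones followed by a zero); the term representation below is my own choice for illustration:

```python
# Terms are tuples: ('lam', body), ('app', f, x), ('var', n) with n >= 1
# (de Bruijn indices, counting outward from the variable's binder).

def encode(term):
    tag = term[0]
    if tag == 'var':
        return '1' * term[1] + '0'                       # index n -> 1^n 0
    if tag == 'lam':
        return '00' + encode(term[1])                    # abstraction
    if tag == 'app':
        return '01' + encode(term[1]) + encode(term[2])  # application
    raise ValueError(tag)

# The identity function \x.x encodes to just 4 bits:
identity = ('lam', ('var', 1))
print(encode(identity))   # -> 0010
```

Program size in BLC is simply the length of this bit string, which is what makes cross-language comparisons like the ones above meaningful.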
Lambda calculus is not only very simple, but also very expressive. While combinatory logic, with only S and K, is even simpler than lambda calculus, and also has a trivial binary encoding (00 for S, 01 for K, and 10 for application), its shortest known self-interpreter is 263 bits, appreciably larger than the one for lambda calculus.
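The combinatory logic encoding just described is small enough to sketch directly (the term representation is mine, chosen for illustration):

```python
# Terms are 'S', 'K', or ('app', f, x); the encoding is the one quoted
# above: 00 for S, 01 for K, 10 for application.

def encode_cl(term):
    if term == 'S':
        return '00'
    if term == 'K':
        return '01'
    if term[0] == 'app':
        return '10' + encode_cl(term[1]) + encode_cl(term[2])
    raise ValueError(term)

# S K K behaves as the identity combinator; it encodes to 10 bits:
skk = ('app', ('app', 'S', 'K'), 'K')
print(encode_cl(skk))   # -> 1010000101
```

So a 263-bit self-interpreter corresponds to a combinator term with roughly a hundred leaves and applications, versus the noticeably smaller lambda term.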
My IOCCC entry [3] has more examples of the conciseness of binary lambda calculus.
How does this logic compare to, say, Forth, APL, or Joy? As far as I'm aware, they are all combinator-style languages - are they effectively implementations of the SK combinators?
FWIW, I think concatenative notation is the simplest useful computational framework. (https://joypy.osdn.io; disclosure: it's my project.) But I'm not mathematically sophisticated enough to make the formal argument.
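To illustrate what "concatenative" means here, a minimal toy evaluator (my own sketch, not Joy's actual implementation): a program is a sequence of words, each word is a function from stack to stack, and concatenating programs composes those functions.

```python
# A toy Joy-style evaluator with a few stack words; anything that is not
# a known word is treated as a literal and pushed onto the stack.

def evaluate(program, stack=()):
    stack = list(stack)
    for word in program:
        if word == 'dup':
            stack.append(stack[-1])
        elif word == 'swap':
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif word == '+':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif word == '*':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            stack.append(word)     # literal
    return stack

# "2 dup *" squares the top of the stack:
print(evaluate([2, 'dup', '*']))   # -> [4]

# The defining property: running P then Q equals running the
# concatenation P ++ Q, i.e. concatenation is function composition.
p, q = [2, 3], ['+']
assert evaluate(q, evaluate(p)) == evaluate(p + q)
```

The appeal is that there are no variables or binders at all, only composition, which is what invites the comparison with combinator calculi.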