Having something Turing-complete is surprisingly easy, and it hides everywhere. The repository has a small document that explains how you can use printf() as a computer: it can perform addition, logical OR, and negation, which is enough.
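To make the trick concrete, here is a minimal sketch of my own (not the repository's document): %n stores the number of characters printed so far, and * takes a field width from the argument list, so a single printf call can add two numbers.

    #include <stdio.h>

    int main(void) {
        int a = 3, b = 4, sum = 0;

        /* %*c prints one character padded to the field width given as
           an argument; %n stores the number of characters printed so
           far. Two fields of widths a and b followed by %n therefore
           leave a + b in sum (widths must be >= 1 to be exact, since
           %c always prints at least one character). */
        printf("%*c%*c%n", a, ' ', b, ' ', &sum);
        printf("\na + b = %d\n", sum);   /* prints: a + b = 7 */
        return 0;
    }

The repository's document builds the OR and negation operations out of similar tricks, which together are enough for computation.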
It was unintentional, but Ken Thompson being Ken Thompson, you can't be 100% sure.
It was probably unintentional, yeah. I don't recall any mention of early printf being overloaded to do stuff, nor is it clear why you would: you're already calling it from a much more convenient Turing-complete language (C).
Don't look at the end destination, look at the journey that takes you there:
* Learn low-level details of a basic but real-world CPU
* Practice the brain gymnastics of programming an atypical Turing-complete computer
You created new connections in your brain and put some of the old, established ones to use. Having a machine spit out the emulator would rob you of all that. Like, you can drive from A to B, but running from A to B does you much more good.
Hot take: the whole LLM craze is fed by a delusion. LLMs are good at mimicking human language, capturing some semantics along the way. With a large enough training set, the amount of semantics captured covers a large fraction of what the average human knows. This gives the illusion of intelligence, and humans extrapolate from it to capabilities like actual coding. Because large amounts of code from textbooks and whatnot are in the training set, the illusion is convincing for people with shallow coding abilities.
And then, while the tech is not mature, running on delusion and sunk costs, it's actually used for production stuff. Butlerian Jihad when?
My sophisticated sentiment analysis (talking to co-workers, other professional programmers and IT workers, plus HN and Reddit comments) seems to indicate a shift: there's a lot less storybook "Ay Eye is gonna take over the world" talk and a lot more distrust, even disdain, than you'd have seen even 6 months ago.
Self-plug here, but very related => Robustness and the Halting Problem for Multicellular Artificial Ontogeny (2011)
A cellular automaton where the update rule is a perceptron coupled with an isotropic diffusion. The weights of the neural network are optimized so that the cellular automaton can draw a picture, with self-healing (i.e. it rebuilds the picture when perturbed).
Back then, auto-differentiation was not as accessible as it is now, so the weights were optimized with an Evolution Strategy. Of course, using gradient descent would likely work way better.
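For the curious, here is a rough C sketch of that update rule. The grid size, channel count, 4-neighbour wrap-around diffusion, and tanh nonlinearity are illustrative assumptions on my part, not the paper's actual parameters, and the ES training loop is only outlined in a comment.

    #include <math.h>   /* tanhf; link with -lm */

    #define GW 32  /* grid width  */
    #define GH 32  /* grid height */
    #define NC 4   /* channels per cell; channel 0 is the drawn pixel */

    /* Perceptron weights (NC inputs -> NC outputs, plus a bias column).
       These are what the Evolution Strategy optimizes. */
    static float w[NC][NC + 1];

    /* One CA step: isotropically diffuse each channel (4-neighbour
       average with wrap-around), then apply the perceptron per cell. */
    static void ca_step(float cur[GH][GW][NC], float nxt[GH][GW][NC])
    {
        for (int y = 0; y < GH; y++)
            for (int x = 0; x < GW; x++) {
                float d[NC];
                for (int c = 0; c < NC; c++)
                    d[c] = (cur[y][x][c]
                          + cur[(y + 1) % GH][x][c]
                          + cur[(y + GH - 1) % GH][x][c]
                          + cur[y][(x + 1) % GW][c]
                          + cur[y][(x + GW - 1) % GW][c]) / 5.0f;
                for (int o = 0; o < NC; o++) {
                    float a = w[o][NC];          /* bias term */
                    for (int c = 0; c < NC; c++)
                        a += w[o][c] * d[c];
                    nxt[y][x][o] = tanhf(a);     /* squashing nonlinearity */
                }
            }
    }

    int main(void)
    {
        static float a[GH][GW][NC], b[GH][GW][NC];
        a[GH / 2][GW / 2][0] = 1.0f;   /* seed: one active cell */

        /* A (1+1)-style ES would wrap this loop: perturb w with
           Gaussian noise, run the CA, score channel 0 against the
           target picture, keep the mutant weights on improvement. */
        for (int t = 0; t < 50; t++) {
            ca_step(a, b);
            ca_step(b, a);
        }
        return 0;
    }

With NC = 4 that is only 20 weights, well within reach of a simple Evolution Strategy, and since the same local rule runs in every cell, damaged regions get regrown by their neighbours.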
I fixed my recurrent back pain with a 6-minute daily morning routine, i.e. plank, side plank, reverse plank, 1 min 30 sec each.
Posture muscles are not well known among the general public. Loss of strength due to aging and a sedentary lifestyle makes standing, sitting, etc. uncomfortable.
Significantly faster compilation means less friction to iterate on ideas and try things, which in the end leads to more polished results.
A nice interface is agreeable, but maybe there are diminishing returns when you pay for it with long compile times. I remember pondering that while working with the Eigen math library, which is very nice but such a resource hog when you compile a project using it.