Then I should explain why "a blob" is, in a weird way, actually superior to a Dockerfile.
Step 1. You write a Dockerfile. You build it. You test with it. Hey, it works! You push it to production.
Step 2. Years later, you need to patch that container in production. So you change the Dockerfile, rebuild, and re-test.
Step 3. Uh-oh! The tests don't pass anymore! You changed one line in the Dockerfile, but now the app isn't working (or perhaps the build isn't). What's going on?
What's going on is a reproducibility failure. Just having the instructions (or what someone thought were the instructions, or what they were years ago) isn't enough to guarantee you get the same result a second time. A million little things you didn't think of can change between builds (and the more time in between, the more things change), and any of them can break something. Without a truly reproducible build, you're likely to run into problems rebuilding from the instructions alone.
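To make that concrete, here's a hypothetical Dockerfile (the URL and package are invented); every unpinned reference in it is a spot where a rebuild years later can silently diverge:

```dockerfile
# Each unpinned reference is a place where a rebuild can drift:
FROM ubuntu:latest                    # "latest" resolves to a different image now
RUN apt-get update && \
    apt-get install -y python3        # pulls whatever versions the mirror has today
RUN curl -fsSL https://example.com/setup.sh | sh   # remote file may have changed or vanished
```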
That's why with Docker containers, we rely on build artifacts. The container you had two years ago? However it was built, with whatever weird combination of arguments and files-on-the-internet-being-pulled-at-build-time and everything else, it was built successfully. If you want to be really sure you patch it correctly, you pull the old container image (not the Dockerfile), modify that, and push the result to production as a new container. No rebuilding, just patching. That avoids reproducibility failures.
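That patch-don't-rebuild move can be sketched as a tiny Dockerfile (the image name and the package being upgraded are hypothetical): start FROM the known-good artifact and apply only the fix.

```dockerfile
# Patch the existing, known-working image instead of rebuilding from scratch.
FROM registry.example.com/myapp:2.3.1   # the artifact that's been running in prod
RUN apt-get update && \
    apt-get install -y --only-upgrade openssl   # apply just the security fix
```

Build and push that as, say, `myapp:2.3.1-patched`; every layer below the fix is byte-for-byte the old image, so there's nothing left to reproduce.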
That same idea is why you'd want to just download a blob and later re-apply it.
The blob was the state of things when they were working. If you tried to just write down the instructions to replicate it, you'd likely either 1) get them wrong (it was only working because of some unrelated changes somebody else made and forgot about) or 2) hit a reproducibility failure.
So "the blob" I'm talking about doesn't have to be a literal binary blob. It could be anything; we're talking about an abstract idea here. It could be layers like a container image, metadata in a JSON file, auto-generated configuration or code, etc. I don't care what it is. It just has to describe the state as it was when X was working. It's then up to the tool to figure out how to get back to that state.
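As a toy sketch of the idea (the shape of the state here is invented): capture the working state as an opaque blob, and let the tool reapply it later, no instructions required.

```python
import json

def capture(state):
    """Dump the current working state as a blob (here, just JSON)."""
    return json.dumps(state, sort_keys=True)

def restore(blob):
    """The tool's job: get back to exactly this state, however it likes."""
    return json.loads(blob)

# Saved while things were known to work...
working = {"packages": {"openssl": "1.1.1k"}, "config": {"port": 8080}}
blob = capture(working)

# ...years later, nobody has to remember *how* we got here:
assert restore(blob) == working
```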
People already write this as "declarative code" for configuration management tools to do the same thing. I'm saying, I don't want to have to write the code. Just dump it out for me.
That's not what they're saying. They're saying that a system where you have to declare everything manually is annoying (which it is); ideally the tool would record changes as you made them, then deduplicate them and remove the unnecessary ones to arrive at a final playbook that can be replayed when needed.
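A minimal sketch of that record-then-deduplicate idea (the change format, (key, value) pairs measured against a baseline, is my own assumption):

```python
def minimal_playbook(recorded, baseline=None):
    """Collapse a recorded stream of changes into a minimal replayable playbook."""
    baseline = baseline or {}
    final = {}
    for key, value in recorded:
        final[key] = value  # later changes supersede earlier ones
    # Drop changes that merely restore the baseline -- they were unnecessary churn.
    return [(k, v) for k, v in final.items() if baseline.get(k) != v]

# The operator fiddled around while getting things to work:
recorded = [
    ("port", 8080),
    ("debug", True),   # turned on while troubleshooting
    ("port", 9090),    # changed their mind
    ("debug", False),  # turned back off
]
print(minimal_playbook(recorded, baseline={"debug": False}))
# -> [('port', 9090)]
```

Only the change that actually moved the system away from its starting state survives into the playbook.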
Dockerfiles are basically this, but with a file documenting the different steps you took to get to that state.