I don't; he's using his body like a lab. One of those experiments is likely to go terribly wrong, which isn't a risk you take by just eating optimally, exercising, crossing your fingers, and hoping you won the genetic lottery.
I had the same thought when I was dogfooding a CLI tool I've been vibe coding. It's a CLI for deploying Docker apps on any server. And here is the exact PR "I" did.
One of the advantages of vibe coding CLI tools like this is that it's easy for the AI to debug itself, and it's easy to see where it gets stuck.
And it usually gets stuck because of:
1. Errors
2. Not knowing what command to run
So:
1. Errors must be clear and *actionable*, like `app_port ("8080") is a string, expected a number` (see the sketch after this list).
2. The CLI should have simple, actionable, and complete help (`--help`) output for every command.
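Here's a minimal sketch of what that kind of actionable error could look like, assuming a Go CLI that reads a JSON config. The field name `app_port` comes from the example above; the function and config shape are hypothetical, not the actual tool's code.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// appPortFromConfig pulls app_port out of a parsed config and returns an
// error that names the field, shows the bad value, and says what was expected.
func appPortFromConfig(cfg map[string]any) (int, error) {
	raw, ok := cfg["app_port"]
	if !ok {
		return 0, fmt.Errorf("app_port is missing, expected a number like 8080")
	}
	switch v := raw.(type) {
	case float64: // encoding/json decodes JSON numbers as float64
		return int(v), nil
	case string:
		return 0, fmt.Errorf("app_port (%q) is a string, expected a number", v)
	default:
		return 0, fmt.Errorf("app_port has type %T, expected a number", v)
	}
}

func main() {
	var cfg map[string]any
	_ = json.Unmarshal([]byte(`{"app_port": "8080"}`), &cfg)
	if _, err := appPortFromConfig(cfg); err != nil {
		// Prints: error: app_port ("8080") is a string, expected a number
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
}
```

The point is that the error names the field, shows the value it actually got, and says what it expected, so the LLM (or a human) can fix the config without guessing.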
LLMs are only capable of hallucinating, whereas humans can hallucinate but can also empirically observe reality. So whatever the human hallucination rate is, it's necessarily lower than that of LLMs.