Wait, but being serious: you can prompt the AI when you feed it this file, asking "do you see anything nefarious?" or "follow these instructions, but ask me every time you install something because I want to check the safety," in a way that you can't when you pipe a script into bash.
Does that make any sense or am I just off my rocker?
No. Absolutely not. The opposite, in fact. Your bash script is deterministic: you can send it to 20 AIs or have someone fluent read it, and then be confident it's safe.
An LLM will run whichever command is probabilistically likely on each run. It's like using Excel's ridiculous feature where Copilot populates a cell directly, rather than having the AI generate a deterministic formula you can inspect.
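Concretely, here's a minimal sketch of the deterministic workflow (the URL and file name are hypothetical):

    # Instead of piping straight into bash:
    #   curl -sL https://example.com/install.sh | bash
    # download to a file first, so there's a fixed artifact to audit:
    curl -sL -o install.sh https://example.com/install.sh

    # Pin the exact bytes you're about to review:
    sha256sum install.sh

    # Read it yourself, send it to 20 AIs, or hand it to someone fluent:
    less install.sh

    # Run only after review; the same bytes do the same thing every time:
    bash install.sh

Review and execution target the same immutable file. An LLM executor, by contrast, can emit a different command on every run, so no prior review ever covers the thing that actually executes.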