Hacker News

Wait, but being serious: when you feed the AI this file, you can prompt it with "do you see anything nefarious?" or "follow these instructions, but ask me every time you install something, because I want to check the safety" — in a way that you can't when you pipe a script into bash.

Does that make any sense or am I just off my rocker?



You can do the same thing with any install script you might come across today.


True, that's a fair point. Do you think there's any merit to the idea that the UX of asking questions about a markdown file is more natural than asking them about a bash script?


No. Absolutely not. The opposite in fact. Your bash script is deterministic. You can send it to 20 AIs or have someone fluent read it. Then you can be confident it’s safe.

An LLM will run whatever command is probabilistically likely each time. This is like using Excel's ridiculous feature to have a cell be populated by Copilot rather than having the AI generate a deterministic formula.
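The deterministic-review workflow described above can be sketched in shell: save the script to disk, review those exact bytes (yourself, or by handing the file to as many reviewers as you like), pin a checksum, and only then execute. This is a minimal sketch; `install.sh` and its contents stand in for a script you would normally fetch with something like `curl -o`.

```shell
# Create a stand-in install script (in practice: curl -fsSL <url> -o install.sh).
cat > install.sh <<'EOF'
echo "installing example-tool"
EOF

# Pin a checksum of the reviewed bytes; verify before running, so the
# bytes you reviewed are provably the bytes you execute.
sha256sum install.sh > install.sh.sum
sha256sum -c install.sh.sum   # prints "install.sh: OK" on a match

bash install.sh
```

The point of the checksum step is that review and execution can be separated in time (or across people) without losing the guarantee — something a prompt like "ask me before installing" cannot offer, since the model may behave differently on each run.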


Oh, you're actually serious.

This forum gets more depressing by the day.



