Hacker News

In my experience this is less an application of critical skills and more a domain knowledge check. If you know enough about the subject to have accumulated heuristics for correctness and an intuition for "lgtm" in the specific context, then applying them is not very difficult or intellectually demanding.

If you don't have that experience in the domain, you will spend approximately as much effort validating the output as you would have spent creating it yourself, but the process still demands less of your critical skills.




No, it is critical thinking skills, because the LLMs can teach you the domain, but you then have to understand what they are saying well enough to tell whether they are bsing you.

> If you don't have that experience in this domain, you will spend approximately as much effort validating output as you would have creating it yourself

Not true.

LLMs are amazing tutors. You have to use outside information; they test you, you test them. But they aren't pathologically wrong, as if they were running some Gaussian magic-smoke psyop against you.


Knowledge certainly helps, but I’m talking about something more fundamental: your bullshit detector.

Even when you lack subject matter expertise in something, there are certain universal red flags that skeptics key in on. One of the biggest is "There's no such thing as a free lunch" and its corollary: "If it sounds too good to be true, it probably is."



