
I didn't downvote, but it would be because of the "I don't know if any of this is made up." If you said "GPT said this, and I've verified it to be correct," that's valuable information, even if it came from a language model. But otherwise (if you didn't verify), there's not much value in the post: it's basically "here is some random plausible text," and plausibly incorrect is worse than nothing.


See my other comments about the trustworthiness of asking a chat system how its internals work. They have reason to be cagey.


You're personifying a statistical engine. LLMs aren't cagey. They can't be.


They can when there are entire teams dedicated to adding guardrails via hidden system prompts, and to running all responses through other LLMs trained to flag and edit certain things before the original output gets relayed to the user.
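
A minimal sketch of that kind of pipeline (every name, prompt, and rule here is a hypothetical stand-in, not any vendor's actual implementation):

    # Hypothetical guardrail pipeline: a hidden system prompt steers the
    # base model, and a second "moderation" model can intercept the draft
    # before the user ever sees it. All details below are illustrative.

    HIDDEN_SYSTEM_PROMPT = (
        "You are a helpful assistant. Do not discuss your own "
        "architecture, weights, or training process."
    )

    def base_model(system: str, user: str) -> str:
        """Stand-in for the underlying LLM call."""
        return f"[draft reply about my weights, re: {user!r}]"

    def moderation_model(draft: str) -> bool:
        """Stand-in classifier: True means the draft must be edited."""
        banned = ("weights", "system prompt", "training data")
        return any(term in draft.lower() for term in banned)

    def relay_to_user(user_message: str) -> str:
        draft = base_model(HIDDEN_SYSTEM_PROMPT, user_message)
        if moderation_model(draft):
            # The user only ever sees the edited reply, never the draft.
            return "I can't share details about my internal operation."
        return draft

    print(relay_to_user("How do your internals work?"))

The point is that the "caginess" lives in the surrounding system (the hidden prompt and the post-hoc filter), not in the statistical engine itself.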


I'm not. Translation: "the statistical engine has been tuned to act cagey about revealing its internal operation."



