
I wonder whether the bot hallucinated the wrong information or whether the policy changed and the bot simply wasn't updated / retrained. The latter seems more likely but less interesting, akin to information on a boring HTML page getting overlooked during a site update.


The incident was in 2021, so I don’t think it was an LLM.


No, but it would be nice if the same laws applied to LLMs. Too often they're now deployed as a quick, cheap way to stand up a chatbot.

But before, they either quoted me a solution or escalated to support.

Now it makes up a non-working solution.


Honestly, would it make any difference if the information were just on an FAQ page and contradicted what the actual ticket contract said?

I’m with you. They should be held to the information they give out. Short of an employee purposely and maliciously giving out bad information, not making stuff up seems like a basic requirement for them to operate.



