"Hallucinations" (ie a chatbot blatantly lying) have always struck me as a skill issue with bad prompting. Has this changed recently?
To a skilled user of a model, the model won't just make shit up.
Chatbots will of course answer unanswerable questions; they're still software. But why are you taking software at its word when you have the whole internet available to you? Are you dumb? You must be if you aren't on Wikipedia right now. It's empowering to admit this. Say it with me: "I am so dumb Wikipedia has no draw to me." If you can say that with a straight face, you're now equipped with everything you need to be a venture capitalist. You are now an employee of Y Combinator. Congratulations.
Sometimes you have to admit that the questions you're asking are unlikely to be answered by the core training data, and you'll get garbled responses: confabulations. Adjust your queries accordingly. This is the answer to 99% of the issues product engineers have with LLMs.
If you're regularly hitting random bullshit, you're prompting it wrong. Models only yield good results when they get prompts in territory they're already familiar with. Find a better model or ask better questions.
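If you want that made concrete: a minimal sketch of what "ask better questions" tends to look like in practice is grounding the question in source material you already trust and giving the model explicit permission to say it doesn't know. The function and names below are purely illustrative, not any particular API; the returned string is just a prompt you'd hand to whatever chat model you use.

```python
# Minimal sketch (assumptions: a generic chat model that accepts a plain text prompt).
# Grounding the question in supplied context and allowing "I don't know"
# is one common way to cut down on confabulated answers.

def build_grounded_prompt(question: str, context: str) -> str:
    """Return a prompt that constrains the answer to the supplied context."""
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, reply with 'I don't know'.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# Usage: feed the returned string to whatever model you're already using.
prompt = build_grounded_prompt(
    question="When was the library's 2.0 release?",
    context="Changelog: 2.0 shipped on 2021-03-14 with the new parser.",
)
print(prompt)
```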
Of course, none of this is news to people who actually, regularly talk to other humans. It's just normal behavior. Hey, maybe if you hit the software more it'll respond kindly! Too bad you can't abuse a model.