This is an interesting discussion. The ideas of philosophy meet the practical meaning of words here.
You can reasonably say a database doesn't lie. It's just a tool; everyone agrees it's a tool, and if you get the wrong answer, most people would agree it's your fault for writing the wrong query or using the wrong data.
But the difference between ChatGPT and a database is that ChatGPT will defend its assertions. It will say things that support its position - not just fake references but an entire line of argument.
Of course, all of this is simply duplicating/simulating what humans do in discussions. You can call it a "simulated lie" if you don't like the idea of it really lying. But I claim that in normal usage, people will take this as "real" lying, and ultimately that functional meaning is what "higher," more philosophical accounts will have to accept.
Somewhat unrelated, but I think philosophy will be instrumental in the development of actual AI.
To make artificial intelligence, you need to know what intelligence is, and that is a philosophical question.