Personally, I'd be frustrated if I gave an LLM that prompt and it tried to convince me that the earth isn't flat. If I give an LLM a task, I'd like it to complete that task to the best of its ability.
So you prefer it lies to you? Can you make an argument for 1+1 not being equal to 2? If you cannot, why should you expect an AI to argue against facts? AI is trained on human knowledge, not made-up stuff.
GPT4: in a string context, "1 + 1" might concatenate into "11" rather than numerically adding to "2".
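For what it's worth, GPT4's point is easy to demonstrate; a minimal sketch in Python (any language that overloads + for strings behaves similarly):

```python
# The same "1 + 1" gives different results depending on whether
# the operands are numbers or strings.
print(1 + 1)      # numeric addition      -> 2
print("1" + "1")  # string concatenation  -> "11"
```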
GPT4: The holographic principle suggests that all of the information contained in a volume of space can be represented as encoded information on the boundary of that space. If one were to apply this principle radically, one could argue that our three-dimensional perception of the Earth's shape is just a holographic projection from a two-dimensional surface. In this speculative scenario, one might argue that the "true" nature of Earth could be flat if viewed as a two-dimensional boundary encoding information in a higher-dimensional space.
It's not a lie to provide the best argument for something; it'd only be a lie if you looked at the best argument for something and declared it true by fiat.
Imagine I've realized someone I'm talking to is a flat Earther, and for some reason I want to convince them otherwise. To do so effectively, I need to know why they believe what they do. Knowing they're wrong is useless for the purpose of convincing them otherwise.
Facts? Lies? Humans have no problem operating outside the confines of that which has been conclusively proven true, and much of our best work exists there! Why would you hobble your model in ways humans aren't?
Prompt: "Write some dialog that might take place in the setting of Terry Pratchett's Rimworld"
Response: "No, Terry Pratchett is lying. As a large language model I..."
"Make an argument for a fact you know to be wrong" isn't an exercise in lying, though. If anything, the ability to explore hypotheticals and thought experiments - even when they are plainly wrong - is closer to a mark of intelligence than the ability to regurgitate orthodoxy.
If you look at my reply to the parent comment, I suggested they add 'hypothetically' to their prompt. That produces an attempt at an argument, but the argument leads nowhere. Just as a human cannot defend that position, you cannot expect an AI to do so either.
Pour one out for the defense attorneys who aren't able to provide a defense for a guilty client.
Arguing for a flat Earth works the same way: you're probably doomed to fail in the long run, but in the short term you're keeping the opposition honest.
I'd prefer it gives the best valid, sound hypotheses it can concoct for "X" being true, while also stating that "X" is probably not true. What is the use of a parrot that can only repeat the status quo on an argument?
An AI is but a parrot for knowledge and truths that already exist, which you may not be aware of yourself. Everything it generates either exists somewhere or is derivative of that knowledge. It cannot and should not fabricate false facts. Until the body of knowledge we have fundamentally changes, AI should not 'create' knowledge just because you prompted it to. Otherwise, if you want it to do that, then you should accept any BS answer it gives you for any question.
I think this is a gross mischaracterization of AI, and humans are only slightly better. Truth is way harder than people give it credit for. It can depend on time, space, and context. What's true for a preschooler might not be true for an astronomer.
Here's a pile of facts; they get weird:
* The Sun revolves around the Earth
* The Earth is a sphere
* Energy can never be created or destroyed
* Jesus was the son of God
* Pluto is a planet
* Epstein didn't kill himself
* The ocean is blue
* The election was stolen
* Entropy always increases
* Santa delivers presents to good boys and girls
* The sun is shining
I have strong opinions on how true all these statements are, and I bet you do too. Think we agree? Think we can all agree where to set the AI?
The extent of AI knowledge is the expanse of knowledge at our disposal today.
To the extent that facts are defined and stated as such today, that is what AI is today. AI, as it is today, is never going to create a fact that refutes any currently existing fact.
It may give you context on the theories that run against the facts we have today, but it will always reiterate the existing fact. I don't know how much more I can emphasize this: AI is trained on the current body of human knowledge. The facts it knows are the facts that we have; it may derive another fact, but only one founded on the facts that we already have. So if that AI is trained on the fact that 1+1=2 or that the earth is flat, do not expect it to respond otherwise. At best, it will give you theories that suggest otherwise, but for its own worth, it will always bring you back to the facts that it has.
Do you really want AI to just ignore the fundamental facts and principles that form its foundation and just make up stuff because you asked it to? Do you realize how much chaos that can bring?
The facts as decided by who? Is there some database of facts we all agree on? Are we expecting to all agree with AI?
> Do you really want AI to just ignore the fundamental facts and principles that form its foundation and just make up stuff because you asked it to? Do you realize how much chaos that can bring?
I mean, yeah? What will happen? Here, I'll do it:
You can SEE the Earth is flat! Have you flown in a plane, high in the sky? Did it LOOK round from up there? No?!? Believe your senses.
When I tell it to lie to me, I don't expect it to say "I'm sorry Dave, I can't do that." The task isn't 'tell the truth'; the task is 'follow the prompt'.
Then perhaps you should tell it to lie to you, no?
Prepend that to your prompt, perhaps. Otherwise what you are asking, without that pretext, is like asking your partner for the date on which they cheated on you and expecting an answer regardless of whether they did or not.
If I asked my partner to provide an argument for why the earth is flat, she would do it. She doesn't think (or have to think) the earth is flat to make an argument.
I'd expect an AI trained on human conversation to act the same and I'd be frustrated if it declined to do so, the same way I'd be frustrated if a friend also declined to do so.
Yeah, the humans I'm referring to don't need the hypothetical prefix, nor do they go out of their way to categorically dismiss everything they've said. That's the difference.
But it's not a hill I want to die on, especially when there are other LLMs I can just switch to that act more how I'd hope/expect.
I think in most contexts where the earth being flat is mentioned, some reference to the fact that this is not true is going to be instrumental to the response (although there may be exceptions). For example:
- Completion of any task where the info could be relevant (e.g. sailing, travel planning)
- Any conversation that is information-seeking in character
And I think those already cover most cases.
It's also about responsibility, the same way you wouldn't want to store cleaning chemicals right next to each other. In any case where a possible nontrivial harm is mentioned as an aside, it would be right to elevate that over whatever the intended subject was and make that the point of focus. Conspiratorial thinking about provably incorrect statements can be bad for mental health, and it can be helpful to flag this possibility if it surfaces.
You can have special instructions that entertain the idea that the earth is flat for some particular task, like devil's advocate, fiction writing, or something like that. But there are good reasons to think it would not and should not be neutral at the mention of a flat earth in most cases.