>In general, if you are trying to distinguish an AI from a human, you don’t want to ask it obscure trivia questions. GPT-3 is pretty good at a wide variety of topics.
Perhaps too wide a variety of topics. You could ask it a range of trivia questions about totally unrelated obscure topics that no one human could plausibly know all of.
It would make sense to train a different model, built on top of GPT-3, specifically to pass the Turing test. Perhaps by having humans actually have those conversations. Perhaps somebody could make a game out of it where humans can pretend to be GPT-3 as well, and you'd collect a large number of conversations along with their outcomes.
It would learn not to know too much, to keep the conversation fluent, and perhaps to get bored after a while.
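Concretely, such a game would produce labeled transcripts something like the sketch below. This is a minimal illustration of the idea, not an existing dataset format; the record fields and function names (`GameRecord`, `fooled_the_judge`, etc.) are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GameRecord:
    transcript: List[str]    # alternating judge/respondent turns
    respondent_is_ai: bool   # ground truth: was the respondent the model?
    judge_guessed_ai: bool   # the judge's verdict after the conversation

def fooled_the_judge(r: GameRecord) -> bool:
    """An AI respondent 'wins' when the judge guesses human."""
    return r.respondent_is_ai and not r.judge_guessed_ai

def passing_transcripts(records: List[GameRecord]) -> List[List[str]]:
    """Transcripts where the model passed as human: candidate
    fine-tuning data for a Turing-test-specialized model."""
    return [r.transcript for r in records if fooled_the_judge(r)]
```

Fine-tuning only on the transcripts that fooled the judge would push the model toward exactly those behaviors: not knowing too much, staying fluent, getting bored.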
I imagine creating a system that watches Turing tests on real and artificial subjects, records the human judge's guess as to whether the subject is an AI, along with whether it actually is, and feeds those results back into the test. I'm sure this isn't a novel idea of yours or mine.
Indeed. The Turing test, as originally proposed, is about conversational ability, not the capacity to be infinitely cooperative in some sort of obscure trivia quiz. More human answers to some of these questions would include "I don't know", "I don't care", and "Why are you asking me these?". It would be very interesting to see how GPT-3 would behave if primed by the prompt text to be less cooperative.
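For instance, with the original OpenAI completions API, you could prime a less cooperative persona roughly like this. A minimal sketch: the persona description and the few-shot exchange are invented for illustration, and the engine name and sampling parameters are merely plausible defaults.

```python
import openai

openai.api_key = "sk-..."  # placeholder; use your own API key

# Persona prompt: an ordinary, mildly impatient human respondent.
prompt = """The following is a chat with a person of ordinary general
knowledge. They are somewhat impatient and often answer "I don't know"
or push back on strange questions.

Q: Who was the 11th president of the United States?
A: No idea. Why would I know that off the top of my head?

Q: What is the boiling point of tungsten?
A:"""

response = openai.Completion.create(
    engine="davinci",   # base GPT-3 engine
    prompt=prompt,
    max_tokens=40,
    temperature=0.7,
    stop=["\nQ:"],      # stop before generating the judge's next question
)
print(response.choices[0].text.strip())
```

With a prompt like this, the model tends to continue in the persona it was shown, so refusals and deflections become likely completions rather than the eager encyclopedic answers GPT-3 gives by default.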