As such, "The AI did not pass the Turing test because the interrogators were not sufficiently challenging" becomes a standard impossible to beat. Taken to its reductio, passing the Turing test would require the AI to fool everyone on the planet, which is surely not what is intended.
Rather, we should set an upper bound on what a reasonable interpretation of "as challenging as possible" means.