This is a more specific version of the base rate fallacy, called the false-positive paradox. It's illustrated by a simple example.
Suppose you have a test looking for an extremely rare genetic condition (say, 1 in 1 billion people have it). For a positive result to be more likely right than wrong, the test's false-positive rate has to be below the prevalence — in this case, a specificity above .999999999. Anything less, and the false positives swamp the true ones, making a positive result worse than a coin flip about whether you have the condition or not. Therefore, in most cases you would be better off making a fake test that just says "no" every time.
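The arithmetic behind this is just Bayes' rule. Here's a sketch (the function name and the 99.99% specificity figure are illustrative, not from the original):

```python
def posterior_positive(prevalence, sensitivity, specificity):
    """P(have condition | test is positive), by Bayes' rule."""
    true_pos = prevalence * sensitivity          # rate of correct positives
    false_pos = (1 - prevalence) * (1 - specificity)  # rate of false alarms
    return true_pos / (true_pos + false_pos)

# A seemingly excellent 99.99%-specific test, on a 1-in-a-billion condition:
p_good_test = posterior_positive(1e-9, 1.0, 0.9999)
# ~0.00001 — a positive result is almost certainly a false alarm.

# Only when specificity exceeds 1 - prevalence does a positive
# result become more likely right than wrong:
p_heroic_test = posterior_positive(1e-9, 1.0, 1 - 1e-9)
# ~0.5 — exactly the coin-flip threshold.
```

Notice that sensitivity barely matters here; the posterior is dominated by the ratio of prevalence to false-positive rate.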
It's true that there are other options, such as repeating the test. In the real world, though, that might not help, because the circumstances that make the test inaccurate can persist across retests.
For example, genetic parentage tests are very accurate... unless the parent being tested has chimerism, in which case you'll get the very confusing result that the child isn't your own, even under repeated testing.