If I were to add just one thing to the list: metatest. Write a test that asserts that your generated test cases are "sufficiently comprehensive", for whatever value of "sufficiently" you need. In an impure language, this is as easy as having the generator contain a mutable counter for "number of test cases meeting X condition" for whatever conditions you're interested in. For example, say your property is "A iff B". You might want to fail the test if fewer than 10% of the generated cases actually had A or B hold. (And then, of course, make sure your generators are such that - say - A and B hold 50% of the time; you want an astronomically small chance of random metatest failure.)
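A minimal sketch of what that looks like with pytest and Hypothesis (cond_a, cond_b, the generator bias, and the 10% threshold are all made up for illustration; Hypothesis can also run slightly fewer than max_examples cases, so treat the exact counts loosely):

    from hypothesis import given, settings, strategies as st

    N_EXAMPLES = 200

    def cond_a(n):  # illustrative condition A: divisible by 6
        return n % 6 == 0

    def cond_b(n):  # illustrative condition B: divisible by 2 and by 3
        return n % 2 == 0 and n % 3 == 0

    # Bias the generator so A holds about half the time: one branch
    # forces a multiple of 6, the other draws an arbitrary integer.
    ints = st.one_of(st.integers().map(lambda n: 6 * n), st.integers())

    hits = {"a": 0}  # the mutable counter, bumped once per generated case

    @settings(max_examples=N_EXAMPLES)
    @given(ints)
    def check_a_iff_b(n):
        if cond_a(n):
            hits["a"] += 1
        assert cond_a(n) == cond_b(n)

    def test_a_iff_b_with_metatest():
        hits["a"] = 0
        check_a_iff_b()  # a @given-wrapped function runs when called
        # The metatest proper: fail if under 10% of cases exercised A.
        assert hits["a"] >= 0.1 * N_EXAMPLES, hits["a"]

(If you only want to eyeball the distribution rather than fail on it, Hypothesis's event() plus --hypothesis-show-statistics will report how often each condition came up.)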
(I gave a brief intro to this in the Metatesting section of a talk two years ago: https://github.com/Smaug123/talks/blob/master/DogeConf2019/D... . On rereading it now, I see there's a typo on the "bounded even integers" slide, where the final `someInts` should read `evenIntegers`.)
I'd only check that when you're writing or changing the tests, though; for nontrivial conditions it can take a very long time to drive the probability of any metatest failing in a given run down to negligible levels, and flaky metatests are just as bad as the usual kind.
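(Rough binomial arithmetic to illustrate: with 100 cases and a condition that holds 50% of the time, the chance of seeing fewer than 10 hits is around 10^-18, which is fine; if the condition only holds 15% of the time, you land under 10 hits on the order of 5% of runs, which is a flaky test.)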
If this split is particularly important, we'd usually recommend writing separate tests, one for data satisfying A and one for data satisfying B; you can even supply the generators with pytest.mark.parametrize if copy-pasting the test body offends (see the sketch below).
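Something like the following, assuming Hypothesis (the strategies are stand-ins; with a true "A iff B" any generator built from A's definition also hits B, but the pattern is the same for independent conditions). pytest.mark.parametrize has to be the outer decorator, and st.data() lets the test body draw from whichever strategy that parametrized run supplies:

    import pytest
    from hypothesis import given, strategies as st

    # Illustrative strategies satisfying each condition by construction.
    satisfies_a = st.integers().map(lambda n: 6 * n)      # divisible by 6
    satisfies_b = st.integers().map(lambda n: 2 * 3 * n)  # divisible by 2 and 3

    @pytest.mark.parametrize("strategy", [satisfies_a, satisfies_b])
    @given(data=st.data())
    def test_a_iff_b(strategy, data):
        n = data.draw(strategy)  # one draw per generated case
        assert (n % 6 == 0) == (n % 2 == 0 and n % 3 == 0)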