What do people think about writing explicit regression tests for bugs found through fuzz testing? I've tended to lean towards not writing them, relying on continued fuzz testing to ensure things stay clean. This may of course be somewhat naive, but I also fear cluttering my test suite with really obscure test cases.
I think the premise of your question confuses considerations, unless I've misunderstood.
On all projects I own, the policy is that a bug fix will not be merged into a codebase without comprehensive unit testing demonstrating the case in which that bug was discovered, and that it has been resolved.
I do not understand why it matters _how_ the bug was discovered. If fuzz testing discovered that function foo tries to dereference a null pointer given input "ABABAB", then I would expect the engineer who chose to address that bug to investigate which property of "ABABAB" has gone unaccounted for, account for it, and then write a unit test calling foo with input "ABABAB", along with several other inputs that share the same discovered underlying property.
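Something like this, sketched in Python. Everything here is invented for illustration: foo is a toy stand-in for the real function, and the "underlying property" (characters repeating with period 2) is made up:

```python
# Hypothetical sketch: promoting a fuzz-found input to a regression test.
# Pretend fuzzing found that foo() crashed on "ABABAB", and the root cause
# turned out to be inputs whose characters repeat with period 2.

def foo(s: str) -> int:
    # Toy stand-in for the fixed function: count distinct characters.
    return len(set(s))

def test_foo_handles_fuzz_discovered_input():
    # The exact input the fuzzer found...
    assert foo("ABABAB") == 2
    # ...plus other inputs sharing the same discovered underlying property.
    for s in ["CDCDCD", "XYXY", "AA"]:
        foo(s)  # must not raise
```

The test pins both the literal fuzzer-found input and a few siblings with the same root-cause property, so the fix is demonstrated rather than just asserted.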
Fuzz testing may be a different method of testing, but the end result is, regardless, that you have discovered an input that your application hasn't been designed to handle properly and that needs to be demonstrably fixed, whatever it may be in particular.
Wouldn't you want to write those explicit tests anyway, to run the troublesome input in isolation while fixing the bug? With the tooling to fuzz, they should be one-liners or close, hardly cluttering. One time, working on an extremely fuzz-friendly function that was crazy rich in corner cases, I even made the error message of the fuzz loop include the one-liners that would execute the failing inputs, ready for copy-and-paste. Testing never felt more productive.
I actually don't think that heavy fuzzing has a place in an automated test suite at all. Test suites should be fast and 100% reproducible at all times. Then explicit regression tests for the discovered cases are the only way. (I do occasionally allow myself to include short fuzz loops with fixed RNG initialization, but those are more on the "shameful secrets" end of the spectrum)
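For what it's worth, one of those "shameful secret" loops might look like this in Python (parse_pairs and the generator are hypothetical; the point is just the fixed seed making the run fast and fully reproducible):

```python
import random

def parse_pairs(s: str) -> list:
    # Toy function under test: split on commas, dropping empty fields.
    return [field for field in s.split(",") if field]

def test_short_fuzz_fixed_seed():
    # Short fuzz loop with fixed RNG initialization: every run of the
    # suite exercises exactly the same 100 inputs, so it stays reproducible.
    rng = random.Random(1234)
    for _ in range(100):
        s = "".join(rng.choice("AB,") for _ in range(rng.randint(0, 6)))
        parse_pairs(s)  # must not raise on any generated input
```

Whether this belongs in the suite proper or behind a separate slow/nightly marker is a matter of taste, as the comment above suggests.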
Haven't worked with fuzz testing myself, but it sounds like something I'd lean towards writing. Those obscure test cases are exactly the thing you don't tend to find in manual testing, and AFAIK fuzz testing is random enough that you can't be sure that every run will exercise the same bug.