I would highly recommend using a specialized library like BenchmarkDotNet. It's aimed mostly at microbenchmarks but works for larger-scale benchmarks as well.
It does things like force you to build in Release mode to avoid debug overhead, run warmup cycles, and take other measures to avoid the pitfalls that come with how .NET works - JIT compilation, runtime optimization, and so on - and it outputs nicely formatted statistics at the end. Rolling your own benchmarks with simple timers can be very unreliable for many reasons.
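For anyone who hasn't used it, here's a minimal sketch of what a BenchmarkDotNet benchmark looks like. The class and the string-concatenation workload are made up purely for illustration; the `[Benchmark]`/`[Params]` attributes and `BenchmarkRunner` are the library's real entry points.

```csharp
using System.Text;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Hypothetical workload: compare naive string concatenation vs StringBuilder.
public class StringConcatBenchmarks
{
    // BenchmarkDotNet runs each benchmark once per parameter value.
    [Params(100, 1000)]
    public int N;

    [Benchmark(Baseline = true)]
    public string Concat()
    {
        var s = "";
        for (int i = 0; i < N; i++) s += "x";
        return s;
    }

    [Benchmark]
    public string Builder()
    {
        var sb = new StringBuilder();
        for (int i = 0; i < N; i++) sb.Append("x");
        return sb.ToString();
    }
}

public class Program
{
    // The runner handles warmup, iteration counts, and the summary table.
    public static void Main() => BenchmarkRunner.Run<StringConcatBenchmarks>();
}
```

Run it with `dotnet run -c Release` (it will refuse to produce meaningful results from a Debug build) and it prints a table of means, error margins, and standard deviations per benchmark and parameter combination.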
Oh no argument on this at all, though I haven't touched .NET in several years, since I no longer have a job doing F# (though if anyone here is hiring for it please contact me!).
Even so, I don't know that a benchmarking tool would be helpful in this particular case, at least at the micro level; I think you'd mostly be benchmarking the scheduler rather than your actual code. At a more macro scale, however - say, benchmarking the processing of 10,000 items - it would probably still be useful.
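For that macro case, a rough sketch of what I mean - the item count, the per-item work, and the async shape are all assumptions for illustration, not anything from this thread; BenchmarkDotNet does support benchmarks that return `Task`:

```csharp
using System.Linq;
using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Hypothetical macro-scale benchmark: measure processing a whole batch,
// so per-task scheduler noise is amortized over 10,000 items.
[MemoryDiagnoser]
public class BatchProcessingBenchmarks
{
    private int[] _items;

    [GlobalSetup]
    public void Setup() => _items = Enumerable.Range(0, 10_000).ToArray();

    [Benchmark]
    public async Task<long> ProcessAllItems()
    {
        long total = 0;
        foreach (var item in _items)
        {
            // Placeholder for the real per-item async work.
            total += await Task.FromResult(item * 2);
        }
        return total;
    }
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<BatchProcessingBenchmarks>();
}
```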