Not OP, but I found this benchmark of Whisper large-v3 interesting [1]. It includes the cloud provider's pricing per GPU, so you can directly calculate the break-even point against buying your own hardware.
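The break-even arithmetic is simple; here is a minimal sketch where every number is a placeholder (not taken from the linked benchmark), to be replaced with the GPU price, cloud rate, and power cost that apply to you:

```python
# All figures below are hypothetical assumptions, not from the benchmark.
gpu_purchase_price = 1600.00   # upfront cost of the GPU, USD
cloud_rate_per_hr  = 0.35      # cloud provider's hourly GPU price, USD
power_draw_kw      = 0.35      # local GPU draw under load, kW
electricity_kwh    = 0.15      # local electricity price, USD per kWh

# Owning the card still costs electricity, so the saving per hour is the
# cloud rate minus the local running cost.
local_cost_per_hr = power_draw_kw * electricity_kwh
savings_per_hr = cloud_rate_per_hr - local_cost_per_hr

break_even_hours = gpu_purchase_price / savings_per_hr
print(f"break-even after {break_even_hours:,.0f} GPU-hours "
      f"(~{break_even_hours / 24:,.0f} days of 24/7 use)")
```

With these placeholder numbers the card pays for itself after a few thousand GPU-hours of sustained load; at low utilization the cloud stays cheaper for a long time.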
Of course, if you use different models, do training, fine-tuning, etc., the benchmarks will differ depending on RAM, FP8 support, and so on.
[1] https://blog.salad.com/whisper-large-v3/