> I've seen plenty of works rejected because "why train a small model when you can just tune a large one?" or "does this scale?" I'd also argue that this is important because there's not infinite data nor compute.
Welcome to cloud world, where devs believe that compute is in fact infinite, so why bother profiling and improving your code? You can just request more cores and memory, and the magic K8s box will dutifully spawn more instances for you.