I don't think the terminal-based multi-user time-sharing model maps poorly to servers, at the very least. If anything, if you had to pick one model for the general class of servers to run under, it's probably the most versatile. Sure, supercomputers and data centers may stand to benefit from a model that strips out the multi-user features, but in architectures where each server acts at least semi-autonomously (i.e. not under the control of what is essentially a distributed operating system such as YARN, SLURM, etc.), I think you'd struggle to come up with a better model. That shouldn't be surprising, since this is basically the exact use case UNIX was built for.