Their job was specifically managing server resource allocation (an IT role, not a dev role) in a completely standardized environment. Most applications were given a standard allotment of resources, and they only got involved if something was running out of RAM, disk access was too slow, or something just seemed to be taking a lot longer than usual. If it looked like a network problem or just a program crash, for example, they were never involved unless troubleshooting pointed back to them. More often than not, I’d get a phone call telling me the system I was working on seemed to be heavy on disk access or something, that they had already allotted it more to keep it stable, but that I should check to make sure we weren’t doing something stupid.
Now that I think of it, I’ll bet a lot of companies have a system similar to this for their infrastructure… they just outsource it to AWS, Azure, Google, etc. and comparatively fly by the seat of their pants on the dev side. You could only scale that system down so much, I imagine.