Microsoft's video accidentally reveals that they have only ~200k servers (istartedsomething.com)
14 points by gaika on Aug 15, 2008 | hide | past | favorite | 12 comments


Here's an interesting statistic: based on 148,357 servers consuming 72,500 kW of power, Microsoft's average server draws about 488 W -- so clearly Microsoft is not following Google's approach of using smaller servers that each draw 200 W or less.
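
For anyone who wants to check the arithmetic, here's a quick Python sketch (it assumes every watt of the 72,500 kW feeds a server, which is almost certainly an overestimate if the utility figure includes cooling and other overhead):

    # back-of-the-envelope: average draw per server, assuming all
    # utility power goes to the servers themselves
    total_power_kw = 72500
    servers = 148357
    watts_per_server = total_power_kw * 1000.0 / servers
    print(int(watts_per_server))   # 488, the figure quoted above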

Is the Windows kernel better at using large boxes than the Linux kernel? Is Microsoft worse at managing clusters than Google? I'm sure there's some reason why Microsoft uses fewer, more powerful servers while Google uses more, less powerful ones.


Google builds, Microsoft (as far as anyone knows) buys. It could be that Google simply can't economically source the components and/or make them work well enough to use fewer, larger servers. Or it could be that Google prefers to rip out an entire server when it fails rather than repairing it in place, so they want each one as small and cheap as possible and don't really care about compute power per server.


It's not clear from the video, but the utility power number probably includes environmental stuff like cooling and lights. Just dividing one number by the other isn't going to give you the size of the power supply for each machine.
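
As a rough illustration of how much that matters -- the PUE values here (total facility power divided by IT equipment power) are just assumptions for the sake of the example, not numbers from the video:

    # hypothetical: back out average IT power per server for a few
    # assumed facility overheads (PUE = facility power / IT power)
    total_power_w = 72500 * 1000.0
    servers = 148357
    for pue in (1.0, 1.5, 2.0):
        per_server = total_power_w / pue / servers
        print("PUE %.1f -> ~%d W per server" % (pue, per_server))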


Environmental control is one of the largest power sucks of a data center. It's part of why data center space is so expensive per square foot.


I wouldn't make any assumptions about Google's servers. Public info is generally some mix of inaccurate and out-of-date.


Public info is generally some mix of inaccurate and out-of-date.

I was working from this paper written by Google researchers in June 2007, which refers to one of Google's "typical" servers as having a theoretical peak power of 213 W and an actual peak power of 145 W:

http://research.google.com/archive/power_provisioning.pdf

While I agree that in general information about Google's systems is a mix of inaccurate and out-of-date, I think it's probably safe to trust numbers given in research papers published by Google... especially when the numbers deal with how much power is being used, and the topic of the paper is power distribution within datacenters.


I think you're reading too much into the word "typical". It might be better understood as "example" or "typical of what is often found in a datacenter". Notice that all of the actual measured data in that paper is normalized to a value between 0 and 1, not given in watts.


145W is the heat generated by the box, including the work necessary to get the heat out of the box.

How much power does it take to get that heat out of the building?

I ask because the utility company numbers include both, while the "machine power" numbers only include the former.


Or you could probably trust Paul, who has seemingly worked on Google's servers since just about the beginning...


They can get away with using fewer servers because those servers do less work. Microsoft's online operations are a fraction of the size of Google's.

It's unlikely any OS is significantly better at resource management than any other. OS design books are quite old by now. I remember reading Tanenbaum's MINIX book while in college, somewhat more than 20 years ago. And during those 20-something years, desktop computers and x86 servers have evolved surprisingly little. Quite frankly, compared to what RISC promised in the late 80s and early 90s, it's quite disappointing.


That's most likely the power requirement for the datacenter as a whole, which would include things like HVAC, lighting, network equipment such as switches and routers that don't appear in the "server" count, losses from power redundancy features (flywheels and batteries aren't free to operate), etc...


Does that mean Google still has fewer than 1M?




