
Looking at power use directly and making some educated guesses about average FLOPs/watt is probably the most effective way to estimate aggregate compute.

Even at Amazon I wouldn't be surprised if that's the primary way they do it, and I'd be interested in any research on the question. I'm trying to think of other approaches, and aggregating CPU/GPU load rigorously at that scale seems virtually impossible.

And yes, as an outsider you might have trouble knowing the relative distribution of ARM vs. x86, but that's just another number you'd want to pin down to improve your estimate.
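
For concreteness, here's a back-of-envelope sketch of that estimate in Python. Every number in it (facility power, PUE, the ARM/x86 split, and the FLOPs/watt figures) is a made-up placeholder, not an Amazon figure; you'd substitute your own guesses:

    # Back-of-envelope estimate of aggregate compute from power draw.
    # All constants below are illustrative assumptions.
    
    facility_power_mw = 100.0      # assumed total facility draw (MW)
    pue = 1.2                      # assumed power usage effectiveness
    it_power_w = facility_power_mw * 1e6 / pue  # watts reaching IT gear
    
    arch_mix = {"arm": 0.3, "x86": 0.7}          # assumed fleet split
    flops_per_watt = {"arm": 40e9, "x86": 25e9}  # assumed averages
    
    aggregate_flops = sum(
        it_power_w * share * flops_per_watt[arch]
        for arch, share in arch_mix.items()
    )
    print(f"~{aggregate_flops / 1e18:.1f} EFLOPS")

Getting the ARM/x86 split and the average FLOPs/watt right is where all the uncertainty lives; the arithmetic itself is trivial.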


