
> When determining what to use for development of my SaaS, I did a comparison of what you actually get from providers. The full article is at https://jan.rychter.com/enblog/cloud-server-cpu-performance-...

Your results (e.g. that a z1d.xlarge with 4 vCPUs is only 10% slower than a z1d.2xlarge with 8 vCPUs) show that the "performance" you were testing was disk IO throughput (probably dominated by disk latency), not vCPU performance.
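
(A rough way to separate the two effects: a purely CPU-bound toy benchmark should come close to halving its wall-clock time each time the worker count doubles, while an IO-bound one barely moves. A minimal sketch of such a scaling probe in Python, purely hypothetical and not the article's workload:)

    # Fixed amount of pure-integer work, split across a growing pool of workers.
    # Near-linear scaling here, but flat scaling in the real benchmark, points
    # away from the CPUs as the bottleneck.
    import time
    from multiprocessing import Pool

    def burn(n):
        acc = 0
        for i in range(n):
            acc += i * i
        return acc

    if __name__ == "__main__":
        work = [5_000_000] * 32            # same total work for every run
        for workers in (1, 2, 4, 8):
            start = time.perf_counter()
            with Pool(workers) as pool:
                pool.map(burn, work)
            print(f"{workers} workers: {time.perf_counter() - start:.2f}s")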

> My takeaways were that many cloud provider offerings make no sense whatsoever, and that Xeon processors are mostly great if you are a cloud provider and want to offer overbooked "vCPUs".

> I haven't tested those specific setups, but I strongly suspect a dedicated server from OVH is much faster than a 4.16xlarge from AWS.

You seem to be implying that AWS/EC2 does CPU over-provisioning on all instance types; this is incorrect: only the burstable T-family instance types use CPU over-provisioning.
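
(One rough way to check whether the hypervisor is actually taking CPU time away from a Linux guest is the "steal" counter in /proc/stat, which should stay near zero on the non-burstable EC2 families. A minimal sketch, assuming a Linux guest:)

    # The aggregate "cpu" line in /proc/stat reads: cpu user nice system idle
    # iowait irq softirq steal ...  The 9th field (index 8) counts ticks during
    # which the hypervisor ran something else while this guest wanted the CPU.
    import time

    def read_steal():
        with open("/proc/stat") as f:
            fields = f.readline().split()
        return int(fields[8])

    before = read_steal()
    time.sleep(10)
    after = read_steal()
    print(f"steal ticks over 10s: {after - before}")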



> the "performance" you were testing was disk IO throughput

In part, yes, but not entirely. I was very clear that my load isn't embarrassingly parallel, so it is not expected to scale linearly with the number of processors.
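
(A back-of-the-envelope Amdahl's-law check, my arithmetic rather than anything measured in the article: a parallel fraction of roughly one half is already enough to turn a doubling from 4 to 8 vCPUs into a ~10% gain.)

    # Amdahl's law: speedup on n cores for a workload with parallel fraction p.
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for p in (0.25, 0.50, 0.75, 0.95):
        ratio = speedup(p, 8) / speedup(p, 4)
        print(f"p = {p:.2f}: 4 -> 8 vCPUs gives {100 * (ratio - 1):.0f}% improvement")
    # p = 0.50 prints about 11%, so a load that is not embarrassingly parallel
    # can explain the flat scaling without disk IO being the whole story.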

> You seem to be implying that AWS/EC2 does CPU over-provisioning on all instance types; this is incorrect, only T-family instance types use CPU over-provisioning.

If you think you are getting a Xeon core when paying for a "vCPU" at AWS, I have a bridge to sell you.
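
(For what it's worth: on most current EC2 instance types a "vCPU" is one hyperthread of a physical core, not a whole core, and you can see the pairing from inside a Linux guest. A minimal sketch:)

    # Each cpuN lists the logical CPUs that share its physical core; two entries
    # on a line means that vCPU is one hyperthread of a hyperthreaded core.
    import glob

    paths = glob.glob("/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list")
    for path in sorted(paths):
        cpu = path.split("/")[5]          # e.g. "cpu0"
        with open(path) as f:
            print(cpu, "shares a core with:", f.read().strip())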



