
What makes you think ARM is the clear future? It's easier to do a wide decoder thanks to ARMv8's fixed-length instructions, but that's more or less the only difference that results from the ISA. And only Apple is even taking advantage of that currently - all the ARM CPU designs are 4-wide decode. Same as pretty much every modern x86 CPU.
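To make the decode point concrete, here's a toy C sketch (purely illustrative, not a real decoder; length_of is a hypothetical stand-in for a length pre-decoder):

    /* Toy sketch: why fixed-length instructions make wide decode easy. */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* ARMv8-style: every instruction is exactly 4 bytes, so 8 decode
       slots can each read bytes [i*4, i*4+4) of the fetch window
       independently, conceptually all in parallel. */
    void decode_fixed8(const uint8_t *fetch, uint32_t out[8]) {
        for (int i = 0; i < 8; i++)
            memcpy(&out[i], fetch + i * 4, 4);
    }

    /* x86-style: an instruction is 1-15 bytes and its length is only
       known after partially decoding it, so slot N can't start until
       the lengths of instructions 0..N-1 are known, a serial dependency
       that makes going much wider than 4 slots expensive. */
    size_t decode_variable(const uint8_t *fetch, size_t n,
                           size_t (*length_of)(const uint8_t *)) {
        size_t off = 0;
        for (size_t i = 0; i < n; i++)
            off += length_of(fetch + off); /* waits on the previous step */
        return off;
    }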

Do you think Amazon will be better at design & fabrication than Intel (when they get their shit together) or AMD? Do you expect Amazon to actually throw Apple's level of R&D money & acquisitions at the problem? And when/if they do, why would you expect any meaningful difference at the end of the day on anything? People love to talk about supposed ARM power efficiency, but the 64-core Epyc Rome at 200W is around 3W per CPU core, which is half the power draw of an M1 Firestorm core. The M1's Firestorm is also faster in such a comparison, but the point is power isn't a magical ARM advantage, and x86 isn't inherently some crazy power hog.
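Quick back-of-the-envelope, using only the numbers above (the M1 per-core figure is inferred from "half the power draw", not independently measured):

    #include <stdio.h>

    int main(void) {
        /* Numbers from the paragraph above; the M1 per-core figure is
           an inference from "half the power draw", not a measurement. */
        double epyc_tdp = 200.0, epyc_cores = 64.0;
        double w_per_core = epyc_tdp / epyc_cores;   /* ~3.1 W per core */
        printf("Epyc Rome: %.1f W per core\n", w_per_core);
        printf("Implied M1 Firestorm: ~%.0f W per core\n", 2.0 * w_per_core);
        return 0;
    }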

Phoronix put Graviton2 to the test vs. Epyc Rome and the results ain't pretty: https://www.phoronix.com/scan.php?page=article&item=epyc-vs-...

"When taking the geometric mean of all these benchmarks, the EPYC 7742 without SMT enabled was about 46% faster than the Graviton2 bare metal performance. The EPYC 7742 with SMT (128 threads) increased the lead to about 51%, due to not all of the benchmarks being multi-thread focused."

Graviton2 being a bit cheaper doesn't mean much when you need 50% more instances.

But right now there's only a single company that makes ARM look good and that's Apple. And Apple hasn't been in the server game for a long, long time now. Everyone else's ARM CPU cores pretty much suck. Maybe Nvidia's recent acquisition will change things, who knows. But at the end of the day if AMD keeps things up, or Intel gets back on track, there really doesn't look to be a bright future for ARM outside of Apple's ecosystem and the existing ARM markets.



I think the M1 will wake Amazon up (if they aren't already) to the incredible advantage of being able to customize a CPU to your needs. See the discussion about speed gains Apple got from accelerating common calls in their OS. You will never convince a bin part/least common denominator manufacturer like Intel or AMD to do that.

Just because other ARM cores suck today doesn't mean they have to forever. Apple's don't. They took it seriously. Perhaps Amazon is too and we are just at the start of their journey. It took Apple over 10 years to get to where we are with the M1.

You cited one of the significant contributors to performance - the 8-wide decode. x86 is hamstrung to 4-wide because of legacy. We aren't at the beginning of the story with ARM for performance, but ARM certainly isn't nearly as hamstrung out of the gate by legacy as x86 is, either.

Heck, is there anyone making a pure 64-bit x64 chip? There's a bunch of overhead right there.

You're right that ARM isn't magical - but ARM does have the potential for significantly more runway and headroom. The trade-off is backwards compatibility. As most code continues to move further and further away from direct hardware interaction, is that backwards compatibility as valuable overall as it was 20 years ago? 10 years ago?

I guess we will find out :)


> See the discussion about speed gains Apple got from accelerating common calls in their OS. You will never convince a bin part/least common denominator manufacturer like Intel or AMD to do that.

Of course you will, because that happens all the time. How do you think we ended up with VT-x and friends? Intel took a use case that reached enough usage and added specialized instructions for it. This has happened a ton over the years on x86. See also AES-NI for a more application-specific addition, not to mention the huge amount of SIMD experimentation.
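To make that concrete, here's a minimal AES-NI sketch in C: one intrinsic maps to one instruction that performs a full AES round (the block and key values here are just placeholders):

    /* Minimal AES-NI sketch: one full AES encryption round
       (SubBytes, ShiftRows, MixColumns, AddRoundKey) in a single
       instruction. Build with: gcc -maes aesni.c */
    #include <wmmintrin.h>   /* AES-NI intrinsics */
    #include <stdio.h>

    int main(void) {
        unsigned char block[16] = "0123456789abcde";  /* placeholder data */
        unsigned char rkey[16]  = "fedcba987654321";  /* placeholder key  */
        __m128i state = _mm_loadu_si128((const __m128i *)block);
        __m128i rk    = _mm_loadu_si128((const __m128i *)rkey);
        state = _mm_aesenc_si128(state, rk);   /* one AES round */
        _mm_storeu_si128((__m128i *)block, state);
        for (int i = 0; i < 16; i++) printf("%02x", block[i]);
        printf("\n");
        return 0;
    }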

This is not fruit Apple discovered. AMD & Intel haven't been leaving this performance on the table for the last 20+ years. Hell, the constant addition of instructions for certain workloads is a major reason x86 is as huge & complex as it is.


> I think the M1 will wake Amazon up (if they aren't already) to the incredible advantage of being able to customize a CPU to your needs. See the discussion about speed gains Apple got from accelerating common calls in their OS. You will never convince a bin part/least common denominator manufacturer like Intel or AMD to do that.

AMD has a semi-custom division that will do exactly that for anyone capable of paying: https://www.amd.com/en/products/semi-custom-solutions


AWS Graviton is not competing with the top-of-the-range Intel or AMD CPUs. With Graviton, AWS targets mid-range EC2 instances that can now be had 30% cheaper, with the same or better performance at a lower power consumption - a win for the customer and a cheaper power bill for AWS. Graviton might also already be used to power serverless and fully managed AWS offerings (Lambda, Aurora, MSK, etc.) - something we might never even know about unless AWS tells us. Any cloud product that does not require direct shell access can be transparently migrated to run on any ISA, not just ARM. I think we will also start seeing more AMD cloud offerings in the top performance tier soon, because the AMD CPUs are so good.

In terms of a viable contender to the Intel and ARM incumbents, I can only realistically think of the POWER architecture. Google was experimenting with POWER8 designs a few years back with a view to using POWER-based blades for GCP - similar to what AWS has done with Graviton. There has been no further news since then, so it is unknown whether (or when) we will have POWER-powered compute instances in GCP. POWER is the only other mature architecture with plenty of experience and expertise available out there (compilers, toolchains, virtual machines, etc.).

Whether RISC-V will become the next big thing is yet to be seen, with ISA fragmentation being the main obstacle. The 64-bit RISC-V ISA has yet to hit the v1.0 milestone, so we won't know for another few years - unless a new strong force appears on stage to push the RISC-V architecture.


Graviton2 isn't low power. It's still a 105W SoC. And since you need more of them, that's more motherboards, more RAM, more drives, more networking, and more power supplies, all of which has both up-front costs and ongoing power costs.

It's only being positioned as a lower cost mid tier offering because it's uncompetitive otherwise. It's almost certainly not even cheaper to make. The monolithic design will be more expensive than AMD's chiplets. Cheaper for Amazon maybe, as obviously AMD is taking a profitable slice, but that's a slice that can easily be adjusted should Graviton start looking stronger at some point.


Not low power, but lower power. 110W (Graviton2 64 physical cores) vs 180W (AMD EPYC 7571, 32 physical cores / 64 HT cores) vs 210W (Intel Xeon Platinum 8259CL); source: https://www.anandtech.com/show/15578/cloud-clash-amazon-grav... Also, please do remember that Graviton2 is a 2018 design, and EPYC Rome is a 2019 design.

At the cloud data centre scale, the difference of 110W (Graviton2) vs 180W (AMD) is substantial as bills pile up quickly.
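Rough math on that 70W-per-socket gap (the $0.10/kWh electricity rate is my assumption, and PUE overhead is ignored):

    #include <stdio.h>

    int main(void) {
        /* Socket TDP gap from the figures above; the electricity rate
           is an assumed industrial price, and PUE overhead is ignored. */
        double delta_w = 180.0 - 110.0;
        double kwh_per_year = delta_w * 24.0 * 365.0 / 1000.0; /* ~613 kWh */
        double usd_per_kwh = 0.10;
        printf("%.0f kWh, about $%.0f per socket per year\n",
               kwh_per_year, kwh_per_year * usd_per_kwh);
        return 0;
    }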

I am not sure what your point is, anyway. As a business customer, if it is going to cost me 30% less money to run the same workload regardless of the ISA, I will take it. A lower power bill for the cloud provider that results from a more efficient ISA is a mere comforting thought, no more, no less (to an extent). Philosophically and ethically, yes, I would rather run my workload on an ISA of my choice, but we can't have that anymore, and Intel is to blame for why. Not that I personally have any intention of blaming anyone.


> 180W (AMD EPYC 7571, 32 physical cores / 64 HT cores)

Wrong CPU, that's the old Zen1 14nm Epyc. The one that Graviton2 is going up against is the 64-core Epyc 7742 (or any of the other Zen 2 7nm Epycs).

And you can't call Graviton2 110W when you need 50% more of them vs. everyone else, and you can't ignore the power from the rest of the system. You need 50% more machines. That's going to be equivalent if not more total power usage for Graviton2 than Epyc 7742 for equivalent compute performance. Baseline power usage of a server is fairly high. It's not the rounding error it is on a laptop or even desktop.
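A rough sketch of that, assuming a purely hypothetical ~150W of per-box platform power (boards, RAM, drives, NICs, PSU losses) plus the Epyc 7742's 225W TDP:

    #include <stdio.h>

    int main(void) {
        /* The 150 W platform baseline is an assumption; 225 W is the
           Epyc 7742's nominal TDP; 110 W and the 1.5x machine count
           come from this thread. */
        double platform_w = 150.0;
        double graviton2_fleet = 1.5 * (110.0 + platform_w); /* 390 W */
        double epyc_7742_box   = 1.0 * (225.0 + platform_w); /* 375 W */
        printf("Graviton2 (1.5 machines): %.0f W\n", graviton2_fleet);
        printf("Epyc 7742 (1 machine):    %.0f W\n", epyc_7742_box);
        return 0;
    }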

EDIT: also, as far as comforting thoughts go, manufacturing 50% more machines is vastly more environmentally impactful than the power difference.

> I am not sure what your point is, anyway. As a business customer, if it is going to cost me 30% less money to run the same workload regardless of the ISA, I will take it.

I'm saying that 30% less cost is a temporary fantasy, since all signs point to the Graviton2 being more expensive to manufacture & deploy vs. the competition. If you're not coupled to an ISA, sure, take it while it lasts. Why not be subsidized by Amazon for a bit? But if you're talking long-term trends, which we are, it's not a pretty picture. The pace of improvement isn't compelling, either. The CPU core in the N1 is basically a Cortex A76. ARM claims a 20% increase in IPC going from an A76 to an A77. Not bad. But AMD just delivered a 20% IPC increase going from Zen 2 to Zen 3, too. So... the gap ain't shrinking.

Besides, 30% less per instance when you need 50% more instances doesn't work out either; that ends up being more expensive overall. Depending on your workload it might be even less close: in things like PostgreSQL & OpenSSL, Graviton2 gets absolutely slaughtered.
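The arithmetic, using the thread's own figures:

    #include <stdio.h>

    int main(void) {
        /* ~30% lower per-instance price, ~50% more instances for the
           same total throughput (both figures from this thread). */
        double price_ratio = 0.70;
        double count_ratio = 1.50;
        printf("relative spend: %.2f\n", price_ratio * count_ratio);
        /* 1.05, i.e. ~5% MORE total spend, before workload-specific
           gaps (PostgreSQL, OpenSSL) widen it further. */
        return 0;
    }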



