> Put another way: if AMD (and especially Intel) don't do something about this they're going to get completely eaten alive by ARM.
AMD’s latest parts are actually quite close to M1/M2 in computing efficiency when clocked down to more conservative power targets.
They crank the power consumption of their desktop CPUs deep into the diminishing returns region because benchmarks sell desktop chips. You can go into the BIOS and set a considerably lower TDP limit and barely lose much performance.
Where they struggle is in idle power. The chiplet design has been great for yields but it consumes a lot of baseline power at idle. M1/M2 have extremely efficient integration and can idle at negligible power levels, which is great for laptop battery life.
People keep repeating that Zen4 and M1 are close in efficiency but what is the source with actual benchmarks and power measurements?
At any rate, using single data points to compare energy efficiency isn't a good comparison unless either the performance or the power consumption of those points is comparable. Like, the M1's little cores are another 3-5x more efficient, but they operate in an entirely different power class, and Apple's own marketing graphs show the M1's maximum efficiency sits well below its maximum performance [1]
Those perf/power curves are the basis of actually useful comparisons; has anyone plotted some outside of marketing materials? It might even be possible under Asahi.
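A perf/power comparison like the one described above can be sketched in a few lines, given (power, score) samples at several package-power limits. The numbers below are made-up placeholders for illustration, not real measurements:

```python
# Sketch: turning (watts, benchmark score) samples into an efficiency curve
# and finding each chip's peak perf-per-watt point.

def efficiency_curve(samples):
    """samples: list of (watts, score) pairs -> list of (watts, score/watt)."""
    return [(w, s / w) for w, s in samples]

def peak_efficiency(samples):
    """Return the (watts, score/watt) point with the best perf-per-watt."""
    return max(efficiency_curve(samples), key=lambda p: p[1])

# Hypothetical multi-core scores at several power limits (not measured data):
zen4_like = [(35, 14000), (65, 20000), (105, 24000), (170, 26000)]
m1_like   = [(10, 7700), (20, 12000), (30, 14500)]

print(peak_efficiency(zen4_like))  # -> (35, 400.0)
print(peak_efficiency(m1_like))    # -> (10, 770.0)
```

Note that with curves like these, both chips hit peak efficiency at their lowest measured power point, which is exactly why single-point comparisons at stock limits are misleading.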
Their results are invalid because they used Cinebench. Cinebench uses the Intel Embree engine, which is hand-optimized for x86, not for ARM. In addition, Cinebench is a terrible general-purpose CPU benchmark.[0]
Imagine testing how energy efficient an EV and a gas car are, but only running the test at the North Pole, where the cold makes the EV at least 40% less efficient, and then drawing conclusions from that data alone for every region in the world. That's what using Cinebench to compare Apple Silicon and x86 chips is like.
Cinebench/4D does have "hand-optimized" ARM instructions. It would be a disaster for the actual product if it didn't. That's what makes it interesting as a benchmark: that there's a real commercial product behind it and a company interested in making it as efficient as possible for all customer CPUs, not just benchmarking purposes.
Although for later releases this is less true, since most customers have switched to GPUs...
> Cinebench/4D does have "hand-optimized" ARM instructions.
It doesn't. As far as I know, everything is translated from x86 to ARM instructions, not optimized directly for ARM.
Cinema4D is niche software within a niche. Even Cinema4D users don't typically use the CPU renderer; they use the GPU renderer.
The reason Cinebench became so popular is because AMD and Intel promote it heavily in their marketing to get nerds to buy high core count CPUs that they don't need.
Generally you see this in the lower-tier chips that aren't overclocked to within an inch of instability. It's not uncommon to see a chip that uses 200 W perform only 10% worse at 100 W, or 20% worse at 70 W.
I can’t be bothered to chase down an actual comparison, but usually you’ll see something along those lines if you compare the benchmarks for the top tier chip with a slightly lower tier 65w equivalent.
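Quick arithmetic on those numbers (taken straight from the comment above, treated as illustrative rather than measured) shows how dramatic the perf-per-watt swing is:

```python
# Perf-per-watt at different power limits, using the comment's example
# figures: score 100 at 200 W stock, 10% worse at 100 W, 20% worse at 70 W.

def perf_per_watt(score, watts):
    return score / watts

stock = perf_per_watt(100.0, 200)  # 0.5 points/W
mid   = perf_per_watt(90.0, 100)   # 0.9 points/W
low   = perf_per_watt(80.0, 70)    # ~1.14 points/W

print(mid / stock)  # -> 1.8  (80% more efficient at half the power)
print(low / stock)  # -> ~2.29 (more than double the efficiency)
```

In other words, giving up 10-20% of the score roughly doubles the efficiency, which is what "deep into the diminishing returns region" means in practice.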
It's actually this idle power that defines battery drain for most people. All these benchmarks about how efficiently a chip can run a certain compute-intensive task aren't that important, considering that most of the time a laptop is doing almost nothing.
We just stare at an article in a web browser. We look at a text document. We type a bit in the document. An app is doing an HTTP request. The CPU is doing nothing basically.
Once in a while it has to redraw something, do some intense processing of an image or text, but it takes seconds.
It's the 99% spent idling that counts, and there most laptop CPUs suck.
Even when watching a video the CPU is not (should not be) doing much as there are HW co-processors for MPEG-4 decoding built in.
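A back-of-envelope estimate makes the point concrete. All the numbers here (battery size, power draws, duty cycle) are assumed purely for illustration:

```python
# Why idle power dominates battery life when the CPU is busy only ~1%
# of the time. Inputs are illustrative assumptions, not measurements.

def battery_hours(capacity_wh, idle_w, load_w, load_fraction):
    """Estimated runtime given average power over an idle/load duty cycle."""
    avg_w = load_w * load_fraction + idle_w * (1 - load_fraction)
    return capacity_wh / avg_w

# 60 Wh battery, CPU busy 1% of the time at 30 W under load:
print(battery_hours(60, idle_w=5.0, load_w=30.0, load_fraction=0.01))  # ~11.4 h
print(battery_hours(60, idle_w=1.0, load_w=30.0, load_fraction=0.01))  # ~46.5 h
```

Cutting idle draw from 5 W to 1 W roughly quadruples runtime, while halving load power would barely move the needle at a 1% duty cycle.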
It's quite embarrassing how AMD and Intel have screwed up honestly.
And that's why so far AMD's mobile processors have been monolithic rather than chiplet-based. That is supposed to change with Zen 4's Dragon Range; however, most of the mobile lineup will still be monolithic, and those high-power/high-performance processors should go exclusively into "gaming" notebooks.
I care a lot about idle power, even on my desktop PC. It seems crazy to me that in 2023 I still need to consider whether maybe I should shut down my computer when I'm not using it.
What should I be buying to not have to ask myself that question?