Apple A17 Pro SoC single-core benchmark score close to Intel i9-13900K, AMD 7950X (techspot.com)
38 points by thunderbong on Sept 15, 2023 | 32 comments



Of course. When you buy out all the capacity of the TSMC 3nm node for a year and use it to make huge, low-yield chips and charge a small fortune for them, you can get some performance advantages.

Intel's fabs are catching up, though, and have more money invested in them again. Ideally, we'll see a more competitive market soon.


>use it to make huge, low-yield chips and charge a small fortune for them, you can get some performance advantages.

1. It is not huge, by CPU Core or by SoC Die Space.

2. It is not low-yield either.

3. Comparatively speaking, what Apple's SoC costs them in BOM / COGS terms isn't anywhere near as high as what Qualcomm charges; hardly a small fortune.

4. It performs close but at a much lower wattage. That is not just coming from a node advantage but mostly from microarchitecture.


Do you have a source for these claims?


Which one? All of this is essentially publicly available information or can be directly inferred; it comes from annual reports, investor meetings, court documents, or actual Geekbench numbers. The problem is people read MacRumors or WCCFtech and believe they are informed.


Literally every industry publication and BOM analysis since... the first iPhone?


Another factor is that Apple designs big cores exclusively for consumer apps on iPhones. There is no compromise for server workloads, high clock targets, chips with modest die areas, slow memory, dies without efficiency cores, AI SIMD and so on.

I don't think any other company in the world can financially justify such a specialized big core, except maybe Samsung (who has failed in their ventures so far).


Do they? It seems their system architecture between the A-series for phones and the M-series for their pads/laptops/etc is largely the same. It's certainly the same design facility.


The applications they target are very similar.


Editing 8k video and running a social media app on a phone are vastly different workloads.


But editing 8k video in a social media app on a phone is a similar workload. And really, TikTok’ers are doing stuff like that with their phones these days.


> Editing 8k video

I would hope the GPU would be handling this.

...But honestly, they are not that different. Both are heavy media processing, and you would build the same kind of CPU core for either as opposed to (for instance) a database workload.


Video encoding, if you need the best quality, is still done on the CPU.


Video encoding also favors Apple's wide, SMT-free design.

In theory chunked encoding gives you the best quality, and the best CPU for that is a bunch of little e-cores, but the only app I've really seen implement chunked encoding as a "max quality" feature is av1an.
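
A rough sketch of what chunked encoding looks like (illustrative only: real tools like av1an split at scene changes and tune settings far more carefully; the file names, chunk length, and x264 settings below are placeholders):

    # Chunked encoding sketch: split the source, encode chunks in parallel
    # (one ffmpeg job per core), then losslessly concatenate the results.
    import glob
    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    def split(src, seconds=10):
        # -c copy segmenting cuts only at keyframes, so chunks stay decodable
        subprocess.run(["ffmpeg", "-i", src, "-c", "copy", "-f", "segment",
                        "-segment_time", str(seconds), "chunk_%03d.mp4"], check=True)
        return sorted(glob.glob("chunk_*.mp4"))

    def encode(chunk):
        out = chunk.replace("chunk_", "enc_")
        subprocess.run(["ffmpeg", "-i", chunk, "-c:v", "libx264",
                        "-preset", "slow", "-crf", "18", out], check=True)
        return out

    def concat(parts, dst="final.mp4"):
        with open("parts.txt", "w") as f:
            f.writelines(f"file '{p}'\n" for p in parts)
        subprocess.run(["ffmpeg", "-f", "concat", "-safe", "0",
                        "-i", "parts.txt", "-c", "copy", dst], check=True)

    if __name__ == "__main__":
        chunks = split("input.mp4")
        with ProcessPoolExecutor() as pool:  # keeps a pile of little e-cores busy
            encoded = list(pool.map(encode, chunks))
        concat(encoded)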


https://news.ycombinator.com/item?id=37515625

Yesterday's discussion on this topic.


I'm curious whether such great perf/watt is something the ARM architecture enables, or whether the same is possible on x86. If it's because of the RISC instruction set, does that mean RISC-V could achieve something similar in the future?


I don’t think it’s RISC-the-philosophy per se, but a difference of priorities. x86 historically prioritized clock speed (ultimately culminating in the utter dead end that was the Pentium 4).

Then Intel backtracked to the Core/iN/M series, which essentially reverted to an updated Pentium 3 design, and has focused on raw performance; power draw, heat, and cost be damned.

ARM, on the other hand, focused on performance-per-watt from day one.

My M1 Studio, running fully wide open, draws less than a quarter of what my old i7/3080 beast did at idle.


Is this peak or sustained? I can't imagine an SoC inside a watertight (IP68?) enclosure having better heat management.


Of course it’s peak. The A series are notorious for throttling.


On single-core workloads?


That gives some hope that Moore's law isn't quite dead yet.

That said, the score came from Geekbench, which I believe isn't considered to be very representative of real application performance.


Geekbench is not so bad, unlike Passmark or those SEO-spam benchmark sites you see on Google.

But the best benchmark is always the apps you are CPU constrained in.
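
For example, a minimal sketch of that approach in Python (my_workload here is just a stand-in for whatever you are actually constrained by):

    # Time your own CPU-bound hot path instead of trusting a synthetic score.
    import statistics
    import time

    def my_workload():
        # placeholder for the real work you care about
        return sum(i * i for i in range(10_000_000))

    runs = []
    for _ in range(5):
        start = time.perf_counter()
        my_workload()
        runs.append(time.perf_counter() - start)

    print(f"median {statistics.median(runs):.3f}s over {len(runs)} runs")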


What are some of the criticisms against Passmark scores? I maintain some documentation that cites them as a basis for recommending PCs for specialized applications, so if that's a bad idea it'd be good to know.


There is some controversy over "fixing" the results in the past, as well as weird personal behavior from the owners: https://www.reddit.com/r/buildapc/comments/ykx4yd/i_knew_use...

But the bottom line is you want a benchmark that represents your actual application rather than some unrelated synthetic test. I do like Geekbench more because it's a big bucket of "real" applications without the associated controversy, but it's still less than ideal.


AFAIK, Passmark and UserBenchmark are not the same thing. UBM is the one that's so bad it's a meme.


Ah I think you are right, but still: https://www.reddit.com/r/Amd/comments/fhzn0e/passmark_follow...

I remember controversy from both, but there's a lot of noise when I try to go back and look outside of Reddit (which is not always the most reliable source).


The app that stresses my CPU the most is Geekbench lol


Moore's law has been dramatically outpaced in the last 6 months, are you serious?


Why does it matter in a phone? And please don’t say future-proofing. A lot of Apple's newest software features are only available on the newer phones, even though the older ones could easily handle them.


Online browsing for one. Gaming... If you don't think you'll profit from the performance, don't buy it.


Good question; this is so overkill for most users.


[dupe]




