Of course. When you buy out all the capacity of the TSMC 3nm node for a year and use it to make huge, low-yield chips and charge a small fortune for them, you can get some performance advantages.
Intel's fabs are catching up, though, and have more money invested in them again. Ideally, we'll see a more competitive market soon.
Which one? All of this is nearly public information or can be directly inferred; it comes from annual reports, investor meetings, court documents, or actual Geekbench numbers. The problem is people read MacRumors or WCCFtech and believe they're informed.
Another factor is that Apple designs its big cores exclusively for consumer apps on iPhones. There are no compromises for server workloads, high clock targets, chips with modest die areas, slow memory, dies without efficiency cores, AI SIMD extensions, and so on.
I don't think any other company in the world can financially justify such a specialized big core, except maybe Samsung (whose attempts have failed so far).
Do they? The system architecture seems largely shared between the A-series for phones and the M-series for their pads/laptops/etc. It's certainly the same design facility.
But editing 8K video in a social media app on a phone is a similar workload. And really, TikTokers are doing stuff like that with their phones these days.
...But honestly, they are not that different. Both are heavy media processing, and you would build the same kind of CPU core for either as opposed to (for instance) a database workload.
Video encoding also favors Apple's wide, SMT-free design.
In theory, chunked encoding gives you the best quality, and the best CPU for that is a bunch of little e-cores, but the only app I've really seen implement chunked encoding as a "max quality" feature is av1an.
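(For anyone curious what the idea looks like, here's a minimal sketch, not av1an's actual implementation: it assumes ffmpeg and ffprobe are on PATH, splits at fixed intervals instead of real scene cuts, and encodes chunks in parallel so every little core stays busy.)

    # Minimal chunked-encoding sketch: split a video into fixed-length
    # chunks, encode each in parallel with libaom-av1, then concatenate.
    # Assumes ffmpeg/ffprobe on PATH; av1an splits on scene cuts instead
    # of fixed intervals, which matters for quality at chunk boundaries.
    import math
    import subprocess
    import tempfile
    from concurrent.futures import ProcessPoolExecutor
    from pathlib import Path

    CHUNK_SECONDS = 10  # arbitrary; real tools detect scene cuts

    def duration(src: str) -> float:
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-show_entries", "format=duration",
             "-of", "default=noprint_wrappers=1:nokey=1", src],
            capture_output=True, text=True, check=True)
        return float(out.stdout)

    def encode_chunk(src: str, start: float, dst: Path) -> Path:
        subprocess.run(
            ["ffmpeg", "-y", "-ss", str(start), "-t", str(CHUNK_SECONDS),
             "-i", src, "-an",  # audio would be handled separately
             "-c:v", "libaom-av1", "-crf", "30", "-cpu-used", "4", str(dst)],
            check=True)
        return dst

    def chunked_encode(src: str, out: str) -> None:
        n = math.ceil(duration(src) / CHUNK_SECONDS)
        with tempfile.TemporaryDirectory() as tmp:
            tmpdir = Path(tmp)
            # One encoder process per chunk keeps every core busy,
            # which is exactly the shape a pile of e-cores is good at.
            with ProcessPoolExecutor() as pool:
                futs = [pool.submit(encode_chunk, src, i * CHUNK_SECONDS,
                                    tmpdir / f"chunk{i:04d}.mkv")
                        for i in range(n)]
                chunks = [f.result() for f in futs]
            lst = tmpdir / "list.txt"
            lst.write_text("".join(f"file '{c}'\n" for c in chunks))
            subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                            "-i", str(lst), "-c", "copy", out], check=True)

    if __name__ == "__main__":
        chunked_encode("input.mp4", "output.mkv")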
I'm curious whether such great perf/watt is something the ARM architecture enables, or whether the same is possible on x86. If it's because of the RISC instruction set, does that mean RISC-V could achieve something similar in the future?
I don’t think it’s RISC-the-philosophy per se, but a difference of priorities. x86 historically prioritized clock speed (ultimately culminating in the utter dead end that was the Pentium 4).
Then Intel backtracked to the Core/iN/M series, which essentially reverted to an updated Pentium 3 design, and has since focused on raw performance; power draw, heat, and cost be damned.
ARM, on the other hand, focused on performance-per-watt from day one.
My M1 Studio, running fully wide open, draws less than a quarter of what my old i7/3080 beast did at idle.
What are some of the criticisms against Passmark scores? I maintain some documentation that cites them as a basis for recommending PCs for specialized applications, so if that's a bad idea it'd be good to know.
But the bottom line is you want a benchmark that represents your actual application rather than some unrelated synthetic test. I do like Geekbench more because it's a big bucket of "real" applications without the associated controversy, but it's still less than ideal.
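(If it helps, this is roughly what I mean, a toy sketch rather than a real harness: process_batch is a hypothetical stand-in for whatever your specialized application actually does. Run it on the candidate machines and compare, rather than comparing Passmark numbers.)

    # Toy sketch: benchmark your actual workload, not a synthetic score.
    # process_batch() is a hypothetical stand-in; swap in the real hot
    # path of the application you're speccing hardware for.
    import statistics
    import time

    def process_batch() -> None:
        # Placeholder workload (parsing, encoding, queries, whatever).
        sum(i * i for i in range(1_000_000))

    def bench(fn, runs: int = 10) -> None:
        fn()  # warm-up run so cold caches don't skew the first sample
        samples = []
        for _ in range(runs):
            t0 = time.perf_counter()
            fn()
            samples.append(time.perf_counter() - t0)
        print(f"median {statistics.median(samples) * 1e3:.1f} ms, "
              f"min {min(samples) * 1e3:.1f} ms over {runs} runs")

    if __name__ == "__main__":
        bench(process_batch)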
I remember controversy around both, but there's a lot of noise when I try to go back and look outside of Reddit (which is not always the most reliable source).
Why does it matter in a phone? And please don't say future-proofing. A lot of Apple's newest software features are only available on the newer phones, even though the older ones could easily handle them.