I honestly think that audio quality is consistently poor on Apple stuff and they try to patch over it with EQ boosts and marketing.
AirPods are better than EarPods, but they're still in the $25 earphone range.
AirPods Pro are nice, but you can get wired IEMs for <$100 that sound just as good. At the AirPods Pro's own price point you can get the Pinnacle P1 ($200) or ER4XR ($250), which dump all over them. I use AirPods Pro daily, not for quality, but for convenience.
HomePod is probably the biggest disappointment I've ever heard. $300 and it sounds like a plastic box, despite "computational audio".
At $550, you're solidly in the headphone big leagues. Beyer DT770 or DT990 are close to perfect and they're <$200. Beyond that point you're hitting diminishing returns in audio quality; doubling the price gets you tiny marginal improvements.
I'm eager to hear the AirPods Max, but I can't imagine them outperforming DT990s despite costing twice as much.
(Yes, none of these options have Bluetooth or ANC. Get an ES100 for BT. If noise is a problem, get IEMs or AirPods Pro.)
When it comes to audio quality and red wine I am completely unable to appreciate the quality differences over a certain minimum threshold, which is quite low (at least as measured by price). Maybe I would be able to tell the medium-level stuff from the very high end if I was allowed to compare them to each other in an undisturbed environment, but if I then had to guess which was the more expensive, I would likely get it wrong a good chunk of the time. I usually tell people that my senses are just not acute enough, but secretly I suspect that most people who claim to be able to appreciate the differences that I can't really aren't able to either, and are at best just experiencing the placebo effect. The fact that you report being disappointed by the audio quality of the HomePod, a product that a lot of reviewers praised first and foremost for its audio quality, does nothing to relieve me of this suspicion.
Who knows whether I'm right? What I am quite certain of is that I'm probably not unique in being unable to appreciate the small differences in quality that cost the most. Which is to say that for me and a lot of other people, it seems pointless to get hung up on whether one pair of very good headphones sounds marginally better than another pair of very good headphones. Unless you're working in a studio as a sound producer, at a certain point audio quality simply ceases to be the most important attribute of a pair of headphones. I don't know if it's true that you can get a pair of wired IEMs that sound better than the AirPods Pro, but even if it is, it's irrelevant, because the AirPods Pro sound fine to most people who buy them.
Through the years I've owned several different brands of headphones in the $100–$150 price range that I bought because reviewers claimed they were extraordinarily good value for the money when it came to sound quality. And maybe they were, but I've hated most of them for getting almost everything else wrong: the wrong cable length, wrong placement of the microphone and buttons, and just horrible build quality.
I use the AirPods Pro not because they sound the best, but because they sound good enough and the convenience factors make them worthwhile.
I dump on the HomePod because it lacks any convenience factors that are meaningful to me [1] and it doesn't even serve the "speaker that sounds good" purpose [2].
So I really want to know what the value prop for AirPods Max is. They're not convenient or useful for travel because they're too big. They're not "best audio quality", because that's been done at lower price points. Spatial audio and ANC? Already solved, better, by AirPods Pro, at half the price. They're not even usable for critical listening or gaming because of Bluetooth.
So what are they for? Fashion? (Nothing wrong with that, but I'm sure as hell not going to spend $550 for it.)
[1] Siri can't understand me and it false triggers constantly.
[2] It wasn't even like, "hey, it's good but there are better speakers". They sounded like a cheap plastic box. They were better than my $100 Google Home. They're worse than the $120 soundbar I put on a TV. It's a low bar.
Have you tried the HomePod in a stereo pair? To me it makes a huge difference, to the point that I assume they use completely different tuning profiles when running in stereo mode, presumably targeting a different audience than a single HomePod.
You're going to continually misunderstand the market if you think $200 wired headphones with a line running from your head into a $100 ES100 Bluetooth receiver clipped to your belt is an analogue for the wireless cans people want.
I do misunderstand. Please educate me! What makes these better than other headphones or the AirPods Pro, which are looking downright good value right now?
Besides the additional cores on the part of the CPUs and GPU, one main performance factor of the M1 that differs from the A14 is the fact that it's running on a 128-bit memory bus rather than the mobile 64-bit bus. Across 8x 16-bit memory channels and at LPDDR4X-4266-class memory, this means the M1 hits a peak of 68.25GB/s memory bandwidth.
Later in the article:
Most importantly, memory copies land in at 60 to 62GB/s depending if you’re using scalar or vector instructions. The fact that a single Firestorm core can almost saturate the memory controllers is astounding and something we’ve never seen in a design before.
Anandtech is comparing M1 vs A14. It's high performance for a cellphone part.
Dual-channel DDR3L or DDR4 also has a 128-bit bus. DDR4-4266 is clocked on the high side for most laptops, sure, but it's hardly unusual.
Run the numbers and you get the exact same throughput figure as for M1, which isn't surprising, because we're just taking width * rate = throughput.
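For anyone who wants to check that, here's the width x rate arithmetic spelled out (a quick sketch using only the figures already quoted in this thread):

    // Peak bandwidth = bus width (bytes) * transfer rate (MT/s).
    let busWidthBytes = 128.0 / 8.0   // 128-bit bus = 16 bytes per transfer
    let transferRate  = 4266.0e6      // LPDDR4X-4266 / DDR4-4266: 4266 MT/s
    let peakGBps = busWidthBytes * transferRate / 1e9
    print(peakGBps)                   // ~68.3 GB/s, the same figure quoted for the M1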
So I'll repeat my assertion, downvotes be damned: the memory on the M1 is not special. The packaging and interconnect is interesting. It might reduce latency a little; it probably reduces power consumption a lot. But there's nothing special about it. The computer you're on right now probably has the same memory subsystem with different packaging.
> Anandtech is comparing M1 vs A14. It's high performance for a cellphone part.
That’s where they started, but their conclusion was beyond that.
Did you miss the part where they said "the fact that a single Firestorm core can almost saturate the memory controllers is astounding and something we’ve never seen in a design before"?
This isn’t only about A14 vs M1.
It’s not that LPDDR4X-4266-class memory is special; it’s been around for a while. What is special is that the RAM is part of the SoC package and, due to the unified memory model, the CPU, GPU, Neural Engine and the other units all have very fast access to the same memory.
This is common for tablets and smartphones; it’s not common for general purpose laptops and desktops. And while Intel and AMD have added more functionality to their processors, they don’t have everything that’s part of the M1 system on a chip:
* Image Signal Processor (ISP)
* Digital Signal Processor (DSP)
* 16 core Neural Processing Unit (NPU)
* Video encoder/decoder
* Secure Enclave
There’s no other desktop like the M1 Mac mini that combines all of these features at this level of performance for $699.
I don't think that's notable, sorry. I would expect that of any modern CPU.
> it’s not common for general purpose laptops and desktops
Well, yeah, because "memory on package" has major disadvantages. You (laptop/desktop manufacturer) are making minor gains in performance and power and need to buy a CPU which doesn't exist. Apple can do it, but they were already doing it for iPhone, and they must do it for iPhone to meet space constraints.
I think unified memory is the right way to go, long term, and that's a meaningful improvement. But as you point out, there is plenty of prior work there.
> they don’t have everything that’s part of the M1 system on a chip
They actually do! The 'CPU' part of an Intel CPU is vanishingly small these days. Most area is taken up with cache, GPU and hardware accelerators, such as... hardware video encode and decode, image processing, security and NN acceleration.
Most high-end Android cellphone SoCs have the same blocks. NVIDIA's SoCs have been shipping the same hardware blocks, with the same unified memory architecture, for at least four years. They all boot Ubuntu and give a desktop-like experience on a modern ARM ISA.
> There’s no other desktop ... at the price point of $699
You can't do what you do on a desktop on a laptop, not even a good one
Who cares if an M1 consumes less energy than a candle if I can buy 64GB of DDR4 3600 for 250 bucks and render the VFX for a 2-hour movie in 4k?
Another 300 bucks buys me a second GPU
When I deliver the job I put aside another 300 bucks and buy a third GPU
Or a better CPU
Vertical products are an absolute waste of money when you chase the last bit of performance to save time (for you and your clients) and don't have the budget of Elon Musk
The M1 changes nothing in that space
Which is also a very lucrative space where every hour saved is an hour billed doing a new job instead of waiting to finish the last one to get paid
You can't mount your old gear on a rack and use it as a rendering node, plus you're paying for things you don't need: design, thermal constraints, a very expensive panel (a very good one, but still attached to the laptop body, and small)
So no, M1 is not comparable to a Threadripper, it's not even close, even if the Threadripper consumes a lot more energy
When I see the same performance and freedom to upgrade in 20W chips, I'll be the first one to buy them!
Then there's the remaining 92% (actually 92.4%) of the market that is not using an Apple computer and will keep buying non-Apple hardware
Even if Apple doubled their market share, it would still be 15% vs 85%
How people on HN don't realise that 90 is much bigger than 10, and that a new laptop isn't going to overturn that situation in a month, is beyond me
And does it really matter to have a faster car if you can't use it to go camping with your family because space is limited?
That's what an Apple gives you, but it's not even a Ferrari, it's more like an Alfa Duetto
It's not expensive if you compare it to similar offers in the same category with the same constraints (which are artificially imposed on Macs as if there were no other way to use a computer...)
But if you compare it to the vast number of better configurations that the same money can buy, it is
>You can't do what you do on a desktop on a laptop, not even a good one
Yeah… no, those days are over. The reviews clearly show the M1 Macs, including the MacBook Pro, outperform most "desktops" at graphics-intensive tasks.
>So no, M1 is not comparable to a Threadripper, it's not even close, even if it consumes a lot more energy
Um… nobody is comparing an M1 Mac to a processor that often costs more than either the M1 Mac mini or MacBook Pro. However, the general consensus is the M1 outperforms PCs with mid-to-high-end GPUs and CPUs from Intel and AMD. Threadripper is a high-end, purpose-built chip that can cost more than complete systems from most other companies, including Apple. However, that comes at the cost of power consumption, special cooling in some cases, etc.
>Who cares if an M1 consumes less energy than a candle if I can buy 64GB of DDR4 3600 for 250 bucks and render the VFX for a 2 hours movie in 4k. Another 300 bucks buy me a second GPU
The MacBook Pro has faster LPDDR4X-4266 RAM on a 128-bit wide memory bus. The memory bandwidth maxes out at over 60 GB/s. And because the RAM, CPU and GPU (and all of the other units in the SoC) are in the same package, memory access is extremely fast.
From AnandTech; emphasis mine [1]:
"A single Firestorm achieves memory reads up to around 58GB/s, with memory writes coming in at 33-36GB/s. Most importantly, memory copies land in at 60 to 62GB/s depending if you’re using scalar or vector instructions. The fact that a single Firestorm core can almost saturate the memory controllers is astounding and something we’ve never seen in a design before."
It can easily render a 2-hour 4k video unplugged in the background while you're doing other stuff. And when you're done, you’ll still have enough battery to last you until the next day if necessary. According to the AnandTech review [1], it blows away all other integrated GPUs and is even faster than several dedicated GPUs. That's not nothing; and these machines do it for less money.
>vertical products are an absolute waste of money when you chase the last bit of performance to save time (for you and your clients) and don't have the budget of Elon Musk
>The M1 changes nothing in that space
This is not correct… seeing should be believing.
Here's a video of 4k, 6k and 8k RED RAW files being rendered on an M1 Mac with 8 GB of RAM, using DaVinci Resolve 17 [2]. Spoiler: while the 8k RAW file stuttered a little, once the preview resolution was reduced to only 4k, the playback was smooooth.
The M1 beats low-end desktop GPUs from a couple of generations ago (~25% faster than the 1050 Ti and RX 560 according to this benchmark [0]). Current high-end GPUs are much faster than that (e.g. the 3080 is ~5 times as powerful as a 1050 Ti).
Don't get me wrong - this is still very impressive with a ~20W combined(!) power draw under full load, but it definitely doesn't beat mid-to-high-end desktop GPUs.
(This is largely irrelevant for video encoding/decoding though as you can see - as that's mostly done either on the CPU or dedicated silicon living in either the CPU or the GPU that's separate from the main graphics processing cores.)
You're missing the point. I'm not trying to argue about which system is better, I'm just saying that the comment I'm replying to is saying incorrect things about GPU performance. I'll answer your question anyway though:
You could build a complete desktop system including a GPU that's more powerful than the one in the M1 for ~$1000, but certainly not a 3080. They're very expensive, and nobody has any in stock anyway.
An RX 580 or 1660 would probably be the right GPU with that budget. (Although you could go with something more powerful and skimp out on CPU and RAM if you only cared about gaming performance.)
- a 3080 costs >$750. Good luck buying one; I would if it wasn't out of stock. On the other hand, a GPU around the M1's level (roughly a mobile GTX 1050) can easily be found on eBay for <$50
- yes, you totally can. The best thing is that with a $1k entry level you can start working on real-life projects that have deadlines and start earning money that will let you upgrade your gear to the level you actually need, without having to buy an entire new machine. The old components can serve as spare parts or to build a second node. You don't waste a single penny on things you don't need.
Though, it's true, you can't brag to friends that it draws only 20 watts at full load and that the warmth of the aluminium body is actually pleasant
Well... yeah... but they also dropped clock rate from 2.5GHz-ish to 1.7GHz-ish. That could equally well explain the increase in core count at the same TDP. You're gaining about 15% IPC improvement from Kaby->Whiskey [1].
It's an overall improvement, but not as dramatic as "2x cores for 2x perf at the same power"
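A rough sanity check on that, using only the figures above (base clocks only, so it ignores turbo entirely - treat it as a floor, not a measurement):

    // Doubling the cores while dropping base clock and gaining ~15% IPC:
    let coreRatio  = 2.0
    let clockRatio = 1.7 / 2.5   // ~0.68
    let ipcRatio   = 1.15
    print(coreRatio * clockRatio * ipcRatio)   // ~1.56x, i.e. well short of "2x perf"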
"Base" clock dropped, but boost clocks remained fairly high. In practice performance gains were quite good (except, notably, for that time when Apple used old power control firmware and had 6C/12T processors underperform their 4C/8T predecessors).
The point of this is that the significant improvement from Kaby Lake to Whiskey Lake involved only small architectural refinement and updates to an existing process, so much larger performance/power improvements should absolutely be expected from an entirely new process plus architecture refinement.
I hate the "M1 vs Intel MacBook" comparison. Every Intel MacBook back to 2016 has broken thermals. They're all running at maybe half their rated clock speed. 13" MBP is a 4GHz part running at 1.4GHz. 16" MBP is a 4.8GHz part throttled to 2.3GHz. You're comparing M1 vs. a broken design which Apple broke.
Don't congratulate Apple for failing to ship trash.
There's an argument for efficiency on a laptop, no doubt, but that's not what the parent commenter is talking about.
M1 is the highest perf-per-watt CPU today, no question. Ignoring efficiency, there are plenty of faster CPUs both for single-core and multi-core tasks. That's what "my Hackintosh did the build in 5 minutes" is showing.
You're misunderstanding Intel's specs. If you want the chip to run within TDP you can only expect the base frequency across all cores, not the ridiculous turbo frequency. The best laptop chip Intel has right now is the i9-10980HK with 8 cores at a 2.4GHz base frequency and a 45W TDP. Apple's laptops are more than capable of dissipating the rated TDP and hitting the base frequencies (and often quite a bit higher), although the fans can be a bit loud. So Apple's designs are not broken, at least not by Intel's definition.
You can relax the power limits and try to clock it closer to the 5.3GHz turbo frequency. But how much power do you need? I can't find numbers specifically for the i9-10980HK, but it seems like the desktop i9-9900K needs over 160 watts [1] to hit a mere 4.7GHz across all cores, measured at the CPU package (ie. not including VRM losses). Overall system power would be in excess of 200 watts, perhaps 300 watts with a GPU. Good luck cooling that in a laptop unless it's 2 inches thick or has fans that sound like a jet engine.
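A back-of-envelope per-core comparison from the figures quoted above (note it mixes two different chips, as the comment itself acknowledges, so it's only an order-of-magnitude sketch):

    let tdpWatts = 45.0, baseGHz = 2.4, cores = 8.0   // i9-10980HK at its rated TDP
    let turboWatts = 160.0, turboGHz = 4.7            // i9-9900K, all-core, at the package
    print(tdpWatts / cores)     // ~5.6 W per core at 2.4 GHz
    print(turboWatts / cores)   // ~20 W per core at 4.7 GHz
    print((turboWatts / tdpWatts) / (turboGHz / baseGHz))   // ~1.8x: power rises much faster than clock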
You've got it backwards. Apple chooses the TDP. Intel provides the CPU to suit. Apple is choosing TDPs which are too small and then providing thermal solutions which only just meet that spec. They could provide better thermals without hurting anything else in the machine and get a higher base clock.
I assume they do this for market segmentation; see 2016 Touch Bar vs. non-Touch-Bar Pro. One fan vs. two.
The TDPs look appropriate for M1 parts. They're too small for Intel. I'm guessing that (a) Apple predicted the M1 transition sooner and (b) Apple designed ahead for Intel's roadmap (perf at reduced TDP) which never eventuated.
So, unfortunately, Apple have shipped a generation of laptops with inadequate cooling.
> Every Intel MacBook back to 2016 has broken thermals. They're all running at maybe half their rated clock speed.
Are you saying it's an unfair comparison? The Intel Macs are operating in the same environment as the M1 Macs. It doesn't matter if the Intel parts could be faster in theory, because you're still dealing with battery and size constraints. If you want unthrottled Intel CPU in a laptop, your only options are 6 pound, 2 inch thick gaming laptops with 30 minutes of battery life. Now comparing that (or worse, a desktop) to M1 is unfair.
> 13" MBP is a 4GHz part running at 1.4GHz. 16" MBP is a 4.8GHz part throttled to 2.3GHz.
Apple's thermal solutions could be better, but they are designed within Intel's power envelope specs. e.g. The i9-9880H in the current 16" MBP is only rated for 2.3GHz with all cores active at its 45W TDP. The i9-9880H is a 2.3GHz @ 45W part that can burst up to 4.8GHz for short periods, not the other way around.
That's one of my biggest sources of skepticism about the M1 in the long term: instead of improving thermal management, they reinvented _everything_ to generate less heat. Which is great! The current state of thermal management at Apple will work great at low TDPs, but they've procrastinated instead of improving. If they don't ever learn how to handle heat, this arch will still have a hard ceiling.
There's nothing in M1 that indicates that Apple learned how to improve thermal management, but lots to indicate that they'd still rather make thinner/lighter devices that compromise on repair, expansion, or sustained high-end performance — the even deeper RAM integration, offering binned parts as the lower-end "budget" option instead of a designed solution, or offering Thunderbolt 3, fewer PCIe lanes, and a lower RAM cap as being enough for a MBP or Mini.
Under some constraints [1] [2] if you halve the frequency and double the number of execution units, the overall power consumption drops.
Therefore, you could gain performance or reduce power by stacking layers of silicon.
This is also true for things like power LEDs; within a certain range of their operating curve (current vs. output) you can reduce current by X% and lose less than X% of output. Put down two LEDs, then, and you get more output at the same current.
[1] architectures that scale efficiently to more execution units, like GPUs
[2] you're in a suitable region of the frequency-power curve
Yes, the last 10 percent increase in clock speed causes a 30 percent increase in power, or something like that. If you care about total performance instead of single thread it's probably better to add more cores.
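The usual first-order model behind both of these comments: dynamic power scales roughly with C * V^2 * f, and near the top of the curve voltage has to rise with frequency, so power grows roughly with the cube of clock speed. A quick sketch (the 0.8 voltage ratio below is an assumed, illustrative number, not a measurement):

    // "Last 10% of clock costs ~30% more power": rough f^3 model check.
    print(1.10 * 1.10 * 1.10)   // ~1.33, i.e. ~33% more power for 10% more clock

    // Parent's point: halve the frequency (and drop voltage with it),
    // double the execution units, and total power still goes down.
    let freqRatio = 0.5, voltageRatio = 0.8                 // voltage drop is illustrative
    let perUnit = freqRatio * voltageRatio * voltageRatio   // ~0.32x power per unit
    print(2.0 * perUnit)        // ~0.64x total power for roughly the same throughput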
If you can admin your own Jira instance and keep the fields and plugins to exactly what you need, it's pretty nice. I keep going back to it for the customizable workflows and fine-grained access control.
Unfortunately, every corporate Jira instance I've ever worked with has been overrun with every field imaginable, an unnecessary and constantly shifting mishmash of plugins, and horribly slow access. You're paying the price for all of that cruft that you don't need and which can't be removed because "maybe someone wants it."
> every corporate Jira instance I've ever worked with has been overrun with every field imaginable
I've seen multiple different fields for the same value, each of them created for their specific team. The instance is slow AF most of the time due to the bloat and often I can't even assign a sprint to a JIRA because the field doesn't load anymore.
I don't think retain/release perf has anything to do with memory consumption, but I have seen a bunch of reviews claiming that 8GB is perfectly fine.
This is fascinating to me, because:
(a) every 8GB Mac I've used in the past has been unusably slow
(b) since upgrading my 32GB Hackintosh to Big Sur, my usual 40GB working set is only about 20GB.
(c) My 2015 16GB MBPr with Big Sur is also using about half as much physical memory on the same workload. Swappiness is up a little, but I haven't noticed.
So my guess is that something in Big Sur has dramatically reduced memory consumption and that fix is being commingled with the M1 announce.
Seriously, I'm utterly baffled by all the people claiming that 8 GB isn't enough for the average user.
The only situation I ever ran into where it was a problem was in trying to run multiple VM's at once.
Otherwise it's just a non-issue. Programs often reserve a lot more memory than they actually use (zero hit in performance) so memory stats are misleading, and the OS is really good at swapping memory not touched in a while to the SSD without you noticing.
Yes, sometimes it takes a couple seconds to switch to a tab I haven't touched in Chrome in days because it's got to swap it back in from the SSD. Who cares?
> people claiming that 8 GB isn't enough for the average user
I'm not claiming anything of the sort.
My point is that memory consumption seems to be greatly reduced in Big Sur, and that might make 8GB machines much better to use than before. All of my testing is on Intel machines. It's not exclusively an M1 phenomenon.
I would still recommend 16GB to anyone, and if the extra $200 was a factor, I would recommend that they buy last year's Intel with 16GB of RAM.
Nah, sorry, but you're wrong. I had to upgrade my laptop because I wanted to run Firefox, IntelliJ IDEA and an Android emulator on the same machine. Nothing else. This was not possible with 8GB of RAM.
So it's not like multiple VMs are needed, and the above scenario is pretty average for a common mobile developer (but still not an average user, I admit).
Second thing is, lots of games require 16 GB RAM. Maybe gamers are still not average users, I don't know.
For me with 16GB in an MBP, there is currently 20.5GB used + swap, and I haven't even started Firefox today, that would add another ~6GB or so.
Usually if I'm running Safari, Firefox and my 4GB Linux VM, that's 16-18GB used up in those. At the moment I have a few other things open, PDF viewer, Word, iTerms, Emacs etc, but nothing huge.
Most of the time this level of usage is ok, but I've had times where I've had to wait 30+ seconds for the UI to respond at all (even the Dock or switching workspaces) and wondered if the system had crashed.
For that reason I'm generally waiting for the next 32GB model before committing, that's assuming I stick with Apple instead of switching back to Linux (which I used for ~20 years before trying the MBP).
> Programs often reserve a lot more memory than they actually use (zero hit in performance) so memory stats are misleading, and the OS is really good at swapping memory not touched in a while to the SSD without you noticing.
The stats are absolutely reliable because no physical memory page is allocated until it is actually used to store something. So allocating a large chunk of unused memory wouldn't show in the (physical) memory usage stat.
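A minimal sketch of what that looks like in practice on macOS, calling mmap directly from Swift (the 1 GiB size and the 16 KiB page stride are just illustrative choices):

    import Darwin

    // Reserve 1 GiB of anonymous address space. The kernel grants the mapping
    // immediately, but no physical pages are allocated yet, so "Real Memory"
    // in Activity Monitor barely moves.
    let length = 1 << 30
    let ptr = mmap(nil, length, PROT_READ | PROT_WRITE, MAP_ANON | MAP_PRIVATE, -1, 0)
    precondition(ptr != nil && ptr != MAP_FAILED, "mmap failed")

    // Writing one byte per page forces the kernel to fault pages in; only now
    // does resident memory climb towards 1 GiB.
    let bytes = ptr!.assumingMemoryBound(to: UInt8.self)
    for offset in stride(from: 0, to: length, by: 16 * 1024) {
        bytes[offset] = 1
    }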
I pretty much daily have to do a closing round to not run out of my 24GiB. That's all web browsers (usually 100-200 tabs), VS Code with some extensions and a 2x4K display.
But what do you even mean "run out"? This is what I don't get.
If you have multiple browsers with hundreds of tabs, the majority of those tabs are probably swapped out to your SSD already.
With swap files and SSDs, physical memory is less and less relevant except when you're performing very specific computational tasks that actually require everything to be simultaneously in memory -- things like highly complex video effects rendering.
How do you measure "running out" of your 24 GiB? And what happens when you do "run out"?
As a human, when I have many tabs open, I observe that everything gets really slow. All applications get slow, but especially the browser.
So I put on my engineering hat and pull up Activity Monitor and further observe (a) high memory pressure, (b) high memory consumption attributed to Chrome or Firefox, (c) high levels of swap usage, (d) high levels of disk I/O attributed to kernel_task (the swapper) or to nothing, depending on macOS version.
I close some tabs. I then observe that the problems go away.
Swap isn't a silver bullet, not even at 3Gbytes/sec. It is slow. I haven't even touched on GPU memory pressure which swaps back to sysram, which puts further pressure on disk swap.
It's the equivalent of having 50 stacks of paper documents & magazines sitting unorganized on your desk and complaining about not having space to work on.
A bigger desk is not the solution to this problem.
If your tabs are swapped out to SSD, your computer feels incredibly _slow_. SSD are fast, yeah, but multiple orders of magnitude slower than the slowest RAM module.
You can run 4GB if you're fine with having most of your applications swapped out, but the experience will be excruciating.
Physical memory is still as relevant as it was 30 years ago. No offense but if you can't see the problem, you probably have never used a computer with enough RAM to fit everything in memory + have enough spare for file caching.
I don't swap. You can do all your arguments about why I should if you want but yes, there are legit reasons not to and there is such a thing as running out of memory in 2020.
4GB MBA user here, don't have any problems either running Chrome or Firefox with 10-20 tabs and iTerm (Safari does feel much faster than the other two and my dev environment is on a remote server though).
iPhones and iPads also have relatively small amounts of RAM compared to Android devices in the same class, so I wonder if Apple is doing something smart with offloading memory to fast SSD storage in a way that isn't noticeable to the user.
This is most probably more linked to Java/Kotlin vs Objective-C/Swift. Want an array of 1000 objects in Java? You'll end up with 1001 allocations and 1000 pointers.
In Swift you can add value types to the heap-backed array directly, in ObjC you can use stack allocated arrays (since you have all of C) and there are optimizations such as NSNumber using tagged pointers.
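A small Swift-side sketch of that layout difference (the types here are made up purely for illustration):

    struct PointValue { var x: Double; var y: Double }    // value type, stored inline
    final class PointObject { var x = 0.0; var y = 0.0 }  // reference type, heap-allocated

    // One contiguous buffer of 1000 inline 16-byte values: a single allocation.
    let values = [PointValue](repeating: PointValue(x: 0, y: 0), count: 1_000)

    // 1000 references in the array, each pointing at its own heap object
    // (plus object headers and retain/release traffic on every copy).
    let objects = (0..<1_000).map { _ in PointObject() }

    print(MemoryLayout<PointValue>.stride)   // 16: the value itself sits in the array
    print(MemoryLayout<PointObject>.stride)  // 8: only the reference sits in the array
    print(values.count, objects.count)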
> Theoretically Java should be more memory efficient because it makes fewer guarantees and can move memory around.
Java makes a lot of memory guarantees that are hard to make efficient. Specifically, it becomes extremely hard to have a scoped allocation. Escape analysis helps, but the combination of Java being GC'd with no value types means it's basically never good at memory efficiency. Memory performance can be theoretically good, but efficiency not really. That's just part of the tradeoff it's making. And nearly everything is behind a reference, making everything far larger than it could be.
Compaction helps reduce fragmentation, but it comes at the cost of necessarily doubling the size of everything being compacted. Only temporarily, but those high-water spikes are what kicks things to swap, too.
Big difference is that Objective-C is a superset of C. Any Objective-C developer worth his/her salt will drop down to C code when you need to optimize. The object-oriented parts of Objective-C are way slower than Java. But the reason Objective-C programs can still outcompete Java programs is that you have the opportunity to pick hotspots and optimize the hell out of them using C code.
Object-oriented programs in Objective-C are written in a very different fashion from Java programs. Java programs tend to have very fine granularity on their objects. Objective-C programs tend to have interfaces which are bulkier, and larger objects.
That is partly why you can have a high performance 3D API like Metal written in a language such as Objective-C, which has very slow method dispatch. It works because the granularity of the objects has been designed with that in mind.
For those, Apple's favored approach to memory management (mostly reference counting) absolutely _is_ an advantage over Android's (mostly GC). That's not relevant when comparing an Intel and ARM Mac, tho.