AirPlay and Touch Bar = Network Disaster (mnpn.github.io)
423 points by davidbarker on June 11, 2022 | 164 comments


I wonder if this is related...

When I upgraded to Monterey (well, a beta, actually), I was met with a horrible bug that made me want to trash my laptop: my AirPods would randomly "cut out" for a couple of seconds for seemingly no reason. Sometimes it would take 5 minutes, sometimes 2 hours. It was infuriating.

I spent dozens of hours trying to figure it out. Disabling apps, enabling other apps, tracing logs, killing random applications, seeing if it was load-related, trying reboots. At one point I was half-convinced it only happened after coming back from hibernating (not suspending). I tried desperately to "fix" the bluetooth module, disable and re-enable handoff, delete internal settings, copy old bluetooth modules from older versions, working mostly tethered... Nothing seemed to fix it, until I found a message deep in the logs that led me to try to disable "AirPlay Receiver".

Voilà! It was instantly fixed. I documented my workaround here: https://www.reddit.com/r/MacOSBeta/comments/qjgqjx/i_think_i... and it seems it's spread a bit like wildfire through the internet. I should have put up a donation link :D

What that journey taught me was that AirPlay, Wifi, Bluetooth and the like are seriously messy beasts on macOS. I work with BLE sometimes while developing and I'm aware of how messy it is, but it seems that macOS has so many features that it makes it far worse.

In any case, my workaround is still working and I occasionally still get people thanking me for finding it. I doubt they really know how much time I spent, and how insane I nearly went, just to find that (un)ticking a little box would solve all of my problems.


> AirPods would randomly "cut out" for a couple of seconds for seemingly no reason

It is for this sort of reason that I prefer dumb analog devices such as headphones to "smart" devices with a lot of complexity built in. Also, they're cheaper.


Amen. I use a wired set to simplify the audio out between my docking station that I share with multiple machines. Bluetooth pairing is a nightmare.


> how much time I spent, and how insane I nearly went, just to find that (un)ticking a little box would solve all of my problems.

Ain't that the way.


When I upgraded to Monterey, my AirPods started giving me robot voice on any call. It appears to be a conflict with my Logitech keyboard and mouse. I’ve switched to USB connection for those, but it’s not an ideal solution.


It's interference between the USB 2.0 dongle of the devices and the USB 3 port. Use a USB 3 cable to put them a distance away from the hub/ports.

https://www.reddit.com/r/technology/comments/136g7y/usb_30_h...


I ran into this the other day and did a video about it - not the best quality video but shows it happening in realtime with analysis and commentary: https://www.dropbox.com/s/hra2uxx66kf7z0j/PXL_20220609_21480...

The cause in this case was AWDL (Apple Wireless Direct Link.) Holding the Option key while clicking the Wi-Fi icon and clicking "Enable Wi-Fi Logging" and then checking /var/log/wifi.log will show AWDL scans starting and ending randomly, and when the scan is active it causes latency spikes every 1s like clockwork. Unrelated to AWDL, but if a process is requesting a Wi-Fi network scan (different from an AWDL scan), /var/log/wifi.log will also tell you the name of the process, such as "locationd" when the Location Service needs your location. (Tangential, but the locationd process rarely causes these latency spikes for me - on a default macOS install it very rarely requests scans in my experience, backed by my analysis of the log.)
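If you want to follow along live, something like this works (the grep pattern is illustrative; the exact log strings vary by macOS version):

    # after turning on "Enable Wi-Fi Logging" via the Option-click menu,
    # follow the log and filter for AWDL scan start/stop events
    tail -f /var/log/wifi.log | grep -i awdl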

AWDL has to be used for things like AirDrop, so it's expected to have this latency increase while you have the AirDrop window open scanning for nearby devices / sending files to other devices. There are other uses of AWDL (AirPlay, Auto Unlock, Universal Clipboard at the very least)[0], but I don't know what was triggering it so actively in my case... and why it wasn't happening on my M1 Air. It also wasn't always happening in the background like this, it just started that day.

The "fix" was to disable the awdl0 interface, but that may also cause AirDrop/AirPlay and related services to not function (I did not test.) It's easy to re-enable it though.

To disable (root is needed to change interface state):

    sudo ifconfig awdl0 down

To enable:

    sudo ifconfig awdl0 up

Upon disabling, the latency spikes go away permanently.
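A quick way to confirm it yourself (a sketch; substitute your own gateway address for 192.168.1.1):

    # watch latency in one terminal
    ping 192.168.1.1

    # in another, check AWDL state and toggle it; the spikes should
    # stop on "down" and resume on "up"
    ifconfig awdl0 | grep status
    sudo ifconfig awdl0 down
    sudo ifconfig awdl0 up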

[0] https://owlink.org/wiki/


There's a ton of things on macOS that just randomly start network scans and just destroy Wi-Fi performance. I helped Martin find the one in the post, but personally I dealt with another one for several months on my own computer and eventually just gave up. I just went ahead and rewired my house with MoCA and now I get the speeds I was looking for, continuously and without the hassle, whenever I'm at my desk. I'd recommend everyone else do the same, honestly. (Unless you're at Apple. Then you should not do this and instead send in Radars when your network performance drops.)


> (Unless you’re at Apple. Then you should not do this and instead send in Radars when your network performance drops.)

Which unfortunately will get moved to NMOS/Future with a P6 priority and ignored forever. At best it'll get sent back to you "for more details". If you play the game and attach logs and investigation, it'll get filed away into a black-hole milestone or closed as a duplicate. There's a cabal that seems to only want new features and never tackle existing bugs in the OS.


> There's a cabal that seems to only want new features and never tackle existing bugs in the OS.

Huh. That sounds strikingly similar to the culture breakdowns I've observed at Google - the whole focus on newness and landing features in the name of solving hard problems, as opposed to maintaining stuff. Apparently the feedback/peer review/bonus system is completely broken.

...Maybe this sort of scaling problem is a "$T+ market cap" thing that we've just never had to figure out before?


With Apple there's a pathological need to release new devices and OSes on a yearly cadence. Landing a new feature rewards middle management and the top quintile of ICs with cash/stock bonuses and reflects on their end of year review. Fixing bugs does not get those rewards so there's little incentive to fix old bugs that 1) don't prevent a new feature's implementation or 2) aren't called out in a security release.

> ...Maybe this sort of scaling problem is a "$T+ market cap" thing that we've just never had to figure out before?

I don't know in all honesty. I'm sure part of it has something to do with the fact these companies have billions of customers. It's a mind-numbing number of users. Even a single percentile change in the number of customers or revenue per customer makes for huge revenue differences. So given a finite amount of developer effort, a new feature which is likely to increase revenue is incentivized over a stupid AirPlay bug that isn't likely to increase revenue.


The market responds to newness, and if you can sweep issues under the rug you get two very lucrative demographics:

1) Customers who barely do anything with their devices and therefore never have issues

2) Fans who experience issues, but accept them as a bump in the road towards the dream they’ve been sold

And this is at a cost-savings: less in R&D (because you can patch in-field), less in maintenance hours, and less opportunity cost for having “10x” devs do that maintenance.

There’s also a fear-based reason for companies to chase features: they worry that the market will see them as stagnant and see the competitors as exciting.


It depends on if that aligns with your management's views. In AWS, I've seen teams spend some effort on keeping their backlog low (but not zero), since if it balloons out of control you know you're gonna get called out on it.

I recall a team that had all its feature releases struck down for the coming year so they would work through their backlog.

Opinions are my own, bla bla bla


Do you know if the grade/power of wireless router makes any difference for this?

I've not had too much issue with wifi performance with several Apple devices on the network, but my wireless router is also a fairly high end consumer model and probably not at all representative of the average.

I do still intend to figure out some sort of hardwiring solution (maybe MoCA, but man those boxes are expensive) but the performance of wifi in the interim has been good enough for it to not be pressing.


Intel and Qualcomm wifi chipsets have some secret sauce in them that make them better than Realtek, Ralink, etc. I haven't personally tested Broadcom, so I don't know where they fall. In low congestion environments it doesn't matter, but in high congestion environments they work better. Basically, Intel and Qualcomm chipsets are better able to receive frames successfully even when there's a collision.

Transmit power doesn't matter in that more isn't necessarily better. Same for antenna gain. They are variables you can change if you know you have a specific problem that would be solved by it. That's hardly ever the case, though.

Why not just hardwire CAT6? MoCA is nice if you already have the coax, but otherwise it seems silly to put in.


> Why not just hardwire CAT6? MoCA is nice if you already have the coax, but otherwise it seems silly to put in.

My house is already wired with unused coax. I'd prefer CAT6, but don't know how involved doing that would be. I need to figure out if there are wiring conduits in the walls, for example.


I have a pair of Google Wi-Fis that aren’t particularly powerful, for reference.


I wasn't familiar with MoCA. Seems like a lot of the advantages of power line networking, but higher throughput / reliability I imagine. Thanks for the tip!


Yep, MoCA gives me a gigabit+ backbone which is definitely more than my ISP gives me :)


For anyone else reading this, the one tip I have is to reorder your network interfaces to put your wired interface first. There's a three-dot menu somewhere in the network panel to do so. macOS will use the first active interface and ignore the rest.
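The same reordering can be done from the terminal (service names are examples and must match yours exactly):

    # show the current priority order of network services
    networksetup -listnetworkserviceorder

    # put the wired service first; note that every service has to be listed
    sudo networksetup -ordernetworkservices "USB 10/100/1000 LAN" "Wi-Fi" "Thunderbolt Bridge"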


Technically macOS will use all active services but only communicate with the topmost interface’s router address to leave the local network unless you have a VPN connection that’s set to send all traffic.
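You can see which gateway that is from the routing table:

    # the "default" entry shows the router address and interface
    # macOS uses to leave the local network (IPv4 shown)
    netstat -rn -f inet | grep default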


Omg! This may very well solve my ping issues in webliero!


Update: yes, after `ifconfig awdl0 down` my ping was no longer shooting up to ~200ms randomly, and I was able to play a 2-hour deathmatch with my wife without a single lag!


You would be mistaken if you thought switching to ethernet would fix everything in macOS. There are three options for USB Ethernet chipsets that are supported in macOS without drivers: RTL8153, AX88178, AX88172A (not B or C). When using a RTL8153, I've experienced extremely high CPU usage (it's a shitty user-mode driver), and I've had the adapter drop out after transferring >20GB of files over it. I ended up buying the AX88178 and AX88172A adapters, which are supported by the kernel. Based on my limited testing, only the AX88172A chipset is stable enough for 24/7/365 connectivity. Unfortunately the AX88172A is only 100Mbit, so consider it if you value stability > throughput.

The thread that sparked this rabbit hole:

https://discussions.apple.com/thread/252387604

TLDR: 2022 and we are still stuck with 100Mbit adapters on macOS


A bulky but reliable option is the $29 Apple “Thunderbolt (1) to Gigabit Ethernet adapter” connected to the $49 Apple “Thunderbolt 3 (USB-C) to Thunderbolt 2 Adapter”. It’s gigabit and PCIe using the rock-solid BCM570x chip.


So you need two daisy-chained dongles to get a reliable Ethernet port on your $2000 laptop? Take my money please!


Shouldn't it be the other way around? Should the 95% of people who haven't thought about ethernet cables in the past year have a bulky port on their laptop that eats into PCB + battery space?


The assertion that something as basic as a functioning Ethernet port in a "professional" laptop - the high end model of which exceeds $6000 - constitutes a waste of PCB and battery space, is utter delusion. At that price they should throw in a Saks Fifth Avenue bag to carry your dongles.


I’d argue for the MBPro keeping the eth port (and even going latest gen, like 2.5 or 10Gb), and the Air not including it.

Professionals need all the stable connectivity they can get.


The same company has been selling a tiny 10Ge machine that can saturate this plus both TB ports without breaking a sweat. You'd think they could make a single dongle with 1/10th of that capacity.


There are collapsible ports that some laptop manufacturers include. Robustness might be a worry but I’m sure Apple would be able to engineer something suitable.


3Com was doing this[1] over 25 years ago stuffing a retractable RJ-45 into a PCMCIA card. The concept is not new. I'm sure if Apple ever did such a thing it would be lauded as pure genius, an invention so incredible we would not be worthy of it.

[1] https://en.wikipedia.org/wiki/XJACK


Wifi is still not on par with ethernet and probably won't be until wifi 7 is the norm. Even then, 10gbps ethernet will likely be more common and still outperform wifi, especially in urban areas with high interference.


Wifi will be on par with ethernet when it switches its medium to copper or fiber optic.


Aren't closed ecosystems wonderful?

I was elated to hear that Apple is being forced to abandon Lightning on iPhones, and I'm normally not much for government meddling.


Reliable, meh. I've had two die on me. One lasted a few years of VERY light usage, the other died barely a year or two in. Also almost no usage.

In both cases it simply stopped appearing on the bus, but was clearly "running" to some degree - it would get warm to the touch, just as they normally do.

I've also found the Thunderbolt connector's lack of a "click" engagement to be a serious issue for storage, network, and even display - on every Mac I've seen it in use, the connection has been flaky. It really fucking sucks to whip out the thunderbolt adapter and plug into ethernet for "reliability", set up a transfer of a ton of data, and half-way through shift the machine slightly aaaaaaaaand then the ethernet adapter disappears and your transfer is fucked.

Happy to see them drop that idiotic connector for USB-C, as at least that has some sort of physical retention mechanism other than "hopes and dreams" (ie: rely on proper clearance tolerances between different vendors, on surfaces that will wear with use.)


My first (2011?) MBP had very sturdy Thunderbolt connectors, and dongles wouldn’t suddenly fall out. The lauded 2015 model I had had very loose connectors with the problem you’re describing. Strange design change IMO.


Yes, this adapter works well in my experience, too. A little clunky since Apple still hasn't updated it to native Thunderbolt 3 so it needs an adapter, but that doesn't really impact how it functions.

I've also had good luck with whatever Caldigit uses in their Thunderbolt Mini Docks (not at my desk to check and see what chipset's in mine).


This works great for me too, and is what I use for multiple laptops. Ridiculous there isn’t something better though at this point…


Recently discussed on Hacker News was this article: https://khronokernel.github.io/macos/2021/11/22/PCIE-ETHERNE...

I’ve had MUCH better results with the Belkin Thunderbolt 3 Express Dock HD (Intel i210) than my OWC Thunderbolt 4 Dock (Realtek RTL8153).


After issues using various (cheap) USB 1Gbps ethernet adapters with an Intel MBP, I ended up getting one that uses a RTL8156B and it seems OK. These are 2.5Gb adapters that use the NCM driver, so they shouldn't cause high CPU.

I don't have 2.5Gb network equipment, but I have tested with iperf between machines and get around 900 Mbps with no high CPU, unlike the noticeable CPU usage with the cheap 1Gbps USB adapters that use ECM drivers.
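For reference, the test was roughly this (iperf3 shown; the hostname is a placeholder):

    # on the other machine (NAS, etc.)
    iperf3 -s

    # on the Mac: a 30-second throughput test
    iperf3 -c nas.local -t 30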

See also https://gist.github.com/MadLittleMods/3005bb13f7e7178e1eaa9f...


I use the Belkin one that Apple sells and have never had much issue with it. What chipset is that?


As detailed in this post, RTL8153 with Mac also regularly hops down to 100 mbps and doesn't return to 1 gbps: https://overengineer.dev/blog/2021/04/25/usb-c-hub-madness.h...

A pertinent question, however, is: how do you know what chips an adapter uses, before buying? I'm in utter dread regarding the prospect of buying a usb hub, since just like in the above post, they're black boxes to me which of course turn out to repackage the same Alibaba junk with 10x markup.
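There's no great answer before buying beyond hunting for reviews that name the chipset, but once an adapter is plugged in, macOS will at least tell you what's inside (a sketch; the interface name varies):

    # USB vendor/product IDs reveal the chipset (look the ID pair up online)
    system_profiler SPUSBDataType | grep -B 2 -A 8 -i ethernet

    # and check the negotiated link speed, e.g. on en7
    ifconfig en7 | grep media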


Yeah, despite all the hype, MacOS is one of the least technically sophisticated operating systems in common use. Their main advantage over other systems is power management, and a large part of that is attributable at least as much to control over the hardware as to technical excellence.

The fanboyism and reality distortion field is very strong. I remember when they came out with timer coalescing and were hyping it as a major accomplishment and selling point. Of course, Windows supported timer coalescing for years before MacOS, but that didn’t stop Jobs from convincing a bunch of developers that this was a novel breakthrough.


> main advantage over other systems is power management

I can understand that appeal.

In the last 5 years I don't think I've had a Windows laptop sleep or hibernate as I would expect, or not randomly wake up in the middle of the night, in my bag, and just sit there with fans at full speed…

I’m tired enough of it to try a Mac…


Same happens occasionally on my mac. And even if it doesn't stay on, it wakes up often enough to drain >50% overnight. My only machine to reliably sleep, wake up and power manage is Linux.


Did you turn off "powernap" in settings? By default, a mac would wake up to download mail, run backups, and do other things in the middle of the night.


Yeah, that's with powernap off.


I've experienced this with Mac laptops that I've got loads of stuff running on, but not on stock configurations, so I suspect it's some piece of software rudely waking it up every so often for its own inscrutable reasons.


My new M1 MBP runs out of battery over a 3-day weekend of sleeping without charge (the only way to make it sleep is to disconnect the charger, otherwise it refuses, probably due to the external screen being connected).


I'll counter your anecdotal data point with mine: my M1 stayed alive in sleep for 3 weeks while I was on holiday and had hours of battery life left when I opened it up.


Maybe we’re all doomed to just power off forever…


I bought a high-end lenovo and the fans would turn on and the machine would get hot when it was in my bag.

But it's not Microsoft's fault. The Surface Book I replaced it with works perfectly.


I had a problem like that a lot with my MBP. Except it would do it while fully closed, in an insulated sleeve, inside my backpack. Roasting hot. Just awful.


Yeah my Dell(s) have done that.

I reach inside and it is like a toaster “oh yeah this guy is dead…”


I've got bad news, I've been extremely unimpressed with my Macbook's so-called smart sleep. Literally the first day I brought it into work I went to go take it out of my bag only to find that it was too hot to touch. Neither selecting "Sleep" nor closing the lid for 30 minutes had convinced the fickle OS to enter a low-power state.

You know what Windows and Linux offer that Mac doesn't? A friggin' Hibernate option. Please, just let me have a button to power off my computer while persisting its state to disk. No, I don't want to have to shut down my computer every time I put it in a bag, that's completely ridiculous and an utter waste of my time having to reset all my tmux panes, vim windows, shell histories... what a UX nightmare. They've made a laptop that's a terror to actually take anywhere with you. Even when the "smart" sleep does (sometimes) finally decide to kick in, it invariably costs 20% of the battery.

When people say that Macs have good power management, what they mean is that Safari is optimized for power consumption relative to Chrome and Firefox.


Macs have a hibernate option that completely powers the machine off after saving the contents of RAM to disk, but you need to set it via the terminal. Find the current hibernate mode via: pmset -g

The default is hibernatemode 3 (RAM stays powered on until battery drops below some threshold, so that wake from sleep is very fast).

The version of “hibernate” you want is mode 25: sudo pmset -a hibernatemode 25

If you routinely keep your computer 'hibernating' for days at a time without plugging it in, you’ll save some battery life this way at the expense of a slower wake-up time because RAM is not kept powered.

See https://en.wikipedia.org/wiki/Pmset for more details.

> Safari is optimized for power consumption relative to Chrome and Firefox

This is a very big difference. Chrome has zero respect for battery life.


You have either a hardware issue (lid sensor), an SMC issue (try an SMC reset, it's painless), or software you run is preventing sleep (which you can find by looking in the Energy tab of Activity Monitor. Google search for "os x sleep prevented") or you have some hardware plugged in (external displays and keyboards can sometimes prevent a lid-shut from triggering sleep; I forget the 'rules' around this.)
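The command-line equivalent of that Energy tab check is pmset's assertion listing:

    # processes holding PreventUserIdleSystemSleep / PreventSystemSleep
    # assertions are what's keeping the machine awake
    pmset -g assertions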

Also, you can set the power manager's hibernatemode to your liking (Google search "os x hibernate mode"), but there's usually no reason to adjust the default (sleep for 3-4 hours, then suspend to disk) given how fast storage is in Macs these days.

Macs have been famous for decades for having the best sleep/hibernate functions in the industry and when yours didn't work properly maybe you should have investigated? Or at least not be whinging about Apple over it?


> maybe you should have investigated?

I did / have on multiple devices.


Report a bug in Feedback Assistant.


>not randomly wake up in the middle of the night, in my bag, and just sit there fans at full speed…

At this point I always globally disable wake timers in Windows. They're mostly used for automatic updates. (For more fine-grained control, look at the various wake timers in Task Scheduler.)


My work Mac has certainly turned on in a bag overnight. No escape from that issue there, I'm sorry to say


We’re doomed.


> Their main advantage over other systems

is that some of us just really don’t like Windows and really like our Macs.

We’re not fanboys. It’s just a preference. Chocolate vs. vanilla, football vs. cricket, boys vs. girls.


There are some usability affordances in macos that are quite nice.

For example something as simple as copy and paste is command-c, everywhere.

In Ubuntu? control-c or control-shift-c. It is pretty annoying being on autopilot and killing the command line program you are in because you reflexively hit control-c.

Also, readline shortcuts work throughout, so control-a will send you to the start of a line. Not with Ubuntu.


This is a common complaint, but the issue is that Mac users and Linux users basically talk past each other on this point. Just speaking for myself, I find that I accidentally kill a process in the Mac terminal whenever I try to copy anything because I can't keep Ctrl-C and Cmd-C separate (both Windows and Linux use Ctrl exclusively, which means you don't have to tell them apart).

If you're going to use Ctrl for shortcuts, you necessarily run into the issue of needing a separate shortcut for copy in the terminal, because Ctrl-C has meant "send an interrupt signal to the process" since at least the 60s.

Fortunately, for people sufficiently annoyed by this, most Linux terminals do allow you to change keyboard shortcuts arbitrarily, so you can have unified copy shortcuts if you want. For a variety of reasons, I find this more trouble than it is worth and prefer sticking with the default.


The Linux way is to copy text anywhere just by selecting it and paste it with the middle mouse button.


It is the "primary" selection buffer, not the clipboard.

macOS also has it; that is part of application handling. iTerm, Alacritty, and a bunch of other apps can behave the same, including XQuartz.


> MacOS is one of the least technically sophisticated operating systems in common use

Having a driver for a given Ethernet controller is hardly a good proxy for sophistication…


I think I need to write a plugin that searches the current HN page for terms in a configurable flame-fodder string list and adds `opacity: 0.2` to the row. "Fanboy" would definitely be included by default.


I don't really care about my laptop being sophisticated as long as it doesn't crash.


Maybe let's wait until Windows allows scrolling of a non-active window before saying this.

Or moving/changing the name of a file while it's open.

Or having a slash in a file name.


Pretty sure that you can scroll non-active windows. Just tried it to be sure.


Wow, didn't realize that, thank you!

I still think being able to move/change the name of a file while it's open is impressive


Yes, works fine on Windows 11


Windows 10, too. I think it didn't work in Windows 7, though. If my memory is correct, that means that it's not a super-new feature, but still relatively newish.


Windows 10 (at least the first version) came out seven years ago. It's not even new anymore, it's well on its way to just becoming... old?


I guess my memory is somewhat skewed by having skipped across Windows 8, true.


As a Linux user, why would you want a slash in a filename?


Because a very large fraction of people - greater than 90% - use forward slashes in dates, and people like putting dates in filenames.

Nearly all of those people have never seen a filename path written out in text, and wouldn't care if they did.


> Because a very large fraction of people - greater than 90% - use forward slashes in dates

Citation needed


A forward slash is a very common separator character.

I did this myself the other week. Mac user. Folder was called ‘Lessons/episodes’ from memory. I only noticed it was weird when my Synology didn’t display the character. Renamed to ‘Lessons & episodes’.


It absolutely is. That doesn't mean that 90% of the world population use forward slashes for dates.


MacOS has really strong userland software; stuff like the Quartz compositor is genuinely quite hard to beat (neither Windows, X11, nor Wayland can live up to its featureset and stability).

However, MacOS as an operating system really is a mess. Especially the XNU kernel, which is still an unbelievable amalgamation of disagreeing technology. Remember, MacOS is not natively a UNIX-certified machine: all of its UNIX compatibility comes from a BSD-based compatibility layer that hasn't really been changed since the late-90s. Oh, and the coreutils? Notoriously garbage. MacOS ships with all sorts of outdated, downgraded, vulnerable and otherwise broken shell utilities. pico instead of nano, zsh instead of modern bash... hell, even something as simple as installing git is a 700MB installation with a mandatory reboot.

I'll give MacOS credit where credit is due (Apple had good design philosophies in the 2010s), but the actual operating system (see: functional network of software components) is truly awful, arguably just as bad as Windows if not worse. Just about its only redeeming qualities are the things that Apple didn't make (like pf and process management). If you forced me to pick something that I found impressive, I'd have to choose Grand Central Dispatch, but even that isn't terribly impressive. It's mostly as if some Apple engineers decided to iterate on the fairly lackluster Linux process management; it would have been a miracle if they had managed to make something worse.


> even something as simple as installing git is a 700mb installation with a mandatory reboot

Mandatory reboot? I've never experienced that with the Xcode command line tools.


I could be wrong, last time I seriously used MacOS (for both personal/work uses) was Mojave. Either way, it's installing a lot more than just the 30mb of the git binary, so I learned my lesson and just installed all the GNU stuff with my package manager. Annoying to be sure, but somehow better than dealing with Apple's default way of handling it.


Yeah, brew requires the CLT because it’s built on git, so I assumed that’s what you were talking about.


> pico instead of nano, zsh instead of modern bash

nano is a GNU clone of pico. pico is OG nano.

bash was replaced with zsh because Apple purged their OS of all GPL3 software (including nano).


Mach 2 (and thus XNU) was built on BSD 4, which was derived directly from Bell Labs UNIX V7. It has been “natively UNIX” as long as the other ancient V7 and SysV derivatives, like AIX and SunOS.

Maybe what you really mean is you don’t like UNIX and expect it to work like Linux?


Yes, Linux is better than any certified Unix and that's one of the big reasons macOS is worse for Unix-like things than Linux.


No doubt my dude, but that's not the point. BSD in MacOS is not a 'compatibility layer'.

When CMU made Mach, it was a BSD OS. When NeXT made NeXTSTEP on top of Mach, it was a BSD OS. When Apple made OS X with NeXTSTEP, it was still BSD.


You seem to be under the impression that because you haven’t looked up if anything’s changed since 2010, nothing has.

The command line tools (except GPL ones) do get updated from FreeBSD and there’s nothing “non-native” about how the kernel works.


Switching to gnu coreutils is trivial on a Mac.
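Assuming Homebrew, it's a one-liner plus an optional PATH tweak (without it, the GNU tools are installed with a "g" prefix like gls and gsed):

    brew install coreutils gnu-sed

    # optional: use the unprefixed names by putting gnubin first in PATH
    export PATH="$(brew --prefix)/opt/coreutils/libexec/gnubin:$PATH"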


Plenty of scripts will break in mysterious ways if you replace the sed binary with GNU sed.


I'm literally using a 2.5Gbps RTL8156(B?) without adding any drivers on an M1 MBA as I type this, and I've never had any stability issues with it. I also have an RTL8153 that works flawlessly too.

I transfer tons of large videos and RAW photos imported from my mirrorless camera over these network adapters to a NAS.

I've been using these for quite a few months.

I'm not stuck on 100Mbps adapters, but maybe you are.

Usermode vs kernelmode seems irrelevant... if anything, I want the benefits of usermode isolation for more things. Monolithic kernels aren't great for security.


I will have to try these RTL8156 adapters eventually, but according to the resource from the comment below, they don't support AirPlay 2. Not exactly a rock-solid chipset.

https://gist.github.com/MadLittleMods/3005bb13f7e7178e1eaa9f...


Could you please explain how this could possibly be a chipset problem? The chipset sends and receives packets without problem. It's a pretty rock solid chipset in my testing. This sounds like an "Apple didn't implement AirPlay support in the driver" problem, which sounds crazy, because it is kind of crazy. AirPlay should just be packets on the network like anything else.

You also seem to be digging pretty hard to try and justify your original position, which was very extreme. No, Apple computers are not stuck at 100Mbps. This would be a very big deal, as tons of creative workflows rely on having multi-gigabit network connections. The outcry would be enormous.

I had never tried to use AirPlay 2 from my laptop over ethernet (I can probably count on one hand the number of times that I've used AirPlay from my laptop at all), but I tested, and it is true that the Music app won't connect to AirPlay devices over this Ethernet adapter (but it will over the 8153). I also have no problem redirecting the system sound to an AirPlay device, even over this Ethernet adapter, as that comment says.


Why is 2.5Gbps (copper) Ethernet even a thing? 10Gbps (copper) Ethernet was standardized first, and available first. Is this just another case of planned obsolescence?

https://en.wikipedia.org/wiki/Planned_obsolescence


Why did you feel the need to leave this comment? How could 2.5Gbps ever be construed as planned obsolescence? 10Gbps ethernet stayed too expensive for too long, so everyone was stuck on 1Gbps for like a decade. Eventually, manufacturers decided 2.5Gbps (and 5Gbps) would be more cost effective options for the time being, and would allow increases in network performance beyond 1Gbps.

It's a completely positive outcome for end users, since they now have more options, and pretty much all 10Gbps ethernet ports are compatible with "multi-gig" 2.5Gbps and 5Gbps connections as well.

10Gbps is effectively the end of the line for copper anyways... 10Gbps is already really hot and power hungry, and AFAIK, datacenters never really bothered to deploy any copper faster than that. Fiber is the present reality in datacenters. What I really want is for SFP+/QSFP+/QSFP28 to make their way into home networks and consumer devices.


It's true that newer 10GBE ports mostly support 5 & 2.5, but not always, and the older ones don't. It's also true (unfortunately) that the 2.5 & 5 GBE hardware mostly does not support 10GBE. You can get a 10GBE SFP+ module for less than $50 (and dropping), so why not manufacture hardware that is future compatible? I am seeing more and more 2.5GBE hardware on the market that does not support 10GBE.

This is the basis for my speculation about planned obsolescence.


$50/port is still prohibitively expensive for consumer networks... and that's just the module, it doesn't include the cost of the SFP+ cage (and supporting hardware). Spot checking one OEM, they charge $100 to upgrade to copper 10GbE on a desktop computer. I'm sure others charge less, but 1GbE has been effectively "free" for forever, and 2.5GbE has been becoming "free" over the last year or two.

Nothing presented so far even comes close to making me believe this is a conspiracy. People just got tired of waiting for 10GbE to come down in price. Cost and impatience are the driving factors. Gigabit ethernet was introduced in the late 90s... it was time for some increase in speed, and basically no one has been able to justify the continued high cost of 10GbE in home networks yet, 20+ years after 1GbE, so 2.5GbE it is.


I did not claim a conspiracy, I only raised the question of whether the observed behavior amounted to planned obsolescence.

I think the current dilution of 10GBE products by 2.5GBE hardware is a bad thing. I don't want to upgrade my switches every few years when the higher speed interfaces become more common.

10GBE hardware prices have not dropped in the same manner as most other computing hardware because the market is largely "enterprise" customers, and they'll willingly pay more.

Perhaps you are right and most SOHO consumers want 2.5GBE more than 10GBE. I am not one of them.

As an annoying related topic, it's also interesting that although there has been a PoE standard for 10GBE copper for a long time, there doesn't seem to be ANY hardware that supports it. There are, however, plenty of 2.5GBE PoE products available. This trend also supports an argument for planned obsolescence, but does not prove anything by itself.


"Planned obsolescence across the entire industry" is by definition a conspiracy. Without a conspiracy, any one of the involved companies would be striving to meet customer demand in order to gain an edge over their competition... if it were practical. As it turns out, they are doing what they consider practical.

You keep using the term "planned obsolescence", but it doesn't mean what you seem to think it means. Intel producing better processors every year isn't planned obsolescence... it's just the progress of technology, which naturally makes old technologies obsolete. It can't "amount to the same thing" as planned obsolescence. It's either planned or it isn't. 10gig has not reached the consumer space yet. It doesn't matter that it was standardized a long time ago. It's not a conspiracy or planned obsolescence... it's just price vs benefit, and 2.5gig is cheaper because it uses simpler technology.

Calling something "planned obsolescence" is a fairly serious accusation of intent. It obviously annoys me when people make statements like this without evidence.


The evidence I cited:

1) Poor affordability/availability of 10GBE despite its big head start over 2.5GBE. (The 10GBase-T standard was released in 2006, while the 2.5GBase-T standard was released in 2016.)

2) The flurry of 2.5GBE products we are now seeing. The number of 2.5GBE products available now far exceeds the number of 10GBE products available.

3) Zero availability of PoE on the few 10GBE products that do exist despite the generic PoE standard being in place for nearly two decades.

4) The abundance of 2.5GBE PoE products.

Your explanation is "market forces", or in other words, it's cheaper to deploy 2.5GBE than 10GBE because you (may) need to upgrade your cabling when you switch to 10GBE. But is it really cheaper? 10GBE is the end game and we could be there now, but the "market forces" that would normally bring down costs aren't happening because of the artificial scarcity of available products. Cable is the least expensive element of networking hardware, and it may not even need to be upgraded in many cases. (Note that this situation is nearly identical to the transition from 100Base-T to 1000Base-T about 20 years ago.)

Paying for 2.5GBE infrastructure that will be obsolete (or is already obsolete) and then paying again just a few years later for 10GBE infrastructure does not save anybody money. Both SOHO and enterprise consumers will end up paying more. The winners are the network hardware producers.

https://www.eetimes.com/debunking-10gbase-t-myths/ (published 10 years ago)

https://www.microsemi.com/document-portal/doc_view/136209-ne...

https://en.wikipedia.org/wiki/10_Gigabit_Ethernet#10GBASE-T

https://en.wikipedia.org/wiki/2.5GBASE-T_and_5GBASE-T


> Your explanation is "market forces", or in other words, it's cheaper to deploy 2.5GBE than 10GBE because you (may) need to upgrade your cabling when you switch to 10GBE.

I never talked about the price of Ethernet cable, or whether people would need to upgrade it. It’s honestly irrelevant, and a strawman.

10gig is expensive because 10gig is expensive… the switching hardware, the chipsets, the PHYs, everything but the cable. Device manufacturers and customers don’t care about the cost of the cables, especially when 10gig hardware is backwards compatible with lower speeds. They can keep using 1gig if they don’t want to upgrade, just like people continued using 100Mbps networks for quite a while after gigabit Ethernet became a thing.

My argument is that everyone got tired of waiting for 10gig to be affordable. They waited decades, hoping the price drop was just around the corner so they could jump straight from 1gig to 10gig, and it never happened. After literally 20 years since 10gig was standardized, I’m extremely glad that we’re seeing an abundance of Ethernet hardware that is faster than 1gig. That stagnation had to come to an end.

You can start your own company providing affordable 10gig hardware and prove that the industry players are wrong. For the same price, everyone would snap up 10gig hardware in a heartbeat.

Your argument would have made sense 20 years ago when it seemed like Ethernet standards were rapidly evolving and 10gig adoption was just around the corner. 10gig will be affordable eventually… but 20 years was too long to wait for it, and it still didn’t happen.

If you want 10gig, it has been attainable for years… for the right price. But there’s no need to come in and bother other people who are benefiting from the low cost rollout of better-than-1gig technology.

I would love to have a 10gig or 25gig network, if someone wants to pay for it. Datacenter-class networking hardware (like ConnectX-7) operates at 400+ Gbps today. The sky is the limit, so why bother stopping at 10gig?

> Zero availability of PoE on the few 10GBE products that do exist despite the PoE standard being in place for nearly two decades.

This is incorrect. 10GBASE-T only gained support for PoE in 2018 with the IEEE 802.3bt-2018 standard.


Pretty simple really, 10Gbps copper needed new cabling which turned out to be a big hurdle in the real world.

2.5GBASE-T and 5GBASE-T do run over existing Cat5e cabling reliably.

https://en.m.wikipedia.org/wiki/2.5GBASE-T_and_5GBASE-T


10Gbps copper Ethernet runs just fine over cat 5e/6 cable, but not at the same distances possible with 1000Base-T.


It'll generally work for a few metres with Cat5e, but I think it's always been too problematic to be officially supported. Though looking around it seems some manufacturers do say it'll work?

It works fine over Cat6 yeah, but if you've not got that deployed then 2.5Gb or 5Gb is a good step up from 1Gb.


Is the AX88178 compatible without 3rd party drivers? Plugable says no, and IIRC I needed a driver that never got updated for my gigabit ASIX, but it might have been a slightly different chipset...

At any rate, a big issue is that for USB, macOS only has generic ECM and NCM drivers. ECM as a protocol sucks and was barely suitable for 100Mbit, let alone gigabit, plus Realtek's implementation of ECM is quirky to say the least.

RTL8156 implements an NCM endpoint, so that's probably the best USB option these days.


Nothing crappy about user mode drivers. If you look at the new Ventura beta you’ll find a lot more of them, like the wifi stack.


True, I guess I'm more annoyed that the rtl8153 driver was doubly inefficient (ECM + usermode).


> TLDR: 2022 and we are still stuck with 100Mbit adapters on macOS

That’s something of an exaggeration. There are a bunch of macs that ship with perfectly good 10GigE ethernet adapters - my Mac Pro's ethernet has been rock solid since the day I bought it.

It seems like you’re more specifically concerned with the quality of support for USB<->ethernet adapters, which is always going to be kind of a crapshoot if you’re looking for 24/7/365 connection stability (on a laptop?)


This is wild! Duet Display, an app built by ex-Apple engineers, has a curious toggle in the settings menu to turn off AirDrop, Handoff, etc. as well, since it apparently screws up the remote desktop experience.

Apple also has introduced a beta feature called "Universal Control" which allows you to use your keyboard and other peripherals across all devices nearby (iPads, other Macs, etc.). I wonder how much of a performance tax these other features levy and if Apple tests the regression explicitly.


Apparently having location services enabled causes all manner of problems with Zoom calls over WiFi as well. Location services invoke periodic WiFi scans like you get when you click the WiFi drop-down.


Yeah, I recently disabled all location services on my M1 Mac mini after suffering for over a year with constant wifi dropouts - not even poor latency, but just no network for a few seconds every minute. Not sure who at Apple thinks this is an OK experience.

Generally their network stack is a mess. Trying to bridge ethernet to wifi just fundamentally doesn't work. The forwarding is randomly reconfigured every time the machine wakes up. I think they aren't paying attention to system level functionality.


> Not sure who at Apple thinks this is an OK experience

I'm sure nobody at Apple thinks it's an OK experience but obscure bugs that happen to a small fraction of users are hard to debug, especially when it comes to radio


It's funny, a lot of Apple's networking code is cribbed straight from the notoriously wonderful OpenBSD codebase. How did they manage to screw that up?


Has there ever been a time period where a BSD was known for having a notably better WiFi stack than Linux? I know the BSDs have a great reputation for their network stack overall and in particular the utilities for configuration (especially if you are willing to ignore the missing features), but I've never had the impression that there was any advantage specifically with regards to wireless. I seem to recall the BSDs historically having much narrower driver support and being years behind on features like 802.11ac.


OpenBSD was/is notorious for having terrible wireless drivers, but decent networking capabilities overall. You're right that the WiFi stack was never really any better than Linux, but BSD networking has long been a choice for high-traffic servers.


Bridging uses the physical layer AFAIK, which is why wifi + ethernet are not compatible with each other, especially when wifi is encrypted...


Bridging is a layer 2 operation, and both WiFi and wired ethernet are identical in that regard (and bluetooth ethernet encapsulation). It does work, on other operating systems.


Similar issue on Linux: https://blogs.gnome.org/dcbw/2016/05/16/networkmanager-and-w...

Just ran into it using a fresh Debian install on a Lenovo T14. My old laptop used an Intel wifi card. Normally that is all I ever use, but I figured I'd leave the stock one (Qualcomm 6855 (NFA765)) in and try it out. It worked fine except for random lags every minute with a correlated increase in ping times. The fix was to lock NetworkManager to a single AP so it won't background scan (something that apparently Intel cards are better at doing).
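For anyone wanting to do the same, the pinning looks something like this (connection name and BSSID are placeholders):

    # lock the connection to one AP so NetworkManager stops background roaming scans
    nmcli connection modify "MyWifi" 802-11-wireless.bssid AA:BB:CC:DD:EE:FF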

Googling shows this to be an issue going back 15+ years.


Searching for "intel 7260 drops" returns 162k+ results on Google.

No matter the drivers (though for me the stock Windows ones worked best), AP, whatever.

It just drops the connection once in a while.


> After having desperately achieved seemingly nothing over far too many hours of troubleshooting and feeling out of reasonable options, I decided to […] just randomly start killing […]


"I was always willing to be reasonable until I had to be unreasonable. Sometimes reasonable men must do unreasonable things."

- Marvin Heemeyer, shortly before making an armor plated bulldozer and going on a rampage


Touch Bar was an absolute failure on every level, thankfully Apple backtracked on it.


I always thought it was quite nice to have what's effectively an Elgato Stream Deck (https://news.ycombinator.com/item?id=31528895) built into my keyboard. It was certainly much better than fixed icons, which are in turn far more discoverable than numbered mystery keys.

Apple's only real mistake was not keeping ESC a fixed hard key.


> Apple's only real mistake was not keeping ESC a fixed hard key.

They’ve added that back for a few years now, but I agree. I used to own one of the 2016 MBPs with a Touch Bar and the lack of a physical escape key drove me insane. Even though I sold that laptop, I had actual while-asleep nightmares about the horrifically bad keyboard for a while too.


Even with a physical escape key, the touch bar is still a frequent source of frustration. I think it could only be made acceptable by integrating the haptic elements used in their trackpads, so that accidental light touching of an active region of the touchbar would no longer trigger unintended actions. Haptic feedback might have even made it possible to disambiguate which virtual button your finger landed on before applying enough force to trigger an action.


It was more than the ESC key imo, though that was my biggest complaint. It was far too easy to mistakenly hit the touch bar and trigger something unintended. I also found zero real use for it aside from looking nice.


I actually found it nice when using the debugger in VSCode. Having the buttons for play/pause, skip line, etc. made life easier.



I think they’re just selling out that hardware because they made a quantity of the cases and Tim insists on running them out.

I’d be stunned if any new Touch Bars had been manufactured in the last 18 months.


I don't understand though how a "network scan" (I assume sending a broadcast packet and receiving a response) can raise ping time from 3 to 150 ms.


If you understand how TV used to be (before streaming), a network scan is more like you are trying to watch a program but your annoying brother has the remote and periodically cycles through all the channels to see if there is something more interesting to watch. He often does it during a commercial break but sometimes does not get back before the break ends and sometimes he just does it in the middle of a part of your program that he doesn't understand or thinks is boring.


A Wi-Fi network scan involves listening for beacons on all Wi-Fi channels which involves a frequency change. Thus during the operation the Wi-Fi card essentially leaves your current frequency (and stops being able to transmit/receive packets) and scans all the other ones.
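This is easy to reproduce on demand (the airport binary lives at a private path that Apple may move or remove):

    # watch latency in one terminal
    ping 1.1.1.1

    # in another, force a full channel scan and watch the RTT spike
    /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -s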


This is a good question but I don't think any of the other four replies have understood it. Judging by the original article, this problem appears when AirPlay devices are on the same network and the same subnet as you. All that should be required to scan for devices on your network is sending a broadcast packet, as you have indicated.

What others are claiming effectively amounts to saying that the Macbook needs to disconnect from your network, scan for access points, and then reconnect to the network. Which would certainly cause latency issues, that much is true, but that's surely not the kind of scanning that is actually being done here? Why would a scan for access points be necessary?


I am not a Mac user, but as I understand the topic, it is using the radio for location and proximal device detection. It is not a LAN feature that would work via broadcast in the current WiFi LAN nor would it work if using only a wired LAN with WiFi disabled.

It puts the radio into a different mode to listen for nearby devices, whether to talk to them or to use them as landmarks for location sensing. This requires scanning radio channels, not just continuing to listen to the channel where you expect to receive further packets from an associated access point.


That certainly seems like a plausible thing that could be happening, but the specific claim of the article is that this is an issue with Airplay, which to my understanding involves devices that are connected to your router / access point, not free standing devices with their own access points that have to be scanned for.

There might very well be latency issues involved with the WiFi radio scan modes on Macs, for all I know, but I don't think they could be related to this specific issue with AirPlay?


The problem seems to be that they bundle all these features into AirPlay with the intention of making it easy to use. It uses device discovery that is not bound to the current LAN.

See https://support.apple.com/guide/deployment/use-airplay-dep91...

"When looking for other devices, an Apple device broadcasts a very small Bluetooth advertisement indicating that it’s looking for peer-to-peer services. When any peer-to-peer-capable device hears this BTLE packet, it creates or joins a peer-to-peer network directly between the devices. The devices concurrently switch between this temporary network and any infrastructure networks they were on before in order to deliver both the AirPlay video stream and provide existing internet service. The temporary network typically operates on Wi-Fi channel 149+1, but depending on the hardware involved, may also include channel 6, or channel 149,80. The devices follow the same frequency use rules on the temporary network as they do with any other Wi-Fi connection to avoid disrupting any existing infrastructure networks that might already be using those channels."


Apple is really big into custom proprietary WiFi protocols (see also: AirDrop). So while AirPlay was originally limited to the local network, it now also apparently works over WiFi direct - and presumably that requires some sort of WiFi scan, in turn requiring the radio to switch channels which causes latency to tank.
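One way to observe the hop (same caveat about the private airport path):

    # poll the current channel; per Apple's documentation it should jump
    # to the peer-to-peer channel (e.g. 149) when a session starts
    while true; do
      /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -I | grep ' channel'
      sleep 1
    done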


As weird as it sounds, $3000 machines still only come with one radio, so they can only tune to one frequency at a time. Similar latency spikes happen when roaming between access points. WiFi is a mess and will be for the foreseeable future. Personally I just use a 10G ethernet card and some cheap optics.


Rule 1: the medium ("air") is shared with everyone else.
Rule 2: it can use wifi direct, e.g. it doesn't have to be on the same network.
Rule 3: your wifi card may have a single antenna for a given band (2.4 or 5 GHz), such that you can only transmit or receive at a time.
Rule 4: interference causes packet loss (e.g. two people pressing talk on a radio).

Given those rules, with the 2.4 GHz band having 14 channels and the 5 GHz band having 190 or more, I would say being able to send things back and forth at all is still pretty damn good :)


Side note: each channel may have a different width (20, 40, or 80 MHz).

Frames may have other physical properties that need to match. So, too many options and just a physical limitation...


I have a non-Apple TV that supports AirPlay and sometimes when I try to get my iPhone to play something on it while the TV is disconnected (completely off) it causes my ASUS router to drop the WiFi signal completely for a few seconds.

I have a feeling that the iPhone keeps searching the network and my router cannot keep up with the requests or something? Not sure, but that's a theory I didn't get a chance to investigate.


I suspect there are many edge cases that cause this.

It’s not just the WiFi scanning or Airplay / Touchbar. I noticed this delayed ping pattern as far back as 2016. I spent some troubleshooting time and suspected it was something in MacOS but gave up.


Is there any sort of lower-level logging and monitoring in MacOS that would help with diagnosing such issues? I'm spending some time on a wonky wifi network, and so far it seems that it's high time to finally read about networking internals, starting with IP and wifi protocols, before I can even theorize about where to look and what to fiddle with.

In related news, behavior of bluetooth headphones likewise seems mysterious and impenetrable, especially with Android.


This goes to show why Activity Monitor is such a great tool for troubleshooting and killing suspicious processes that hog resources.


It's especially useful on the new M* macs. You can narrow down and see what apps or processes are using Rosetta.


Yet another reason why the Touch Bar is 100% Epic Fail!


I hit a touch bar function accidentally at least once or twice a day. It looks cool, but the functionality is garbage.


I couldn't get used to it at all. Every few minutes I kept increasing the volume or decreasing the screen brightness, and at first I never realised why, until I looked at how I rested my hands on the keyboard. A few days after I got the laptop I just got rid of it by taking out all of the touchbar buttons in the system preferences. I would have preferred the function keys a lot more.


This is also one of the many cases of Apple not following their own guidelines. They suggested that Touch Bar items should act statically like keyboard keys and not be used to display status, etc. (In reality, plenty were abused for that, I guess the allure of a dynamic colorful display was too great.)

So in this case, all they had to do was make it key-like and it wouldn’t have had any of the features that could trigger this problem.


Apple no longer has the hunger to get things right. It's sad, as I've enjoyed the ride since 1995, but without Steve and Jony, and without an "early acceptance stage" leader - instead having a "late acceptance stage" leader (Tim the accountant) - this is par for the corporate lifecycle: nothing lasts forever, and Apple is now just as bureaucratic as IBM, Sperry, DEC, etc. The key thing about bureaucracies that reach this point is they become more interested in preserving bureaucratic power and privilege than focusing on the customer mission. It's been repeated so many times you'd think someone would finally figure out and implement the formula to prevent it, but not even Apple has.


Apple: Completely resets bar for computing efficiency, has PC industry scrambling.

HN: "Apple no longer has the hunger to get things right."


That's pretty embarrassing. Perf teams at Apple play second fiddle to any "manager" with "a vision". This is the result.


On a tangential note, I’m a fan of the support cycle of macOS. I consistently stay a major version or so behind to avoid the various issues of the major upgrades while getting security updates, but I’m also happy they are able to put out major upgrades yearly (which I also get each year — just a year behind).

I don’t know if this will be consistent, but I was also able to do this with iOS, staying on iOS 14 for a long time while receiving security updates. Hadn’t noticed that ability in previous major versions of iOS and I’m not sure if it was due to device support of iOS 15. I hope that continues.


> I’m a fan of the support cycle of macOS.

Their what? They have no support cycle. Their cycle is "we support it until we don't". It's 2022 and it's an absolute joke. Windows, Linux, BSD -- basically everyone -- has published support dates aligned with all OS releases. For example, from official documentation you would know that Windows 10 LTSC 2021 is supported until 1/12/2027[1] and FreeBSD 13 is supported until 1/31/2026. Apple is the only one that refuses to publish any dates. The mystique bit is getting old. The Jobs-era magic is gone. macOS will continue to be a toy until they take support seriously, and that includes not breaking anything and everything with reckless abandon in every new OS release. No one who does IT with even a modicum of professionalism is okay with looking at Apple's latest critical security updates and suddenly finding out that OS version N is no longer included in the patch set. But that would take the mystery aspect out of it!

[1] https://docs.microsoft.com/en-us/lifecycle/products/windows-...


> Apple is the only one that refuses to publish any dates.

I don’t mean to defend Apple here, but one reason Apple may do that is because they essentially consider themselves a hardware company, and so the things they support are device models, not software versions.

For me, a good-enough approximation has been that you can consider a macOS version [major dot minor] to be supported until the day [major dot (minor+1)] comes out, or the day [(major+3) dot 1] comes out, whichever happens earlier [1].
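In shell terms, the coarse version of that heuristic is just "am I within the latest three majors?" (purely illustrative; 13 stands in for whatever the newest major happens to be):

    latest=13                                        # newest macOS major (assumption)
    current=$(sw_vers -productVersion | cut -d. -f1)
    if [ $((latest - current)) -lt 3 ]; then
      echo "probably still receiving security updates"
    else
      echo "outside Apple's de facto support window"
    fi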

(Full disclosure: I’m the author of that merge request.)

[1]: https://github.com/CISOfy/lynis/pull/1006


> so the things they support are device models, not software versions.

Except they don't publish any support dates with hardware either. Devices support the latest macOS/iOS/etc. until they don't. At which point devices become not only obsolete overnight, but vulnerable as well, because the OS suddenly becomes unsupported due to Apple's insistence on bundling feature updates with security updates. Remember, this caused a lot of controversy when Windows 10 moved to this model, but Microsoft had the foresight to publish support dates for when they would stop releasing security patches for the previous release. Unfortunately, Apple's cult-like fan base frequently treats them with kid gloves in this regard, and insists we should just be happy we got free updates until now and that we're not running abandoned-support-by-design Android.

I don't expect a device to be supported forever nor do I expect free feature upgrades. All I'm asking for is a published date that "X device or OS will receive security patches until Y date". That's it. It's a relatively simple request that is pretty much standard in the rest of the IT world so you can plan adequately.


I fully agree.


> while getting security updates

Apple releases security updates properly for the current (latest) version only. Older releases get security updates that sometimes don't fix all of the known vulnerabilities.

https://www.youtube.com/watch?v=o5KUvgXHOFU

https://www.intego.com/mac-security-blog/apple-neglects-to-p...


My problem with that is, Apple doesn’t offer upgrade paths from n-2 to n-1.

You need to either remember to upgrade before the next major release comes out, or stockpile the installation image before it’s gone from System Preferences.

Fail to do either, and you’re effectively forced to upgrade to the just-released version against your will.


They do; literally the first hit for "older macos download" is this:

https://support.apple.com/en-us/HT211683

For the latest 4 versions, you need to use the App Store. Before those 4, you can directly download an ISO/DMG from an "S3" bucket...



