"ERROR: could not open temporary file" after upgrade to Mac OS Big Sur (github.com/postgresapp)
227 points by devops000 on Jan 30, 2021 | 256 comments


It appears to be a PostgreSQL bug from my reading. The return code causing the problem has always been legitimate, even though you never expected to see it in practice.

I did a quick check of the source of two other storage engines that run on both Linux and macOS to see how common this oversight is. They both had code for handling this case correctly for years. It was probably never used, but it was implemented because the syscall docs allowed for the possibility.

This is the kind of thing I would have assumed PostgreSQL handles correctly as a matter of pedantry. Given how syscall-heavy PostgreSQL is, it will be a fair amount of work to fix.
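
For reference, the handling in question usually amounts to little more than a retry loop around the call. A minimal sketch in C of what such a wrapper tends to look like (hypothetical helper name; not taken from PostgreSQL or any particular engine's code):

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    /* Retry open() if a signal interrupted it before it could complete. */
    static int open_retry(const char *path, int flags, mode_t mode)
    {
        int fd;
        do {
            fd = open(path, flags, mode);
        } while (fd == -1 && errno == EINTR);
        return fd;
    }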


While open[1] is documented on macOS as being able to return EINTR, the documentation for sigaction[2] seems to specify (in confusing wording) that setting SA_RESTART prevents that. It seems to me like a bug in macOS, since they're going against their own documentation.

[1]: https://developer.apple.com/library/archive/documentation/Sy...

[2]: https://developer.apple.com/library/archive/documentation/Sy...

specifically:

> If a signal is caught during the system calls listed below, the call may be forced to terminate with the error EINTR, the call may return with a data transfer shorter than requested, or the call may be restarted. Restart of pending calls is requested by setting the SA_RESTART bit in sa_flags. The affected system calls include open(2), read(2), write(2), sendto(2), recvfrom(2), sendmsg(2) and recvmsg(2) on a communications channel or a slow device (such as a terminal, but not a regular file) and during a wait(2) or ioctl(2). However, calls that have already committed are not restarted, but instead return a partial success (for example, a short read count).


The Apple documentation matches POSIX, so if open() is returning EINTR for reasons other than a signal, that is a standards-conformance bug.


PostgreSQL isn't alone in this; VirtualBox (and I'm sure other virtualization products too) initially ran head first into the new API errors all over the place as well. The common use cases more or less work now, but there are still specific setups that you simply can't run on Big Sur.


I wonder if EINTR is now returned from syscalls that POSIX/SUS/other documentation always said could (sometimes) return it, but where PostgreSQL simply ignored that possibility on Mach, or whether it is returned from some completely unexpected places.


This seems to be the case. On the other hand, mostly everybody ignores the fact that open(2) is documented as possibly returning -EINTR, because it essentially never happens (in particular, network filesystems are usually implemented in a way that it will not happen even in situations when it clearly should).



This, from the PostgreSQL mailing list link you give, is interesting:

> AUTH means that the system call is blocked (on that condition variable I mentioned about), and the user mode daemon is asked about the generated event: "postgres, pid 999, open() on file /path/file with flags 0x400003" - something like that. The usermode can either allow or deny the event by replying.

The use of a user mode daemon to make decisions about whether file access is allowed reminds me of the way TOPS-10 on DEC PDP-10 did access control lists over 40 years ago. It's a method I'd like to see for ACLs on modern systems.

On modern systems the ACL is part of the per-file metadata. On TOPS-10 the ACL information was in a separate file that contained the ACL information for multiple files. I don't remember if it was one ACL file per user or one per directory.

An ACL entry could specify file name (wildcards allowed), account and group requesting access (wildcards allowed), program that was requesting access, and what kind of access. The wildcards and the centralization of the ACLs for multiple files is what made this system excellent to use.

When an access was attempted, the system first checked whether it was allowed by the ordinary per-file permissions. If it was, access was granted and the ACL daemon was not consulted.

If the ordinary permissions denied access and the caller had set a flag indicating it did not want the ACLs checked, the access was simply denied. If that flag was not set, the kernel would consult the access daemon, which would find the ACL file, check the ACLs within it to find the ones that applied, and make its decision.


Funny how we are still learning from what we built 40 years ago.


That documentation says "[EINTR] A signal was caught during open().", and the documentation there for sigaction with SA_RESTART says "If set, and a function specified as interruptible is interrupted by this signal, the function shall restart and shall not fail with [EINTR] unless otherwise specified." So as long as all signal handlers were installed with SA_RESTART (which is probably the most common case, since nobody likes dealing with EINTR), open() should never fail with EINTR.
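
To make that concrete, here is a minimal sketch (illustrative only, not PostgreSQL's actual signal setup) of installing a handler with SA_RESTART; per the POSIX text, syscalls interrupted by that signal should then be restarted rather than fail with EINTR:

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>

    static void on_sigchld(int signo) { (void)signo; /* no-op handler */ }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sigchld;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_RESTART;   /* ask the kernel to restart interrupted syscalls */

        if (sigaction(SIGCHLD, &sa, NULL) == -1) {
            perror("sigaction");
            return 1;
        }

        /* ... with SA_RESTART set, open()/read() should not fail with EINTR
           when interrupted by SIGCHLD (per POSIX, at least) ... */
        return 0;
    }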


I believe EINTR was always a possibility per the docs. This case has always been handled correctly in a few other storage engine code bases, and that code was unlikely to have been written for no reason even if EINTR never occurred in practice.

This seems to be a case of not checking or allowing for all possible return codes.


Does macOS respect SA_RESTART? [1] Apple's manpage[2] for sigaction is confusing...

Regardless, doesn't anybody at Apple use postgres? Breaking Postgres should be hard to miss.

[1] https://pubs.opengroup.org/onlinepubs/009695399/functions/si...

[2] https://developer.apple.com/library/archive/documentation/Sy...


"Regardless, doesn't anybody at Apple use postgres?"

This problem occurs if you use antivirus software that delays file operations, and it is 100% consistent with the API. Anyone saying "well, it didn't do it before!" is being idiotic, similar to saying you don't need to check the return of malloc because it's usually going to succeed.

pgsql runs perfectly fine.


Quite interesting to read old Stack Overflow questions on this topic which recommended "not to care about it".

https://stackoverflow.com/questions/4959524/when-to-check-fo...


That question is about Linux, which doesn't have this macOS-specific issue.


Yes, those things are usually dealt with using a "don't care about it..." attitude (not condoning it!) until you eventually have to care.


Ouch.

> The issue seems to be caused by some new security APIs in Big Sur. Apple has apparently started returning with errno=EINTR from some system calls that previously never returned this error. Fixing this problem on the PostgreSQL side is not trivial, since it seems to require changes in both PostgreSQL itself and possibly in extensions as well, so it's unlikely to be fixed quickly.

> These are the possibilities:

> - Try to disable security software (eg. Antivirus products) that might trigger the problem
> - Downgrade to macOS 10.15
> - Send feedback to Apple that they should fix the issue. PostgreSQL is unlikely to be the only affected software.
> - Hope that someone fixes the error in PostgreSQL

https://github.com/PostgresApp/PostgresApp/issues/610#issuec...


Those proposed fixes don't make any sense for macOS. Nearly no one runs antivirus software on a Mac, and downgrading to a prior OS version is basically impossible for most people.


We've been made to use Macs at work now, and they have McAfee on them that we can't disable; it causes no end of performance issues.


We also had anti-virus software on every Mac in a large enterprise. It took a lot of effort to at least exclude source directories.


Oooh, I wonder if I could get that here - things like npm install (presumably any package manager, certainly Homebrew) or git commands take obscenely long.


Commands that download random 3rd party code and dependencies from the internet are precisely the kind of thing that should be included in a virus scan, surely?


Especially since any npm package can execute any code at install time in npm's default configuration.

On Linux this is extra juicy since you cannot globally install any package without root unless you explicitly changed the directory's permissions and/or location.


By default, npm installs into the local project directory, as it should. Only the OS package manager should touch system directories. Before using the "--global" flag, think about what you're actually trying to do, and what the better way to do that would be. One conventional workaround is to install commonly used tools to "~/bin". Root is not required for that.

Of course, it's a good idea to keep the username used for development separate from the one used for browsing the web. It wouldn't be surprising for rogue npm packages to search for e.g. credit card details. I'm sure the browsers try to obfuscate that somehow, but how much can they really defend against code that is allowed to read the disk?


On Windows, npm uses %AppData%\npm by default (which is wrong, btw; it should use %LocalAppData%\npm, as %AppData% is meant for configuration files).

This means I don't require any elevation to install packages for my user without additional configuration.

Installing tools like yarn or @angular/cli globally is not uncommon.


I have no idea about windoze. I disagreed with the above incorrect statement about Linux.


It wasn't incorrect; you misunderstood. npm just uses silly defaults on Linux while using saner defaults on Windows and macOS. I am a Linux user, and having to configure npm to use another path for global packages that I use in literally all my projects is just bad UX.


Sure, npm has bad UX. (For an actual example, see the stubborn refusal to follow XDG spec.) This is not an example of that. Defaulting prefix to "/usr/local" is what every well-behaved Linux package does.

It seems odd to install so many packages at the "global" (but not owned by root!) level that using the "--prefix=~" flag would be a hardship. I just checked; I have three. You can't fault npm for this.


Only if you believe apps like McAfee are benign.

I'd say excluding source directories is less about engineers believing those are safe directories and more about engineers wanting to exclude their entire machine, with source directories being the compromise management went for.


Only if you believe apps like McAfee are benign.

If sysadmins have installed McAfee on your workstation, then presumably they want to use it. Installing it and then excluding code downloaded from the internet defeats the whole point. (The effectiveness/safety/whatever of antivirus is a completely separate issue.)


> The effectiveness/safety/whatever of antivirus is a completely separate issue.

Struggling to understand how this is a separate issue?


If you have antivirus software installed, you presumably want it to scan stuff that is downloaded from the internet.

On the other hand, if you don't believe/trust in the efficacy of antivirus software, then there's no point taking half measures and excluding some things from its scans, instead why use it at all?


> If you have antivirus software installed, you presumably want it to scan stuff that is downloaded from the internet.

I don't think you understood the original proposition. This is about corporate-controlled machines. Engineering teams didn't install this AV on their own machines. The point is they didn't install it; it's a company-mandated install. So no, there's no presumption that they want the AV to scan anything.

> if you don't believe/trust in the efficacy of antivirus software, then there's no point taking half measures and excluding some things from its scans

I 100% agree. Half measures / excluding some things IS pointless. But as I said in my above comment, that pointless half-measure may just have been the only compromise management would agree to.

> why use it at all?

Because it's mandated by company policy...


The problem is usually that AVs hook file operations to scan files. Unfortunately, software development performs a LOT of file I/O via package management and compilers, and in the case of compilers those files are internally formatted as files containing code (e.g. obj files, libs or executables), even if they only exist temporarily during the build.

Because of this, an AV product could work fine for every department of the company, but have an extreme negative performance impact on software devs. To give you an idea, it could mean the difference between a 5 minute and a 1 hour build. These issues are inherent to a generic AV product so often the fix is simply to add those folders to an exclusion list.

Does it provide security for those folders? Nope. But the alternative could make it impossible to get work done.


I'd agree. We had Java and used an enterprise package proxy to reduce the risks. Wouldn't do that for npm.

Also, these machines were on a separate network with no share access, for example - not on the corporate network with all the other machines.


That’s hilarious. John McAfee himself has a useful video that you can search for on YouTube with instructions on how to disable it.



That isn't unique to macs though, most of the anti-virus crap is.. crap...


> Nearly no one runs antivirus software on a Mac

There are companies which put antivirus software (e.g. McAfee) on the Macs they provide to employees.

From what I've heard, it's even more detrimental there than on Windows.


I got to use this combination a few months ago when I changed jobs. It was just incredibly unstable: macOS would crash all over the place, requiring reboots every other day or so. The MacBook used to overheat too, spinning up its fans while just sitting idle. Windows ME levels of instability, while the overheating led to battery failures on some MacBooks.

While McAfee has somewhat improved, I now consider Windows 10 with WSL2 a better solution for people who don't want a desktop Linux but need/want a Linux compatible development environment.


I mean, the fix is really to stop running McAfee on macs, but that aside

> Linux compatible development environment.

macOS is not, has never been, and will never be Linux-compatible. It's a Unix, Linux is a Unix; if you're looking for a Unix it will work just fine. It will not work if you're looking for a Linux; you'll have to go through a VM, with all that entails.


WSL2 is a Linux VM, and macOS also supports Linux VMs.


> WSL2 is a Linux VM

That is a singularly useless statement. WSL2 is a first-party deeply integrated VM-based system.

"All that entails" is very different between WSL2 and non-WSL2 hypervisors, even on windows.


[flagged]


This behavior is not acceptable here.


[flagged]


We've banned this account. Please stop creating accounts to break HN's guidelines with.

https://news.ycombinator.com/newsguidelines.html


Running it with Homebrew and in Docker, no problems. I am not using antivirus software, as it produces many problems on Mac (and is hellishly slow); I guess the problem really is triggered by antivirus software.




s/Postgres/macOS/


I run Postgres in docker containers on my Mac but I know that won’t be viable for a lot of people.


M1 docker is still in preview, so that's not an option for many folks running newer Macs.


I wonder what happens if you volume-mount your macOS files into the container; then the Docker daemon (I guess) would still have to do an open() in macOS that could get the EINTR. Does the Docker process handle it properly?


I run Linux and I prefer running all these kinds of services (Postgres, Redis, RabbitMQ and the like) in Docker containers. It's much easier to manage, easier to add/remove/clean, and it doesn't pollute my local system with random stuff.


I have been running Postgres in Docker on my M1 MBP for a while now. No issues so far.


Yeah, if you are running Postgres natively on OSX instead of in Docker you are behind the times. I’m sure this will be downvoted, but it needs to be said.


No it doesn’t. There are several Mac-native apps, such as QGIS, for which Postgres is a sensible local datastore. Running it in a container for this is an entirely unnecessary complication.


> you are behind the times

Seriously? What is wrong with running programs natively without any layers in between?

Also, IIRC PostgreSQL isn't recommended to be run in Docker due to possible data loss?


Because your average Unix program will basically spread itself all over your system, and is nearly impossible to remove. I don't want to pollute all my system directories with random gunk.

Keeping things in containers keeps it from making a mess, and that alone makes it worth it, to say nothing about the advantages for handling dependencies or running multiple copies or versions.


The original issue is about Postgres.app which very deliberately _doesn’t_ spread itself all over your system.


Your average Unix program, especially something like Postgres which has been developed for literally decades, will basically spread itself into well-defined compile-time locations which are easily removed by any competent package manager or even a competent manual build (touch package.stamp && make install && find $prefix -type f -cnewer package.stamp > package.file-list).

It's only 'random' if you don't know what you are doing. Containers are fine and often good, but their existence/availability doesn't 'supersede' real knowledge.


How does performance compare? Also, that doesn't look like a good option for M1 Mac users.


... alternatively, if you think running in Docker is suitable for everything and solves all problems, you are a noob.


.NET (Core) also has an issue with this. [0]

Apple seems to intentionally break quite a lot of core APIs lately.

0: https://github.com/dotnet/runtime/issues/47584


Returning EINTR is valid behaviour.


Yes, it is valid. But there is a difference between valid and consistent/backward-compatible. Microsoft goes out of their way to be backward-compatible (but doesn't always succeed, of course) and Linus (some guy who wrote some hobby OS, that won't be "big or professional like gnu") is very adamant not to break user space if not strictly necessary and unavoidable.

To me, it looks like Apple played fast and loose here, changing the expected behavior of something as fundamental as "open", without even mentioning it anywhere (and if they didn't even notice their changes would affect this behavior, that's another problem).

A change like this does not just break PostgreSQL, dotnet (and thereby anything running on dotnet), VirtualBox, and whatever else was mentioned by name, it will break almost everything in subtle ways, because almost nobody ever implemented correct EINTR behavior in the context of these APIs.


Why should Apple care about downstream apps not being coded properly?


Because they want people to consider their OS reliable? If you have software that runs just fine on the previous release of an OS, and breaks on the next one, are people going to consider the software buggy, or the OS? The perception will largely be "I upgraded to <x> version of the OS and <y> program stopped working" which quickly morphs into "<x> version of the OS is buggy because it breaks <y>, avoid it if you want to be able to do work", and if sustained long enough it then morphs into "<x> OS sucks because they constantly break software".

Take away good software and all that's left are Apple's "shinies". If it weren't for the potential game-changer that they have with the M1, I'd almost say that they wanted to kill off the Mac as anything other than an iOS development box.


When using Time Machine as a backup, rolling back to your previous OS version is a rather easy thing to do once you find that your required App X has not yet been updated to support the latest version. I personally never upgrade my macOS version until 6-12 months after release, and there is no need to, since the previous version(s) are supported for several years after release.


I'm very far out of my depth here, but from a quick read it seems like there is a commonly used flag SA_RESTART that should automatically retry these system calls instead of returning the EINTR error.

I've no idea whether the behaviour Apple introduced is allowed by the standard, but it seems that it is at least quite common to not explicitly handle this EINTR error because there is an alternative way to just tell the OS to retry instead of returning this error. So I wouldn't be so quick to claim that the downstream apps are coded improperly.


Oddly enough, POSIX does not state that opendir and readdir can return EINTR ;) - though I'm sure this must be an oversight? I doubt this is possible to guarantee in practice:

https://pubs.opengroup.org/onlinepubs/9699919799/functions/o...

https://pubs.opengroup.org/onlinepubs/9699919799/functions/r...

closedir may:

https://pubs.opengroup.org/onlinepubs/9699919799/functions/c...


Even if it's valid behavior, it's still a breaking change. The function never set this error code before, now it does, and at least 3 large projects are not working anymore.


Could Apple be better at communicating changes like this? Absolutely. Is Apple in the wrong here? No. This is a documented return code that has been ignored. What's the point of a standard if they can't use it?


Apple isn't following POSIX by ignoring SA_RESTART.

What is the point indeed.


I don’t think violating Hyrum’s law should be considered a breaking change, but I suppose that’s a matter of opinion.


Hyrum's law doesn't apply here. Tons of software is liable to be broken by this and at least three big projects have been broken by it. Furthermore, Apple isn't even obeying POSIX since it is ignoring SA_RESTART.

It's a breaking change.


Returning EINTR is valid behaviour, but not when SA_RESTART is set. The POSIX documentation for sigaction[1] is quite clear about that, even if Apple's corresponding docs[2] are confusingly worded.

[1]: https://pubs.opengroup.org/onlinepubs/9699919799/functions/s...

[2]: https://developer.apple.com/library/archive/documentation/Sy...


Seems like they're not handling EINTR for some syscalls.

I remember when I was first learning Unix programming and read about this stuff with signals and EINTR; I was baffled by how much of a steaming pile of crap Unices are. Later, learning about the inability to do truly non-blocking I/O was another such moment.


I'm amazed how many things Big Sur broke. Arduino development just doesn't really work any more because they broke the kernel extension for virtual COM ports. Also, one app locking up seems to break the entire system (I've had dev tools freeze up, and then Activity Monitor couldn't kill them).


The fact that kexts were going away was not a secret, and has been a thing generally known about for YEARS.


My work computer upgraded itself to Big Sur last month and killed the network card: neither WiFi nor wired worked after the upgrade, no matter what I tried. IT believes I had something installed already that conflicted with the new OS, but they weren’t able to figure out what it was and ended up having to wipe the OS and downgrade back to Catalina. Say what you like about M$ or Linux, but I’ve never seen an OS upgrade with so many problems in 30 years.


Apple is right here and people complaining about the new behavior are wrong. If a system call can return EINTR, you must retry or deal with the error some other way. A retry is trivial too. It's irresponsible to depend on an API happening not to EINTR when it's documented to do so.
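
As a sketch of how trivial: a generic retry-on-EINTR macro in C, similar in spirit to glibc's TEMP_FAILURE_RETRY (which macOS doesn't ship, hence the hand-rolled version; the statement-expression syntax is a GCC/Clang extension, fine under Apple clang):

    #include <errno.h>

    /* Re-issue a syscall expression as long as it fails with EINTR. */
    #define RETRY_ON_EINTR(expr)                        \
        ({                                              \
            long rc_;                                   \
            do { rc_ = (long)(expr); }                  \
            while (rc_ == -1 && errno == EINTR);        \
            rc_;                                        \
        })

    /* hypothetical usage: int fd = (int) RETRY_ON_EINTR(open(path, O_RDONLY)); */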


Huh, I’ve been running PostgreSQL (version 10 tho) on an M1 since they came out and have had no problems (knock on wood).

Maybe it’s only the case when importing large datasets? Or when antivirus software is installed?


macOS and importantly Homebrew's direction (which now doesn't support brewing from a sha-addressed github link) are leaving me at a crossroads.

I value stability and autonomy over everything else - I expect the setups I work hard on scripting to work for years, freeing me of this continuous keeping up with the magical incantation du jour. As a hacker, it's particularly frustrating to feel at the whims of someone else.

I think my next setup will be using my (company-issued) macbook as a thin client against a beefy unix machine. That way I'd keep a number of affordances (trackpad, iTerm, fancy Emacs etc) while having my core tooling absolutely stable.

(I have the vague impression that Linux distros have also dramas and instability of their own... looks like the BSD family is more minimalistic?)


That's a good way of verbalizing how I've felt with the last few iterations of macOS as well as Homebrew's perpetually changing behavior. Most of it feels like change for the sake of change. I've also used FreeBSD for several years, and it's the opposite approach - everything that used to work still does, and things are well-documented. Usually the man page is enough, or the FreeBSD handbook.

It's still all just Fire and Motion [0].

[0] https://www.joelonsoftware.com/2002/01/06/fire-and-motion/


The interesting thing about Homebrew is that there were perfectly good systems before it that were more stable and better designed (Fink and Macports, neither of which install to /usr/local) and apparently the only reason people switched to Homebrew is that it was "cool" because it was written in Ruby.


When I switched to macOS (about 11 years ago) I tried MacPorts first, but ended up with Homebrew because MacPorts required me to learn more things than Homebrew did before I could successfully install the 10 or so small command-line packages I wanted. (At that time, Fink was lacking a maintainer, IIRC.)

Note that I didn't want to take the time to try to determine which one had the better design and engineering.

(I didn't know or want to learn Ruby when I made the decision.)


Also, it displays a cool beer emoji when you install a package. And colors! Fancy colors and progress bars. And no sudo, except when needed of course.

And then there's the Ruby factor, of course.

It ticks all the important dopamine checkboxes of a MacBook-wielding, Starbucks-resident hipster type who has just purchased an .io domain and is ready to code their world-changing app.

Took me a while to figure out how to disable this stuff.


After years spent with Linux, then with macOS, my solution has been to finally move to Windows 10. My linux subsystem is as stable as I want it to be, I have full control over it. It takes a bit of time to get used to the Windows world and find the good GUI tools, but once you get more familiar with it things work ok. And Windows doesn't break my system with every upgrade the way macOS does (the things that change over time are mostly UI updates, not fundamental changes).

That's not for everybody, but after ~1.5 years I can say I'm satisfied with my choice; WSL has really been a game changer.


> As a hacker, it's particularly frustrating to feel at the whims of someone else.

Isn't this the very definition of what a hacker is - somebody who finds creative ways to get things done with systems they didn't create and don't control? If you are pushing the boundaries of a system, you have to expect that sometimes the underlying system stops supporting you.

A guarantee that the system you're "hacking" on will always continue to support your customizations in perpetuity as it updates seems to me to be the opposite of the hacker ethos. That's not hacking, that's using a system in a supported manner.


Before going to Windows, I left Homebrew and went back to Macports for a couple of years.

It did everything I needed and while you might argue that writing a Ruby formula is easier than packaging something for Macports, it was a feature I never really needed compared to the ability to easily install specific versions of software going as far back as I needed. Much more akin to the Debian approach, but with faster updates for newer versions.

But, as a hacker, I agree. If it's not neatly wrapped up for you by a package manager then you're perfectly equipped to pull down a prebuilt binary from a repo or compile it from source. If I felt at the whim of the apt package manager I'd still be complaining about the lack of XDG support in tmux, instead of wget'ing a tarball and doing a good old make && make install.


I like this approach and thought about it as well.

The only thing holding me back is that it's just so much effort. It's frustrating that we need to work around macOS. That's why I decided to refuse to update, and I want to stick with Mojave until EOL. There are a few known problems, but the workarounds are well known. My experience has been good so far.


> I think my next setup will be using my (company-issued) macbook as a thin client against a beefy unix machine. That way I'd keep a number of affordances (trackpad, iTerm, fancy Emacs etc) while having my core tooling absolutely stable.

You mind going into a little detail on how you’ll go about this?


So I'd have the Unix box and the MacBook both at home, networked locally.

On the MacBook I'd try hard to have no npm/rubygems/maven/... because these are all a security risk.

The Unix box would be heavily firewalled so that bad code could not phone home or reach my MacBook.

Communication would happen primarily via ssh (e.g. macOS iTerm -> Unix box), but also over an NFS share so that I can edit files from my IDE flawlessly (I use Emacs, which has a thing called Tramp that makes this unnecessary, but I'm hesitant about its effectiveness).

So, on my MacBook I'd have a small number of trusted tools like iTerm and Emacs, while everything else, like Ruby/node.js/Java/PostgreSQL processes, would happen on the Unix box. My workflow would typically be:

- edit files to the NFS share;

- issue commands to the unix box via ssh (an IDE can abstract this away); and

- have some port forwarding for accessing e.g. the webapp I'm working on from macOS chrome.

In the end it would be a pretty vanilla approach and not tremendously different from using a virtual machine via Vagrant. The difference would be that the macOS machine would be thinner, and that the unix box would have greater focus on security and stability.

And of course performance - with a real box I don't have to worry about Vagrant/Docker/... making my macbook overheat.


Not OP, but I've dabbled doing just that with a Digital Ocean droplet and using VS Code's Remote SSH extension. It's not lag free, but it's nice being able to access the same dev environment from multiple computers.


Good luck getting this new feature worked out while on a long flight. Remote dev setups are nice until the internet is not being nice to you (either non-existent or wobbly).


Ignoring the last year, how often do you need to code while flying? I can't even use a laptop in my seat unless I pay extra for the extra-legroom seats.


As someone who used to live on a different continent from my employer's office, I coded while flying at least 6 times a year, sometimes even more.

And while most would not fly that often, being able to code when the internet is not available or is in bad shape is not only a 'nice to have', but almost a 'must have'.


Thanks for sharing!


Obligatory reference to "Worse is Better"

https://dreamsongs.com/RiseOfWorseIsBetter.html

specifically the section discussing MIT vs BSD system call return codes.


what a shitty title...

correction: "PostgreSQL needs to be updated to comply with New macOS Big Sur security API"


I can only speak for myself, but when I see a headline that some software update breaks another program, I don’t assign blame. I consider it shorthand for “X doesn’t work on Y, but it works in other environments.” I have to dig a little to determine if Y has a good reason for its behavior.


As a long-term Linux user, I have been forced on occasion over the last few years to use MacOS. I am at an almost complete loss as to what anyone sees in it. Standard utilities that work seamlessly on Linux are regularly broken, there's a weird security system that requires me to do all sorts of boot-time nonsense just to have control over my own machine... and now, apparently, they're violating the first rule of kernel development by breaking userspace.

The hardware is nice, but as a platform, I just don't get it.


Please don't take HN on generic flamewar tangents. Platform flamewars, like programming language flamewars, quickly suck up all the oxygen in a thread, but they're fundamentally repetitive and boring.

https://news.ycombinator.com/newsguidelines.html


They are, and I apologize for starting one. It wasn't my intent.

For what it is worth, I found quite a few of the replies enlightening as to why others choose to use the platform. But: I take your point that it derailed the conversation.


Appreciated!


I've been using Linux regularly for 20+ years at this point, since Debian Slink. I've used it on gigantic servers, workstations, laptops, and embedded devices – it's wonderful.

But my day-to-day development machine is a Mac, and has been for a long time. This was originally because I was both a graphic designer and a developer, and needed access to tools like Photoshop and InDesign while still having a UNIX development environment, and the Mac was great at both of these. This basically still applies – I have access to lots of really well-designed native applications that I use all the time, and a full UNIX development environment that I use every day.

I really haven't encountered the issues you talk about in a serious way – the most frequent thing is forgetting that I don't have the GNU version of some CLI tools and need to use different flags. Certainly no encounters with a "weird security system". I have a working development environment, running on nice hardware, that gives me access to everything I want – it's hard to find a reason to make a change to this.

I do sometimes come back to working with Linux in a desktop capacity – most recently Ubuntu on a NUC when I shipped my MacBook back for repair. It is broadly fine, but it's still just a bit more irritating. Multiple monitor support was buggy, as was high-DPI support and Bluetooth. No doubt a different distribution and/or some tweaking could resolve these issues and leave me with a development environment that was equivalent to what I already have – but I'd still be missing other tools that I like, not to mention all the niceties of the wider ecosystem.

Honestly it's more likely to be the hardware that makes me change. I don't have the same complaints as others do about recent product lines, but I'm definitely one of those people in Apple's market gap in the "high-end-non-workstation-non-integrated-desktop" area. I'd fork quite a lot of money for something like a 16-core desktop machine, but probably not the £8k Apple wants for the Mac Pro. We'll see what happens over the next few years though.


> Certainly no encounters with a "weird security system"

You didn't use mac during the notarization system outage? I couldn't even use my keyboard and mouse (the ones on the MacBook, NOT Bluetooth ones) for 2-5 minutes after the lid was opened.

Never had such an issue with Linux; yeah, Bluetooth ones might stop working, but a wired keyboard (or the laptop's own) always works.

Yesterday, out of the blue, my wife's AirPods Pro stopped working with the MacBook AND the iPhone at the same time - they rejected the connection (or actually, on the Mac it looked like they were connected but the sound was coming out of the speakers, not the pods), and it was the same situation with the iPhone. I had to reset the AirPods to make them work.

And don't get me started on the completely stupid difference between ctrl+c and cmd+c. One (the standard for everything except macOS) is used in the terminal and the other in every other app.

For me a MacBook would be great for one thing: installing Linux on it, if only it had the Ctrl key in the correct place (that is, at the left corner of the keyboard, not next to Fn).


> And don't get me started on the completely stupid difference between ctrl+c and cmd+c. One (the standard for everything except macOS) is used in the terminal and the other in every other app.

This is completely backwards.

In Linux, Copy/Paste is Ctrl+Shift+C/V in various terminal programs, but Ctrl+C/V in other GUI programs

In MacOS/OSX/macOS since 1984, Copy/Paste has always been Cmd+C/V. ...because Copy/Paste are OS/GUI shell actions, and Control characters are not the same thing.

Microsoft broke this separation in Windows 1.0, and Linux has stupidly aped Microsoft ever since.

See also: Microsoft/Linux dialog boxes where the "Continue forward" option is located on the LHS (which in most locales, notably including the original context, means "move backward").


The difference was that Mac keyboards had an extra modifier key (PCs had Ctrl, Alt and Shift; Macs had Opt, Ctrl, Cmd and Shift), and PCs generally did not get one until the Windows 95 era.


I have Linux (Pop!_OS) running directly on a MacBook (basic dev work), and the different key combo for copy/paste in the terminal is one of my very few issues.

I'm debating with myself whether to remap keys (and then fumble on other Linux terminals), or to just learn the new key habit.


Check out my project before going around and remapping all of your keys. Kinto doesn't do anything like that. It will actually do all of the remappings you want via a single config file that you can easily add to if you really need to (I doubt it, though), and you can disable it or uninstall it at any time. https://github.com/rbreaves/kinto or http://kinto.sh


Copy/Paste in terminals is one of my greatest Linux peeves. It’s one of those things that just doesn’t stick… I can swap between Ctrl and Command all day but throwing Shift in there screws it all up. Eventually I just started right-clicking in Linux terminals for that particular functionality.


Same, and as I mentioned above, writing this was my solution. https://github.com/rbreaves/kinto or http://kinto.sh


Question: do Ctrl+Insert (copy) and Shift+Insert (paste) work in macOS? (Reason: I mostly copy/paste that way in Linux and Windows, in both shells and apps - I don't have a Mac.)

Unluckily, nowadays some notebooks don't have an Insert key, which makes me sad.


Question: do Ctrl+Insert (copy) and Shift+Insert (paste) work in macOS?

No. Mostly because Apple keyboards don't have an Insert key.

(They do have a "Clear" key, which I wish more developers would support. But I think so many are on the laptop or small keyboard they don't realize all the buttons that are available.)


That's mostly a per-app convention, and apps on macOS just don't need to use it.


Windows 1.0 was 1985, so a difference of a year is a bit silly. The majority of the world uses CTRL-C/V and has for decades, so that's the standard.


The use of Ctrl+C as a control character, to mean "ETX/end of text", goes back to ASCII at least, so 1960. And its use as Cancel/Interrupt goes back to the late 1960s at DEC and of course Bell/UNIX.

Apple (really Xerox/PARC in this case) needed a key combination for "Copy" in their first GUI, and decided not to break the usability of that preexisting standard (and all of the other Control chars, some of which are still useful today).

Microsoft Windows came along shortly afterward. They copied Apple/Xerox's choice of C/X/V for Copy/Cut/Paste in the GUI, but they broke the well-established standard meaning of Ctrl+C in the process. (IIRC, this is especially ironic because they were also breaking the meaning of Ctrl+C from MS-DOS.)

And sure, Microsoft then went on to become dominant, and by virtue of their dominance created a huge number of people who believe that the misuse of Ctrl+C for Copy is "standard".

But it still breaks functionality. Then and today, 35+ years later.

"Copy" should be the same action in all applications. If it is not, the OS/GUI is failing the user.


As per usual though, Microsoft mostly will let you do what you want the way you want to do it by giving you an option, while Apple is busy breaking your things and trying to convince you that theirs is "the one true way". Here's the Microsoft option for letting you use Ctrl+C to copy in their terminal - https://www.howtogeek.com/howto/25590/how-to-enable-ctrlv-fo...

If you want the same on Linux, it's available.


But Ctrl+C to Copy is the broken part, so I do not want it! :)

Ctrl+C is a signal to the running application inside the terminal, and should be passed without interference by the terminal emulator.

> Apple is busy breaking your things and trying to convince you that theirs is "the one true way".

If by that you mean "doing things the reasonable way and not needing to offer workarounds for their failures", then we are in complete agreement. :)

I want OneModifierKeyButNotCtrl+C to Copy in all GUI applications including terminal emulators. I would settle for SomeModifierKeyCombinationNotCtrlAlone+C to Copy in all GUI applications including terminal emulators.

I don't know if Microsoft offers that, but Linux does not. Copy/paste between a web browser and a terminal window is an exercise in frustration.


> But Ctrl+C to Copy is the broken part, so I do not want it! :) Ctrl+C is a signal to the running application inside the terminal, and should be passed without interference by the terminal emulator.

No, detecting when some text is selected and performing a copy instead of sending the signal is the correct way to do this in 2020.

If you don't like that, every OS lets you remap keys.

> If by that you mean "doing things the reasonable way and not needing to offer workarounds for their failures", then we are in complete agreement. :)

I guess next you'll be telling me that never implementing a real "maximize" feature for application windows in your OS is reasonable. :) (And no... Alt + click on the green button does not maximize every window.) As usual, with a Mac you have to continually work around Apple, using apps that will probably not work after a big update, if you want to do anything outside of the "Apple way".

> I want OneModifierKeyButNotCtrl+C to Copy in all GUI applications including terminal emulators. I would settle for SomeModifierKeyCombinationNotCtrlAlone+C to Copy in all GUI applications including terminal emulators.

And you can absolutely have that in Windows or Linux [0]. That's because both of them are easily orders of magnitude more flexible and have more options than any OS offered by Apple. Heck, I can't even turn off that idiotic marketing tactic of old, the startup chime, in my old 2012 Mac Pro.

Anyway - whatever you think is correct about Ctrl+C, there's no denying that Macs are completely inflexible in many, many, many ways, and the only reason you're on one, if you've got strong opinions, is that your opinions happen to align with what Apple thinks. If your opinion differs at all, you'll be the one bending. Enjoy that!

[0] https://www.google.com/search?q=change+copy+paste+shortcuts+...


> No, detecting when some text is selected and performing a copy instead of sending the signal

Ugh, that sounds awful! The Copy action should always be the same action, and it should always copy only, and never do unexpected, possibly harmful, things.

Re: Maximize. FWIW I have always preferred the OSX implementation -- it allows me two different window sizes that I can choose myself, and toggle between them. But I'll concede that if you come from Windows, having one of those choices taken away from you feels more natural. :)

Re: OneModifierKeyButNotCtrl+C. I can't speak for Windows, but this does not work on Linux. You can screw around with xmodmap and some apps to your heart's content, but you will never configure a GUI/DE-wide key combination for Copy that works in all apps. This is what you get, for free, on all versions of MacOS, ever.

Unless your argument devolves to "source code is available". In which case: OK I guess.

Re: macOS inflexibility. I mean, sure. And sometimes flexibility is critical. Other times, carefully-chosen defaults and strong consistency is more important. Anyway Linux's flexibility does not provide a path to solve this specific UX design error, so it doesn't help me here.

Don't get me wrong. Linux is great, and I choose it and use it happily for many things. I choose other OSes (all Unixes though!) for other purposes.

I have specific requirements for my desktop/console/terminal environment which might not match yours. MacOS meets them with default configuration plus some minor tweaks (e.g. focus follows mouse). Linux is not flexible enough to meet those needs, so I do not choose it for that purpose. All good.


> This is what you get, for free, on all versions of MacOS, ever.

You're trading one slight discomfort (in your opinion) for a thousand others because the rest of macOS is abysmal. There's hidden, undiscoverable functionality everywhere, the window management is absolutely disgusting, the file management is sorely lacking, the amount of tweaking you have to do to get decent terminal utilities is sickening, the hardware choices are extremely limited and the tyrannical company that builds it works regularly to own you.

No amount of rationalization will convince me that it's a good OS for anything other than building iOS apps.

Thankfully at least I can run a better OS on the hardware once Apple stops supporting it. Unfortunately I still must jump over many Mac hardware obstructions just to get something resembling a normal BIOS so I can even boot something else. It's just disgusting when I have to do it. At least I never gave Apple any money though since I buy it all used.

They are the China of the technology world and I wouldn't be proud to support them at all even if their desktop OS was actually desirable. Still, they own half (or more) of the phones in use in the US so I have no choice but to deal with them.


So let's not convince you, you can still experience what it is like to have a potentially better hotkey layout via my project if you want to try it out.

https://github.com/rbreaves/kinto


great post! ty


You didn't use mac during the notarization system outage? I couldn't even use my keyboard and mouse (the ones on the MacBook, NOT Bluetooth ones) for 2-5 minutes after the lid was opened.

True that this is a good example of a security system that affected people negatively – though ironically I was unaffected by this because I was on-site with a customer, working on an annoying network without internet connectivity. But yes – it's fair to say the ecosystem is not perfect.

And don't get me started on the completely stupid difference between ctrl+c and cmd+c. One (the standard for everything except macOS) is used in the terminal and the other in every other app.

This is not "completely stupid" – it's just a difference between platforms. The command key sends commands, and the control key sends control codes. The cmd+c combination will copy text in both the terminal and all other applications; ctrl+c will send an interrupt. If anything, this is one of the things that keeps tripping me up working on a Linux desktop after so long – I keep inadvertently sending ctrl+c instead of copying, because the control key is overloaded with both text manipulation commands and control characters! But this is just a muscle-memory and familiarity thing.


I have found it convenient that in both Linux and Mac terminals, highlighting text copies it, and middle-click pastes. I expect this is one reason why ctrl-c-for-copy isn't as much of a problem as you'd expect on Linux: most people copy/paste with the mouse only.


> You didn't use mac during the notarization system outage?

This has happened literally once in the history of macOS.

> And don't get me started on the completely stupid difference between ctrl+c and cmd+c.

Ctrl-C and Cmd-C are completely different actions. What is weird is that some OSes use the same keyboard combination for them.


You know, this is about the one thing I do like about macOS. Cmd+C/V should indeed be the universal sequence for copy/paste.

I particularly hate the way GNOME terminal uses Shift+Ctrl+C/V for this, because if you accidentally hit Shift+Ctrl+C in Chrome then it opens Developer Tools.


This is classic Linux - a herd of small furry developers all trying to pull a large heavy object in different directions while not listening to what anyone else wants because pulling on things is fun.

MacOS is a large heavy object being pulled by a mysterious force in a direction no one is sure they want to go, but at least it mostly works, kind of, on good days. Except when it suddenly changes direction and runs you over.

Windows is a large heavy object with an embedded malevolent eyeball and a huge trail of wreckage behind it. People keep signing up for more wreckage, and it's really mystifying.


I hope I am not shilling too much here, but hey, you can try out my Kinto app over here if you'd like. You'll get the keymaps you want. https://github.com/rbreaves/kinto


Yeah I switched to Linux on my primary work machine a few months ago and I am still flubbing the copy paste combos between the shell and gui apps.


It’s also much more ergonomic on hands and fingers.


Go figure... I actually like that the terminal has its classic Ctrl+ characters unimpeded by the GUI layer, which has its own namespace behind Cmd+

How do you copy text out of your virtual terminal?

The Fn key placement on the other hand... I’m with you.


You select it. No further steps required.


So every time you slip slightly with the mouse while clicking, you destroy the clipboard contents.


True, but if you mess up the selection step, you won't be able to copy to the clipboard what you wanted with a keyboard shortcut, either.


No, they mean that selecting text is a common activity that doesn’t necessarily imply I want to copy it into my paste buffer. If I want to keep the contents of my paste buffer, I have to force myself to remember not to highlight anything, or I have to go back and find the original text to copy again.

The number of times I’ve accidentally obliterated important and hard-to-relocate paste buffer contents on Linux by a wayward mouse action easily numbers in the thousands.


> copy it into my paste buffer

Selecting text never copies anything. Selections are just metadata that points back to the X client app that "owns" the selection and presumably also has the selected data. Selecting text simply sets the owner of the PRIMARY selection to the current X client, and "pasting" with middle click asks the current owner of the PRIMARY selection to copy the selected data into the window targeted by the click.

From ICCCM[1]:

>> Selections communicate between an owner and a requestor. The owner has the data representing the value of its selection, and the requestor receives it. A requestor wishing to obtain the value of a selection provides the following:

    The name of the selection
    The name of a property
    A window
    The atom representing the data type required
    Optionally, some parameters for the request 
>> If the selection is currently owned, the owner receives an event and is expected to do the following:

    Convert the contents of the selection to the requested data type
    Place this data in the named property on the named window
    Send the requestor an event to let it know the property is available

> If I want to keep the contents of my paste buffer, I have to force myself to remember not to highlight anything,

You're having this problem because you're using the PRIMARY selection when you should be using the CLIPBOARD selection which is more durable and intended[2] for data transfer and copy/paste-like semantics.

The Copy/Paste actions traditionally found in the "Edit" menu usually interact with the CLIPBOARD selection. This should be completely orthogonal to the middle-click feature and other interactions with the PRIMARY selection. You should be able to mix usage of both selections. Try this:

    select text "foo"
    use Copy from the Edit menu   # CLIPBOARD now points to "foo"
    select different text "bar"   #   PRIMARY now points to "bar"
    move to an editable window
    use Paste from the Edit menu  # pastes "foo"
    middle click the same window  # pastes "bar"
Unfortunately, the fact that the X server uses selections instead of a global buffer is not well explained, which causes the confusion.

[1] https://www.x.org/releases/X11R7.6/doc/xorg-docs/specs/ICCCM...

[2] https://www.x.org/releases/X11R7.6/doc/xorg-docs/specs/ICCCM...


The fact that you need a wall of text with multiple external sources to explain that I'm using copy and paste wrong is pretty conclusive supporting evidence in my favor.

If I have to read pages of documentation to understand how to do something that is utterly trivial and intuitive on every other major operating system, your approach is broken.


I'm just trying to help your "obliterated ... by a wayward mouse action" problem.

As I said at the end of the post, the actual behavior of some X Window System features is poorly explained and not obvious. The unexplained differences from the de facto standard behavior new users expect are a major usability problem. However...

> utterly trivial and intuitive on every other major operating system

It's worth remembering that when the standards document referenced in my previous post [1] was originally written... a lot of those "major operating systems" didn't exist. In 1987, MS Windows was on version 1.0. I'm not sure it even supported any clipboard-like features. In that era, minimizing memory usage and network traffic were important design goals.

> intuitive

"The only 'intuitive' interface is the nipple. After that, it's all learned." -- Bruce Ediger, on X interfaces

(ref: /usr/share/fortune/linux)


> I'm just trying to help your "obliterated ... by a wayward mouse action" problem.

That's fine and I do appreciate the information, but from the perspective of the overall discussion I think the point stands that the X approach to this is completely broken from a user perspective.

> It's worth remembering that when the standards document referenced in my previous [1] was originally written... a lot of those "major operating systems" didn't exist.

That's completely understandable, but we've had over thirty years to adapt. This argument is the same one that keeps Emacs greybeards defending M-k and M-y despite the fact that there hasn't been a Meta key on a single keyboard I've used in my 30 years of computing.

At some point you have to accept that the rest of the computing world has agreed upon a different standard than yours, and you either need to take steps to accommodate that or die.

That's not to say you can't try out new things while still abiding by existing standards (for instance, with advanced features like pastebuffer history), but that's completely different from clinging on to inferior historical approaches when everyone else has moved on.


If configured, sure, and if you use the mouse for all text selection. And also if configured, middle-click can paste.

Now how about Cut (Cmd+X vs Ctrl+Shift+X)?


I don't have an alternative for Cut, unfortunately. How do you select text in your virtual terminal with the keyboard?


> How do you select text in your virtual terminal with the keyboard?

with xsel(1x) or xclip(1)

https://github.com/kfish/xsel

https://github.com/astrand/xclip


tmux lets you select with just the keyboard; even when tmux is on a remote server and you are using it through ssh, you can select to your local clipboard. This is with iTerm, I don't know about other terminals. No idea how this works; I just tested in Terminal.app and it works too.


>You didn't use mac during the notarization system outage?

No, I didn't even know it was a thing until I read about it here. However, I don't use a lot of apps from the App Store, so I don't know if that was a contributing factor. I'm also not one to close and launch apps regularly. Terminal, browser, and TextMate are open 100% of the time. I usually have Photoshop and/or Illustrator open in the background as well. Don't be a quitter; you don't have to be a starter.

>And don't get me started on the completely stupid difference between ctrl+c and cmd+c. One (the standard for everything except macOS) is used in the terminal and the other in every other app.

The Mac has had the Cmd key since the very first days of Apple. In fact, it was originally two separate keys: open-apple and closed-apple. It was around well before Linux was a glint in Linus' eye.


> However, I don't use a lot of apps from the AppStore

I don't have any from AppStore, besides those that are factory installed and as I wrote previously, after resume from suspend my keyboard and mouse werent' working I was thinking that this macbook just died (and it was only 1 week old).

Thank god I found out that simply blocking the notarization server was enough to stop this stupid "security" feature from killing usability.

BTW, you could try to see what happens when you are far from WiFi (with poor connectivity) and try to wake the laptop from sleep - the same thing: keyboard and touchpad don't respond for 1-2 minutes.


I find it amusing that some Linux users so completely ignore the myriad problems users can run into with Linux on the desktop, from hardware incompatibilities to a much poorer experience in daily tasks (lack of commercial software like Adobe CS / MS Office, problems with hardware video decoding especially on popular services like Netflix, desktop environments like GNOME 3 that like to remove features periodically), that they can't even understand why someone would use a different platform.


The OP isn't talking about Linux users as consumers though, but about Linux developers. So:

> lack of commercial software like Adobe CS / MS Office

Usually inconsequential for developers

> problems with hardware video decoding

Same

> desktop environments like GNOME3 that like to remove features periodically

This is subjective.

1) You can choose not to use Gnome

2) Personally, I've been using Gnome for the past ~6 years and I haven't noticed any significant feature reduction. I'm sure there've been many, I just haven't noticed (hence subjective).

> that they can't even understand why someone would use a different platform.

I fully understand why people want to use Apple for their consuming needs. I really don't understand why my fellow developers choose to go with Apple as their work machine. I then frequently see they have to fire up VMs/containers to do work that I can do natively. No idea what the upside is for work (dev, coding-heavy).

PS: I'm writing this from my personal Apple computer. When I'm up for my hardware refresh at work, I'll 100% go with another Linux-compatible Lenovo/Dell.


>I fully understand why people want to use Apple for their consuming needs. I really don't understand why my fellow developers choose to go with Apple as their work machine.

Because many of your fellow developers do not have and don't want a separate machine just for coding? Why is this idea so alien to some people?

I used Linux distros (mainly Ubuntu before the Unity nonsense, then Arch; I even maintained a package in the AUR) from around 2006 to 2013-14. Every now and then I install Fedora/Ubuntu/Manjaro etc. to see if anything has gotten better enough to let me use it the way a universal device should be used. Every time the answer is a firm NO.

In fact, sometimes the distro won't even install for some weird reason. For example, I only managed to install Fedora on my current PC (built mainly for gaming) after changing some GRUB options (and spending two hours searching all over the web for the needed recipe). What the hell is this?

As a sibling commenter said - Linux is still only good as a close-to-target (server) work machine. It also has a few OSS tools for musicians and painters (like MuseScore and Krita). But that's it.

Installation is still a mess, updates are still unpredictable. Software distribution is still a nightmare. General consumer software availability is ... there is almost none.

If at some point my Mac becomes too unfriendly to work with as a dev tool (I don't really see it coming; some people on HN treat minor things as a catastrophe), I will build a small dev server (or rent one).


>> I fully understand why people want to use Apple for their consuming needs. I really don't understand why my fellow developers choose to go with Apple as their work machine.

> Because many of your fellow developers do not have and don't want a separate machine just for coding? Why is this idea so alien to some people?

Well, as a salaried employee, which I have been all my working life, you are generally required to have a separate machine for work. So yes, this idea is pretty alien to me.


Different fields I guess. Cultural differences too maybe.

My first serious jobs in IT were in consulting (Hyperion Planning in oil and gas in Russia), and that one company (aside from Accenture, where I had a bit of experience) was among the only places that required me to use the provided notebook for the job. Same for British Petroleum. Security reasons, etc.

Not a single software dev company I've worked for since around 2012 has required me to use their PCs (backend dev here), even though most of them do provide Macs, ThinkPads, or something else.

Anyway - many devs are self-employed and/or work remotely. They don't use corporate tech at all. Corporate VPN to access the net? Sure. But that's it.

Honestly, I won't even consider a job offer that would restrict me to an employer-provided PC.


> > lack of commercial software like Adobe CS / MS Office

> Usually inconsequential for developers

I agree that most developers view it this way, but I find this perspective hard to understand. Why would a developer, whose craft is software, go out of their way to not understand and use the software most people use? Including, of course, their own coworkers? I understand that a lot of developers are content, and able to function, in a bubble of their own creation, but I would find that incredibly limiting, both from a productivity perspective and from an understanding-software-as-a-whole (and by extension the world) perspective.

> I fully understand why people want to use Apple for their consuming needs.

You do qualify this as being only about developers a bit later, but statements like this seem fundamentally out of touch with creative work on computers (i.e., illustrating my point from the previous paragraph). Logic, Final Cut, Media Composer, Pro Tools, Cinema 4D, the Adobe Creative Suite, Max/MSP, Reaktor are just a small sampling of important creative software that run on Mac but not Linux. Saying it's about "consuming" is patently ridiculous.


It's not that we're going out of our way to not use them. If they worked on Linux we would have no problem using them at all.

The issue is that Linux is a free version of MacOS. I don't need a laptop to use it and I can just use a beefy desktop in my dev environment.

If I want to use a beefy computer my choices aren't Linux vs Mac, it's Linux vs Windows. And Windows is really annoying for a lot of us.

As for creative work, we are then forced to look for alternatives. With how costly Adobe is I wouldn't even use it on windows or mac tbh.

So no we're not trying to avoid anything. There's simply nothing for us.


I understand the viewpoint the way you're expressing it here completely. What I don't understand is when I see developers on Linux expressing confusion about why anyone would use macOS, as if there aren't obvious trade-offs to each platform.

Saying you'd rather spend less money, or just prioritizing other advantages of Linux over macOS, makes sense. But pretending it's not a big deal to give up most of the industry-defining creative and productivity software (notably excluding most developer tools) just doesn't make much sense to me.


I think it's because most software most people use works fine on Linux. E.g. browsers, email, steam, websites, etc.

The ones that don't aren't necessarily widely used software. They're just specialized software for specific fields. Another example is CAD, not necessarily something everyone uses.

I also wouldn't mind using my Mac as my main computer; the problem is that my 1080p 23-inch monitors don't work well with it due to pixel density.


My comment here addresses these points https://news.ycombinator.com/item?id=25971182


Because as a developer, none of the software packages you listed matter to me. They are as far removed from my line of work as is notepad.exe.

You have to realize that a lot of the software you listed is only relevant in the creative / media production industry. So it is really not software that most people use.


That's because I was replying to this comment:

> I fully understand why people want to use Apple for their consuming needs.

So I listed creative apps in contrast to their point about "consuming", but I could have listed Microsoft Office, or the iWork Suite.

I also think you're discounting how popular these apps actually are. Affinity Designer, Affinity Photo, Logic Pro, Final Cut Pro, GarageBand, and iMovie are all in the top ten charts for the Mac App Store (an interesting tangential point here is about how mobile eating so many consumer software use cases has shifted laptops and desktops more towards creative use cases).

But the point is, familiarity with these apps is useful for software developers for a couple of reasons:

1. While they may not be developer tools, they are certainly related: User-facing apps all have a UI, and most involve media of some sort, whether it's photos, videos, icons, etc... Being able to work with those file formats is a great supporting skill for a software developer.

2. As a software developer, whose "art" is software, it's good to be able to use applications that other users prefer to use, just like, say, an interior designer might be interested in how people decorate their homes. As a software developer, I'm extremely interested in the what and why of the software people like to use.


MS Office and PowerPoint are not software developers use?

The image and sound editing software is useful for development too, if you're working on an app or game and need to process any assets for it.


> I fully understand why people want to use Apple for their consuming needs. I really don't understand why my fellow developers choose to go with Apple as their work machine. I then frequently see they have to fire up VMs/containers to do work that I can do natively. No idea what's the upside for work (dev, coding-heavy).

Until we get a Linux laptop with great and fuss-free hardware, developers will continue buying MacBooks.

Great app support might be inconsequential for software development but not for quality of life. Programmers are people too.

If a programmer happens to do music production or art as a hobby, for example, which do you think they would choose: a MacBook or a Linux laptop?


> If a programmer happens to do music production or art as a hobby, for example, which do you think they would choose: a MacBook or a Linux laptop?

Don't the vast majority of companies have clauses in contracts that say whatever you create on your particular work laptop belongs to the company? I certainly have that in my contract.

In general, I would say that you're getting your work laptop for work. Work-related things should take priority over hobbies. Having to use a VM for various things is a strange trade-off when you say the upside is producing music as a hobby.


A better example of useful-to-developers software that isn't available on Linux would be prototyping and UI design apps like Sketch, Principle, ProtoPie, Framer Desktop, and Origami Studio. There's a good list here (https://uxtools.co/tools/prototyping).

I'd also argue that the Adobe Creative Suite is a great grab bag of tools generally useful in communication and digital asset preparation, running the gamut from bitmap and vector to video and motion graphics, all of which have uses when working with software, especially user-facing applications.


>> Don't the vast majority of companies have clauses in contracts that say whatever you create on your particular work laptop belongs to the company? I certainly have that in my contract.

What do you think the chances are of your company actually being able to enforce said clauses?


>> Don't the vast majority of companies have clauses in contracts that say whatever you create on your particular work laptop belongs to the company? I certainly have that in my contract.

> What do you think the chances are of your company actually being able to enforce said clauses?

Moderate to very high, depending on the jurisdiction you live in. I've seen it play out multiple times with colleagues; one even took it to court and lost.


> > lack of commercial software like Adobe CS / MS Office

> Usually inconsequential for developers

You're really focusing on a certain type of developer at that point (systems/kernel only?). As a developer of both front end and back end systems, I use Adobe and Office all the time.


I do both front end and back end development without either Adobe or Office. I'm not sure what you're using these for, but you most certainly don't need them for your existence.


Sure, and no one needs GCC for their existence either, but that sets a pretty high bar, doesn't it?

Wouldn't a better bar for judging software choices be "does this make you more or less productive?" rather than "is this an existential necessity as a software developer"?


If the bar is 'need for existence' then sure almost no one needs those tools.


> I then frequently see they have to fire up VMs/containers to do work that I can do natively. No idea what's the upside for work (dev, coding-heavy).

When you figure it out, let the rest of us know, because this is exactly what I think about every one of the breathless articles about how great WSL is. Speak of the devil... https://news.ycombinator.com/item?id=25965231


I’m in the same boat. I have to use a Mac for work and it’s fine. But doing personal projects on my personal laptop on Linux is just so much easier - containers are faster, Emacs is faster, upgrades don’t break things etc.


20 years ago I wanted to try graphics programming. A friend of mine suggested Linux because it came with free software. Gimp. Six months of recompiling and reinstalling Linux led to a career in server development. Linux took my dreams of graphics development and turned them into a much higher-paying career.

I still have mixed feelings about it.


the terminal eats us all in the end


The lack of commercially supported software with restrictive licensing is certainly NOT a negative!!!


As a Linux user, you are basically safe because most likely all the applications you use are open source, and malicious intent in a program would result in your distribution removing it. As a macOS user I'm quite happy with the restrictions and sandbox of macOS, as I also occasionally use Windows and see what's going on there: third-party applications that you install scanning your hard drive and extracting calendar entries, contacts, heck even your Steam game list, just to upload it to the cloud and sell the data. Ten years ago those apps would be reported as mal-/adware by your antivirus, but since this has become "the new default" nowadays, you cannot trust any application on Windows anymore - especially since the largest corporations that you thought were trustworthy, like Adobe, are in on this.

So yes, security on macOS is annoying sometimes ("Do you want to allow App X to access your Downloads folder?"), but I can see why allowing software arbitrary disk/network access nowadays is not a good idea anymore.


I see your point on the sandbox, and I agree that Windows is in many ways even worse as a consumer platform. I wasn't aware that things had gotten so bad there.

My complaint with MacOS is less about the existence of SIP (though I can't say I enjoy it) and more about things like bash, python and even SSH being regularly broken. (These are all actual problems that I've spent hours solving, not theoretical issues.) I don't understand why technical users voluntarily put up with this.


> bash, python and even SSH being regularly broken

Broken how? That's surprising - in my experience, the versions of these utilities on macos are often missing features relative to linux, but the differences are known and documented...


> bash, python and even SSH being regularly broken

Reference needed re: SSH, but as to Bash/Python/etc, did you know Apple bundles “old” versions of both? You should not be using the system provided ones imho if you are a developer.

Simple solution:

brew install bash coreutils python

That will very likely solve all the complaints you listed; the second one is really for those who don't understand that BSD vs GNU/Linux userland CLI options aren't universal.


> I see your point on the sandbox, and I agree that Windows is in many ways even worse as a consumer platform. I wasn't aware that things had gotten so bad there.

It's because of exaggeration.


Not breaking userspace is the first rule of Linux kernel development, but macOS has never had that policy as far as I know. Apple cares much more about being able to move fast and clean up bad interfaces in their software, which is a priority I value.

I’m not saying that’s what happened here, but I do know that Apple ultimately expects developers to pay attention to new releases and make sure their software behaves properly with them.

As for standard utilities being broken, I’ve never experienced that on macOS. I do know that Macs get their standard utilities from FreeBSD and not coreutils by default.


Of Linux *kernel* to userspace ABI development.

The user-space ABIs are widely unstable. Try to run software compiled on Ubuntu 20.04 on 18.04 for example. On macOS, it's just the matter of using -mmacosx-version-min= to make sure that newer functions don't get used.

Or for that matter, even in the reverse case, things break.


Of course that doesn't work. Why would anyone expect it to? Install your baseline distro version in a chroot and set GCC's --sysroot, alternatively use a container, alternatively use a Flatpak SDK, problem solved.

The main problem with this is knowing which libraries have stable ABIs and which don't, which is only documented in a convenient way for enterprise distros (RHEL/SLES).


Making it possible for open() to return EINTR on a regular FS is not really cleaning up bad interfaces. Is there even a precedent on any Unix for that behavior?


It's part of the documented POSIX interface, so correct code has to handle it: https://pubs.opengroup.org/onlinepubs/9699919799/functions/o...


Only if the signal that was caught isn't using SA_RESTART: https://pubs.opengroup.org/onlinepubs/9699919799/functions/s... ("If set, and a function specified as interruptible is interrupted by this signal, the function shall restart and shall not fail with [EINTR] unless otherwise specified.")

Therefore, correct code can also get away with not handling EINTR, as long as it uses SA_RESTART for all non-fatal signals. That's probably the most common approach (nobody likes EINTR).
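For anyone unfamiliar with the mechanism, here is a minimal sketch (not PostgreSQL's actual code) of opting into SA_RESTART via sigaction, so that a signal arriving during a restartable syscall resumes it instead of surfacing as EINTR:

  #include <signal.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  static void on_alarm(int sig) { (void)sig; }   /* timer fired; nothing else to do */

  int main(void) {
      struct sigaction sa;
      memset(&sa, 0, sizeof sa);
      sa.sa_handler = on_alarm;
      sigemptyset(&sa.sa_mask);
      sa.sa_flags = SA_RESTART;   /* restart interrupted syscalls instead of returning EINTR */
      if (sigaction(SIGALRM, &sa, NULL) != 0) {
          perror("sigaction");
          return 1;
      }
      alarm(1);   /* a blocking read()/open() after this point should be restarted, not interrupted */
      return 0;
  }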


There are literally millions of apps; Apple can't expect all of them to be up to date on day one, not even software like Postgres. You can't just change the semantics of an API; it's just wrong. Apple should introduce a new API, deprecate the older one, and remove it two or three versions later at a minimum. They just don't care, as long as people keep buying their hardware.


Traditionally, Apple does not guarantee total ABI stability. They push betas instead to allow people to fix the issues, and then support older releases for multiple years.

Yes, unmaintained software might or might not stay fully functional on macOS for long, but that's not an Apple priority.


I have a linux Desktop, Windows, and Mac. I was recently trying to work out how I could do my media handling on the Linux Desktop (KDE). It didn't just work. I tried numerous apps and all of them failed to meet my expectations.

My non-development tasks... those things in life I use a computer for... are just harder on Linux desktops or have a poor experience.

MacOS, on the other hand, has a great experience for most of these everyday life kinds of activities. Even for many business activities it's better. Here I refer to the non-development parts of business.

If all you do is systems-level coding then Linux is great. But that's not the case for most folks.


After 6 years of using a Mac at work for development with OSS tools, I switched to a SFF PC with Ubuntu and GNOME and never looked back. Everything is so smooth. Apt-get this, apt-get that, and I have working software. No need to homebrew development tools, install Xcode using an Apple account, withhold upgrades, and fight every iteration of the OS.


What activities do you mean by media handling?


> The hardware is nice, but as a platform, I just don't get it.

To me, that’s backward. The hardware is okay, the platform is amazing.

Nothing even comes close to the full ecosystem. Phone, tablet, headphones, watch, computer.

Occasionally, quite rarely in reality, something goes wrong. However, I don’t use bleeding edge versions of macOS and so I can’t even tell you the last time I actually experienced one of these problems that everyone on HN gets in an uproar about.


MacOS is a perfectly fine Unix, but a terrible Linux. If what you actually want is Linux, MacOS will always come up short.


I have been using Linux exclusively as my OS since 1997. Except for a 2-year hiatus some 8 years ago where I got a Mac and used Mac OS.

I can 100% relate to your experience. It was painful. There were oddities all over the place. Some software I was used to using was simply unavailable. Package managers... oh, I missed apt so much... And it was much slower.

Let alone the main compelling factor of Linux for me: that it is open source, and that I can inspect anything, even the kernel, if I need to. Sure, you occasionally need to fix some driver or configuration "CLI-style", but I'm more than used to this.

But basically, with Linux I feel I control and own the system. With a Mac it's pretty much like Windows: they own you, and for the most part you don't know what's going on, although macOS is still orders of magnitude better than Windows.


I prefer Macports and macOS over Linux package managers because Macports is like a rolling-release package archive without the disadvantages of rolling release on Linux. On Linux, I can use Arch and have the latest of all software - which means I have the disadvantage of having to stay up-to-date on things like dhcpcd and systemd and coreutils. I don't want to have the latest versions of these things or spend time routinely upgrading them piecemeal.

Or I can use Debian stable and everything is stable, but if I want something new I have to compile it myself, or hope there's a backport.

With macOS and Macports I get a stable base system that Apple keeps up-to-date security-wise with monolithic updates that I don't have to worry about, and Macports has the latest for things like Emacs or even the GNU coreutils.


> which means I have the disadvantage of having to stay up-to-date on things like dhcpcd and systemd and coreutils

In the past 8 years of regularly updating my rolling release Linux systems, I have spent significantly more time reading headlines about macOS releases breaking software, than I have spent staying up-to-date with dhcpcd, systemd and coreutils. They are ridiculously stable.


Yup, MacPorts would upgrade everything and break stuff like FFI code. That is exactly why I prefer to use a Linux LTS release like Debian, Ubuntu, or even openSUSE Leap with its 18-month release cycle and uneventful upgrades that you can now even roll back.


You can use Nix on "any" Linux distro with the unstable channel and get the most bleeding-edge software from Nix's binary cache, or use Linuxbrew, Flatpak, Snap, or AppImage.


I’m a full stack dev and I occasionally need to use a bunch of graphical and design apps. Occasionally enough that Linux isn’t practical, anyway.

On the data programming side of things I don't see much issue with it either, though. The issue in this article would've definitely bitten me, as I run like a dozen Postgres instances, but I haven't updated to Big Sur - I'm never in a rush to do so.

The only thing I miss from my Linux days is the package manager, maybe? Brew is third-party and doesn’t feel as tight as some of the Linux counterparts.


Perhaps by “anyone” you mean developers? I can think of innumerable reasons why regular mom and dad users use MacOS over Linux.


I do, and I should have said so. I forget that macOS actually has a large consumer following, because I usually encounter it with startups who have chosen it as their standard fit for developers.


I just ssh into my Linux boxes from macOS. Problem solved (for me, at least).


So you've overcome the issues macOS threw at you. What's the upside? What does it give you?


Apple's integration / syncing between other Apple devices is quite a nice feature.

I regularly take/make phone calls from my desktop or laptop, routed via my phone. Same with SMS. All of my contact details (and photos/etc) are synced with my phone, tablet, desktop and laptop. My logins/passwords are synced between all devices. I can seamlessly switch devices and pick up open browser tabs. Plus I use Apple software that isn't available on other platforms (Logic Pro specifically here).

Sure many of these can likely be done on other platforms too, but on macOS these things (and more) are seamless and don't require any other software, nor configuration (beyond setting up one's iCloud login on each device).

Don't get me wrong: I'm a developer, and use Linux in local VMs and on local and remote servers. But all of these Apple features targeted at 'normal users' are quite nice, and one quickly gets used to them. But perhaps not everyone's cup of tea.


Photoshop, if that’s your thing?


pretty sure you can use photoshop via wine on linux distros


The hardware is nice, and until a couple of years ago there was nothing available in the PC world that matched it, but the real draw is that it's the best mainstream GUI experience on the market today. I use macOS, Windows, and Linux every day. Windows and Linux's GUIs lack little affordances like the document icon on a window that you can drag and drop.

Plus there is nothing equivalent to OmniFocus outside of macOS, and that is one of the tools I depend on most heavily.


On the other hand, macOS and Windows are missing little affordances like the selection buffer, middle-click paste, focus-follows-mouse...


I manage literally 100k+ Linux nodes for my day job. I don't want to manage any more than I need to, so my workstation is a Mac.


How are you managing them? Which distro are they running? Is there any difference between managing 1k and 100k operating systems?


Support for Microsoft Office, Adobe CC, and historically a POSIX environment were an enticing combo. Recently it’s gone off the rails and become miserable to hack around in.


They didn't break userspace. Your opinions as a "long-term Linux user" are effectively useless.


So when do average developers just start considering macOS toxic and stop supporting it? About once a week, a story similar to this hits the HN front page: "$majorsoftwareproduct doesn't work on new OSX because they broke documented APIs in undocumented and unexpected ways, without any warning".

What the hell, Apple?


Please don't take HN on generic flamewar tangents. Platform flamewars, like programming language flamewars, quickly suck up all the oxygen in a thread, but they're fundamentally repetitive and boring.

https://news.ycombinator.com/newsguidelines.html


> So when do average developers just start considering macOS toxic and stop supporting it?

Presumably at the same time breathless and misinformed comments stop being posted to HN.

In this case returning EINTR is documented behavior and the bug is thusly on PostgreSQL not handling it (correctly or at all).


How is returning EINTR undocumented? It’s part of the POSIX interface.

It is true that typically Apple doesn’t care if they change things within the limit of the specification and 3rd party applications fail. That’s why they release beta versions.


Standards aside, returning EINTR on a write to a local file is extremely rare on POSIX systems.

Almost any syscall can return EINTR, but in practice, on all previous OSes, it has been reserved for calls that can hang for an undetermined amount of time: that is, anything involving the network. Local I/O should complete in a bounded amount of time, so that kind of sleep is usually implemented as uninterruptible.

There's one highly relevant edge case: NFS. NFS looks like local I/O, but if the NFS server goes away, what do you do? Traditionally, the process just hangs in the hope that the server will come back, which leads to the dreaded issue of processes being stuck and unkillable. On Linux, you can add the "intr" flag to the mount (possibly via a remount) to allow signals to interrupt the syscall and break the deadlock, but then you break a ton of software, because, again, EINTR on local file I/O is unheard of.
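As an aside, the usual defensive pattern for write() has to cope with both EINTR and short writes. A rough sketch of such a wrapper (the helper name is made up, not taken from any particular codebase):

  #include <errno.h>
  #include <stddef.h>
  #include <unistd.h>

  /* Hypothetical helper: write the whole buffer, retrying on EINTR and on short writes. */
  static ssize_t write_all(int fd, const char *buf, size_t len) {
      size_t done = 0;
      while (done < len) {
          ssize_t n = write(fd, buf + done, len - done);
          if (n < 0) {
              if (errno == EINTR) continue;   /* interrupted before any data was written: retry */
              return -1;                      /* real error */
          }
          done += (size_t)n;                  /* short write: loop to write the rest */
      }
      return (ssize_t)done;
  }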


This EINTR is returned here for attempts at file accesses trapped by an antivirus/security layer, which can indeed take an undetermined amount of time, without the OS knowing how long that is...

It was probably a hard choice for Apple to make, too. Should a thread be frozen for an undetermined amount of time while an antivirus does the check?


> Should a thread be frozen for an undetermined amount of time while an antivirus does the check?

Yes, certainly it should.


> Should a thread be frozen for an undetermined amount of time while an antivirus does the check?

It certainly does on other OSes and on previous versions of this one?

Do you really want to rewrite all existing software because it would be nice, in 0.01% of cases, for a thread not to be "blocked" by an antivirus on file open and to be able to do something else instead, and that not through a specialized API but through good old syscalls? From a system design point of view that makes no sense.


> Do you really want to rewrite all existing software because it would be nice in 0.01% of cases that a thread is not "blocked" by an antivirus on file open

No. First, it's not a rewrite but a bugfix patch. Second, you will want this fix because open can return EINTR. It's documented. The code (postgresql in this case) is wrong. It should be fixed. simple.


POSIX is full of areas where every OS provides greater guarantees than required, or in some cases even deviates. It makes no sense to gratuitously / carelessly remove some of those guarantees and ask the world to fix their "obviously" broken programs, programs that nevertheless worked perfectly on this (previously not really disputed) point for decades, especially when the advantages such a change can provide are not clear at all...


I'd be surprised if an antivirus on Windows failed the read attempt from random software instead of temporarily hanging it. It would most likely be blamed as a bad AV there, instead of just being accepted because Apple always does the correct thing.


It isn't a failure code; EINTR is a code that explicitly tells you to retry.


The equivalent API on Linux (https://man7.org/linux/man-pages/man7/fanotify.7.html), AFAIK, does block for an undetermined amount of time, and doesn't make the open() system call return EINTR.
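For anyone curious, a rough sketch of how a fanotify-based security daemon gates open() on Linux; the watched path, the one-event-at-a-time read, and the minimal error handling are simplifications, and this needs CAP_SYS_ADMIN:

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/fanotify.h>
  #include <unistd.h>

  int main(void) {
      /* FAN_CLASS_CONTENT is required to receive permission events such as FAN_OPEN_PERM. */
      int fan = fanotify_init(FAN_CLASS_CONTENT, O_RDONLY);
      if (fan < 0) { perror("fanotify_init"); return 1; }

      /* Gate every open() below the mount containing /tmp (illustrative path). */
      if (fanotify_mark(fan, FAN_MARK_ADD | FAN_MARK_MOUNT,
                        FAN_OPEN_PERM, AT_FDCWD, "/tmp") < 0) {
          perror("fanotify_mark"); return 1;
      }

      for (;;) {
          struct fanotify_event_metadata ev;
          if (read(fan, &ev, sizeof ev) < (ssize_t)sizeof ev) continue;  /* simplified */
          if (ev.mask & FAN_OPEN_PERM) {
              /* The caller stays blocked inside open() until we reply. */
              struct fanotify_response resp = { .fd = ev.fd, .response = FAN_ALLOW };
              write(fan, &resp, sizeof resp);
          }
          close(ev.fd);
      }
  }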


> EINTR on a write to a local file

Apparently it was on an open() call on a local file, which is certainly ... unusual.

https://www.postgresql.org/message-id/20210115210548.zfbnulf...


So, the old ‘Windows runs 50-year-old software and therefore it drags along ancient APIs’ argument.


Welp, looks like it's happening now, so Postgres should fix it.


HN should not be considered representative, and is suffering a great deal of echo chamber self-affirmation here. Apple requires developers to keep up with major breaking changes on their platform, and some folks don’t like that. HN has at least one concentration of such people, seemingly used to Linux/Windows development, who bring an expectation that “code once written should work for years without needing to keep up with OS changes”. That expectation is well-known to be void when working with any Apple platform.

open() was documented to be capable of returning EINTR. The Postgres team chose not to write code to handle that case. Now they’ll have to write code to handle that case, or drop support for macOS. That’s life on macOS. They’ll fix it or they’ll drop macOS support.

Keep up or walk away. Repeatedly complaining about macOS being a moving target will not change Apple’s direction. Boycotting it will not either. Either target the platform on the terms provided or drop support for it.

But for the sake of HN, please just let it go. If you don’t want to be macOS hackers, then just don’t be, and move on to another post. Let those of us who do want to be macOS hackers contribute value to HN instead, by talking about the minimum-viable workarounds and how to find other broken code in Homebrew, instead of driving us away. (For example, there’s an obvious way to solve this that doesn’t require disabling SIP.)

The recurring outrage and the reposted miserable viewpoints framed as ‘questions’ with nothing new or curious in them make HN look immature and unprofessional. Everyone that y’all might have swayed has already been swayed, everyone that hasn’t been swayed has stopped commenting about it, and all that’s left is the echo chamber complaining to itself.


Last summer I stopped using macOS (and the MBP) as my primary development machine. My development happens on a ThinkPad X1 with an Arch-inspired distro + XFCE desktop combo. I still use macOS and the MacBook Pro for meetings, presentations, etc.

The development experience on Linux is sublime. I feel so much more productive. On the flip side, it took me two or three weeks to customize and setup the way I like it.

In the end, it was worth it.


About two years ago.

I read all of these posts and think: I'm so glad that I dropped OSX support. A lot less entitlement and a lot less work :)


[flagged]


We love our walled garden.

Don’t upgrade for a few weeks, and the hardcore fanboys will have fixes for all the annoying little breaks.


Apple does exactly what the API is documented to do, specifically in combination with antivirus software, and incorrectly written programs show small faults as a result.

Now read the comments before. This borders on parody at this point.

It would be cool if all of the "this is the last straw!" and "welp, guess this is when I leave Mac" people who appear in each of these threads just, like, fuck off and leave then?


Apple keeps throwing spanners in other companies' wheels. Why spend time on new product development if you have to commit teams to solving whatever Apple decided to break this time? This is anti-competitive behaviour that keeps companies working in the Apple ecosystem in check. Unfortunately, thanks to excellent marketing, the user base is not going anywhere, so companies need to put up with it.


On Jan 23 I started up a rarely used Windows laptop to do C++ development work, because my app depends on a library that depends on a Microsoft library that was not signed, and therefore won't run with the new update.

We develop on macOS and deploy on Linux; macOS is supported as a side effect, and Windows was not supported... A single policy change without a workaround made us support Windows and Linux, and macOS was left in the dust.


N of one, but this user is in the process of leaving Apple.


Sigh. Looks more and more likely Mojave will be the last OSX version I’ll ever use.


> Many system calls will report the EINTR error code if a signal occurred while the system call was in progress. No error actually occurred, it's just reported that way because the system isn't able to resume the system call automatically. This coding pattern simply retries the system call when this happens, to ignore the interrupt.

> For instance, this might happen if the program makes use of alarm() to run some code asynchronously when a timer runs out. If the timeout occurs while the program is calling write(), we just want to retry the system call (aka read/write, etc).

It's perfectly valid behaviour, and the solution is to retry in that case.
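For reference, the retry idiom looks roughly like this (a sketch; the helper name is made up and this is not the PostgreSQL code):

  #include <errno.h>
  #include <fcntl.h>
  #include <sys/stat.h>
  #include <sys/types.h>

  /* Hypothetical helper: keep retrying open() while a signal interrupts it. */
  static int open_retry(const char *path, int flags, mode_t mode) {
      int fd;
      do {
          fd = open(path, flags, mode);
      } while (fd == -1 && errno == EINTR);
      return fd;
  }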


Setting the SA_RESTART flag using sigaction[1], which Postgres does, makes it invalid behaviour. Even according to Apple's (confusingly worded) documentation[2].

[1]:https://pubs.opengroup.org/onlinepubs/9699919799/functions/s...

[2]: https://developer.apple.com/library/archive/documentation/Sy...


The point is that this obsession with more and more restrictions is death by a thousand cuts.


Your solution will hang Postgres at 100% CPU, with no indication of what is wrong.


Same. Much as I’d hate to do it I’ve been eyeing a Windows system, with Linux either through the OS layer or virtualized.


I think kernel virtualization support is better on Linux than Windows, so you may be better served running Windows guests on Linux. I think people are even gaming like this for a minimal penalty with hardware pass-through.

I absolutely can't fathom how anyone could make Windows their daily driver. Having to unbloat, unspy, unfuck a brand new system, I mean...


If you're willing to go to a virtualized Linux in order to switch to Windows, why wouldn't you do the same on MacOS?


And while they're busy breaking their current set of APIs, they're dragging their feet on adding anything designed in the last 15 years. Someone just wrote to me with a complaint because my software doesn't build on macOS due to missing getline and fmemopen implementations.

Naturally I told them to use an open source operating system and declined to update my code to accommodate Apple.


getline and fmemopen are both supported on macOS.

  [/Library/Developer/CommandLineTools/SDKs/MacOSX11.1.sdk/usr/include]
  -> % grep -r -n getline stdio.h
  stdio.h:355:ssize_t getline(char * __restrict __linep, size_t * __restrict __linecapp, FILE * __restrict

  [/Library/Developer/CommandLineTools/SDKs/MacOSX11.1.sdk/usr/include]
  -> % grep -r -n fmemopen *
  stdio.h:356:FILE *fmemopen(void * __restrict __buf, size_t __size, const char * __restrict __mode) __API_AVAILABLE(macos(10.13), ios(11.0), tvos(11.0), watchos(4.0));
getline became available on OS X 10.7 (in 2011) and fmemopen on macOS 10.13 (2017).
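If you do want to call these while still deploying to older releases, clang's runtime availability check can guard the call; a sketch, assuming the __builtin_available extension and a made-up wrapper name:

  #include <stdio.h>

  /* Hypothetical wrapper: use fmemopen() only where it exists (macOS 10.13+). */
  FILE *open_buffer(char *buf, size_t len) {
      if (__builtin_available(macOS 10.13, *)) {
          return fmemopen(buf, len, "r");
      }
      return NULL;   /* fall back to a tmpfile()-based path on older systems */
  }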


They were on OS X Leopard; maybe that's not the most recent one? I don't use Macs. When were they introduced?

In any case, it's a trend with Apple, even if this particular example is wrong. See also: Vulkan.


OS X Leopard is very old in Apple terms, released in 2007 and the last release to support PowerPC processors. (It even pre-dates the standardisation of the functions you mention above; they were just GNU extensions before.)

For Vulkan, MoltenVK works very well and is officially supported by Khronos directly. (Metal is a much more approachable API than Vulkan, but oh well...)


Okay, 2007 is pretty old, I can't fault them for not supporting these functions. However, this is a noticeable trend with macOS. Their pace with modern Unix is severely behind, and I often get bug reports from macOS users lacking some API that's been available on open source Unices for 5+ years. Plus, even if Linux in 2007 didn't support what you need, it's open source, so backporting the required functionality is feasible; that's impossible on macOS.

MoltenVK is a good workaround, but open standards > workarounds.


Technically: Linux is not in fact Unix (it is Unix-like), whereas macOS is certified Unix. The certification is current: I think that's pretty modern.

https://www.opengroup.org/openbrand/register/


Leopard is the final version of macOS to support the PowerPC architecture - and released in 2007.

Honestly, I wouldn't expect even Linux or BSD from 2007 to work well with modern software.


This is where I really need to pay respects to Windows. It's a heaping pile, but their commitment to backwards compatibility is remarkable. I recently installed a complex application that was last updated in 2007, and while performance was absolutely terrible, it worked.


Microsoft’s dedication to backwards compatibility is insane - going so far as to ship with thousands of shims to imitate buggy code and hardware.

Virtualization is killing the need to do that, sadly.


I'm baffled that any developer uses a Mac. I know, I know... BSD, 3-finger swipe, but-they-wrote-the-ux-guide. But I've been running Debian Testing for 15 years now with nary a problem (on ThinkPads, Latitudes, and desktops). I guess I'm weird in that I think GNOME 3 is swell.

CI handles Xcode. Otherwise, my 2x 42" 4K monitors, GNOME, and works-here-same-as-the-cloud self reads these articles in puzzlement.

I can't count the times I've helped developers deal with oh-yeah-brew-is-weird-about-libpq. Now this? It's the avocado-toast of developer workstations...


> I've been running Debian Testing for 15 years now with nary a problem

I ran Linux on the desktop for 19 years, through years-long stretches of Slackware, RedHat, SuSE, Gentoo, and Ubuntu. Obviously, I was (and still am) a huge fan, but if you've managed 15 years of Linux on the desktop with "nary a problem," you have had the most unbelievable luck of any human on the planet, and should start buying lottery tickets.

After running Gentoo for several years, and switching to Ubuntu, I was amazed at how much time I was saving not dealing with portage. Fair enough. I mean, I was asking for it by using a source distro, but I had that same sigh of relief when I finally bought a Mac, and realized that I was still doing a lot of maintenance to keep Ubuntu happy, even though it was (obviously) a huge improvement over Gentoo. I just couldn't see it until it was gone.


I held on to a Dell running Ubuntu for two years because I couldn't convince myself to shell out north of $2k for a comparable MacBook (I'd had a MacBook before).

Then after one of the kernel updates the "resetting rcs0 for hang on rcs0" bug started happening, I had to downgrade to an earlier kernel, then Ubuntu 20 came with Snap everywhere, then covid came and I started having Zoom meetings all over the place and my mic would randomly stop working and I started using a 4K screen and noticed how choppy scrolling is and that YouTube stutters hard in fullscreen, then there was a memory leak in Firefox, semi-functional display scaling, the machine sometimes wouldn't wake up from deep sleep...

Then the M1 Macs came out and I got an Air. I've been using it for a month and it's heaven. It's like coming home. It gets work done, lets me have fun, watch videos, play some games and gets out of the way.


I mean, as someone that's been running Linux full time at work for two decades I still don't have copy/paste working consistently between applications.



