Without leverage, your sticks and stones mean pretty much nothing to Apple. I left the Apple ecosystem because complaining about poor cross-platform support and dwindling API coverage fell on deaf ears. Nobody here can wave a magic wand and fix the problem; ultimately we're just yelling at companies that value us for dogfooding their shit UX.
I wish you luck, but I've seldom seen the "Google/Microsoft/Facebook/Apple wronged me!" threads get a happy ending on HN.
To start: VSC is made by Microsoft and not the community. That's all that needs to be said.
But to continue on your logic path...
You are right that the original devs couldn't keep it alive because of VSC, but you're attributing that to a lack of community interest, which I think is far from the truth.
The reality is that Microsoft owns GitHub, and they own VSC. Why pay to support two editors? Between the two, VSC is more clearly "their" product, so they kill Atom to further help their own.
But VSC is dangerous, imho. Not only does it watch all sorts of things you're doing, it also locks you into the same hegemony that brought you the frightening Copilot AI scanning everybody's code on GitHub, creating all sorts of problems. https://thenextweb.com/news/github-copilot-works-so-well-bec...
By contrast, one of the first things the Pulsar group did was remove "telemetry" from the code base, making it more of a free, "no strings attached" community editor.
However, to each their own, as I suggested :) Not everybody likes Atom, but MANY people do. It certainly is not a minor editor by ANY count.
Just because it's #2 or #3 compared to VSC? That's a pretty lofty position.
And if you enjoy suckling from the teat of Microsoft on VSC, by all means continue to do so. I am not a fan of their sins of the past and present, so I avoid them when possible.
And personally, I like the configurability of Atom far more than VSC, which I've tried from time to time—but it's just a different experience, not to my liking.
I don't trust Microsoft and I don't really like VSC.
VSC is made by Microsoft, but they have a community that makes plugins, reports bugs, contributes, spreads the word, etc. They have the critical mass to make something like this work, while Atom doesn't.
Atom did not die because Microsoft killed it. They didn't have to. They only had to push VSC. Now the original Atom devs are working on another editor (zed.dev) because even they know Atom is done.
Imagine it's an important lib you can't just replace and there's a bug you need fixed and can't just wait for the maintainer to do it for you. Now you suddenly need to understand that language enough to do that. To some extent this happens very often, I've had it in Clojure+Java and PHP+C libraries.
Exactly the problem I've run into on more than one occasion. And the Erlang libs I'm referencing have almost zero documentation, so it really means "RTFC", which then is painful :D
Although I also have issues w/the Elixir docs, they tend to have more explanation in them than the Erlang ones. Marginally :)
I am one who did not come from Ruby, btw. My roots come from a variety of things including Smalltalk, C, C++, Perl (dare I admit), Java, JavaScript and Python, among those most influential on my life (and in that order).
I'm really happy w/Elixir, finally getting back to a truer object-oriented system (for those who know the roots of such).
Yep :) Anybody familiar with the roots of OO will understand how wildly Java/C++ and their ilk went off the rails. It wasn't intended as a code organizational system. It's about encapsulation of needs, actors, and sending messages back and forth.
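To make that contrast concrete, here's a minimal, hypothetical Python sketch (all names invented for illustration) of OO in the Smalltalk/actor spirit: state is fully encapsulated, and the only way in is to send the object a message, which it dispatches internally.

```python
# Sketch of message-passing OO: an object is something you send
# messages to; it decides internally how (or whether) to respond.
class Counter:
    def __init__(self):
        self._count = 0  # encapsulated state, never touched directly

    def send(self, message, *args):
        # Dispatch on the message name, like a tiny mailbox.
        handler = getattr(self, "_on_" + message, None)
        if handler is None:
            # Analogous to Smalltalk's doesNotUnderstand:
            return "message not understood"
        return handler(*args)

    def _on_increment(self, by=1):
        self._count += by
        return self._count

    def _on_value(self):
        return self._count


c = Counter()
c.send("increment")
c.send("increment", 5)
print(c.send("value"))  # 6
print(c.send("reset"))  # message not understood
```

The point isn't the dispatch trick itself; it's that the caller never reaches into `_count` or depends on a class hierarchy—it just sends messages, which is much closer to how Elixir processes talk to each other.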
Then there are those bosses who do it on purpose, just to screw with you. I had one boss who was upset because we both submitted to present at a conference (VMworld) and I was selected, but he wasn't. He then declined to have the business pay for the trip, so I paid my own way. He showed up at the conference anyway, slipped past the guard on the day I was presenting (I was on the first panel of the day and had done a dry run), stepped up to a group of us talking after I had just finished, and said, casually, "Whew, what a long call I just got off--we were talking about your replacement."
I remember looking at the VMware guys I was with, and their eyes were bugging out like, WTF, did he really just say that? By this point I was used to his crap and kind of let it roll off me. After we got away, one of them asked if I was okay and said screw the presentation--he'd go introduce me to some people :D I talked him through it and did the presentation anyway.
This boss was definitely toxic--he relished psych games, and at one point handed out copies of "The Art of War" so we could all "brush up"--and as much fun as I was having working there (not because of this stuff), I finally left within a few months, on my own terms. Never have I worked with anybody as toxic as that.
(After I returned, I asked the CTO about the call where the boss supposedly "talked about my replacement." The CTO scoffed at it and said that call had no such conversation.
In followups my boss backpedalled and said he just meant that I was going away for a few weeks and someone would have to cover me during that time--of course that's what he meant.)
Sorry, but your testing methodology needs some help. Were you testing OSes? Postgres? Storage configuration? I suggest investigating FIO first: use it to isolate the best-performing disk configuration (storage + kernel + filesystem + whatever), then do some pgbench runs with different tuned PostgreSQL parameters to show the best way to tune Postgres.
A few thoughts:
* You weren't testing OSes, which the subject implied; you were testing Linux kernel variants with their stock OS configurations/kernel scheduler setups, with FreeBSD tossed into the mix. Whether you are running Ubuntu, CentOS, Debian or whatever, you should have the Linux kernel tuned to perform well, so adding the distribution as a variable is just a red herring. I'd be more interested in removing that variable and comparing different storage configurations (such as XFS and LVM) instead.
* Clients connecting over the network adds a huge variable (the network itself) -- ideally you would want to remove this.
* I may have missed it, but it wasn't clear whether your benchmarks had a warmup period. Especially with a copy-on-write system like ZFS, you need to run a few benchmarks on the same blocks first to break past the cache.
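To sketch the fio-first approach: a job file roughly approximating Postgres's 8K random I/O might look like the following (illustrative values throughout -- adjust sizes, runtimes and job counts to your hardware before trusting any numbers):

```ini
; random-8k.fio -- hypothetical job file approximating Postgres-style I/O
[global]
ioengine=libaio
direct=1          ; bypass the page cache so you measure disks, not RAM
bs=8k             ; PostgreSQL's default block size
runtime=300
time_based
group_reporting

[random-read]
rw=randread
size=10g
numjobs=4

[random-write]
rw=randwrite
size=10g
numjobs=4
```

Run the same job against each candidate storage stack (filesystem, volume manager, I/O scheduler), pick the winner, and only then layer pgbench with different postgresql.conf settings on top of it.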
He did adequately disclose that the OS was swapped out while pgsql and its settings were held steady. Adding more arbitrary benchmarks like FIO won't really clarify pgsql performance, if that is the desired investigation. Instead, active benchmarking, where you identify the bottlenecks on each tested platform, would be a better use of time, and you could hypothesize how to improve each platform if you were to pick it.
As a counter-example, I could easily cherry-pick versions, tunables and patch sets to make the numbers go whichever way I want, so these types of comparisons aren't that useful unless someone is dropping a big delta on the floor with out-of-the-box settings versus another.
As a FreeBSD developer, I will actually tell you that Linux could be made to graph massive wins by cherry-picking hardware with a very high core count and several NUMA domains. But even then, by selecting kernel features (which could be innocuously hidden in a version number/vendor patch set) you can cherry-pick large swings. That said, FreeBSD+ZFS+PGSQL (https://www.slideshare.net/SeanChittenden/postgresql-zfs-bes...) is a joy to administer, and is unlikely to be the weak link in a production setup if you stick to a two-socket system. There is a lot of work going on in HEAD that is relevant to this workload in the memory management subsystem, including NUMA support, and some TCP accept-locking changes that would be relevant for TCP connection turnover.
People benchmark CentOS too much. A far more interesting test would be Oracle Linux, both with and without the UEK.
I would bet this would wipe out the SUSE advantage:
# cat /proc/version
Linux version 4.1.12-112.14.1.el7uek.x86_64 (mockbuild@) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) ) #2 SMP Fri Dec 8 18:37:23 PST 2017
The "Red Hat Compatible Kernel" should have identical performance to CentOS.
# rpm -qa | grep ^kernel | sort
kernel-3.10.0-693.11.1.el7.x86_64
...
kernel-uek-4.1.12-112.14.1.el7uek.x86_64
...
I realize that many people don't like Oracle Linux due to their unauthorized appropriation and support of their RedHat clone. It does bring new functionality to the table, however (primarily Ksplice), and has great support for the eponymous database.
When RedHat 9 support ended, it never touched my personal systems again. Even my CentOS habit has waned.
Hello, OP here. I'm certain that you can fine-tune every OS for a specific use case, and I may indeed do that in a future blogpost. The question is what to compare. Should I compare Linux kernel versions, PostgreSQL versions, filesystems (and features like compression, block size, ...)? As you can see, the permutations are endless, and that's why I compared stock OSes with their default filesystems of choice.
I don't think that a Linux distribution is just a variable and the only thing that differs is the kernel version. Each distro made its own choices, for better or worse...
As for the clients connecting over the network - that was exactly my point. My idea was to benchmark in conditions similar to a production deployment. I doubt that many production systems connect over a Unix socket.
And for the warmup period - as you can see in the benchmarking script, there is a 30-minute warmup before I start to record the results.
> testing Linux kernel variants and their stock OS configurations/kernel scheduler setups, and FreeBSD was tossed into the mix
For a lot of people who don't have the time or understanding to play with things much beyond stock versions and configurations, this could still be a useful benchmark. People who have the knowledge, confidence, and time probably won't be reading the article at all, as they'll have already performed their own, less artificial tests (i.e. benchmarking their own application using live-like data and load patterns).
Though that is my argument against any benchmark like this: it is at best an indicator of peak activity under very specific conditions -- a starting point, but it doesn't really represent my application with any precision or accuracy.
> it wasn't clear if you had a warmup period to your benchmarks
I think it's pretty fair to test each OS with its filesystem of choice. I'm aware that you can use ZFS on Linux, but I'm not (yet) brave enough to recommend ZFS+Linux. And yes, there's btrfs, but would you trust it with your data? :)
I kind of miss network traffic diagrams; just mentioning that it's connecting over Gbit isn't enough for me. Is there 200 Mbit of SQL traffic going back and forth, or just 8 Mbit?