For background: This was Intel's distro and it's likely that most/all of the folks that were maintaining it are a part of the 5,000 layoffs just announced, bringing the total Intel layoffs to 20,000 people.
Round after round of layoffs is terrible for morale and drives away existing good talent. Layoffs should cut deep... once. Corporations these days are rehashing the unwise, bureaucratic stupidity of decades past.
Yep. Wage suppression and an insta stock bump. Publicly- and private equity-owned corporations are inherently unstable beasts. Knowledge workers and ordinary workers need to band together and start their own co-ops: companies that stay private, are partially collectively owned, offer retirement plans, attract high-performing people, take pride in their goods and services, enjoy high job satisfaction, and have very low turnover because everyone is trying to join them.
From my experience, none. "Utterly unfortunately, today was your last day with the company. A separation agreement has been sent to your personal email. Your corporate access is being revoked. Thank you for your contribution!"
Being fired for poor performance is all about ample warnings, issuing a PIP, etc. The company wants the employee back on track. Being laid off is a situation that an employee cannot fix with their efforts. There's no incentive to work this week if it is already known that you are going to be laid off next week, but some employees might consider a prank or even minor sabotage as a helpless act of protest. It's safest to dismiss the laid-off ASAP.
It’s also not legal in much of the world to do that, thankfully. There’s weeks/months to negotiate collectively with the company, possibly organise a strike with those not at risk of redundancy or even just to say goodbye to coworkers.
This works if you have a trade union. I'd hazard to say that at least 99% of software engineers in the US work on "at-will" employment agreements, and do not belong to any unions.
A separation agreement usually stipulates paying 2-3 months worth of salary, and extending the benefits similarly. I don't see how it is worse than spending a couple of extra weeks in the office, and receiving the same.
(Also note that employees that are harder to lay off also get hired with much more reluctance.)
In several countries that I'm aware of, collective consultation is a requirement above a certain number of redundancies even if there is no trade union agreement in that workplace.
Collective bargaining could achieve far more than 2-3 months of pay. I've personally seen cases of over double that much pay, reduced numbers of redundancies, sacking a few directors instead of a few workers and in one case even preventing the redundancies entirely by forcing management to reduce other costs.
Of course, you are likely to fare much better if the workplace is already unionised, so join/build a union now!
The US has the WARN Act which requires companies to give 2 months notice of large layoffs. There are a number of loopholes however and the company doesn't need to give any specific employee 2 months advance notice, so the good employees start their job hunt while everyone else hopes to survive the culling.
Early in my career in the mid 2000s, the startup that was on the same floor as mine laid off a QA person, who then showed up the next day and fatally shot the CEO and head of HR. Our CEO called me and told me not to come in that day.
This[0] is just the first search result, and it's from this week too, but it is not uncommon. Insider threats to infrastructure exist at all times, of course, but a disgruntled employee with administrative access and knowledge of the infrastructure can do a lot of damage quickly.
If your goal is healthy companies that can hire employees and give them what they want (jobs, if you got lost), then don't saddle companies with extra expenses that aren't productive. Giving employees a bonus for non-productive activity, on top of the company's failure, is just stupid.
If immigrants with menial jobs can save money and even send money back home to their families, so can higher-status employees.
Knowing which projects/languages/frameworks to invest time into and which to skip (even if they produce useful subprojects) is a superpower these days.
"a theorized phenomenon by which the future life expectancy of some non-perishable things, like a technology or an idea, is proportional to their current age."
Sure, but the counter is that you're going to be very late to some new foundational tech (e.g. Kubernetes) that is legitimately useful. There are benefits to being early to a trend that has legs.
There’s nothing wrong with getting involved in things that seem like they might be interesting without counting on their long-term survival. Hype-chasing on the other hand tends to be a bad plan.
Yes, but you pay a real cost for those choices too: a management plane that is nondeterministic, imperative, and full of highly mutable state, not to mention basic stuff like the package manager metadata and cache not being shareable, and package installs all having to be serialized because they all run shell scripts as root. These limitations prevent even tools like dagger from providing a first-class interface to apt like there is for apk, because any deb could have rm -rf / as its postinstall script.
A lot of normal users don’t feel these pain points or tolerate them by sidestepping the whole affair with containers or VM images. But if you’re in a position where these things have an impact it can be extremely motivating to seek out others who are willing to experiment with different ways of doing things.
I'm assuming a friendly tone here, and in a similar tone it's funny, because I also think Nix isn't adopted because its benefits just aren't worth the cost to users (devs).
I did indeed deploy Nix to moderate success in a prior gig, but have held back pushing it at my current one; we're simply not at the scale where the problems that Nix solves are worth the cost (yet, maybe ever).
For a less controversial take, consider Alpine's apk package manager. For a single-use container that runs one utility in an early Dockerfile stage, apk can probably produce that image in 2-3 seconds, whereas for an apt-based container it's more like 30 seconds. That may not matter in the grand scheme of things, or with layer caching or whatever, but sometimes it really does.
A good start would be to distrust anything made by a VC funded start-up or a once-great tech co.
If you do want to use something they made, create a hard fork and pretend they already ditched the project as they inevitably will.
If you split up chip design and fab, who would be interested in each? And is there enough x86 demand to keep the design side open? Windows on ARM is a thing, and data centers have been buying more from AMD than they used to.
Hard disagree here, corporations almost always have the biggest pockets to fund continued R&D.
There's a tension there, but that's why it's a skill: there's no simple rule. Fully open-source, community-governed projects can be some of the most obviously good to ignore.
Well, sure, but those weren't instructions. Just general guidelines to counter the claim that choosing safe technology requires superpowers. It is much simpler than that.
Some are easy to spot and avoid (e.g. Google, https://killedbygoogle.com), whereas others like this one are a bit more unexpected, though they make sense (to me) in hindsight.
This happened recently with Scikit-Learn Intelex, which was a drop-in replacement for some parts of sklearn that was a bit faster. One day, the Intel channel on Conda just stopped working (and I learned that Anaconda loses the will to live when a random channel you installed one package from is unavailable), and another organization took over Sklearn Intelex.
No communication could be found on Google connecting them to Intel (whose only news around the package was announcing the initial release a few years ago); you had to read the GitHub issue history to find people talking about the transfer.
I still have no idea what even happened to their Conda channel after the sudden disappearance. The complete lack of communication just left a bad taste in my mouth...
Is OpenCV still owned by Intel, or dependent on them (funding, engineers, etc.)? There are many good distros out there, but to my knowledge OpenCV has no other FOSS alternative on par with it.
Pretty sure Intel abandoned it like fifteen years ago, and then Willow Garage employed some of the people, now there’s an independent OpenCV Foundation.
But I have no idea who’s actually paying the bills, behind the scenes.
I mean, Clear Linux was the leader in the vast majority of Linux benchmarks, to my knowledge. So much so that even AMD used it in their advertised benchmarks for CPU releases because of the performance advantage.
I think it was quite successful, and I doubt they are shuttering it because they don't see the value in it, but because of overall lackluster company performance and the new CEO cutting costs/the workforce aggressively.
I don't think there will be one; a company would need to commit salaried devs to it. What would the value proposition be for them that they can't get by using any other distro out there?
The problem is that Clear Linux did a lot of tweaking in their packaging to get good performance, up to and including actual code patching IIRC, so it would be a nontrivial ongoing effort to continue that work.
I wonder how much this will affect Kata Containers, which is AFAIK like the best/only good way to run containers in k8s with the security of VMs? https://katacontainers.io/
Man. Intel has done so much amazing work for the ecosystem. It's hard and scary to imagine a world where nobody fills in that much, and unclear who else could. Open source has felt like an Immortal phalanx, with someone always stepping in to fill the ranks; I hope so much that I'm wrong, but this shift at Intel feels like the death of the Immortal. What a pity that the CHIPS Act turned to dust and left such an amazing, crucial industry hung out to dry.
Clear Linux's performance came primarily from function multi-versioning (CPU-specific optimizations at runtime), aggressive compiler flags (-O3, LTO, AutoFDO), kernel tweaks, and a stateless design that minimized I/O overhead.
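To make the first of those concrete, here's a minimal sketch (my own illustration, not Clear Linux's actual patches) of function multi-versioning using GCC's (and recent Clang's) target_clones attribute: the compiler emits one clone of the function per listed target plus a default, and a resolver picks the best clone for the running CPU when the program loads.

    /* Function multi-versioning sketch; build with e.g. gcc -O3 fmv.c.
       The compiler emits an AVX2 clone, an SSE4.2 clone, and a baseline
       clone of dot(), plus an IFUNC resolver that selects the best one
       for the CPU the binary actually runs on. */
    #include <stdio.h>

    __attribute__((target_clones("avx2", "sse4.2", "default")))
    static double dot(const double *a, const double *b, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += a[i] * b[i];   /* vectorized differently in each clone */
        return sum;
    }

    int main(void) {
        double x[4] = {1, 2, 3, 4}, y[4] = {4, 3, 2, 1};
        printf("%f\n", dot(x, y, 4));   /* runs the AVX2 clone on AVX2 CPUs */
        return 0;
    }

Clear Linux applied that kind of treatment (plus the flags and patches mentioned above) across whole package builds, which is presumably part of why a stock rebuild of the same sources doesn't automatically match its numbers.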
Yeah, but there is something else here too... I used cachy for a heartbeat, and it advertises the same benefits; it just felt slower (notably on boot). Maybe it was just all the graphical load screens.
There's something Clear had that made it feel modern, familiar, and boring (which might not be for everyone). 90% of my tasks were in vscode devcontainers, so I kept things simple and out of the system for the most part.
I'm in the market for a new desktop PC. Historically, Intel has been better, or at least more likely to have Linux support. Performance-wise, it's comparable enough for productivity use and maybe more power efficient, but the company is losing money, shipping bugs that can damage hardware, and disinvesting in software. I'm late to the party, but I feel like I have to go AMD.
This seems extraordinarily bad. Is there something weird about your machine? My completely vanilla Ubuntu boots in 5s, and Ubuntu is considered to be a slow-starting Linux.
This news, which has been obviously coming for years, also tends to throw doubt on all of Intel's other software projects. Who, for example, would actually invest in exploiting QAT? Even though it clearly offers opportunities for massive gains in the right applications, it also carries the obvious risk that Intel will abandon it.
Can't entirely fault Intel. Linux is mature enough to take care of itself, and Intel has other priorities. But they could have given a bit more notice.
That's a huge disappointment. Clear Linux has been reliably the fastest distro. I'm going to have to find a replacement distro for my Minecraft server.