It wouldn't be able to open a TCP connection without knowing what IP address / interface to use.
You're right: it should fail outright with an error. You should only see timeouts like that if some middleware or middlebox were dropping the packets while your client still had a valid IP address.
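To make the distinction concrete, here's a rough sketch in Deno (the target address is just a documentation placeholder, and exact error strings vary by OS):

```ts
// Hedged sketch: the same connect call surfaces the two situations very
// differently. 192.0.2.1 is a documentation address used only as a placeholder.
try {
  const conn = await Deno.connect({ hostname: "192.0.2.1", port: 443 });
  conn.close();
} catch (err) {
  // If the client has no usable IP address / route (e.g. no interface is up),
  // connect() rejects almost immediately with an OS error such as
  // "Network is unreachable" rather than hanging.
  console.error("failed fast:", err);
}
// If the client *does* have a valid address but a firewall or other middlebox
// silently drops the packets, the call above doesn't error quickly; it hangs
// until the TCP handshake times out, which is the "timeout" symptom.
```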
The number seems arbitrary. Maybe we should be subsidizing until we have 100,000 more.
I'm always skeptical when something is presumed to be a universal good in a way that's unfalsifiable. What metrics would you expect to see if we had too many STEM PhDs? What metrics can we expect to improve if we had more of them?
I think your comment is more a refutation of the top-level comment than of the person you're responding to. I think people are right to assume there is already a serious throughput issue with scientific research in the US, especially so-called "basic" research, and seeing a mass exodus from the government is troubling because public funding is what has historically generated the big breakthroughs.
What the person you're replying to was likely trying to say is that once layoffs reach this size, it's no longer about the individuals and their performance. A claim that all 10k of them sit on one side of a theoretical "bell curve" (which, by the way, I haven't seen evidence can actually be measured) is a big claim and needs evidence.
> public funding is what, historically, generates the big breakthroughs
Without an opinion on the rest of this: public funding in the US doesn't produce big breakthroughs through scientists employed by the government, but rather through funding university research.
It appears that, after the administration canceled a significant proportion of grants in 2025, science funding has largely been maintained or increased from pre-2025 levels for 2026, although how the 2026 funding gets spent, and whether it is all spent, is an open question.
It’s a separate question, not arbitrary. How many PhDs the government should employ is debatable, but saying we should have fewer of them says nothing about who was let go.
It’s always tempting to say ‘this was a good decision, therefore all the consequences are good,’ but in the real world good and bad decisions will both have positive and negative consequences. Understanding individual consequences is therefore largely separate from the overall question of whether we should do X. However, in politics nobody wants to admit any issues with what they did, so they try to smokescreen secondary effects as universally beneficial or harmful.
One metric you could use is how often publications are mentioned by patents, and how often those patents lead to economic value. By this metric, it is valuable.
The number of PhDs we have is currently too high given the amount of money we have for project grants. But there is no evidence that the amount we allocate to research is too large. If anything, you could argue the opposite.
I would be delighted if the private market funded basic research - the seed ideas that lead to patents.
You’re confusing two different questions. ‘Should we have more STEM PhDs in government?’ is a reasonable policy debate. ‘Is losing 10,000 STEM PhDs in weeks a problem?’ has a clearer answer… yes, because institutional knowledge doesn’t rebuild quickly. Also, there’s no evidence this was performance-based attrition. Lastly, recruitment becomes harder after mass departures signal instability.
The burden isn’t on critics to prove some theoretical optimal number. The burden is on defenders of this exodus to show it improved government technical capacity rather than hurt it.
I disagree--we're all paying for it, so it should be justified regardless.
And I don't need an optimal number. But the common refrain is essentially that more is always better, and fewer means we're losing our standing in the world. Always.
Maybe keeping a lot of them but shedding some percentage is actually more optimal. But I'm open to being wrong. That's why I'm asking for metrics.
If this was intentional workforce reduction, then the agencies affected should show improved efficiency or output with fewer people. We should see faster regulatory reviews, better grant decisions, stronger technical evaluations, just with leaner teams.
Instead, what we’re likely to see is degraded capacity, slower timelines, and reduced technical oversight. If that happens, will you acknowledge this was a mistake? Or will any negative outcome just get blamed on the remaining employees?
No, I accept the outcomes you are claiming are likely. I'm talking about the net results for the rest of us.
There are now 10k STEM PhDs who are (presumably, mostly) not being paid by the federal government, and are now employed in the private sector and contributing more to the federal budget than taking from it. Or retired, as noted in the article.
On the downside, some grants are maybe taking longer to be disbursed? Not ideal, but again: there's some reason we didn't previously have 100k more STEM PhDs. And we could make the same argument: if we had, we'd have faster regulatory reviews, better grant decisions, and stronger technical evaluations.
There's basically no way to argue for any number other than "more." That suggests an unfalsifiable argument.
Per the article, retirements were a big chunk. I guess the rest could be homeless and on social welfare systems for the rest of their lives, but I think it's more likely they have or will find employment elsewhere.
The US government operates with such a huge debt that we aren’t paying for these things. Instead we are paying for the long term effects of such spending.
Cutting 10k scientists could therefore result in increased taxes without anyone ever seeing any savings. Or it could result in a net gain of anywhere from $1 up to their full cost plus the interest that cost would have accrued as debt.
Therefore there’s no obvious side that takes the default win here. Instead you need actual, well-supported arguments.
It also happened with doorbell cameras. If you'd asked people ten years ago whether it seemed reasonable for nearly every home to have an Internet-connected camera on the front facing the street (and the neighbors' houses), I think you'd have gotten pretty universally negative responses. Yet here we are.
It may just be to stop third parties from creating a whole business out of "shop for me" AI bots. Individual users getting away with it might not be a problem, but with it being against ToS, it'd be a lot more shaky to build a business out of it.
In fact, it may just be that eBay wants to be the business selling AI "buy for me" agents.
Or alternatively: they have reduced the expectations of "two day shipping" so much that they no longer need to try that hard (by commingling inventory) to actually meet them.
This makes sense in the modern age where retailers accept returns for any/no reason and manufacturers tend to bend over backwards to get you to avoid returning anything.
Same reason why any furniture you order online seems to always have all the tools necessary to assemble it. They never require power tools and always include screwdriver(s) and/or Allen wrenches. They need to design away every possible reason someone might just return it.
I was actually spoiled by the fact that self-assembled furniture typically does not require any power tools. Then I bought a bike rack and was disappointed that the first step required a drill.
Most unions in the US seem to have pretty strict rules about titles, who does what, and how much each role gets paid. It's not unreasonable to expect it'd happen with software developers, too.
That said, I always point to the NFL Players Association as one that seems to be able to provide value to highly and diversely paid talent apparently without kneecapping their high performers. Though it's not something I've researched deeply.
It is overwhelmingly less resource-intensive to maintain over time than any Jenkins server. And the config lives in the repo, not in the Jenkins server's own configuration.
Once you get beyond shell, make, docker (and similar), dependencies become relevant. At my current employer, we're mostly in TypeScript, which means you've got NPM dependencies, the NodeJS version, and operating system differences that you're fighting with. Now anyone running your build and tests (including your CI environment) needs to be able to set all those things up and keep them in working shape. For us, that includes different projects requiring different NodeJS versions.
Meanwhile, if you can stick to the very basics, you can do anything more involved inside a container, where you can be confident that you, your CI environment, and even your less tech-savvy coworkers can all be using the exact same dependencies and execution environment. It eliminates entire classes of build and testing errors.
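As a rough sketch of what I mean (the image tag and npm commands are illustrative, not our actual setup), the whole test run can be driven through a pinned container:

```make
# Rough sketch: run the build/tests inside a pinned container so you, CI, and
# coworkers all get the same Node.js and OS. Recipe lines must be indented with
# real tabs; the image tag and commands are illustrative.
IMAGE := node:20-bookworm

.PHONY: test
test:
	docker run --rm \
		-v "$(CURDIR)":/app -w /app \
		$(IMAGE) \
		sh -c "npm ci && npm test"
```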
I used to have my Makefile call out to `docker build ...` and `docker run ...` etc. with a volume mount of the source code, to manage and pin tooling versions.
It works okay, better than a lot of other workflows I have seen. But it is a bit slow, a bit cumbersome (for languages like Go or Node.js that want to write to HOME), and I had some issues on my ARM MacBook with images that had no ARM variant.
I would recommend taking a look at Nix; it is what I switched to (rough sketch after the list below).
* It is faster.
* It has access to more tools.
* It works on ARM, x86, etc.
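Roughly, a minimal dev shell looks like this (the package set is just an example, and in practice you'd probably pin nixpkgs to a specific revision too):

```nix
# shell.nix - rough sketch of a pinned toolchain; the package set is illustrative.
{ pkgs ? import <nixpkgs> { } }:
pkgs.mkShell {
  packages = [
    pkgs.nodejs_20   # per-project Node version, no container needed
    pkgs.gnumake
  ];
}
```

Running `nix-shell` then drops you into an environment with exactly those tools, on Linux or macOS (ARM included).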
I've switched to using Deno for most of my orchestration scripts, especially in place of shell scripts. It's a single portable, self-upgradeable executable, and your scripts can directly reference the registries, HTTP(S) modules, and versions they need to run without a separate install step.
I know I've mentioned it a few times in this thread; I'm just a very happy user and have found it a really good option for a lot of uses. I'll mostly just use the Deno.* methods or jsr:std for most things at this point, but there's also npm:zx, which can help depending on what you're doing.
It's also a decent option for e2e testing, regardless of the project's language.
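As a rough example of what that looks like (the image name and commands are made up; jsr:@std/cli is one of the std modules I mentioned):

```ts
#!/usr/bin/env -S deno run --allow-run
// Rough sketch of a Deno orchestration script: the import pins its own module
// and version inline, so there's no separate install step, and the permissions
// it needs are declared in the shebang. Image name and commands are illustrative.
import { parseArgs } from "jsr:@std/cli@1/parse-args";

const flags = parseArgs(Deno.args, { string: ["tag"], default: { tag: "dev" } });

// Deno.Command is the built-in way to shell out (the Deno.* methods I mentioned).
const build = new Deno.Command("docker", {
  args: ["build", "-t", `example-app:${flags.tag}`, "."],
});
const { success, code } = await build.output();
if (!success) Deno.exit(code);
```

You can `chmod +x` it and run it directly; the first run fetches and caches the pinned module.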