> If you start saying no to tasks assigned by your manager, you are not going to get promoted. You’re going to end up on PIP track for insubordination.
I've had a lot of success in asking "are you asking me to do this or telling me", when I've been tasked with something I think is extremely dumb.
If the response is "I'm asking", then I will usually respond with some variation of "can you assign it to someone else, or better yet, throw the task in the garbage".
If the response is "I'm telling you", then I'll go on a spiel about how I think it's incredibly stupid and the people involved in this decision are bad at their jobs, then get on and do it.
But if you're reading this, there is a good chance you are American, so take this advice with a massive grain of salt as I'm not. The culture here in NZ sounds extremely different to almost everything I've read on this forum.
Maybe by law domestic robots should be physically much weaker than humans. I want a butler bot that can tidy up and make me tea, but I should be easily able to defeat that bot in fight if it comes down to it.
Tough to achieve that in all circumstances. Someone brought up a robot holding a knife, while its target is asleep. Pretty hard to win that fight unless it has bad aim.
How much of a moron do you have to be to buy direct-to-bezos listening devices that are always on and submit your conversations to the cloud? Only because you don't want to print a recipe?
I mean, at least the direct to bezos bugs were damn cheap for what they were (smart devices) and in absolute numbers (I remember them literally being gifted to you when ordering specific stuff or signing up for prime for the first time).
These humanoid robots are cheap for what they are (admittedly very capable and high end robots), but their absolute pricetag remains far from being cheap.
Yeah and people (and bystanders) have experienced some terrible outcomes already from diving deep with AI therapists.
It's going to be the wild west for a while now with AI and robotics before laws catch up. Maybe there'll soon be a market for pocket EMP devices out there...
Going to preface this post by saying I use and love Obsidian, my entire life is effectively in an Obsidian vault, I pay for sync and as a user I'm extremely happy with it.
But as a developer, this post is nonsense and extremely predictable [1]. We can expect countless others like it explaining how their use of these broken tools is different and we just shouldn't worry about it!
By their own linked Credits page there are 20 dependencies. Let's take one of those, electron, which itself has 3 dependencies according to npm. Picking one of those electron/get has 7 dependencies. One of those dependencies got, has 11 dependencies, one of those cacheable-request has 7 dependencies etc etc.
Now go back and pick another direct dependency of Obsidian and work your way down the dependency tree again. Does the Obsidian team review all of these, and who owns them? Do they trust each layer of the chain to pick up issues before they reach them? Any one of these dependencies can be compromised. This is what a supply chain attack means: you only have to quietly slip something into any one of these dependencies to gain access to countless users' critical data.
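For anyone who wants to repeat the exercise, the walk can be automated instead of done by hand. A rough sketch (assumes npm is installed and you run it inside a project directory; the `walk` helper is mine, not anything from the Obsidian toolchain):

```python
import json
import subprocess

def walk(tree, seen=None):
    """Collect unique package names from `npm ls --all --json` output."""
    if seen is None:
        seen = set()
    for name, info in (tree.get("dependencies") or {}).items():
        if name not in seen:
            seen.add(name)
            walk(info, seen)
    return seen

# Usage, from inside any npm project:
#   out = subprocess.run(["npm", "ls", "--all", "--json"],
#                        capture_output=True, text=True).stdout
#   print(len(walk(json.loads(out))), "unique transitive dependencies")
```

The count it prints is exactly the number of packages someone would need to quietly compromise just one of.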
Coincidentally I did that yesterday. Mermaid pulls in 137 dependencies. I love Obsidian and the Obsidian folks seem like good people but I did end up sandboxing it.
To be fair, the Electron project likely invests some resources in reviewing its own dependencies, because of its scale. But yeah, this is a good exercise. I think we need more systems like Yocto, which prioritize complete understanding of the entire product from source.
I feel like when I'm presented with most modern criticism of Apple devices/software I tend to agree, but despite all the mostly valid criticisms I see batted about, who is doing consumer tech better?
I've recently (finally) managed to purge the last instance of Windows from my life when I replaced Windows on my gaming desktop with Linux. So I've got Linux on the (gaming) desktop, a Steam Deck and Debian stable on a server, which is great.
But I mean, that covers my home office? I still need a phone (iPhone) and a smart watch (Apple Watch), which, while not critical, certainly adds a lot of value for me. The thing that connects to the TV (Apple TV) is the best of everything I've tried compared to any other type of solution (Firestick, Chromecast, home media server, built-in TV smarts). I've also got an M4 MacBook for dev, which is frankly fantastic compared to whatever other hardware I could get here in NZ, which would involve going back to Windows anyway.
So I mean, what are the actual valid options really? Apple still offer great devices and the integrations between them are the best on the market imo.
Perhaps in a perfect world Pine64 devices would be rock solid and I could run Linux everywhere, but failing that, what else ya gunna do?
Nobody. Apple's still doing the best by far. Apple Silicon chips. Safari having the strongest anti-tracking of any platform's browser (AAPL, GOOG, MSFT). Privacy on the Apple TV. Using 100% renewable electricity for their AI data centers (Private Cloud Compute) and not using its data for model training, unlike everyone else. They're even starting to compete on price with the $600 Walmart MacBook Air. But then there's all the bad stuff we're all familiar with.
The worst part to me is that I don't think any systemic solution (like antitrust) can ensure it remains that way, or make the others fix their shit. Apple is this way because of the decisions, personalities and whims of a handful of individuals that lead Apple. The other companies are fuckups for the same reason. Maybe the only safeguard is ideology (i.e., up-and-coming Apple employees who dogmatically believe in their marketing on privacy, energy efficiency, speed, etc). From the outside all we can do is impose a PR cost on them and their competitors when they fall short, and on the margin, that helps strengthen that internal faction of dogmatically principled employees against their colleagues who don't care.
Nobody. It's possible to be the best without being good.
I'm surprised a consumer-focused RedHat hasn't come along to build an offering of just-works-but-still-open devices. There are companies out there that do parts of it but nobody does the full personal device stack thing like Apple. I'm still disappointed they went the cloud route instead of everything lives on your AirPort. If I ever win the lottery ten times this is the startup I'll build.
And, to GP’s point, there is no one to replace them.
Those of us who loved Apple stuff are between a rock and a hard place. What we loved is dissolving away into mediocrity or worse. And we don’t like the competition better. If we did, we’d already be over there.
Add in that lots of companies like to follow Apple’s design leads, for better or worse, and we’re left with nowhere to go.
So we really want the thing we liked to be good again. Or at least to stop getting worse for no good reason.
This is exactly how I feel as someone who enjoyed the Mac during the Jobs era of Mac OS X and has been quite disappointed with the state of personal computing since then. The Apple experience is not the same today as it was during the Snow Leopard days. It seems to me that the old guard at Apple is gone and that the people making the key decisions at Apple in the past decade or so are taking Apple in a different direction than what I would like, as someone who is a big fan of both the classic Macintosh and Jobs-era Mac OS X.
What I'd give for a modern OS with an interface designed with the principles of people like Don Norman and Bruce Tognazzini in mind, combined with rock-solid underpinnings taking advantage of the best that OS research had to offer in the past 30 years. In other words, I want an updated Smalltalk/Lisp machine with a classic Mac interface brought up to 2020s standards regarding networking, security, and other concerns.
Modern macOS to me is a disappointment compared to Mac OS X Snow Leopard, and don't get me started on the lack of user-upgradeable RAM in modern Macs. However, Windows 10/11 is even more disappointing to me compared to Windows 7, which was a nice OS and is my second favorite version of Windows, my favorite being Windows 2000. Desktop Linux seems to be in an eternal Sisyphean cycle of churn.
So, today I begrudgingly use Windows on my personal machines and macOS on my work-issued MacBook Pro, longing for a compelling alternative to appear one day that pushes personal computing forward.
It really feels like Apple is very slowly going the way of enshittification. What's a consumer to do, switch to another platform? Don't make me laugh. Windows and Linux drive me insane. Apple's operating systems are the only ones that seem to 'get' me, which really makes it suck that they're in such danger.
Tahoe is the first macOS that I don't "get", and it's fucking scary. I can stay on Sequoia for another year or so, and then what?
When Tahoe came out, I tried it for a day, liked some of it, hated most of it. I gave it a week. Still hated most of it.
The end of that week I bought a used ThinkPad and installed Arch on it. My future is no longer on the Mac. I have a few years to try and transition, but I am otherwise done with them. Butt-ugly uber-rounded bouba squircles for fucking windows that cut off the content in my PDFs? That can't even manage not to cut off the bottom of the scroll bars? This piss-ugly grey on light grey on grey with the most pathetic, cowardly whisper of texture they call "glass"? It's fucking over. At least until Alan Dye crawls back into whatever print ad shithole he crawled out of.
> The end of that week I bought a used ThinkPad and installed Arch on it. My future is no longer on the Mac.
Same. I think the slow decline of macOS' user interface means KDE is actually at the same level or even better (KDE slowly improving, the Mac slowly declining), so I might as well jump sooner rather than later... I'll miss the quality of some native apps, but to me that's more a business opportunity than a pure negative per se.
If it’s not a secret that is used to sign something, then the secret has to get from the vault to the application at some point.
What mechanism are you suggesting where access to the production system doesn’t let you also access that secret?
Like I get in this specific case where you are running some untrusted code, that environment should have been isolated and these keys not passed in, but running untrusted code isn’t usually a common feature of most applications.
If you actually have a business case for defense in depth (hint: nobody does - data breaches aren't actually an issue besides temporarily pissing off some nerds, as Equifax's and various other companies' stock prices demonstrate), what you'd do is have a proxy service that is entrusted with those keys and can do the operations on behalf of downstream services. It can be as simple as an HTTP proxy that just slaps the "Authorization" header on the requests (and ideally whitelists the URL so someone can't point it to https://httpbin.org/get and get the secret token echoed back).
This would make it so that even a compromised downstream service wouldn't actually be able to exfiltrate the authentication token, and all its misdeeds would be logged by the proxy service, making post-incident remediation easier (and being able to definitely prove whether anything bad has actually happened).
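A minimal sketch of that allowlisting proxy idea (the hostnames and token handling here are illustrative, not any particular product's API):

```python
import urllib.parse
import urllib.request

ALLOWED_HOSTS = {"api.github.com"}  # hypothetical allowlist

def is_allowed(url, allowed=frozenset(ALLOWED_HOSTS)):
    """Refuse anything off the allowlist, so a compromised caller can't
    point the proxy at an echo service and read the token back."""
    parsed = urllib.parse.urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in allowed

def forward(url, token):
    """Stamp the Authorization header on behalf of a downstream service.
    The token never leaves this process; log `url` here for audits."""
    if not is_allowed(url):
        raise PermissionError(f"host not allowlisted: {url}")
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"})
    return urllib.request.urlopen(req)
```

The downstream service only ever talks to `forward`; it never sees the token, and every request it makes leaves an audit trail in one place.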
In this specific case, running linters doesn't even need that much, I think. A linter is never going to need to reach out to GitHub on its own, let alone Anthropic etc. The linter process likely doesn't even need network access, just stdout, so you can gather the result and fire it back to GitHub or wherever it needs to go. Just executing it with an empty environment would have helped (though obviously an RCE would still be bad).
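Scrubbing the environment is a one-liner in most languages. In Python, for example (the command is a stand-in, not any specific CI setup):

```python
import subprocess

def run_untrusted(cmd, cwd):
    """Run an untrusted tool with an empty environment so inherited
    secrets (tokens, API keys) never reach it; capture stdout so the
    trusted parent process can report results upstream itself."""
    return subprocess.run(cmd, cwd=cwd, env={},
                          capture_output=True, text=True)
```

An RCE in the tool would still run with your filesystem access, so this complements sandboxing rather than replacing it.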
Unless "national security" is going to either pay people proactively to pass gov-mandated pentests, or enforce actual, business-threatening penalties for breaches, it doesn't really matter from a company owner perspective. They're not secure, but neither are their competitors, so it's all good.
A pretty straightforward solution is to have an isolated service that keeps the private key and hands back the temporary per-repo tokens for other libraries to use. Only this isolated service has access to the root key, and it should have fairly strict rate limiting for how often it gives other services temporary keys.
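A sketch of such a service's core logic (the names and rate-limit policy are invented for illustration; a real implementation would derive or sign the temporary tokens from the root key, e.g. a GitHub App minting short-lived installation tokens):

```python
import secrets
import time

class TokenIssuer:
    """Keeps the root key to itself and hands out short-lived,
    per-repo tokens, with a per-caller rate limit."""

    def __init__(self, root_key, max_per_minute=5):
        self._root_key = root_key   # never returned to callers
        self._max = max_per_minute
        self._calls = {}            # caller -> recent request times

    def issue(self, caller, repo, ttl=600):
        now = time.monotonic()
        recent = [t for t in self._calls.get(caller, []) if now - t < 60]
        if len(recent) >= self._max:
            raise PermissionError(f"rate limit exceeded for {caller}")
        self._calls[caller] = recent + [now]
        # In reality: sign a repo-scoped token with self._root_key here.
        return {"repo": repo,
                "token": secrets.token_urlsafe(16),
                "expires_at": now + ttl}
```

Even if a downstream service is compromised, the attacker only gets a short-lived scoped token, and the rate limit plus the issuer's logs make the blast radius obvious after the fact.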
Ruby and Perl are great examples of massively popular languages that withered because the language/platform outpaced the community.
If the first question you’re asking yourself looking at a code base is “what version is this/do I know this version” then that language is not facilitating you.
The successful languages are ones where “the community” prioritises backward compatibility. Java, C, and Python have backward compatibility spanning decades. There are a few discontinuities (lambdas in Java 8, Python 3, C++), but in most cases there’s a clear mapping back to the original. Python 3 is an exception to this, but the migration window was something like 15 years…
Busy engineers, scientists and academics have little interest in keeping up to date with language features. A computer and a programming language are a tool for a job and the source code is just an intermediate artifact. These are your “community”, and the stakeholders in your success.
Have you used them? Perl has version tags in source code and everything is feature gated including the stdlib. Python does none of that. The stdlib changes constantly and just looking at source code gives you no indication if you can run it with your installed python version.
I don't believe that "from __future__" is the future-proofing you think it is; they just named it that way to be cute. A hypothetical 3.19 version couldn't even use it, since it's just a normal python import:
  $ python3.13 -c 'from __future__ import awesome_feature'
    File "<string>", line 1
  SyntaxError: future feature awesome_feature is not defined
the very idea of "future feature is not defined" is insaneo
Anyway, I'd guess what they intend is
  try:
      from __future__ import fever_dream
  except SyntaxError:
      fever_dream = None
because python really gets off on conditional imports
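For what it's worth, the try/except guess can't actually work: a missing `__future__` feature is a compile-time SyntaxError, so the whole module fails to compile before the except clause even exists. A conditional check that does work is inspecting the `__future__` module itself, since it's an ordinary module whose attributes describe each feature (`awesome_feature` is the same made-up name as in the earlier snippet):

```python
import __future__

def has_future_feature(name):
    """True if this interpreter knows the named __future__ feature."""
    return getattr(__future__, name, None) is not None

print(has_future_feature("annotations"))       # real feature: True
print(has_future_feature("awesome_feature"))   # made up: False
```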
I was responding to the "Python does none of that" by pointing out that Python does indeed have features to help introduce new capabilities in a thoughtful way - I know it's not the same thing as something like conditional imports.
I don't really know which event you're referring to. But the current version of the Perl interpreter still runs most old code, and new features are gated so they stay safe for old Perl programs.
Perl doesn't have "perfect backward compatibility" in the normal sense of the word. There is only Perl 5 which is perfectly compatible since it hasn't changed for 25+ years (which is how they achieved "compatibility" -- by not changing), and there's Perl 6 which isn't backward compatible and nobody really uses it.
A major new version of Perl ships regularly. A few weeks ago the latest major new version shipped. From the 2025 changes document:
> Perl 5.42.0 represents approximately 13 months of development since Perl 5.40.0 and contains approximately 280,000 lines of changes across 1,600 files from 65 authors.
Skipping back 5 major new versions (to 2020):
> Perl 5.32.0 represents approximately 13 months of development since Perl 5.30.0 and contains approximately 220,000 lines of changes across 1,800 files from 89 authors.
2015:
> Perl 5.22.0 represents approximately 12 months of development since Perl 5.20.0 and contains approximately 590,000 lines of changes across 2,400 files from 94 authors.
2010:
> Perl 5.16.0 represents approximately 12 months of development since Perl 5.14.0 and contains approximately 590,000 lines of changes across 2,500 files from 139 authors.
There's been well over 10 million lines of code changed in just the core Perl codebase over the last 25 years reflecting huge changes in Perl.
----
Perl 6 ... isn't backward compatible
Raku runs around 80% of CPAN (Perl modules), including ones that use XS (poking into the guts of the Perl 5 binary) without requiring any change whatsoever.
(The remaining 20% are so Perl 5 specific as to be meaningless in Raku. For example, source filters which convert Perl 5 code into different Perl 5 code.)
----
But you are right about one thing; no one you know cares about backwards compatibility, otherwise you'd know the difference between what you think you know, and what is actually true.
> But you are right about one thing; no one you know cares about backwards compatibility, otherwise you'd know the difference between what you think you know, and what is actually true.
What the hell is this? Even if nobody I know cares about backwards compatibility, how does this relate to whether my knowledge is true or not?
Apologies for trivializing perl5's progress in the past 25 years, but come on, chill out dude.
I know people don't like Perl, I just want to add some info here.
The Perl 5 Porters restarted adding new features about 15 years ago. The language has progressed from version 5.20 to 5.42. Although the pace is slower than popular languages, they are maintaining backward compatibility while adding new features.
How have they withered? Does every programming language have to compete for world domination via cancerous growth? I thought that only applied to VC backed startups and public companies if the startups survive...
They’re not actively used in any circles I move in. The fact that your back is up suggests you have something invested in these antiquated niche tools.
They have a right to exist. But there is strength in community. Successful platforms facilitate this and provide a means for participants to exceed the sum of their parts.
This can in turn fuel development of the platform therefore helping keep it relevant.
In recent times we call this “the network effect” and it applies to more than just social media.
The community withered. 10-15 years ago the cries of “Ruby” “Ruby” “Ruby” were deafening. I used Ruby and really enjoyed it, and I thought I would leave Python behind, but it never went anywhere from there. Somewhere around 3.x, I think, a load of breaking changes were introduced, and I imagine lots of people like myself just went back to using more stable platforms.
The fact your friend is suffering no consequences and is able to just carry on is exactly what is wrong with this industry.
In a perfect world the creation of software would have been locked down like other engineering fields, with developers and companies suffering legal consequences for exposing customer information.
The 80s and 90s devs who built our current software infra were, on average, FAR less credentialed than today's juniors and mids who mostly don't understand what they're building on.
Sure, and Da Vinci didn't have an architectural degree when he was designing bridges, but now you need a proper license to do so. Society learns to do better.
Surprisingly, 80's and 90's developers were quite skilled low-level developers who knew very well all the ways things could go wrong. The difference was the stakes were not high then. The blast radius was maybe a hundred thousand people and the worst was they lost their own files. Now some AI-controlled process or apparatus could ruin everyone's credit and maybe even kill you and all your neighbors.
In our imperfect world, by the time the government could get together a reasonable certification process the content you're tested on would be out of date. Maybe when the industry is older it'll change slow enough to do that, but I don't think that'll happen so long as there's so much money aimed at disrupting everything and monetising the disruption.
We're going in circles far too fast to have licensure that hinges on being up to date.
That's what tort law is for. It leaves the details to the experts, and judges based on general notions of intent, negligence, and harm caused. The threat of financial ruin should incentivize against selling malware.
Let's say it was coded extremely well, but nevertheless a more advanced exploiter wreaked similar havoc. Would they still be liable in your perfect world? To some degree the principle of caveat emptor should apply in some tiny, nascent business, otherwise only large juggernaut monopolistic incumbents would have the means to have any stake in software.
> Let's say it was coded extremely well, but nevertheless a more advanced exploiter wreaked similar havoc.
A doctor kills a patient because of malpractice.
Could that patient have died anyway if the patient had a more critical condition?
That is a non sequitur argument.
> Would they still be liable in your perfect world?
Yes. The doctor would be liable because they did not meet the minimum quality criteria. In the same way, the developer is liable for not taking risks into account and providing a deeply flawed product.
It is impossible in practice to protect software from all possible attacks as there are attackers with very deep pockets. That does not mean that all security should be scrapped.
Your spouse dies in surgery. The highly experienced surgeon made a mistake, because, realistically, everyone makes mistakes sometimes.
Your spouse dies in surgery. The hospital handed a passing five year old a scalpel to see what would happen.
There's a clear difference; neither are _great_, but someone's probably going to jail for the second one.
In real, regulated professions, no-one's expecting absolute perfection, but you're not allowed to be negligent. Of course, 'software engineer' is (generally) _not_ a real, regulated profession. And vibe-coding idiot 'founder' certainly isn't.
I don't remember the specifics well, but under GDPR they'd be required to give breach notification to customers, maybe write a report and get audited and possibly get fined depending on the situation. Customers could demand compensation (probably doesn't make sense here).
Right. Because the solution to all of this madness is SOC2 compliance or something along those lines.
What happened is a perfect natural selection. The friend is a very small actor with probably a dozen customers not a multi-billion $$ company with millions of customers.
But I guess the lesson is to vibe code to test the market while factoring a real developer cost upfront and hiring one as soon as the product gets traction.
In that world we’d just be transitioning to 32-bit software and still running MS-DOS since it’s certified. Linux would never ever have broken through. Who can trust code developed by open source cowboys? Have we verified all their credentials?
There are some industries where the massive cost of this type of lock down — probably innovation at 1/10th the speed at 100X the cost — is needed. Medicine comes to mind. It’s different from software in two ways. One is that the stakes are higher from a human point of view, but the more significant difference is that ordinary users of medicine are usually not competent to judge its efficacy (hence why there’s so much quackery). It has an extreme case of the ignorant customer problem, making it hard for the market to work. The users of software usually can see if it’s working.
I don't know if we'd be worse off with a lot of other software and/or public internet sites of 20-to-30 years ago. A lot of people are unhappy with the state of modern consumer software, ad surveillance, etc.
Probably a lot less identity theft and credit card/banking fraud.
For social media, it depends on if that "regulate things to ensure safety" attitude extends to things like abuse/threats/unsolicited gore or nudes/etc. And advertising surveillance. Would ad tracking be rejected since the device and platform should not be allowed to share all that fingerprinting stuff in the first place, or would it just be "you can track if you check all the data protection boxes" which is not really that much better.
I'm sure someone would've spent the time to produce certified Linux versions by now; "Linux with support" has been a business model for decades, and if the alternative is pay MS, pay someone else, or write your own from scratch, there's room in the market.
(Somewhere out there there's another counterfactual world where medicine is less regulated and the survivors who haven't been victimized by the resulting problems are talking about how "in that other world we'd still be getting hip replacement surgery instead of regrowing things with gene therapy" or somesuch...)
A lot of the things people are upset about are not related to this issue and not something licensing engineers would fix. They're products of things like market incentives.
What you're really talking about when you talk about "locking down the field" is skipping or suppressing the PC revolution. That would make things like opaqueness and surveillance worse, not better. There would be nothing but SaaS and dumb terminals at the endpoint and no large base of autodidact hacker types to understand what is happening.
I have wondered if medicine wouldn't be a lot more advanced without regulation, but I tend to think no. I think we have the AB test there. There are many countries with little medical regulation (or where it is sparsely enforced) and they do not export sci-fi transhumanist medical tech. They are at best no better than what more regulated domains have. Like I said, I think many things about medicine are very different from software. They're very different industries with very different incentives, problem domain characteristics, and ethical constraints. The biggest difference, other than ethics, is that autodidactism is easy in software and almost impossible in medicine, for deep complicated practical as well as ethical reasons.
For software we do have the AB test. More conservative software markets and jurisdictions are vastly slower than less conservative ones.
Go check out VxWorks or the like: only $20K a seat, build tools at a similar price, and then, oh joy, runtime licenses required to deploy the software you wrote.
Which are reasonable prices when lives are at risk.
Yes, I know RTOS are not general purpose, this is NOT apples to apples, but that is what that kind of reliability, testing, safety certification, etc. costs.
I disagree. Mature open source projects last long enough without significant disruption to still be relevant after they make it onto the certification exam. Products, not so much.
Investing time building familiarity with proprietary software is already a dubious move for a lot of other reasons, but this would be just one more: why would I build curriculum around something that I'm just going to have to change next year when the new CEO does something crazy?
And as bad as it might be for many of us who hang out here, killing off proprietary software would be a great step forward.
You're assuming the process would not be instantly subjected to regulatory capture by for-profit companies and by universities with an interest in inserting themselves into the required licensure pipeline.
Microsoft in the 1990s would have used the regulatory and licensure process to shut down open source. They tried to do it with bullshit lawsuits.
...and this is where compliance comes in, and is the exact reason real companies won't talk to you unless you have (at minimum) SOC2. There's billions of products out there, how do you know if it's actually good software developed by a team, or some idiot like above vibe-coding slop into what appears to be a functional application? We all make fun of audits and checklist-based-security but it would've almost certainly prevented the above from happening.
Imagine vibe coding spreads to civil engineering and people start building bridges this way. Have AI design it and then probably 3D print it on location.
> legal consequences for exposing customer information.
Still a good idea. Also without taking vibe coding into account. Far too many tech companies are way too sloppy with customer data. Often intentionally so.
Letting another country be the wild west and then cherry-picking the good stuff while regulating the nasty stuff doesn't seem like a terrible place to be for the, what, 99% of people who aren't Silicon-Vally-bigtech-execs-and-engineers getting all those profits?
Even in the US most software jobs are lower-scale and lower-ROI than a company that can serve hundreds of millions of users from one central organization.
But for the engineers/investors in other countries... I think the EU, etc, would do well to put more barriers up for those companies to force the creation of local alternatives in those super-high-ROI areas - would drive a lot of high-profit job- and investment-growth which would lead to more of that SV-style risk-taking ecosystem. Just because one company is able, through technology, to now serve everyone in the world doesn't mean that it's economically ideal for most of the world.
Some of that is US companies hiring in the EU because the salaries are lower. Source: I know of multiple companies, even on the smaller side, doing this.
It's a textbook case study of market failure in neoclassical economics caused by information asymmetry. If customers knew about the vulnerabilities, they wouldn't have paid money, or they would have demanded a lower price.
To be clear, I don't mind it as a casual term; I'm simply saying that, to me, it comes off as puerile in this context. It's akin to putting out a press release for a project with "Suckerberg" written every time Meta comes up, or, for an older reference, Micro$oft. It personally made the article hard to take seriously and cast a bad first impression on the project. It may not come across that way to all - I've simply never been a fan of that highly editorialized and charged communication style when it comes to community management. It almost has a combative tone, sort of like when CMs argue with users that have opinions they don't like. Take it or leave it.
_____________
As for the individual points:
The initial concerns about copyright are convincing.
The point about resource impact ending with "these resources would literally be better spent anywhere else" devolved into meaningless grandstanding. I wouldn't mind seeing a project take a stand because of environmental impact, but again it just ends up sounding like the author has a bone to pick rather than a genuine concern about the environment. If that's not the case, then that's a prime example of why tone matters in communication.
The Reddit comment paragraph where the author berates users for using LLMs on social media is just odd and out of place. Maybe better suited to the off-topic section of their community forum/discord.
And the last point I simply disagree with. Highly knowledgeable people in a field that requires precision use LLMs every day. It's a tool like any other. I use it in financial trading (ex: it's great for scanning reams of SEC filings and earnings report transcripts), I know others who use it successfully in trading, and I know firms like Jane Street have it deeply integrated in their process.
> But what got me sober, what got me through the first one, two, three hard years - none of it was in those notes.
> It hit me: what got me here won’t get me where I need to be next.
Where was it, or, what was it that did?
I believe the author when they went through their system of notes and effectively found nothing that contributed to the most important parts of themselves, but I was also sort of waiting for the alternative answer that I thought was supposed to be coming...