Hacker News | j2kun's comments

Fatalism will also not fix anything. But I suppose death comes for us all, yes? Why do anything at all?

I feel that fatalism, especially when people treat it as some sort of personal philosophy, is kind of lazy.

It requires no effort to say "fuck this, nothing matters anyway", and then justify doing literally nothing.


> I feel that fatalism, especially when people treat it as some sort of personal philosophy, is kind of lazy.

I think a lot of fatalism is fake. It's really someone saying "I like this, and I want you to believe you can't change it so you give up."


It also makes no sense! "Fuck this, it doesn't matter - but I'll happily spend effort communicating that to others, because apparently making others not care about something I don't care about is something I do care about." Wut?!

Well, I say it makes no sense. Alternatively, it makes a lot of sense, and these people actually just wanna destroy everything we hold dear :-(


Perhaps the current societal trajectory is destroying everything that they hold dear.

I mean, just look around you.


Then do something about it. Vote for better politicians. Donate money to causes that you think are important. If you think you can do it better, and this isn't meant to be facetious, run for political office.

Being fatalistic can be a great excuse not to do anything.


>Vote for better politicians.

I cannot. I can only vote for better politicians if they exist, and that's without even getting into the minefield of what counts as "better". My point is that I have no confidence whatsoever in any current politician in my state.

> Donate money to causes that you think are important.

I have no money.

> If you think you can do it better, and this isn't meant to be facetious, run for political office.

I have no money, no visibility, and no connections. Even if I were magically given tons of money, I would still need a strong network to attempt any real change, even before considering the strong networks already in place to prevent it.

Telling random citizens "run for office" is facetious, whether you mean it or not.


> Telling random citizens "run for office" is facetious, whether you mean it or not.

Hard disagree. At least where I live, "random citizens" run for local office and succeed all the time.

Also, complaining that you "have no network" is a you problem, not a system problem. I'm truly sorry if you feel you have no friends, but you'll be better off at least trying to get some (independent of politics). And if that's something you've tried and failed at before, I do feel pity. But I don't think hope is lost for anyone. And even if it were lost, please don't actively spread the misery!


Don't spread the misery?? Wow, fucking thanks.

You are kind of proving my point. You are actively justifying doing literally nothing about what bothers you, and acting indignant and self-righteous about it.

Apathy has a striking number of motivated evangelists!

This is more cultural than rational.

This is the only relevant question. And it leads right to the next one which is “what is a good life?”

But humans have a huge bias for action. I think generally doing less is better.


On the other hand, if a dead person can do it better than you can, it's not that much of an accomplishment.

I didn't mean that you should strive to do as little as possible; rather that if you have two choices, do more or do less, then I would be biased towards doing less. Of course, that's not always a realistic option.

> I think generally doing less is better.

My sedentary lifestyle is responsible for my recurrent cellulitis infections.

Just saying.


You can probably find a million situations where doing less is terrible.

I think the first step would be to define for yourself what doing less actually means: it could mean taking a walk instead of chasing dopamine -> doing less, but you move more.

But whatever, it's a philosophical question and there aren't any right or true answers.


I got hit by a car while out for a run. Just saying.

I think "adapt or die" is the takeaway.

It's fun to pet the cat. It's not fun to rage against an unstoppable force. Well, maybe it is for some people. But I find people often underestimate the detrimental effects.

> But I suppose death comes for us all, yes? Why do anything at all?

Wrong take. Death comes for us all, yes, so why hold back? Do you want to live forever?


> Do you want to live forever?

Yes, of course. Do you prefer to die? Those are the only two alternatives, and a decision that you don't want one is a decision that you prefer the other.


No, there is no alternative. Everything eventually dies, so you better make peace with it. The only people who believe that they won't die are religious people who believe in an afterlife (which is a preposterous position) and the people who have their heads or whole bodies frozen because they think they are so special that the future will honor their contracts and revive them.

Both of these are bound to lead to the exact same outcome, so it doesn't really matter what you believe. But accepting reality, absent proof to the contrary, may guide you to wiser decisions while you are alive.


s/make peace with it/make war with it/. To the last breath.

I can think of no concept more horrifying than personal immortality and if you disagree I don't think you've thought about it enough.

I'm sorry to hear that you don't want to exist in the future. I do. I have thought about it extensively, and there is literally no scenario in which I consider not-existing better than existing.

There is an essentially infinite amount of creativity and interesting complexity available in the richness of interactions with other people and the things people create. What, exactly, are you "horrified" about?


The difference between "essentially infinite" and "actually infinite". Infinity is a very long time.

Cringe.

Classic HN comment: ignore the article and respond directly to its title

Well, I read the article discussing PyPI packages, but I think for a lot of people it's more about single-use tools. My little APKs are ugly and buggy but work for me.

This happens every time non-technical users get their hands on technical tools.

Just go look at some HyperCard compilation CD: all stacks were horrible, ugly and buggy, but if the author massaged them the right way, they kind of worked, held by spit and prayers. "How to sit people at my wedding" type of garbage. The only good quality HC stacks were the demo ones that came with the program, made by professional developers and graphic designers working at Apple. In the decade HC was a product, maybe 15 high quality stacks emerged.

Same with the horrible mess that "users" manage to cobble together if you give them access to Office(TM) macros. Users don't seem to know about Normal Forms when they begin to create tables in Access. The horror.

An education in Computer Science is necessary when systems have to interact reliably. One-off "I vibe coded a dashboard for my smart watch" projects are in the same category as Visual Basic with the server paths hardcoded all over, breaking on empty directories; and if two PCs happen to run the same macro, then half of the files in some shared directory get wiped for good. You are welcome.


Well, I've been a software developer for 15 years (and cut my teeth on BASIC well before that...), but sometimes I just need something quick and dirty that works. Most people do, actually. And I no longer give a crap about Beautiful Code when I actually just want "like Anki but it lets me watch TV in between quizzing me and I'll delete it when I'm fluent"

You are welcome.


They were not. The rule now is that they have to go into a special bag that cannot be opened while school is in session. Before they could be left in a backpack and snuck out or used between classes.

The legislature (of states and the federal government) routinely passes laws explicitly giving the head of state the power to make decisions like this without passing a law. The most recent one in Oregon about schooling was SB 141.

+1, PageRank was taken from academia. They even cited it in their original work. Funny how the origins of these things get forgotten.

Inspired by, not taken. It's a clever solution to a hard problem!

Do you remember Ask Jeeves? Dogpile? Google was an incredible improvement!
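
For anyone who never saw why it was clever: the core of PageRank is a fixed point of a "random surfer" process, computable by simple power iteration. Here's a minimal toy sketch (the graph, damping value, and iteration count are made up for illustration; this is the idea, not Google's actual implementation):

```python
# Minimal PageRank sketch: rank flows along links. Damping d models a
# surfer who follows a link with probability d, else jumps to a random page.
def pagerank(links, d=0.85, iters=50):
    n = len(links)
    rank = {page: 1.0 / n for page in links}
    for _ in range(iters):
        # Every page starts each round with its share of random jumps.
        new = {page: (1 - d) / n for page in links}
        for page, outs in links.items():
            if outs:
                # A page splits its current rank evenly among its outlinks.
                share = d * rank[page] / len(outs)
                for target in outs:
                    new[target] += share
            else:
                # Dangling page: spread its rank uniformly over all pages.
                for target in new:
                    new[target] += d * rank[page] / n
        rank = new
    return rank

# "c" is linked by both "b" and "d", so it ends up ranked highest;
# "d" has no inlinks at all and settles at the teleport floor.
graph = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(graph)
```

The insight was ranking pages by the link structure of the web itself rather than by on-page keywords, which is exactly what made it so hard to game compared to the engines of that era.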


Beyond the content, I have to say I love the aesthetic vibe of this website.


The article is full of PR-speak. What is really going on in this law?


Because it was the "hackers" (Musk) that created this situation, and the "hackers" (DOGE staffers) that participated.


This is a bit of a straw man. The harms of AI in OSS are not from people needing accessibility tooling.


I disagree. I've done nothing to argue that the harm isn't real, nor have I downplayed or misrepresented it.

I do agree that, by and large, the theoretical upsides of accessibility are almost certainly completely overshadowed by the obvious downsides of AI. At least for now, anyway. Accessibility is a single instance of the general argument that "of course there are major upsides to using AI", and there's a good chance the future only gets brighter.

My point, essentially, is that I think this is (yet another) area in life where you can't solve the problem by saying "don't do it", and enforcing it is cost-prohibitive. Saying "no AI!" isn't going to stop PR spam. It's not going to stop slop code. What is it going to stop (see edit)? "Bad" people won't care, and "good" people (who use/depend-on AI) will contribute less.

Thus I think we need to focus on developing robust systems around integrating AI. Certainly I'd love to see people adopt responsible disclosure policies as a starting point.

--

[edit] -- To answer some of my own question: there are obvious legal concerns that frequently come up. I have my opinions, but as in many legal matters, especially around IP, the water is murky and opinions are strongly held at both extremes. All too often, having to fight a legal battle *at all* is immediately a loss regardless of the outcome.


> I've done nothing to argue that the harm isn't real, nor have I downplayed or misrepresented it.

You're literally saying that the upsides of hallucinogenic gifts are worth the downside of collapsing society. I'd say that that is downplaying and misrepresenting the issue. You even go so far as to say

>Telling people "no AI!" (even if very well defined on what that means) is toothless against people with little regard for making the world (or just one specific repo) a better place.

These aren't balanced arguments taking both sides into consideration. It's a decision that your mindset is the only right one and anyone else is opposing progress.


> are worth the downside of collapsing society.

At least in the US, society has been well on its way to collapse since before LLMs came out. "Fake news" is a great example of this.

>It's a decision that your mindset is the only right one and anyone else is opposing progress.

So pretty much every religious group that's ever existed for any amount of time. Fundamentalism is totally unproblematic, right?


> At least in the US, society has been well on its way to collapse since before LLMs came out. "Fake news" is a great example of this.

IMO you can blame this on ML and the ability to microtarget[1] constituencies with propaganda that's been optimized, workshopped, focus-grouped, etc. to death.

Proto-AI got us there, LLMs are an accelerator in the same direction.

[1] https://en.wikipedia.org/wiki/Microtargeting


welp, flip another one from the "they definitely could do this and might be" pile to the "they've already been doing this for a long time" pile


Sure. I always said AI was a catalyst. It could have made society build up faster and accelerated progress, definitely.

But as modern society is, it is simply accelerating the low trust factors of it and collapsing jobs (even if it can't do them yet), because that's what was already happening. But hey, assets also accelerated up. For now.

>So pretty much every religious group that's ever existed for any amount of time. Fundamentalism is totally unproblematic, right?

Religion is a very interesting factor. I have many thoughts on it, but for now I'll just say that a good 95% of religious devotees utterly fail at following what their relevant scriptures say to do. We can extrapolate the meaning of that in so many ways from there.


>You're literally saying that the upsides of hallucinogenic gifts are worth the downside of collapsing society.

No, literally, he didn't.


Yes, I literally quoted it.


You quoted him and then put words into his mouth based on your own strongly held beliefs. Words he neither said nor implied.


It's absolutely not a straw man, because OP and people like OP will be affected by any policy which limits or bans LLMs. Whether or not the policy writer intended it. So he deserves a voice.


He doesn't think others deserve a voice, so why should I consider his?


The fact that you are engaging in this thread shows me you have considered my opinions, even if you reject them. I think that's great, even in the face of being told I advocate for the collapse of civilization and that I want others to shut up and not be heard.

It is a bit insulting, but I get that these issues are important and people feel like the stakes are sky-high: job loss, misallocation of resources, enshittification, increased social stratification, abrogation of personal responsibility, runaway corporate irresponsibility, amplification of bad actors, and just maybe that `p(doom)` is way higher than AI-optimists are willing to consider. Especially as AI makes advances into warfare, justice, and surveillance.

Even if you think AI is great, it's easy to acknowledge that all it may take is zealotry and the rot within politics to turn it into a disaster. You're absolutely right to identify that there are some eerie similarities to the "guns don't kill people, people kill people" line of thinking.

There IS a lot to grapple with. However, I disagree with these conclusions (so far) and especially that AI is a unique danger to humanity. I also disagree that AI in any form is our salvation and going to elevate humanity to unfathomable heights (or anything close to that).

But, to bring it back to this specific topic, I think OSS projects stand to benefit (increasingly so as improvements continue) from AI and should avoid taking hardline stances against it.


Sure. I don't necessarily think your opinion is radical. But it's also important to consider biases within oneself, especially when making use of text as a medium where the nuance of body language is lost.

The main thing that put me off on the comment was the outright dismissal of other opinions. That's rarely a recipe for a productive conversation.

>However, I disagree with these conclusions (so far) and especially that AI is a unique danger to humanity.

I don't think it's unique. It's simply a catalyst. In good times with a system that looks out for its people, AI could do great things and accelerate productivity. It could even create jobs. None of that is out of reach, in theory.

But part of understanding the negative sentiment is understanding that we aren't in that high trust society with systems working for the citizen. So any bouts of productivity will only be used to accelerate that distrust. Looking at the marketing of AI these past few years confirms this. So why would anyone trust it this time?

Rampant layoffs, vague hand waves of "UBI will help" despite no structures in place for that, more than a dozen high profile kerfuffles that can only be described as a grift that made millions anyway, and persistent lobbying to try and make it illegal to regulate AI. These aren't the actions of people who have the best interests of the public masses in mind. It's modern day robber barons.

>I think OSS projects stand to benefit (increasingly so as improvements continue) from AI and should avoid taking hardline stances against it.

I don't have a hard-line stance on how organizations handle AI. But from my end, I hear that AI has mostly become a stressor on contributors trying to weed out the flood of low-quality submissions. AI or not (again, AI is a catalyst, not the root cause), that's a problem for what's ultimately a volunteer position that requires highly specialized skills.

If the choice comes down to banning AI submissions, restricting submissions altogether with a different system, or burning out talent trying to review all this slop, I don't think most orgs will choose the last one.


> These use cases are like blaming MySQL for storing the lat/long of the school.

A storage layer versus a decision making system? What a ridiculous comparison.

