
This presupposes that human value only exists in the things current AI tech can replace—pattern recognition/creation. I'd wager the same argument was made when hand-crafted things were being replaced with industrialized products.

I'm not saying those things aren't valuable, or that humans can't express social and spiritual value in those ways, but that human value doesn't only exist there. And so, to give AI the power of complete dehumanization is to reduce humans to just pattern followers. I don't believe that is the case.



> I'm not saying those things aren't valuable, or that humans can't express social and spiritual value in those ways, but that human value doesn't only exist there.

This sounds sort of like a "God of the gaps" argument.

Yes, we could say that humanity is left to express itself in the margins between the things machines have automated away. As automation increases its capabilities, we just wander around looking for some untouched back-alley or dark corner the robots haven't swept through yet, and do our dancing and poetry slams there until the machines arrive, forcing us to scurry away again.

But at that point, who is the master, us or the machines?


What we still get paid to do is different from what we're still able to do. I'm still able to knit a sweater if I find it enjoyable. Some folks can even do it for a living (but maybe not a living wage).


If this came to pass, the population would be stripped of dignity pretty much en masse. We need to feel competent, useful, and connected to people. If people feel they have nothing left, then their response will be extremely ugly.


> And so, to give AI the power of complete dehumanization is to reduce humans to just pattern followers.

It would, but I don't think that's what they're saying. The agent of dehumanization isn't the technology but the selection of what the technology is applied to. Or, like the quip: "we made an AI that creates, freeing up more time for you to work."

Wherever human value (however you define it) exists or is created by people, what would it look like to apply this technology such that human value increases? Does that look like how we're applying it? The article seems to me to be focused much more on how this is actually being used right now than on how it could be.


It kind of makes sense if following a particular pattern is your purpose and life, and maybe your identity.


We should actively encourage fluidity in purpose; too much rigidity or militant clinging to ideas is insecurity, or an attempt to absolve oneself of personal responsibility.

Resilience and strength in our civilisation come from confidence in our competence, not from sanctifying patterns so we don't have to think.

We need to encourage and support fluidity; domain knowledge is being commoditised, and the future is fluid composition.


Great, so tell someone who spent years honing their skills: too bad the rug was pulled out from under you, time to start over from the bottom again.

Maybe there would be merit to this notion if society provided the necessary safety net for this person to start over.


Agreed. I think there should be a much stronger safety net for people to start over and be more fluid. I definitely think the weird "full-time employed or homeless" dichotomy has to change.


"Protect the person, not the job" is what we should be aiming for. I don't think we will, but we should.


> We should actively encourage fluidity in purpose

I don't think we should assume most people are capable of what you describe. Assigning "should" to this assumes what you're describing is psychologically tenable across a large population.

> too much rigidity or militant clinging to ideas is insecurity, or an attempt to absolve oneself of personal responsibility.

Or maybe some people have a singular focus in life and that's ok. And maybe we should be talking about the responsibility of the companies exploiting everyone's content to create these models, or the responsibility of government to provide relief and transition planning for people impacted, etc.

To frame this as a personal responsibility issue seems fairly disconnected from the reality that most people face. For most people, AI is something that is happening to them, not something they are responsible for.

And to whatever extent we each do have personal responsibility for our careers, this does not negate the incoming harms currently unfolding.


“Some people have a singular focus in life and that’s OK”

Strong disagree; that’s not OK, it’s fragile.


Much of society is fragile. The point is that we need to approach this from the perspective of what is, not from what we wish things could be.


People come with all sorts of preferences. Telling people who love mastery that they have to be "fluid" isn't going to lead to happy outcomes.


Absolutely, I agree with that.


How would this matter?

People can self-assign any value whatsoever… that doesn’t change.

If they expect external validation then that’s obviously dependent on multiple other parties.


Due to how AI works, it's only a matter of time until it's better at pretty much everything humans do besides "living".

People tend to talk about any AI-related topic by comparing it to industrial shifts that happened in the past.

But it's much, Much, MUCH bigger this time, mostly because AI can make itself better: it is better with every passing month, and it will keep getting better.

It's a matter of years until it can completely replace humans in any form of intellectual work.

And those are not my words but those of some of the smartest people in the world, like the "godfathers of AI".

We humans think we are special, that there won't be something better than us. But we are in the middle of creating something better.

It will be better. Smarter. Not tired. Won't be sick. Won't ever complain.

And it IS ALREADY replacing a lot of jobs, and it WILL replace more, without creating new ones, purely due to efficiency gains and the lack of brainpower in the majority of people who will be laid off.

Not everyone is a Nobel Prize winner. And soon only such people will be needed to advance AI.


> because AI can make itself better

Can it? I'm pretty sure current AI (not just LLMs, but neural nets more generally) requires human feedback to prevent overfitting, which fundamentally undercuts any fear or hope of the singularity as predicted.

AI cannot make itself better because it cannot meaningfully define what "better" means.
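
To make that concrete, here is a minimal sketch (everything in it is illustrative, not taken from any real system): even a fully automated training loop only gets "better" against an objective a human chose, and its guard against overfitting, early stopping on a held-out validation set, is itself a human-specified rule.

    # Minimal sketch: "better" is defined by a human-chosen loss (MSE) and a
    # human-chosen validation split; all names and numbers are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: y = 3x + noise, split into train/validation by human decision.
    X = rng.uniform(-1, 1, size=200)
    y = 3 * X + rng.normal(0, 0.1, size=200)
    X_train, y_train, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

    w = 0.0  # single weight, fit by gradient descent on the training set
    best_w, best_val_loss, patience = w, float("inf"), 0

    for step in range(1000):
        # "Improvement" means lowering mean squared error: a human-chosen metric.
        grad = np.mean(2 * (w * X_train - y_train) * X_train)
        w -= 0.1 * grad

        val_loss = np.mean((w * X_val - y_val) ** 2)
        if val_loss < best_val_loss:
            best_w, best_val_loss, patience = w, val_loss, 0
        else:
            patience += 1
            if patience >= 10:  # early stopping: the defense against overfitting
                break           # is itself a human-specified rule

    print(f"stopped at step {step}, w = {best_w:.3f}, val MSE = {best_val_loss:.4f}")

The loop "improves itself" at every step, but nothing in it can decide that MSE on this validation set was the wrong notion of "better" in the first place; that judgment stays with the humans who set it up.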


AlphaEvolve reviewed how it's trained and found a way to improve the process.

It's only the beginning. AI agents are able to simulate tasks, get better at them, and make themselves better.

At this point it's silly to say otherwise.


> It's a matter of years until it can completely replace humans in any form of intellectual work.

This is sensationalism. There’s no evidence in favor of it. LLMs are useful in small, specific contexts with many guardrails and heavy supervision. Without human-generated prior art for that context they’re effectively useless. There’s no reason to believe that the current technical path will lead to much better than this.


Call me when 'AI' cooks meals in our kitchens, repairs the plumbing in our homes, and removes the trash from the curb.

Automation has costs, and imagining what LLMs do now as the start of self-improving, human-replacing machine intelligence is pure fantasy.


To say that this is pure fantasy when there are more and more demos of humanoid robots doing menial tasks, and the costs of those robots are coming down is ... well something. Anger, denial (you are here)...


> To say that this is pure fantasy when there are more and more demos of humanoid robots doing menial tasks

A demo is one thing. Being deployed in the real world is something else.

The only thing I've seen humanoid robots doing is dancing and occasionally a backflip or two. And even most of that is with human control.

The only menial task I've seen a humanoid robot do so far is take bags off a conveyor belt, flatten them out, and put them on another belt. It did this at about 1/10th the speed of a human, and some bags still ended up on the floor. That was about a month ago, so the state of the art is still at the demo stage.


I'm waiting. You're talking to someone who, right around 2012, believed that self-driving vehicles would put truckers out of work within a decade. I didn't think that one through. The world is very complicated, and human beings are the cheapest and most effective way to get physical things done.


Not entirely.

The risk raised in the article is that AI is being promoted beyond its scope (pattern recognition/creation) into legal and moral decision-making.

The techno-optimists will claim that legal/moral choices may be nothing more than the sum of various pattern-recognition mechanisms...

My take on the article is that this misses a deeper point: AI cannot have a human-centered morality/legality because it can never be human. It can only ever amplify the existing biases in its training environments.

By decoupling the gears of moral choice from human interaction, whether by choice or by inertia, we are removing humanity from the mechanisms that amplify moral and legal action (or, in some perverse cases, that amplify the biases intentionally).


To build on your point, we only need to look at another type of entity that has a binary reward system and is inherently amoral: the corporation. Though it has many of the same rights as a human (in the US), the corporation itself is amoral, and we rely upon the humans within it to retain a moral compass, to their own detriment, which is a foolish endeavor.

Even further, AI has only learned from what we've articulated and recorded, so its inherent biases are only those of our recordings. I'm not sure how that sways the model, but I'm sure that it does.



