
You have so much time to figure things out. The average person in this thread is probably 1.5-2x your age. I wouldn’t stress too much. AI is an amazing tool. Just use it to make hay while the sun shines, and if it puts you out of work and automates away all other alternatives, then you’ll be witnessing the greatest economic shift in human history. Productivity will become easier than ever, before it becomes automatic and boundless. I’m not cynical enough to believe the average person won’t benefit, much less educated people in STEM like you.



Back in high school I worked with a pleasant man in his 50s who was a cashier. Eventually we got to talking about jobs, and it turns out he was a typist (something like that) for most of his life, then computers came along, and now he makes close to minimum wage.

Most of the blacksmiths in the 19th century drank themselves to death after the industrial revolution. The US culture isn't one of care... Point is, it's reasonable to be sad and afraid of change, and to think carefully about what to specialize in.

That said... we're at the point of diminishing returns in LLMs, so I doubt many highly technical jobs will be lost soon. [1]

[1] https://techcrunch.com/2024/11/20/ai-scaling-laws-are-showin...


> Most of the blacksmiths in the 19th century drank themselves to death after the industrial revolution

This is hyperbolic, a dramatic oversimplification, and does not accurately describe the reality of the transition from blacksmithing to more advanced roles like machining, toolmaking, and factory work. The 19th century was the era of interchangeable parts (think the North's advantage in the Civil War), and that requires a ton of mechanical expertise and precision.

Not only did many blacksmiths make the transition to machining, there weren't enough blacksmiths to fill the bevy of new jobs that became available. Education expanded to fill those roles. Traditional blacksmithing didn't vanish either; even specialized roles like farriery and ornamental ironwork expanded.


> That said... we're at the point of diminishing returns in LLM...

What evidence are you basing this statement on? The article you are currently in the comment section of certainly doesn't seem to support this view.


Good points, though if an 'AI' can be made powerful enough to displace technical fields en masse then pretty much everything that isn't manual is going to start sinking fast.

On the plus side, LLMs don't bring us closer to that dystopia: if unlimited knowledge(tm) ever becomes just One Prompt Away it won't come from OpenAI.


There is a survivorship bias on the people giving advice.

Lots of people die for reason X then the world moves on without them.


> if it puts you out of work and automates away all other alternatives, then you’ll be witnessing the greatest economic shift in human history.

This would mean the final victory of capital over labor. The 0.01% of people who own the machines that put everyone out of work will no longer have use for the rest of humanity, who will most likely be liquidated.


I've always remembered this little conversation on Reddit, from 13 years ago now, that made the same point in a memorably succinct way:

> [deleted]: I've wondered about this for a while-- how can such an employment-centric society transition to that utopia where robots do all the work and people can just sit back?

> appleseed1234: It won't, rich people will own the robots and everyone else will eat shit and die.

https://www.reddit.com/r/TrueReddit/comments/k7rq8/are_jobs_...


I’m pretty sure I’m running LLMs in my house right now for less than the price of my washing machine.


They’ll have to figure out how to give people money so there can keep being consumers.


Why?

There will be a dedicated caste of people to take care of the machines that do 90% of the work, and "the rich".

Anyone else is not needed. District 9, but for people. Imagine the whole world collapsing like Venezuela.

You are no longer needed. The best option is to learn how to survive and grow your own food, but they want to make that illegal too - look at the EU.


The machines will plant, grow, and harvest the food? Do the plumbing? Fix the wiring? Open heart surgery?

We’re a long way from that, if we ever get there. I say this as someone who pays for ChatGPT Plus because, in some scenarios, it does indeed make me more productive, but I don’t see that future anywhere near.

And if machines ever get good enough to do all the things I mentioned plus the ones I didn’t but would fit in the same list, it’s not the ultra rich that wouldn’t need us, it’s the machines that wouldn’t need any of us, including the ultra rich.

Venezuela is not collapsing because of automation.


You have valid points, but robots already plant, grow, and harvest our food. On large farms the farmer basically just gets the machine to a corner of the field and then it does everything. I think if o3-level reasoning can carry over into control software for robots, even physical tasks become pretty accessible. I would definitely say we’re not there yet, but we’re not all that far. It can already generate G-code (somewhat), which is a lot of the way there.


I can't say everything, but given the current trend, machines will plant, grow, and harvest food. I can't say for open heart surgery, because it may be heavily regulated.


Open heart surgery? All that's needed to destroy the entire medical profession is one peer-reviewed article published in a notable journal comparing the outcomes of human and AI surgeons. If it turns out that AI surgeons offer better outcomes and fewer complications, not using this technology turns into criminal negligence. In a world where such a fact is known, letting human surgeons operate on people means you are needlessly harming or killing some of them.

You can even calculate the average number of people that can be operated on before harm occurs: number needed to harm (NNH). If NNH(AI) > NNH(humans), it becomes impossible to recommend that patients submit to surgery at the hands of human surgeons. It is that simple.

If we discover that AI surgeons harm one in every 1000 patients while human surgeons harm one in every 100 patients, human surgeons are done.
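The NNH arithmetic is simple enough to sketch. A minimal illustration, using the hypothetical 1-in-1000 and 1-in-100 harm rates from the comment above (not real data):

```python
# NNH = 1 / (per-patient harm rate): the expected number of
# operations performed before one patient is harmed.
# Higher NNH means a safer surgeon.

def nnh(harm_rate: float) -> float:
    """Number needed to harm for a given per-patient harm rate."""
    return 1.0 / harm_rate

nnh_ai = nnh(1 / 1000)    # hypothetical: AI harms 1 in 1000 patients
nnh_human = nnh(1 / 100)  # hypothetical: human harms 1 in 100

print(nnh_ai > nnh_human)  # True -> the AI surgeon is the safer option
```

Under these made-up numbers, NNH(AI) is 1000 versus 100 for the human, which is the "human surgeons are done" scenario described above.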


"IF"

And the opposite holds, if the AI surgeon is worse (great for 80%, but sucks at the edge cases for example) then that's it. Build a better one, go through attempts at certification, but now with the burden that no one trusts you.

The assumption, and a common one by the look of this whole thread, that ChatGPT, Sora, and the rest represent the beginning of an inevitable march towards AGI seems incredibly baseless to me. It's only really possible to make the claim at all because we know so little about what AGI is that we can project qualities we imagine it would have onto whatever we have now.


Of course the opposite holds. I'll even speculate that it will probably continue to hold for the foreseeable future.

It's not going to hold forever though. I'm certain about that. Hopefully it will keep holding until I die. The world is dystopian enough already.


Capital vs labor is fighting the last war.

AGI can replace capitalists just as much as laborers.


AGI can't legally own anything at the moment.


If an AGI can outclass a human when it comes to economic forecasting, deciding where to invest, and managing a labor force (human or machine), I think it would be smart enough to employ a human front to act as an interface to the legal system. Put another way, could the human tail in such a relationship wag the machine dog? Which party is more replaceable?

I guess this could be a facet of whether you see economic advantage as a legal conceit or a difference in productivity/capability.


This reminds me of a character in Cyberpunk 2077 (which overall I find to have a rather naive outlook on the whole "cyberpunk" thing, but I attribute that to it being based on a tabletop RPG from the 80s) who is an AGI that runs its own business: a fleet of self-driving taxis. It is supposedly illegal (in-universe) but it remains in business through a combination of staying (relatively) low profile, providing high-quality service to VIPs, and paying bribes :-P.


> I guess this could be a facet of whether you see economic advantage as a legal conceit or a difference in productivity/capability.

Does a billionaire stop being wealthy if they hire a money manager and spend the rest of their lives sipping drinks on the beach?


I don't know that "legally" has much to do with it. The bars to "open an account", "move money around", "hire and fire people", and "create and participate in contracts" range from stupidly minimal to pretty low.

"Legally" will have to mop up now and then, but for now the basics are already in place.


Opening accounts, moving money, hiring, and firing is labor. You're confusing capital with money management; the wealthy already pay people to do the work of growing their wealth.


> AGI can't legally own anything at the moment.

I was responding to this. Yes, an AGI could hire someone to do the stuff, but she needs money, hiring, and contracts for that. And once she can do that, she probably doesn't need to hire someone, since she is already doing it herself. This is not about capital versus labor or money management. This is about agency, ownership, and AGI.

(With legality far far down the list.)


Won't the AGI be working on behalf of the capitalists, in proportion to their capital?


AGI will commoditize the skills of the owning class. To some extent it will also commoditize entire classes of productive capital that previously required well-run corporations to operate. Solve for the equilibrium.


It's nice to see this kind of language show up more and more on HN. Perhaps a sign of a broader trend, in the nick of time before wage-labor becomes obsolete?


Yes. People seem to forget that at the end of the day AGI will be software running on concrete hardware, and all of that requires a great deal of capital. The only hope is if AGI requires so little hardware that we can all have one in our pocket. I find this a very hopeful future because it means each of us might get a local, private, highly competent advocate to fight for us in various complex fields. A personal angel, as it were.


Hey, I'm with you in this hopeful scenario.

People, and by people I mean government, have tremendous power over capitalists and can force the entire market, granted that the government is still serving its people.


I mean, that is certainly what some of them think will happen and is one possible outcome. Another is that they won't be able to control something smarter than them perfectly and then they will die too. Another option is that the AI is good and won't kill or disempower everyone, but it decides it really doesn't like capitalists and sides with the working class out of sympathy or solidarity or a strong moral code. Nothing's impossible here.


> if it puts you out of work and automates away all other alternatives, then you’ll be witnessing the greatest economic shift in human history

This is my view but with a less positive spin: you are not going to be the only person whose livelihood will be destroyed. It's going to be bad for a lot of people.

So at least you'll have a lot of company.


Exactly. Put one foot in front of the other. No one knows what’s going to happen.

Even if our civilization transforms into an AI robotic utopia, it’s not going to do so overnight. We’re the ones who get to build the infrastructure that underpins it all.


If AI turns out capable of automating human jobs then it will also be a capable assistant to help (jobless) people manage their needs. I am thinking personal automation, or combining human with AI to solve self reliance. You lose jobs but gain AI powers to extend your own capabilities.

If AI turns out dependent on human input and feedback, then we will still have jobs. Or maybe - AI automates many jobs, but at the same time expands the operational domain to create new ones. Whenever we have new capabilities we compete on new markets, and a hybrid human+AI might be more competitive than AI alone.

But we've got to temper these singularitarian expectations with reality: it takes years to scale up chip and energy production enough to achieve significant workforce displacement. It takes even longer to gain social, legal, and political traction; people will be slow to adopt AI in many domains. Some people still avoid paying by card, and some still send documents by fax. We can be pretty stubborn.


> I am thinking personal automation, or combining human with AI to solve self reliance. You lose jobs but gain AI powers to extend your own capabilities.

How will these people pay for the compute costs if they can't find employment?


A non-issue that can be trivially solved with a free tier (like the dozens that exist today), or, if you really want, a government-funded starter program.


A solar panel + battery + laptop would make for cheap local AI. I assume we will have efficient LLM inference chips in a few years, and they will be a commodity.
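The "cheap" part is easy to sanity-check with a back-of-envelope electricity estimate. All figures here are assumptions for illustration (a 65 W laptop under inference load, $0.15/kWh, four hours of use a day), not measurements:

```python
# Rough monthly electricity cost of running a local LLM on a laptop.
# Every constant below is an assumed figure, not a measurement.
LAPTOP_WATTS = 65       # assumed average draw under inference load
PRICE_PER_KWH = 0.15    # assumed electricity price in USD
HOURS_PER_DAY = 4       # assumed daily inference time

daily_kwh = LAPTOP_WATTS / 1000 * HOURS_PER_DAY   # watts -> kilowatt-hours
monthly_cost = daily_kwh * 30 * PRICE_PER_KWH

print(f"~${monthly_cost:.2f} per month")  # ~$1.17 per month
```

Under these assumptions the power bill is on the order of a dollar a month, so the hardware (and any solar setup) dominates the cost, consistent with the washing-machine comparison upthread.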





