Hacker News

> We probably can't. I mean why stop at humans? Let's give every pet the same luxury, or ... in the limit we could give this to every living being. Ultimately someone is going to draw the line who gets what and who is useful or not "for the greater good".

Eh.

A line, drawn somewhere, sure.

Humans being humans, there's a good chance the rules on UBI will expand to exclude more and more people — we already see that with existing benefits systems.

But none of that means we couldn't do it.

Your example is pets. OK, give each pet their own mansion and servants, too. Why not? Hell, make it an entire O'Neill Cylinder each — if you've got full automation, it's no big deal, as (for reasonable assumptions on safety factors etc.) there's enough mass in Venus to make 500 billion O'Neill Cylinders of 8 km radius by 32 km length, which is close to the order-of-magnitude best guess for the total number of individual mammals on Earth.

Web app to play with your size/safety/floor count/material options: https://spacecalcs.com/calcs/oneill-cylinder/
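The 500 billion figure above can be sanity-checked with a few lines of arithmetic. This is a rough sketch, not the linked calculator's method: the ~5 t/m² hull areal density (roughly 0.6 m of solid steel) is my own assumption, and the calculator lets you vary exactly that kind of parameter.

```python
import math

M_VENUS = 4.867e24          # kg, mass of Venus
radius = 8_000.0            # m, cylinder radius
length = 32_000.0           # m, cylinder length
hull_areal_density = 5_000  # kg/m^2 of hull -- assumed, ~0.6 m of steel

# Shell area: curved wall plus two end caps.
area = 2 * math.pi * radius * length + 2 * math.pi * radius**2
mass_per_cylinder = area * hull_areal_density

n_cylinders = M_VENUS / mass_per_cylinder
print(f"{n_cylinders:.2e}")  # on the order of 5e11, i.e. ~500 billion
```

Each cylinder comes out around 1e13 kg of hull under these assumptions, which is where the ~500 billion count comes from; thicker hulls or bigger safety factors shrink it proportionally.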

> It just happens that many living beings don't contribute to the goals of whoever is in charge and if they get in the way or cause resource waste nobody will care about them, humans or not.

Sure, yes, this is a big part of AI alignment and AI safety: will it lead to humans being akin to pets, or to something even less than pets? We don't care about termite mounds when we're building roads. A Vogon Constructor Fleet by any other name will be an equally bitter pill, and Earth is probably slightly easier to begin disassembling than Venus.



First, don't count on AI being aligned at all. States that are behind in the AI race will take increasing risks with alignment to catch up. Without a doubt, one of the first use cases of AI will be as a cyberweapon to hack and disrupt critical systems. If you are in a race to achieve that, alignment will be very narrow to begin with.

Regarding pets vs. humans: the main difference is really that humans are capable of understanding and communicating the long-term consequences of AI and unchecked power, which makes them a threat, so it's not a big leap to see where this is heading.


> First, don't count on AI being aligned at all.

I don't. Even in the ideal case: aligned with whom? Even if we knew what we were doing, which we don't, it's all the unsolved problems in ethics, law, governance, economics, and the meaning of the word "good", rolled into one.

> Without a doubt, one if the first use cases of the AI will be as a cyberweapon to hack and disrupt critical systems.

AI or AGI? You don't even need an LLM to automate hacking; even the Morris worm performed automated attacks.

> humans are capable of understanding and communicating the long term consequences of AI and unchecked power

The evidence does not support this as a generalisation over all humans: even though I can see many possible ways AI might go wrong, the reason for my belief in the danger is that I expect at least one such long-term consequence to be missed.

But also, I'm not sure you got my point about humans being treated like pets: it's not a cause of a bad outcome, it is one of the better outcomes.


It's always nice to see someone else on Hacker News who has pretty much independently derived most of my conclusions on their own terms. I have little to add except nodding in agreement.

Kudos, unless we both turn out to be wrong of course.



