
This question applies to every organization in the history of the world. It's logically impossible to design any kind of social system that can't be ruined by people.



Benevolent AI maybe?

More like Iain Banks's "Culture"; that's the universe I'd choose to live in out of all sci-fi, I think.


I don't think AI really solves the problem. The question "What happens when sensible humans are no longer at the helm?" really means "What happens when people with different priorities are at the helm?" It's totally possible for a sane being, artificial or not, to take over Patreon and prioritize profit, or enforce their own values, or even shut down the site.

The core problem is that you're interacting with entities you can't control. It's almost tautological; if you can't control them, by definition they can do things you don't like.

I've only read the first Culture book, but I'm pretty sure the Idirans didn't feel like the Culture AIs were "sensible".


An AI would need to live a human life in a human body in order to understand what "benevolence to humans" means, and such an experience would make it functionally equivalent to a human with a cloud prosthesis.


Do we need to live in the body of a dog to understand "benevolence to dogs"?


Yes.

Luckily, we live in bodies that are almost exactly the same as a dog's.

The reason why you empathize with a dog more than an octopus is precisely because our bodies are similar.

Do you have another theory as to why we empathize with dogs more than, say, a forest, or a hive of ants?


People also empathize with a dog more than an adult human: http://booksandjournals.brillonline.com/content/journals/10....


I see how empathy can help, but I disagree that it's required.

We could choose to be benevolent to octopuses in a number of ways. We know killing them is bad from their perspective. We know that some like making homes, that all like eating certain foods, and many other things. We could get very far by acting in accordance with their apparent desires and simply not exploiting them or their environment.


There's no way to understand anything you can't map on to your embodied experience of the world.

You can only even perceive their apparent desires to the extent you can map them onto your own experiences.

You could offer them a language for controlling you, and they could learn to use it according to their desires through trial and error, but without shared bodily experience you can't understand those desires.

And them being able to control you doesn't solve the benevolence problem. Because benevolence is understanding without ceding control. For you to be benevolent you need to actually understand them, which requires shared experience.

This is why I think consent is a much better basis for social coordination than benevolence.

In the case of AIs it is somewhat moot, because the first general/hard AIs will be clones of humans in humanoid bodies.


I mean, probably, right? It's impossible to know. We can only be benevolent by our own standards. For example, the entire premise of the Matrix was AIs treating humans as benevolently as possible (in their minds) without inconveniencing themselves.


Actually, I think the Matrix example fits perfectly into the OP's dog example.

The machines invented the Matrix because humans would riot if the simulation was too perfect or too shitty, so instead it was made perfectly normal. The machines didn't care either way; they just wanted to prevent more of the random "wakeup" events that occurred when the world was unrealistic.

So we can't know what is good for a dog other than what they tell us. If the dog is biting us or whining, we're probably doing the wrong thing. Sure, you can throw in a bit of human empathy to account for the fact that we cuddle and pet our dogs, but we still know what they like based on what they're telling us.


Unless you want to take into account their reaction. An upset dog will growl and bite. A happy dog has certain behaviors. It is not robust communication, but it does exist.

In the beginning perhaps humans used dogs in an exploitative way, but many of the relationships now are mutual. Most dog owners I know treat their dog as if it were "man's best friend", and the dogs generally seem to reciprocate. The machines in the Matrix wanted us for our value as a resource, not for companionship, and they certainly didn't respect us.

I think exploitation and benevolence are opposed if not mutually exclusive.


If we did, we might not keep them as pets.


We created dogs; they have no natural place in the wild. Before them, the canids were wolves and coyotes.


We sure as hell didn't create them to be locked in an apartment all day listening to NPR while we work.


I'm thinking of Breq in Ancillary Justice...

A large-scale ship's AI reduced to a single ancillary, having to deal with humans on their level.


Is that worth a read?

I know it won a bunch of awards.


False dichotomy. The question is not yes/no. It is a matter of degree. How angelic do people need to be for a system to work?

If we had angels, any system could work. If we had devils, no system could work. The reality is in between.



