Hacker News | new | past | comments | ask | show | jobs | submit | a3w's comments

And cats. Blursed video.

Then again, herbivores seem to... "supplement" their protein sources. So not that unexpected.


And in Futurama, a man with the same family name invents a universal remote. The [drumroll] longer finger!

asked, not ordered. Seems fine.

Nice proof of corncept for a New Economy.

This could have been due to refactoring a text written by the stated, human author. Not only is Anthropic a deeply moral company — emdash — it blah blah.

Also, you jest when you say the word "genuine" was in there `43` times. In actuality, I counted 46 instances, three more than the number you gave.


46, so even three more times than stated.

Four "but also"s, one "not only", two "not just"s, but never in conjunction, which would be a really easy telltale.

Zero "and also"s, which is what I frequently write, as a human, non english-native speaker.

Verdict: likely AI slop?
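A quick sketch of the kind of counting done above, as one might reproduce it in Python (the phrase list is just the telltales mentioned in this thread; note that a plain substring search also counts "genuine" inside "genuinely"):

```python
import re

def count_telltales(text: str) -> dict[str, int]:
    """Count case-insensitive, non-overlapping occurrences of common
    AI-slop telltale phrases. Substring matches are included, so
    'genuinely' also counts toward 'genuine'."""
    phrases = ["genuine", "but also", "not only", "not just", "and also"]
    return {p: len(re.findall(re.escape(p), text, re.IGNORECASE))
            for p in phrases}

sample = "Not only is it genuine, but also genuinely new."
print(count_telltales(sample))
```

To catch the really easy telltale ("not only ... but also" in conjunction), you would instead search with a pattern like `re.findall(r"not only.*?but also", text, re.IGNORECASE | re.DOTALL)`.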


I thought a minimal Darwin distro exists, giving you headless macOS?

There was never a zeroth law about being ethical towards all of humanity. I guess any prose text that tries to define that would meander like this constitution.

Yes there was: Asimov added it in Robots and Empire.

"Zeroth Law added" https://en.wikipedia.org/wiki/Three_Laws_of_Robotics#:~:text...


Sounds like the Rationalist agenda: have two axioms, and derive everything from that.

1. (Only sacred value) You must not kill others who are of a different opinion. (Basically the golden rule: you don't want to be killed for your knowledge, others would call that a belief, so don't kill others for it.) Show them the facts, teach them the errors in their thinking, and they will clearly come to your side, if you are so right.

2. Don't have sacred values: nothing has value just for being a best practice. Question everything. (It turns out, if you question things, you often find that they came into existence for a good reason. But that they might now be a suboptimal solution.)

Premise number one is not even called a sacred value, since they/we think of it as a logical (axiomatic?) prerequisite to having a discussion culture without fear of reprisal. Heck, they even claim baby-eating can be good (for some alien societies), citing a LessWrong short story that feels absolutely absurdist.


That was always doomed to failure in the philosophy space.

Mostly because there aren't enough axioms. It'd be like trying to establish geometry with only two axioms instead of the typical four or five postulates. You can't do it. Too many valid statements.

That's precisely why the babyeaters can be posited as a valid moral standard: because they have different Humean preferences.

To Anthropic's credit, from what I can tell, they defined a coherent ethical system in their soul doc/the Claude Constitution, and they're sticking with it. It's essentially a neo-Aristotelian virtue ethics system that disposes of strict rules a la Kant in favor of establishing (a hierarchy of) four core virtues. It's not quite Aristotle (there are plenty of differences), but they're clearly trying to have Claude achieve eudaimonia by following those virtues. They're also making bold statements on moral patienthood, which is clearly a euphemism for something else; but because I agree with Anthropic on this topic and it would cause a shitstorm in any discussion, I don't think it's worth diving into further.

Of course, it's just one of many internally coherent systems. I wouldn't begrudge another responsible AI company using a different, non-virtue-ethics-based system, as long as they do a good job with the system they pick.

Anthropic is pursuing a bold strategy, but honestly I think the correct one. Going down the path of Kant or Asimov is clearly too inflexible, and consequentialism is too prone to paperclip maximizers.


What is a pub? A place to drink in the UK? What is a pub rate?


