
Imagine two greeting cards. One says “I’m so sorry for your loss”, and the other says “Everyone dies, they weren’t special”.

Does one of these have a higher EQ, despite both being ink and paper and definitely not sentient?

Now, imagine they were produced by two different AIs. Does one AI demonstrate higher EQ?

The trick is in seeing that the "EQ of a text response" is not the same thing as the "EQ of a sentient being".



i agree with you. i think it's dishonest for them to post-train 4.5 to feign sympathy when someone vents to it. it's just weird. they showed it off in the demo.


Why? The choice not to do the post-training would be every bit as intentional, and no different from post-training it to be less sympathetic.

This is a designed system. The designers make choices. I don’t see how failing to plan and design for a common use case would be better.


We do not know if it is capable of sympathy. Post-training it to reliably be sympathetic feels manipulative. Can it at least be post-trained to be honest? Dishonesty is immoral. I want my AIs to behave morally.


AIs don't behave. They are a lot of fancy maths. Their creators, though, can behave in ethical or moral ways when they create these models.

(Not to say that the people who work on AI are not incredibly talented, but rather that the AI itself is not human.)


that's just pedantic and unprovable, since you can't know whether it has a qualitative experience or not.

training it to pretend to be a feelingless robot or a sympathetic mother are both weird to me. it should just state facts to us.



