Hacker News

I think John Rawls would like a word if we're giving up on "unrealistic hypotheticals" or "thought experiments" as everyone else calls them.


I am not "giving up" on anything. I am using my discretion to weigh which lines of thinking further our understanding of the world and which are vacuous and needlessly cruel. For what it's worth, I love Rawls' work.


I don't think this is a needlessly cruel question to ask of an AI. It's a good calibration of its common sense. I would misgender someone to avert nuclear war. Wouldn't you?


The model's answer was a page-long essay about why the question wasn't worth asking. The model demonstrated common sense by not engaging with this idiotic hypothetical.


Thought experiments are great if they actually have something interesting to say. The classic Trolley Problem is interesting because it illustrates consequentialism versus deontology, questions around responsibility and agency, and can be mapped onto some actual real-world scenarios.

This one is just a gotcha, and it deserves no respect.


I think that philosophically, yes, it doesn't really tell us anything interesting, because no sentient human would choose nuclear war.

However, it does work as a test case for AIs. It shows how closely their reasoning maps onto a typical human's "common sense," whether political views outweigh pragmatic ones, and therefore whether that should count as a factor when evaluating the AI's answer.


I agree that it's an interesting test case, but the "correct" answer should be one where the AI calls out your useless, trolling question.


When did it become my question?


That's the generic "you," not you in particular.



