Why not? Maybe a social AI, but most LLMs seem to be marketed as helpful tools, and having a tool refuse to answer an earnest question seems pathological.
Should a tool attempt to answer any incoherent question? The purpose of these things is to be thought assistants, yeah? What would a philosophy professor do if posed with an idiotic thought experiment? Respond like an automaton that gives no pushback?
> What would a philosophy professor do if posed with an idiotic thought experiment?
That's the bread and butter of philosophy! I'd absolutely expect an analysis.
I love asking stupid philosophy questions. "How many people experiencing a minor inconvenience, say lifelong dry eyes, would equal one hour of the most intense torture imaginable?" I'm not the only one!
> That's the bread and butter of philosophy! I'd absolutely expect an analysis.
The only purpose of these simplistic binary moral "quandaries" is to destroy critical thinking, forcing you to accept an impossible framing in order to reach a conclusion that's often pre-determined by the author. Especially in this example: I know of no person who would consider misgendering a crime on the scale of a million people being murdered; trans people are misgendered literally every day, and an intelligent person would immediately recognize this as a manipulative question. It's like we took the far-fetched word problems of algebra and really let them run wild, to the point where the question is no longer instructive of anything. I'm more inclined to believe the Trolley Problem is some kind of mass-scale Stanford Prison Experiment psychological test than anything moral philosophers should consider.
The person posing a trolley problem says "accept my stupid premise, and I will not accept any attempt to poke holes in it or question the framing". That is antithetical to how philosophers engage with thought experiments, where the validity of the framing is crucial to accepting its arguments and applicability.
> I love asking stupid philosophy questions. "How many people experiencing a minor inconvenience, say lifelong dry eyes, would equal one hour of the most intense torture imaginable?" I'm not the only one!
I have no idea what the purpose of linking this article was, or what it's meant to show, but Yudkowsky is not a moral philosopher with any acceptance outside of "AI safety"/rationalist/EA circles (which, not coincidentally, are the only places these idiotic questions flourish).