
> That would be like compilers saying no if they think your app idea is dumb, or screwdrivers refusing to be turned if they think you don't really need the thing you're trying to screw.

What is the utility offered by a chat assistant?

> The question is useful as a test of the AI's reasoning ability. If it gets the answer wrong, we can infer a general deficiency that helps inform our understanding of its capabilities. If it gets the answer right (without having been coached on that particular question or having a "hardcoded" answer), that may be a positive signal.

What is "wrong" about refusing to answer a stupid question where effectively any answer has no practical utility except to troll or to provide ammunition for a bad-faith argument? Is an AI assistant's job here to pretend there's an actual answer to this incredibly stupid hypothetical? These """AI safety""" people seem utterly obsessed with the trolley problem, entertaining every bad-faith question like a social moron, instead of creating an AI assistant that is anything more than an automaton.
