Humans don't simplify problems by reducing them to objective functions: we simplify them by reducing them to specific instances of abstract concepts. Human thought is fundamentally different to the alien processes of naïve optimising agents.
We do understand the "real objectives"; our inability to communicate that understanding to hill-climbing algorithms is a sign of its depth. There's no reason to believe that anything we currently call "AI" is capable of translating it into a form that, magically, makes a hill-climbing algorithm output the correct answer.
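To make the gap concrete, here is a minimal, hypothetical sketch (not from the original text) of what a hill-climbing algorithm actually receives: an explicit numeric scoring function and nothing else. The function `proxy` and every name in it are illustrative assumptions; the point is only that whatever understanding motivated the objective is invisible to the optimiser unless it survives compression into a single returned number.

```python
import random

def hill_climb(objective, start, step=0.1, iterations=1000):
    """Naive hill climbing: take small random steps and keep a step only if
    the explicit numeric objective improves. The algorithm knows nothing
    about the problem beyond the number this function returns."""
    x = start
    best = objective(x)
    for _ in range(iterations):
        candidate = x + random.uniform(-step, step)
        score = objective(candidate)
        if score > best:
            x, best = candidate, score
    return x, best

# A proxy objective we can write down. The intent behind it is not part of
# the interface: the optimiser only ever sees the returned score.
proxy = lambda x: -(x - 3.0) ** 2

print(hill_climb(proxy, start=0.0))  # converges near x = 3, the proxy's peak
```

Whatever we mean by the "real objective" has to be squeezed through that single scalar channel before the algorithm can act on it; the sketch is only meant to show how narrow that channel is.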