
I feel like if you are an intelligent entity propagating yourself through spacetime, you will have goals:

If you are intelligent, you will be aware of your surroundings moment by moment, so you are grounded by your sensory input. Otherwise, there is a whole class of not-very-hard problems you can't solve.

If you are intelligent, you will be aware of the current state and will have desired future states; in other words, you will have goals. Otherwise, how are you intelligent?

You made this point yourself when you said "A super intelligent species would likely want to preserve everything", which is a goal. This isn't a gotcha; I just feel like goals are inherent to true intelligence.

This is a big reason why even the huge SOTA frontier models aren't comprehensively intelligent in my view: they are huge, static compositional functions. They don't self-reflect, take action, or update their own state during inference*, though active inference is cool stuff people are working on right now to push SOTA.

*There are some arguments about what's happening metaphysically in-context, but the function itself is unchanged between sessions.
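
To make the "static function vs. stateful agent" distinction concrete, here's a rough Python sketch. It's purely illustrative: static_model and StatefulAgent are made-up names, not any real model API, and the "active inference" part is just the bare idea of carrying a belief state that updates as observations come in.

    from dataclasses import dataclass, field

    def static_model(weights, prompt):
        # Stand-in for a frozen forward pass: the weights never change
        # between calls or sessions, so it's the same function every time.
        return "completion(%r, n_params=%d)" % (prompt, len(weights))

    @dataclass
    class StatefulAgent:
        # Hypothetical agent: carries a goal and a belief state that is
        # updated during "inference" rather than staying fixed.
        goal: str
        beliefs: list = field(default_factory=list)

        def observe(self, observation):
            # Ground the agent in its sensory input, moment by moment.
            self.beliefs.append(observation)

        def act(self):
            # Pick an action by comparing current beliefs to the desired state.
            return "act toward %r given %d observations" % (self.goal, len(self.beliefs))

    frozen = {"w0": 0.1, "w1": -0.3}
    print(static_model(frozen, "hello"))      # identical mapping every session

    agent = StatefulAgent(goal="preserve everything")
    for obs in ["sensor reading 1", "sensor reading 2"]:
        agent.observe(obs)                    # internal state changes as it runs
        print(agent.act())

The point of the contrast: the first thing is a fixed mapping from input to output, the second has goals and a persistent, changing state, which is roughly what I mean by "grounded" above.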



