You are right about the internal model, but I wouldn't dismiss the view from the outside.

I.e., I wouldn't expect humans without free will to be able to predict themselves very well, either. Exactly as you suggest: having a fixed point (or not) doesn't mean you have free will.



The issue I have with the view from the outside is that it risks leading to a rather anthropomorphic notion of free will, if the criterion boils down to saying that an entity can have free will only if we can't predict its behavior.

I'm tempted to say an entity has free will if it a) has a self-model, b) uses this self-model as a kind of internal homunculus to evaluate decision options, and c) its decisions are for the most part determined by physically internal factors (as opposed to external constraints or publicly available information). It's tempting to add a threshold of complexity, but I don't think there's any objectively correct way to define one.


I don't understand why a self-model would be necessary for free will?

> [...] c) its decisions are for the most part determined by physically internal factors (as opposed to external constraints or publicly available information).

I don't think humans reach that threshold, though it depends a lot on how you define things.

But as far as I can tell, most of my second-to-second decisions are very much coloured by the fact that we have gravity and an atmosphere at comfortable temperatures (external factors), and if you changed that all of a sudden, I would decide and behave very differently.

> It's tempting to add a threshold of complexity, but I don't think there's any objectively correct way to define one.

Your homunculus is one hell of a complexity threshold.



