
> Computers cannot self-rewire like neurons

Computers don't need to "rewire" themselves, since neurons aren't implemented directly in hardware. When you do RLHF, the parameters inside the model are "rewired" in the sense that is relevant for the purpose of this discussion.
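For concreteness, here is a minimal sketch of what that "rewiring" looks like mechanically (assuming PyTorch; the linear layer and scalar reward are toy stand-ins for an LLM and a reward model, and this is a REINFORCE-style update, not a full RLHF pipeline):

    import torch
    import torch.nn as nn

    model = nn.Linear(16, 4)  # toy stand-in for an LLM's parameters
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    before = model.weight.detach().clone()

    x = torch.randn(1, 16)                               # toy prompt representation
    logprob = torch.log_softmax(model(x), dim=-1)[0, 2]  # log-prob of a sampled token
    reward = 1.0                                         # toy reward-model score

    (-reward * logprob).backward()                       # policy-gradient-style loss
    optimizer.step()

    print((model.weight - before).abs().max())           # non-zero: weights "rewired"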

> No computer operates with the brain’s energy efficiency

No existing human-made computer operates with the brain's energy efficiency, true. But the claim isn't about specific computers; it's about computers in general. There's no reason to believe that a computer operating with the same efficiency is impossible. The efficiency of the human brain is still well within the limit imposed by thermodynamics, and everything above that limit is, in principle, possible.
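A back-of-the-envelope comparison against the Landauer limit (the thermodynamic minimum energy to erase one bit of information) makes that headroom concrete. The brain numbers below are rough, commonly cited estimates, not measurements:

    import math

    k_B = 1.380649e-23                   # Boltzmann constant, J/K
    T = 310.0                            # body temperature, K
    landauer = k_B * T * math.log(2)     # ~3e-21 J per bit erased

    brain_power = 20.0                   # W, commonly cited brain power budget
    synaptic_ops = 1e14                  # rough estimate of synaptic events/s
    energy_per_op = brain_power / synaptic_ops

    print(f"Landauer limit:   {landauer:.2e} J/bit")
    print(f"Brain, per event: {energy_per_op:.2e} J")
    print(f"Headroom factor:  {energy_per_op / landauer:.1e}")  # ~1e7

The brain sits many orders of magnitude above the thermodynamic floor, so nothing in physics forbids matching or even beating its efficiency.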

> Human learning is continuous and unsupervised, which is not possible for any computer

This is just plainly not true. Continuous learning with existing LLMs is trivial (just too expensive to bother with in practice). Unsupervised learning is literally how LLMs are trained initially.
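To make "trivial" concrete, here is a sketch of an online update loop, assuming the Hugging Face transformers API with gpt2 as a stand-in. The mechanism is just ordinary next-token fine-tuning applied to each new observation; cost, not capability, is what makes it impractical at full LLM scale:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

    def learn_from(text):
        # One online update: unsupervised next-token prediction on new text.
        batch = tok(text, return_tensors="pt")
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    # Feed the model its "lived experience" as it streams in.
    for observation in ["The kettle is boiling.", "The kettle has gone quiet."]:
        learn_from(observation)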



It's not that they don't need to "rewire" themselves; they simply can't. That limitation keeps them confined within their predefined architecture.

Neural synapses can physically grow, shrink, change receptor densities, and form new pathways dynamically and autonomously at multiple timescales.

The brain adapts neuron by neuron based on local conditions (e.g. a single neuron strengthens its connections based on local neurotransmitter activity). RLHF, by contrast, adjusts millions of parameters in bulk and requires external training loops, gradient descent, and a centralized loss function: nothing like self-rewiring at the level of an individual unit.
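A toy contrast makes the locality point concrete (illustrative only, not a biological model; all numbers are made up):

    import numpy as np

    rng = np.random.default_rng(0)
    pre = rng.random(8)    # presynaptic activity at 8 synapses
    post = rng.random(8)   # postsynaptic activity
    w = rng.random(8)      # synaptic weights

    # Hebbian-style rule: each weight changes using only information local to
    # its own connection -- no global loss, no backprop, no training loop.
    eta = 0.01
    w += eta * pre * post

RLHF's gradient step, by contrast, cannot update any single weight without first computing a loss over the network's entire output.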

>There's no reason to believe that a computer operating with the same efficiency is impossible.

Theoretically possible, yes, but no current computational paradigm operates anywhere near the efficiency of biological neurons, and there is no reason to believe that will change in any foreseeable future. If you think we understand even 1% of the mechanisms that hold it all together, well, you are very human and also a big optimist.

>This is just plainly not true. Continuous learning with existing LLMs is trivial

The claim isn't about feasibility but about how continuous learning in AI is fundamentally different from human learning. AI models cannot learn continuously in the real world without external fine-tuning steps, while human brains update themselves every moment through lived experience, with no distinct training phases. And while LLMs use large-scale unsupervised pretraining, their architecture is designed by humans and paired with carefully curated fine-tuning strategies.

Humans learn language without structured datasets or token probabilities, just by hearing and experiencing the world. Machines simulate learning; humans experience it. The difference isn't just one of scale, but of nature.



