
Neurons are pretty simple too.

Any arbitrarily complex system must be made of simpler components, recursively down to arbitrary levels of simplicity. If you zoom in far enough, everything looks dumb.



The deeper you break things down, the dumber they seem. But maybe that dumbness is just an illusion of the observer's perspective.

Consciousness isn’t in the neurons themselves—it's in the invisible coordination and tension between them.


Anything is simple if you approximate it as a dimensionless point and ignore all the complexities that make it different from that.


No, you misunderstood. I am describing [taking part of the whole], not [simplifying the whole]. Is that clearer?


Neurons are surprisingly not simple. They are vastly more complex than the ultra-simplified model used in artificial neural networks.


Most of the complexity is incidental to intelligence. It's mostly just the machinery of keeping the cell alive.

Almost everything in biology is a clumsy hack, accidentally discovered via evolution and then optimised to death over aeons.

We can sidestep all that mess and extract just the core algorithm that is actually required for intelligence.


That is your assumption and it is wrong.

https://grok.com/share/bGVnYWN5_ab498084-58c4-4345-9140-07b5...

Biological Neuron: Processes information through complex, nonlinear integration of thousands of excitatory and inhibitory inputs across dendritic trees, producing spiking outputs with rich temporal patterns. It adapts dynamically via synaptic plasticity, neuromodulation, and structural changes, operating in a probabilistic, energy-efficient manner within oscillatory networks.

Artificial Neuron: Performs simple, linear summation of weighted inputs, applies a static activation function, and produces a single scalar output. It lacks temporal dynamics, local plasticity, or neuromodulation, operating deterministically with high computational cost and fixed connectivity.
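To make the second description concrete, the "artificial neuron" being contrasted here really is just a weighted sum followed by a fixed nonlinearity. A minimal sketch in Python (names and numbers are illustrative, not from any particular library):

    import numpy as np

    def artificial_neuron(inputs, weights, bias):
        # Weighted sum of inputs plus bias, then one static activation (ReLU here).
        # No temporal dynamics, no local plasticity, no neuromodulation.
        z = np.dot(weights, inputs) + bias
        return max(0.0, z)

    # Example: three inputs with fixed weights
    print(artificial_neuron(np.array([0.5, -1.0, 2.0]),
                            np.array([0.1, 0.4, -0.3]),
                            bias=0.2))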


This is interesting:

https://chatgpt.com/share/68219da9-1e78-8007-b083-8a81bfbea2...

"Dendrites can implement non‑linear sub‑units and even logic‑gate‑like behavior before the soma integrates them, whereas the standard artificial neuron uses a plain weighted sum."

"Neurotransmitter diversity (e.g., glutamate, GABA, dopamine) allows different semantics on each connection. An artificial edge conveys only a signed scalar."


Neither are most functions, but locally, at a point, a linear approximation works just fine in practice.
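For example, tanh is nonlinear everywhere, yet its tangent line at a point tracks it closely nearby. A quick illustrative check (numbers chosen arbitrarily):

    import numpy as np

    # Linearize f(x) = tanh(x) around x0: f(x) ~= f(x0) + f'(x0) * (x - x0)
    x0 = 0.5
    f0 = np.tanh(x0)
    df0 = 1 - np.tanh(x0) ** 2      # derivative of tanh at x0
    for dx in (0.01, 0.1, 0.5):
        exact = np.tanh(x0 + dx)
        approx = f0 + df0 * dx
        print(dx, exact, approx, abs(exact - approx))

The error stays tiny for small steps and only becomes noticeable as you move far from the point, which is the sense in which a crude local model can still be useful.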




