
I do research in the behavioral sciences. I've published a little on the analysis of neural activity, although it's not my specialty; I do specialize in analyses that are cognate to DL models and are often covered in DL textbooks alongside classical DL models.

My sense is that our understanding of how human cognition works, mathematically speaking, is roughly on par with our understanding of how DL works in computer science, in the sense that if you asked a cognitive neuroscientist or psychologist how someone classifies cats versus non-cats, you'd get an answer that would seem pretty similar to what you'd get from a computer scientist. The behavioral scientist might go into a lot more detail about certain issues, but that's because the biology is so intertwined.

However, I'd also argue that we really don't know much about how human (or any animal) cognition works, and that our understanding of DL is fairly poor, in that a lot of it is tinkering and seeing what happens, without a deep understanding of why it works. There isn't a theory of DL in the same way that there's a Martin-Löf theory of randomness, a Kolmogorov theory of algorithmic complexity, or a Fisherian model of inference.

Also, the sorts of tasks currently involved in AI research are a tiny subset of what you encounter in neuroscience and psychology. Most of the hot topics in comp sci would basically be classified as perceptual tasks in human behavioral science, maybe at a slightly higher level, and maybe motor control. That leaves out things like conscious versus nonconscious processing, reasoning, the role of emotion in decision-making, uncertainty valuation, creativity, etc.

I agree that processing power is an issue, but it's only part of the puzzle.

One thing that illustrates the complexity of the issues involved, and how we've only begun to scratch the surface, is the article's assertion that comp sci should borrow the idea of sparse representations from cognitive neuroscience. I thought that was interesting, because in a lot of ways, one of the major trends of the last ten years in human neuroscience has been away from this "sparseness" idea. It was a common assumption maybe 15 years ago, but now people routinely get excoriated for invoking it. The current paradigm is one where many pathways/circuits are recruited simultaneously. Statements like "you might use 10,000 neurons, of which 100 are active" would invite ridicule. The intuitive way of explaining the problem is that even while your brain is trying to decide whether you're perceiving a cat versus something else, it's also processing the consequences of that decision along about 10 different dimensions, and the implications for the rest of the incoming stimuli, along with a number of other things we just don't understand.
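For concreteness, the kind of "sparse code" being critiqued here usually means something like a k-winners-take-all rule. A minimal Python sketch, assuming a simple top-k threshold; the 10,000-neuron / 100-active figures come from the sentence above, and everything else (the random drives, the overlap measure) is purely illustrative:

    import numpy as np

    rng = np.random.default_rng(0)

    N_NEURONS = 10_000   # population size from the example above
    K_ACTIVE = 100       # ~1% of units active at any moment

    def sparse_code(drive, k=K_ACTIVE):
        # k-winners-take-all: only the k most strongly driven units fire
        code = np.zeros(drive.shape, dtype=bool)
        code[np.argsort(drive)[-k:]] = True
        return code

    # Two similar stimuli yield overlapping codes; the count of shared
    # active units is the natural similarity measure for such a code.
    drive_a = rng.normal(size=N_NEURONS)
    drive_b = drive_a + 0.5 * rng.normal(size=N_NEURONS)  # noisy variant

    code_a, code_b = sparse_code(drive_a), sparse_code(drive_b)
    print("active units:", code_a.sum())      # 100
    print("overlap:", (code_a & code_b).sum())

The critique in the paragraph above is that the brain doesn't seem to work like this: activity isn't confined to one small winner set, because many circuits are recruited at once for many concurrent computations.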



The brain can process each of those higher abstractions because the neocortex is a filtered hierarchy that communicates up and down at every layer. The results of the lower levels trigger and feed into the level above them. The higher levels project an expectation of the next result down to the layer below them. While the lowest level might be concerned with identifying the edges of lines, the layer above it is identifying letters, and above that words, sentences, concepts, meaning, how that is similar to other things, etc. Each of these layers is active at any moment in time, but the communication interface between each layer is a sparse distributed representation.
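As a rough illustration of that picture (a toy sketch, not a model of cortex): every layer is simultaneously active, but each one passes only a sparse distributed representation (SDR) to its neighbors, upward as a result and downward as an expectation. The layer sizes, the top-k sparsification, and the random weights below are all assumptions made for illustration:

    import numpy as np

    rng = np.random.default_rng(1)

    def to_sdr(activity, k):
        # keep only the k most active units: a binary sparse
        # distributed representation (SDR)
        sdr = np.zeros(activity.shape, dtype=bool)
        sdr[np.argsort(activity)[-k:]] = True
        return sdr

    class Layer:
        # one level of a toy hierarchy; all layers are active at every
        # step, but they talk to their neighbors only through SDRs
        def __init__(self, n_in, n_units, k):
            self.k = k
            self.w_up = rng.normal(size=(n_units, n_in))  # feedforward
            self.w_down = self.w_up.T                     # feedback
            self.sdr = np.zeros(n_units, dtype=bool)

        def feedforward(self, sdr_below):
            # the lower level's result triggers and feeds this level
            self.sdr = to_sdr(self.w_up @ sdr_below, self.k)
            return self.sdr

        def expectation(self, k_below):
            # project an expected SDR down to the layer below
            return to_sdr(self.w_down @ self.sdr, k_below)

    # edges -> letters -> words, with sparsity at every interface
    edges   = Layer(n_in=256, n_units=512, k=10)
    letters = Layer(n_in=512, n_units=256, k=8)
    words   = Layer(n_in=256, n_units=128, k=5)

    sdr = to_sdr(rng.normal(size=256), k=12)  # fake sensory input
    for layer in (edges, letters, words):
        sdr = layer.feedforward(sdr)

    # top-down: compare a layer's prediction of the level below
    # against what that level actually produced
    predicted_letters = words.expectation(k_below=letters.k)
    mismatch = np.logical_xor(predicted_letters, letters.sdr).sum()
    print("word-level SDR active units:", words.sdr.sum())
    print("mismatch with top-down expectation:", mismatch)

The point of the sketch is the interface, not the weights: dense activity exists inside every level at once, but what crosses between levels, in both directions, is a sparse binary code.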



