It's literally the same thing humans do, at least in my personal experience. If you ask me a question, the first thing my mind generates is a plausible-sounding answer. That process is near-instant. The slower part is an internal evaluation - how confident am I that this is the right answer? That depends on the conversation and the topic in question - often enough, I can just vocalize that first thought without worry. Whether it "sounds right" is also the first step I use when processing what I hear/read others say.

If anything, GPT-3.5 and GPT-4, as well as other transformer-based models, are all starting to convince me that associative vector adjacency search in high-dimensional space is what thinking is.




> It's literally the same thing humans do, at least in my personal experience. If you ask me a question, the first thing my mind generates is a plausible-sounding answer.

I've been practicing meditation for some years now, and over time I’ve realised this is not what I see happening when observing the mind. It’s one mode of operation, but it’s not the only mode. Using prediction to respond is mostly the lazy, non-interested approach, or useful if you can’t quite hear or understand someone.

What I feel a lot of people have started doing is trivialising the mind. Hoping it’s all “this simple” and we’re three versions of ChatGPT away from finding God.

Maybe?


> It’s one mode of operation, but it’s not the only mode. Using prediction to respond is mostly the lazy, non-interested approach, or useful if you can’t quite hear or understand someone.

I didn't say it's the only mode. I said it's the starter mode. At least for me, this mode is always the point at which someone's words, or my response, first enter the conscious processing level. If I'm very uninterested (whether because I don't care or because I'm good at something), the thought may sail straight to my mouth or fingers. Otherwise, it'll get processed and refined, possibly mixed with or replaced by further thoughts "flowing in" from "autocomplete".

> What I feel a lot of people have started doing is trivialising the mind. Hoping it’s all “this simple” and we’re three versions of ChatGPT away from finding God.

That's one way to look at it. I prefer another - perhaps we've just stumbled on the working principle of the mind, or at least its core part. For me, the idea that concept-level thinking falls out naturally from adjacency search, when the vector space is high-dimensional enough, is nothing short of mind-blowing. It's almost poetic in its elegance.

And I mean, if you believe the human mind is the product of evolution, and not a design of a higher being, then the process must have been iterative. Evolution is as dumb as it gets, so it follows that the core paradigms of our minds are continuous, not discrete, and that they must be simple and general enough to be reachable by natural selection in finite time. Of all our ideas for how minds work, transformer models are the first ones that - to me - seem like a plausible candidate for the actual thing that runs in our heads. They have the required features: they're structurally simple, fully general, and their performance continuously improves as you make them bigger.

Now, to be clear, my take on LLMs is that language is incidental - it so happens that text is both the easiest thing for us to work with and the very thing we serialize our mind states into for communication. But the "magic" isn't in words, or word processing - the "magic" is the high-dimensional latent space, where concepts and abstractions are just clusters of points that are close to each other in some dimensions. I think this isn't an accident - I feel this really is what concepts are.
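
To make the "adjacency search" picture concrete, here's a toy sketch (not from the thread - it assumes numpy, and the 4-dimensional vectors are made up for illustration rather than taken from a real embedding model, which would use hundreds or thousands of learned dimensions): "close in the latent space" cashes out as nearest-neighbour lookup by cosine similarity, and related concepts come back as each other's neighbours.

    # Toy "adjacency search": concepts as nearby points in a vector space.
    # Vectors below are invented for the example, not real embeddings.
    import numpy as np

    embeddings = {
        "dog":   np.array([0.9, 0.8, 0.1, 0.0]),
        "wolf":  np.array([0.8, 0.9, 0.2, 0.1]),
        "cat":   np.array([0.7, 0.6, 0.2, 0.1]),
        "car":   np.array([0.1, 0.0, 0.9, 0.8]),
        "truck": np.array([0.0, 0.1, 0.8, 0.9]),
    }

    def nearest(word, k=2):
        # Rank all other words by cosine similarity to `word` and return the top k.
        q = embeddings[word]
        scores = {
            w: float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
            for w, v in embeddings.items() if w != word
        }
        return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

    print(nearest("dog"))  # animals cluster together: wolf and cat come out on top
    print(nearest("car"))  # vehicles cluster together: truck is the closest point
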


> Evolution is as dumb as it gets

Sorry, you lost me here… how is evolution dumb? Like, what does that even mean?



