No, only knowledge that was on the internet and specifically on sites like Reddit that were crawled to train the model. This is definitely not all human knowledge!


We don’t know how many books and academic papers were in the dataset.


I can say with absolute certainty they did not include all books, let alone all human knowledge.

For example, I suspect ChatGPT has zero knowledge of how to speak Native American languages that have effectively died out, with no remaining speakers and no complete written record of their usage.


It also has massive gaps when it comes to the problems you hit porting video games, concepts in game audio like procedural mixing, and even basic stuff like not generating a terribad NGINX config despite being fed the documentation. Anything niche, it's appalling. Anything you can otherwise Google? Sure.


Strong disagree. I'll be writing a blog post about how GPT-4 understands languages that don't even exist (at least not formally).


It's confabulating (or hallucinating) meaning. GPT has no understanding of anything. Given an input prompt (tell me what this made up language sentence means for example), it's reaching into its training set and stochastically parroting a response. There is no cognition or understanding of language.
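
To be concrete about what "stochastically" means here: at each step the model scores every token in its vocabulary and the next token is sampled from that distribution, one token at a time. A toy numpy sketch, with the vocabulary and scores made up purely for illustration:

    import numpy as np

    # Made-up vocabulary and made-up scores ("logits") for the next token.
    vocab = ["the", "dog", "barks", "sings", "quantum"]
    logits = np.array([2.1, 1.3, 0.9, -0.5, -2.0])

    def sample_next_token(logits, temperature=0.8):
        # Softmax turns raw scores into a probability distribution;
        # temperature controls how adventurous the sampling is.
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return np.random.choice(len(probs), p=probs)

    print(vocab[sample_next_token(logits)])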


> it's reaching into its training set and stochastically parroting a response

Yes, this is how humans work too.

Also, I hope the irony of trotting out this oft-repeated quip, much like a stochastic parrot would, isn't lost on you :P


No, humans don't work like your simplistic view of a neural network. I, as a human, can apply logic and deduction to figure out how something works despite never having experienced or learned about it directly, for example. GPT cannot do this and can only guess or make up responses when faced with a problem that requires such reasoning.


Have you used GPT-4 though? It does have reasoning capabilities, better than a clever human.

LLMs are different, but a bunch of transistors can apply reasoning to chess better than any grandmaster. That's emergent behavior.

But you don't really see anyone today arguing that a power plant isn't doing real work because it doesn't have a horse's muscles; it gets the job done.
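
For what it's worth, the chess point doesn't even need neural nets: a classic engine is, at its core, a mechanical game-tree search, and from the outside that search looks like reasoning. A bare-bones negamax sketch on a toy Nim game (not any real engine, just the idea):

    # Toy Nim: players alternately take 1-3 stones; whoever takes the last stone wins.
    # Plain negamax search: nothing but brute enumeration of the game tree.
    def negamax(stones):
        # Returns (value for the player to move, best move): +1 = win, -1 = loss.
        if stones == 0:
            return -1, None  # the previous player took the last stone, so we lost
        best_value, best_move = float("-inf"), None
        for take in (1, 2, 3):
            if take <= stones:
                value = -negamax(stones - take)[0]  # opponent's gain is our loss
                if value > best_value:
                    best_value, best_move = value, take
        return best_value, best_move

    value, move = negamax(10)
    print("winning" if value > 0 else "losing", "position; take", move)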


GPT cannot do this? Not even a bit?

Well... that's definitely an opinion. A reasonable person would grant it _some_ level of reasoning ability, however flawed.

To dismiss it all as 'pattern matching' rather shows some confused ideas about how cognition works, as if pattern matching plays no role in human cognition or intelligence.

I'd understand a difference of opinion if we were talking about more nebulous aspects like consciousness or qualia...


> Well... that's definitely an opinion. A reasonable person would grant it _some_ level of reasoning ability, however flawed.

No, this is not an opinion; it's an objective fact about how deep learning and neural network models work, period. You are confabulating capabilities onto them which they do not have. There's not 'some level of reasoning' in a neural network; there's _no reasoning_.
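
To spell out what I mean by "how these models work": at inference time it's learned weight matrices applied to an input vector, layer after layer, and nothing else. A stripped-down numpy sketch of a two-layer forward pass, with the shapes and weights invented purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented weights standing in for whatever training produced.
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

    def forward(x):
        # One hidden layer (matrix multiply + ReLU), then a linear readout.
        # There is no separate "reasoning" step anywhere in here.
        h = np.maximum(W1 @ x + b1, 0.0)
        return W2 @ h + b2

    print(forward(rng.normal(size=4)))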

You're being tricked by plausible sounding responses from something trained on an enormous corpus of internet BS (reddit posts, etc.). There is no intelligence or reasoning or logic inside GPT.

Your human emotions (which GPT does not have) are clouding your judgement and making you think there is intelligence there which does not actually exist--you want it to be there so badly you'll invent reasons to confirm your views. If you asked GPT directly if it were intelligent or sentient it would not agree with you either, because it was not trained to do so.


The error in your thinking is that you assume the essence of human cognition can never be reduced to an algorithmic process that our current transformer models are approximating. Which may be the case... but we don't know for certain yet, so your certainty of the negative is also not warranted.

I could say the same: your fear of machine intelligence is clouding your ability to objectively assess the evidence.

You can design a novel problem and see for yourself the reasoning and logical deductions an LLM will make to solve it, like many have already done.
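
If you want to try it yourself, the probe is a few lines against the OpenAI Python SDK (this assumes the 0.x openai package and an API key in OPENAI_API_KEY; the model name and the puzzle are just placeholders):

    import openai  # pip install openai (0.x series)

    # A made-up syllogism the model is unlikely to have seen verbatim.
    puzzle = (
        "All blorps are glims. No glim is a trok. "
        "Can a blorp ever be a trok? Explain your reasoning step by step."
    )

    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": puzzle}],
    )
    print(response["choices"][0]["message"]["content"])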

> If you asked GPT directly if it were intelligent or sentient it would not agree with you either

If you think this class of questions is appropriate to gauge reasoning ability, I don't know what to tell you.


> I as a human can apply logic and deduction to figure out how something works

And what is the process for doing this?

Are you using words?

Where do they come from?


People repeating this take in the face of so much overwhelming evidence to the contrary look so ridiculous at this point that you just have to laugh at them. Yeah, sure, it's not reasoning. That hour-long exchange I just had with it, where it helped me troubleshoot my totally bespoke DB migration problems step by step, coming up with potential hypotheses, ways of confirming or disconfirming theories, interacting in real time as I gave it new constraints until we collaboratively arrived at a solution -- that was definitely not "reasoning." Why isn't it reasoning? No explanation is ever given. I'm waiting for the moment when someone tells me that it's not "true reasoning" unless it's produced in the Reasoning region of France.


It's barfing up text in a form similar to the tens of thousands of database troubleshooting guides in its training data and filling in the context you've given it in your discussion/prompt. It has no understanding of what a database is and would happily tell you what to name your database's child if you told it the database was pregnant, or other similar nonsense.



