
Training on all papers does not mean the model believes or knows the truth.

It is just a machine that spits out words.



It's 1994. Larry Lloyd Mayer has read the entire internet, hundreds of thousands of studies across every field, and can answer queries word for word the same as modern LLMs do. He speaks every major language. He isn't perfect and does occasionally make mistakes, but the sheer breadth of his knowledge makes him among the most employable individuals in America. The Pentagon, IBM, and Deloitte are begging to hire him. Instead, he works for you, for free.

Most laud him for his generosity, but his skeptics describe him as just a machine that spits out words. A stochastic parrot, useless for any real work.


Does his accuracy take a sudden precipitous fall when going from multiplying two three-digit numbers to two four-digit numbers?


I don't know about you, but when I do math without a calculator, my accuracy also drops precipitously whenever they add a digit.


Do you have the self-awareness to anticipate the drop in your accuracy and refuse to perform the operation?


I do anticipate it, but in the situations I'm asked to do such calculations, I don't usually have the option of refusing, nor would I want to. For most real-world situations, it's generally better to arrive at a ballpark solution than to refuse to engage with the problem.


A ballpark solution is, in a way, refusing...


In the very unserious hypothetical I'm describing, I'd say Lloyd's capabilities match that of GPT-4. In this case, he's not a calculator, but he is a decent programmer, so like GPT-4 he quickly runs the operation through a script, rather than trying to figure it out in his head.

https://chatgpt.com/share/67ac17df-fd9c-800d-9d3d-03c66b3e86...

"The result of 720947×263647 is 190,075,513,709."


This is a solved problem; ChatGPT uses a Python interpreter to do arithmetic now. Just like you would… all good. You Can Just Check Your Own Claims.
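
For what it's worth, the "run it through a script" step is trivially reproducible. Here is a minimal sketch in plain Python (nothing specific to ChatGPT's actual tool-calling, just the same idea) that checks the quoted figure:

  # Multiply exactly instead of estimating in your head, then report the result.
  a, b = 720947, 263647
  print(f"{a} x {b} = {a * b:,}")  # 720947 x 263647 = 190,075,513,709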


It has some pieces of the puzzle of intelligence. That's a deal breaker for some people, and useful/promising to others.


I would be very careful about claiming exactly that, as emergent properties seem crucial for both artificial and human intelligence. (Not to say that they function identically or are equally useful.)


What experiment or measurement could I do to distinguish between a machine that “knows” the truth and a machine that merely “spits it out”? I’m trying to understand your terminology here.


Um... what truth?

My truth, your truth or some defined objective truth?


>Training on all papers does not mean the model believes or knows the truth. It is just a machine that spits out words.

Sounds like humans at school. Cram the material. Take the test. Eject the data.



