
I believe you're asking the wrong question, or at least you're asking it in the wrong way. From my POV, it comes in two parts:

1. Do you believe that LLMs operate in a similar way to the important parts of human cognition?

2. If not, do you believe that they operate in a way that makes them useful for tasks other than responding to text prompts, and if so, what are those tasks?

If you believe that the answer to Q1 is substantively "yes" - that is, that humans and LLMs are engaged in the same sort of computational behavior when generating speech - then there's presumably no particular impediment to using an LLM where you might otherwise use a human (and with the same caveats).

My own answer is that while some human speech behavior is possibly generated by systems that function in a semantically equivalent way to current LLMs, human cognition is capable of tasks that LLMs cannot perform de novo even if they can give the illusion of doing so (primarily causal chain reasoning). Consequently, LLMs are not in any real sense equivalent to a human being, and using them as such is a mistake.




> My own answer is that while some human speech behavior is possibly generated by systems that function in a semantically equivalent way to current LLMs, human cognition is capable of tasks that LLMs cannot perform de novo even if they can give the illusion of doing so (primarily causal chain reasoning). Consequently, LLMs are not in any real sense equivalent to a human being, and using them as such is a mistake.

In the workplace, humans are ultimately a tool to achieve a goal. LLMs don't have to be equivalent to humans to replace a human - they just have to be able to achieve the goal the human was there to achieve. 'Human' cognition likely isn't required for a huge amount of the work humans do. Heck, AI probably isn't required to automate a lot of the work that humans do, but it will accelerate how much can be automated and reduce the cost of automation.

So it depends what we mean by 'use them as a human being' - we are using human beings to do tasks, be it solving a billing dispute for a customer, processing a customer's insurance claim, or reading through legal discovery. These aren't intrinsically 'human' tasks.

So for 2 - yes, I do believe that they operate in a way that makes them useful for tasks. LLMs just respond to text prompts, but those responses can do useful things that humans are currently doing.


I think C.S. Peirce's distinction between corollarial reasoning and theorematic reasoning[1][2] is helpful here. In short, the former is the grindy, rule-following sort of reasoning, and the latter is the kind of reasoning that's associated with new insights not determined by the premises alone.

As an aside, students of Peirce over the years have quite the pedigree in data science too, including the genius Edgar F. Codd, who invented the relational model largely inspired by Peirce's approach to relations.

Anyhow, computers are already quite good at corollarial reasoning and have been for some time, even before LLMs. On the other hand, they struggle with theorematic reasoning. Last I knew, the absolute state of the art performs about as well as a smart high school student. And even there, the tests are synthetic, so how theorematic they truly are is questionable. I wouldn't rule out the possibility of some automaton proposing a better explanation for gravitational anomalies than dark matter for example, but so far as I know nothing like that is being done yet.
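
To make the distinction concrete, here's a toy sketch (Python, all names mine) of the corollarial kind: mechanically chaining given rules until nothing new follows. It only ever unpacks what the premises already contain.

    # Toy forward-chaining: derive everything the premises already entail.
    # "Corollarial" in Peirce's sense - no new concept is introduced.
    facts = {"socrates_is_human"}
    rules = [
        ({"socrates_is_human"}, "socrates_is_mortal"),
        ({"socrates_is_mortal"}, "socrates_will_die"),
    ]

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # the deductive closure of the premises under the rules

Theorematic reasoning would mean introducing something that appears in neither the facts nor the rules, which is exactly what a loop like this cannot do.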

There's also the interesting question of whether or not an LLM that produces a sequence of tokens that induces a genuine insight in the human reader actually means the LLM itself had said insight.

[1] https://www.cspeirce.com/menu/library/bycsp/l75/ver1/l75v1-0...

[2] https://groups.google.com/g/cybcom/c/Es8Bh0U2Vcg


My 2 cents:

I think the vector representation stuff is an effective tool and possibly similar to foundational tools that humans are using.

But my gut feel is that it's just one tool of many that combine to give humans a model+view of the world with some level of visibility into the "correctness" of ideas about that world.

Meaning we have a sense of whether new info "adds up" or not, and we may reject the info or adjust our model.
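
(To be concrete about the "vector representation stuff": I mean something like the toy cosine-similarity sketch below, where meaning is just proximity in a vector space. The numbers are made up; real models learn these embeddings from data.)

    import math

    # Made-up 3-d "embeddings"; real models learn thousands of dimensions.
    emb = {
        "cat":  [0.9, 0.1, 0.0],
        "dog":  [0.8, 0.2, 0.1],
        "bond": [0.1, 0.9, 0.3],
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = lambda v: math.sqrt(sum(x * x for x in v))
        return dot / (norm(a) * norm(b))

    print(cosine(emb["cat"], emb["dog"]))   # high: nearby concepts
    print(cosine(emb["cat"], emb["bond"]))  # lower: unrelated concepts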

I think LLMs in their current state can be useful for tasks that do not have a high cost resulting from incorrect output, or tasks whose output can be validated cost-effectively by humans or some other system.


I think LLMs operate in a similar way to some of the important parts of human cognition.

I believe they operate in a way that makes them at least somewhat useful for some things. But I think the big issue is trustworthiness. Humans - at least some of them - are more trustworthy than LLM-style AIs (at least current ones). LLMs need progress on trustworthiness more than they need progress on use in other areas.


IMHO, a more important and testable difference is that humans don't have separate "train" and "infer" phases. We are able to adapt more or less on the fly and learn from previous experience. LLMs currently cannot retain any novel experience past the context window.
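
A rough sketch of the difference (toy Python, not any real framework's API): at inference time the weights are frozen, so anything "learned" during a conversation lives only in the context window and disappears with it.

    # Toy illustration of frozen weights vs. a sliding context window.
    WEIGHTS = {"greeting": "hello"}            # fixed after the "train" phase

    def generate(context):
        # Inference reads the frozen weights plus the context window;
        # nothing is ever written back into WEIGHTS.
        return "%s (messages seen: %d)" % (WEIGHTS["greeting"], len(context))

    context = []
    for user_msg in ["hi", "my name is Ada"]:
        context.append(user_msg)
        context.append(generate(context))

    context = []   # drop the context window: the "my name is Ada" fact is gone,
                   # because no experience was consolidated into WEIGHTS

A human, by contrast, keeps updating the "weights" continuously; there is no separate training pass.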



