So why is it not intelligence when I ask an LLM to solve a problem, and it proceeds to review code, search external documentation, write a solution, test it, and iterate until the problem is solved? It seems very much like what we do. It seems like the difference, to you guys, is "hard-to-define magic that we can't explain / it just is different".
So far, it's been pretty easy to find examples of how it is different from human intelligence.
For one, it needs multiple orders of magnitude more tokens (millions? billions? more?) before it starts following grammar rules: watch any kid pick them up with exposure to maybe 10,000 words by the age of 2 (look at how they apply a rule even to exceptions; it doesn't work, but it shows they've internalized the rule).
Next, "how" LLMs work is evident when it barfs in gotcha questions like "if yesterday was Tuesday, what day was yesterday" (I believe all major players have overcome this one in the last 12 months, but you can always find these LLM-twisters: yes, you can trick humans too by distracting them, but these are computers which "don't pay attention"?)
And it can frequently undo an improvement it added earlier, or mess something new up that you did not call out specifically.
It's brilliant at recapitulating the data it's trained on. It can be extremely useful. But it's still nowhere close to the capability of the human brain, not that I expect it to be.
Don't get me wrong, I think they are remarkable, but I still prefer to call them LLMs rather than AI.
Some of the things we consider prerequisites of general intelligence (which is what we usually mean when we talk about intelligence in these contexts), like creativity or actual reasoning, are not present at all in LLMs.
An LLM is a very clever implementation of autocomplete. The truly vast amount of information we've fed it provides a wealth of material to search against, the language abstraction allows for autocompleting at a semantic level, and we've added enough randomness to allow some variation in responses, but it is still autocomplete.
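For what it's worth, the "randomness" part is concrete: samplers typically rescale the model's next-token scores (logits) by a temperature before sampling. A minimal sketch in Python, with a made-up vocabulary and scores for illustration (not any real model's output):

    import math
    import random

    def sample_next_token(logits, vocab, temperature=0.8):
        """Softmax the logits at the given temperature, then sample one token."""
        scaled = [l / temperature for l in logits]
        m = max(scaled)                              # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        return random.choices(vocab, weights=probs, k=1)[0]

    # Hypothetical scores for the continuation of "The pet is a ..."
    vocab = ["cat", "dog", "bird"]
    logits = [2.0, 1.5, 0.1]

    # Temperature near 0 approaches plain argmax autocomplete;
    # higher temperature flattens the distribution for more varied output.
    print(sample_next_token(logits, vocab, temperature=0.2))
    print(sample_next_token(logits, vocab, temperature=1.5))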
Anyone who has used an LLM enough in an uncommon domain they are very familiar with has no doubt seen evidence of the machine behind the curtain, from faulty "reasoning" (where it sometimes just plays madlibs) to a complete lack of actual creativity.
> I call it a "bullshit generator" because it generates output "with indifference to the truth".
And if we follow the link we find he's referring to LLMs:
> “Bullshit generators” is a suitable term for large language models (“LLMs”) such as ChatGPT, that generate smooth-sounding verbiage that appears to assert things about the world, without understanding that verbiage semantically. This conclusion has received support from the paper titled ChatGPT is bullshit by Hicks et al. (2024).
No one thinks the database, orchestration, tool, etc. portions of ChatGPT are intelligent, and frankly, I don't think anyone is confused by using LLM as shorthand not just for the trained model, but also for all the support tools around it.
I wasn't thinking about their data store or other infrastructure. I was thinking about the layers added for reasoning and other functions that modify or guide the output of the LLM.
He's not talking about intelligence though, he's saying it has no knowledge or understanding, whereas something like a decision tree or neural net object recognition model does.
Sure, but how do I know you in particular are intelligent?
Any test you can devise for this, ChatGPT would reliably pass if the medium were text, while a good fraction of humans might actually fail. It does a pretty good job if the medium is audio.
Video, and in person, remain slightly out of reach for now. But I have little doubt we are going to get there eventually.
> Any test you can devise for this, ChatGPT would reliably pass if the medium were text, while a good fraction of humans might actually fail.
That's clearly untrue unless you qualify "test" as "objective automated test." Otherwise, "convince Stallman you have intelligence according to his definition," is a test that ChatGPT hasn't passed and which every human probably would:
> I define "intelligence" as being capable of knowing or understanding, at least within some domain. ChatGPT cannot know or understand anything, so it is not intelligence. It does not know what its output means. It has no idea that words can mean anything.
AI could be a boon for mankind. It can be a useful tool. We could employ it in a manner which provides more dignity for workers. That is, let them work fewer hours, have more leisure time, etc. That necessitates something which will keep the powers of capital in check, and people don't seem to think that this is possible.
Corporations are just so large and powerful that people feel hopeless. But we could still get together and enact legislation which will override them. Nothing is impossible, it just takes some imagination and organisation.
Like Chomsky once said, if the peasants of Haiti could organise and overthrow their government and create a functioning democracy, then surely we can too, with far more advantages.
I see such positivity in your comment, but also every technology has promised to make everything easier and more convenient and so to give us more leisure time. What the evidence has shown is that the people who end up living the life of leisure are the ones who amass wealth and power, and everyone else is going to be stuck in the rat race because, well, we're living things. The rules we live by are: you can't win, you can't break even, and you can't even stop playing the game.
You say that like inequality is a problem. If so, there's a really easy way to solve it: nuke the planet back to the stone age.
Personally, I'd rather have inequality if it means everyone can live a peaceful life. Let the rich have their yachts or whatever.
I don't see why the increased productivity provided by AI won't make things better, given that all of the ills of the world are caused by scarcity: that is, insufficient productivity.
They actually do, if you ask them better questions. "Inequality" is a boogeyman that has a lot of baggage attached to it by society, but if you were to ask people whether it was a good thing for millions of people to be lifted out of poverty if the cost was that one person became obscenely wealthy, most people would come down on the side of inequality.
"Inequality" is just an academic word for "Keeping up with the Jones'". Each generation has more material good, both necessities and otherwise, than previous generations. It's only through comparison that people are made to feel poor. Rather than look at trends of poverty and flourishing, people are made to feel cheated by not getting a slice of someone else's pie.
I don't think so, in fact it might be counterproductive. I think it could and should be done within existing structures. But it will require mass mobilisation and counters to mass propaganda.
I would say the primary difference is that the state supersedes capital, rather than the other way around. The Chinese state permits capitalism, but only when it's to the benefit of China's economy and wellbeing.
So, for instance they just banned sports betting outright, as it's not productive or contributing to the economy.
The state runs the "commanding heights" of the economy, the banks, and directs investment, coordinates with industry. Of course it invests in infrastructure development.