We have artificial intelligence that is general and above average human intelligence for the majority of tasks it can perform. Near expert level for some. NLP is a solved problem. Bespoke models are out the door. Large enough LLMs crush anything else for any NLP task.
Honestly, this whole "they are not intelligent" argument is becoming ridiculous.
might as well argue that a plane isn’t a real bird or a car isn’t a real horse.
The debate over what kind of intelligence these models possess is rightly lively and ongoing.
It’s clear that, at the least, they can discern a great many patterns across a wide range of conceptual depths — it’s an architectural advance easily on the level of the convolutional neural network, if not more profound. The idea that NLP is “solved” isn’t a crazy notion, though I won’t take a side on that.
That said, it’s equally obvious that they are not AGI unless you have a really uninspired and self-limiting definition of AGI. They are purely feedforward aside from the single generated token that becomes part of the input to the next iteration. Multimodality has not been incorporated (aside from possibly a limited form in GPT-4). Real-world decision-making and agency are entirely outside the bounds of what these models can conceive of or act towards.
Effectively and by design, these models are computational behemoths trained to do one singular task — wring a large textual input through an enormous interconnected web of calculations, purely in service of distilling everything down to a single word as output: a hopefully plausible guess at what’s next given what’s been seen.
AGI is Artificial General Intelligence. We have absolutely passed the bar of artificial and generally intelligent. It's not my fault goal post shifting is rampant in this field.
And you want to know the crazier thing? Evidently a lot of researchers feel similarly too.
General Purpose Technologies (from the jobs paper), General Artificial Intelligence (from the creativity paper). Want to know the original title of the recent Microsoft paper? "First contact with an AGI system".
The skirting around the word that is now happening is insanely funny. Look at the last one. Fuck, they just switched the word order. Nobody wants to call a spade a spade yet but it's obvious people are figuring it out.
I can show you output that clearly demonstrates understanding and reasoning. That's not the problem. The problem is that when I do, the argument quickly shifts to "it's not true understanding!"
What a bizarre argument.
This is the fallacy of the philosophical zombie. Somehow there is this extra special distinction between two things, and yet you can't actually show it. You can't test for this so-called huge distinction. A distinction that can't be tested for is not a distinction.
The intelligence arguments are also stupid because they miss the point entirely.
What matters is that the plane still flies, the car still drives and the boat still sails.
For the people who are now salivating at their potential, or dreading the possibility of being made redundant by them, these large language models are already intelligent enough to matter.
> ... these large language models are already intelligent enough to matter.
I'm definitely not contesting that.
I've always considered the idea of "AGI" to mean something of the holy grail of machine learning -- the point at which there is no real point in pursuing further advances in artificial intelligence because the AI itself will discover and apply such augmentations using its own capabilities.
I have seen no evidence that these transformer models would be able to do this, but if the current models can do so, then perhaps I will eat my words. (Doing this would likely mean that GPT-4 would need to propose, implement, and empirically test some fundamental architectural advancements in both multimodal and reinforcement learning.)
By the way, many researchers are equally convinced that these models are in fact not AGI -- that includes the head of OpenAI.
See, what you're describing is much closer to ASI. At least, it used to be. This is the big problem I have. The constant goalpost shifting is maddening.
AGI went from meaning generally intelligent, to as smart as human experts, to now smarter than all experts combined. You'll forgive me if I no longer want to play this game.
I know some researchers disagree. That's fine. The point I was really getting at is that no researcher worth his salt can call these models narrow anymore. There's absolutely nothing narrow about GPT and the like. So if you think it's not AGI, you've come to accept it no longer means general intelligence.
>> The point I was really getting at is that no researcher worth his salt can call these models narrow anymore.
Are you talking about large language models (LLMs)? Because those are narrow, and brittle, and dumb as bricks, and I don't care a jot about your "No True Scotsman". LLMs can only operate on text; they can only output text that demonstrates "reasoning" when their training text has instances of text detailing the solutions of reasoning problems similar to the ones they're asked to solve; and their output depends entirely on their input: change the prompt and the "AGI" becomes a drooling idiot, and vice versa.
That's no sign of intelligence and you should re-evaluate your unbridled enthusiasm. You believe in magick, and you are loudly proclaiming your belief in magick. Examples abound in history that magick doesn't work, and only science does.
I've been using ChatGPT for a day and determined it absolutely can reason.
I'm an old-hat hobby programmer who played around with AI demos back in the mid-to-late '90s and 2000s, and ChatGPT is nothing like any AI I've ever seen before.
It absolutely can appear to reason especially if you manipulate it out of its safety controls.
I don't know what it's doing to cause such compelling output, but it's certainly not just recursively spitting out good words to use next.
That said, there are fundamental problems with ChatGPT's understanding of reality, which is to say it's about as knowledgeable as a box of rocks. Or perhaps a better analogy: about as smart as a room-sized pile of loose papers.
But knowing about reality and reasoning are two very different things.
Have you tried out GPT-4? If not, and you can get access, I'd really recommend it. It's drastically better than what you get on the free version: probably only a little better on the absolute scale of intelligence, but then the difference between an average person and a smart person is also small on the scale from "worm" to "supergenius".
The market disagrees with you. How come there are billions of dollars spent on all these knowledge workers around the world every day when they could be replaced by this expert-level AI?
I'm not sure where this idea of LLMs being intelligent even comes from. It took me a whopping 9 prompts (genuine questions, no clever prompt engineering) of interacting with ChatGPT to conclude it does not understand anything. It doesn't understand addition or what length is, doesn't remember what it said a second ago, etc.
The output of ChatGPT is clearly just a reflection of its inner workings: predicting the next word based on training data. It's clever, and undoubtedly useful for a certain set of repetitive problems like generating boilerplate, but it's not intelligence, not by any reasonable definition.
I don't think any technology has been rolled out with the speed you are suggesting LLMs should have been rolled out.
It's like saying, four months after the first useful car was manufactured, "If these are so good, how come there are still horses? Clearly the market disagrees with you."
To give an example of the limitations of these things that's hopefully easy to understand, I got access to Bard this morning and asked it to write a limerick. It gave me what could charitably be called a free verse poem that happened to begin "there once was a man from Nantucket." I'm sure they can improve on it (ChatGPT was better at this kind of thing when I had access to it) but "solved problem" is clearly a long way off.
Yes, much more compelling. But if this were a “solved problem” then any of them should be able to do it easily. It’s not like I need to compare the results of sorting between different programs. It just works. That is a solved problem.
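The contrast with sorting is easy to make concrete. A throwaway sketch (the hand-rolled insertion sort here is just a stand-in for "any correct implementation"): for a genuinely solved problem, every correct implementation agrees with every other on every input, with no need to compare or judge the results.

```python
import random

# "Solved problem": any correct sort agrees with any other correct sort
# on every input. Here a naive insertion sort stands in for "some other
# implementation" and is checked against Python's built-in sorted().

def insertion_sort(xs):
    out = []
    for x in xs:
        # Walk left from the end until out[i-1] <= x, then insert at i.
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

rng = random.Random(42)
for _ in range(100):
    data = [rng.randint(0, 999) for _ in range(50)]
    assert insertion_sort(data) == sorted(data)  # always identical
print("all implementations agree")
```

Nobody runs this check in practice, which is exactly the point: "write me a limerick" is nowhere near that status when different models give visibly different-quality answers.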
You can use the term to mean whatever you want but in my mind it means it's boring with no particular room for improvement. Even the biggest booster isn't going to say that about this AI. And keep in mind, "write me a limerick" is a pretty easy prompt. We're not trying to do anything too novel or crazy there.
> We have artificial intelligence that is general and above average human intelligence for the vast majority of tasks it can perform.
Even when I give it the benefit of the doubt, this sentence makes no sense to me. Do we have a list of tasks a language model can perform? To the best of my knowledge, they can arguably perform any language task.
> Large enough LLMs crush anything else for any NLP task, and evidently they beat top humans too.
Yes, they are certainly (rightfully) the go-to model for most tasks at this point if your concern is outright performance. Have I indicated otherwise? As for beating “top” humans, I am sure that can be investigated, but it is a fairly nuanced research question. It is inarguable that they are amazingly good though, especially relative to what we had just a few years ago.
> Honestly, this whole "they are not intelligent" argument is becoming ridiculously obtuse.
>
> might as well argue that a plane isn’t a real bird or a car isn’t a real horse.
Which is a claim and argument that I never made – hallucinating? How about you calm down a little and get back on the ground? You are talking to someone that has argued in favour of these kinds of models for about a decade. But that does not mean that I am willing to spout nonsense or lose track of what we know and what we do not yet know.
You said NLP is unsolved because we don't have human-level artificial intelligence. We absolutely do, at least by any evaluations we can carry out.
No one wants to call a spade a spade yet, but the sentiment is obvious in recent research: directly being called "general purpose technologies" in the jobs paper, "general artificial intelligence" in the creativity paper. That last one is particularly funny; they just switched the two words.
> might as well argue that a plane isn’t a real bird or a car isn’t a real horse.
They aren’t though… They are far superior at specific things birds and horses are known for, but they can’t do everything that birds and horses can, so they aren’t even artificial birds and horses.
Of course they aren't. The point is that it's irrelevant.
What matters is that the plane still flies, the car still drives and the boat still sails.
For the people who are now salivating at their potential, or dreading the possibility of being made redundant by them, these large language models are already intelligent enough to matter.
Handwringing about some non-existent difference between "true understanding" and "fake understanding" (which, by the way, nobody seems able to actually distinguish; such a supposedly huge difference, and you can't even show me what it is: a distinction you can't test for is not a distinction) is so far beside the point that it's increasingly maddening to read.
Okay I agree with you on that. The technology will be disruptive regardless of whether we attribute true understanding to it, and as we start adding long term memory and planning to these AIs, we will start seeing significant alignment risk as well. This is true regardless of whether we decide to cope by saying they have "fake understanding" and are "stochastic parrots".
No, a short answer to this is: these models are probabilistic, therefore they will always have errors, along with whatever else. Secondly, "intelligence" is not one thing; no one has all of it or none of it, including computers.
> these models are probabilistic, therefore they will always have errors
There's nothing perfect. Even computers and computer networks need to have error-correcting code because information gets randomly corrupted.
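The error-correcting point is easy to make concrete. A minimal sketch (a triple-repetition code, the simplest error-correcting scheme, chosen purely for illustration): send each bit three times, and a majority vote at the receiver survives any single flipped copy per bit.

```python
# Minimal sketch of error correction: the triple-repetition code.
# Each bit is transmitted three times; a majority vote at the receiver
# corrects any single flipped copy per bit.

def encode(bits):
    # 1 -> 1,1,1 and 0 -> 0,0,0
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(bits):
    # Majority vote over each group of three transmitted copies.
    return [1 if sum(bits[i:i + 3]) >= 2 else 0
            for i in range(0, len(bits), 3)]

message = [1, 0, 1, 1]
sent = encode(message)
sent[4] ^= 1                    # the channel flips one transmitted bit
assert decode(sent) == message  # the corruption is corrected
```

Real systems use far denser codes (Hamming, Reed-Solomon), but the principle is the same: redundancy lets a deterministic decoder survive probabilistic corruption, which is exactly why "it's probabilistic" doesn't by itself mean "unusable".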
Our whole reality is probabilistic.
And us humans are way worse than AI at consistency. We even overwrite our own memories all the time, so we can't even be sure what we remember is actually what happened! (btw, this is currently being used in therapy to re-write traumatic memories and help people overcome PTSD).