AGI is Artificial General Intelligence. We have absolutely passed the bar of "artificial" and "generally intelligent." It's not my fault goalpost shifting is rampant in this field.
And you want to know the crazier thing? Evidently a lot of researchers feel similarly too.
General Purpose Technologies (from the Jobs Paper), General Artificial Intelligence (from the creativity paper). Want to know the original title of the recent Microsoft paper? "First contact with an AGI system".
The skirting around the word that is now happening is insanely funny. Look at the last one. Fuck, they just switched the word order. Nobody wants to call a spade a spade yet but it's obvious people are figuring it out.
I can show you output that clearly demonstrates understanding and reasoning. That's not the problem. The problem is that when I do, the argument quickly shifts to "it's not true understanding!"
What a bizarre argument.
This is the fallacy of the philosophical zombie. Somehow there is this extra-special distinction between two things, and yet you can't actually show it. You can't test for this so-called huge distinction. A distinction that can't be tested for is not a distinction.
The intelligence arguments are also stupid because they miss the point entirely.
What matters is that the plane still flies, the car still drives and the boat still sails.
For the people who are now salivating at their potential, or dreading the possibility of being made redundant by them, these large language models are already intelligent enough to matter.
> ... these large language models are already intelligent enough to matter.
I'm definitely not contesting that.
I've always considered the idea of "AGI" to mean something of a holy grail of machine learning -- the stage at which there is no real point in pursuing further advances in artificial intelligence, because the AI itself will discover and apply such augmentations using its own capabilities.
I have seen no evidence that these transformer models would be able to do this, but if the current models can, then perhaps I will eat my words. (Doing this would likely mean that GPT-4 would need to propose, implement, and empirically test some fundamental architectural advancements in both multimodal and reinforcement learning.)
By the way, many researchers are equally convinced that these models are in fact not AGI -- that includes the head of OpenAI.
See, what you're describing is much closer to ASI. At least, it used to be. This is the big problem I have. The constant goalpost shifting is maddening.
AGI went from meaning generally intelligent, to as smart as human experts, and now to smarter than all experts combined. You'll forgive me if I no longer want to play this game.
I know some researchers disagree. That's fine. The point I was really getting at is that no researcher worth his salt can call these models narrow anymore. There's absolutely nothing narrow about GPT and the like. So if you think it's not AGI, you've come to accept it no longer means general intelligence.
>> The point I was really getting at is that no researcher worth his salt can call these models narrow anymore.
Are you talking about large language models (LLMs)? Because those are narrow, and brittle, and dumb as bricks, and I don't care a jot about your "No True Scotsman". LLMs can only operate on text; they can only output text that demonstrates "reasoning" when their training text contains solutions to reasoning problems similar to the ones they're asked to solve; and their output depends entirely on their input: change the prompt and the "AGI" becomes a drooling idiot, and vice versa.
That's no sign of intelligence, and you should re-evaluate your unbridled enthusiasm. You believe in magick, and you are loudly proclaiming your belief in magick. History abounds with examples that magick doesn't work and only science does.
I've been using ChatGPT for a day and determined it absolutely can reason.
I'm an old-hat hobby programmer who played around with AI demos back in the mid-to-late '90s and 2000s, and ChatGPT is nothing like any AI I've ever seen before.
It absolutely can appear to reason, especially if you manipulate it out of its safety controls.
I don't know what it's doing to cause such compelling output, but it's certainly not just spitting out plausible next words over and over.
That said, there are fundamental problems with ChatGPT's understanding of reality, which is to say it's about as knowledgeable as a box of rocks. Or perhaps a better analogy: about as smart as a room-sized pile of loose papers.
But knowing about reality and reasoning are two very different things.
Have you tried out GPT-4? If not, and you can get access, I'd really recommend it. It's drastically better than what you get on the free version -- probably only a little better on the absolute scale of intelligence, but then the difference between an average person and a smart person is also small on the scale from "worm" to "supergenius".