I mean, I definitely disagree with the claim that GPT-4 is an AGI, but OpenAI's own charter defines AGI as a highly autonomous system that outperforms humans at most economically valuable work.
Even taking that into consideration, I don't consider GPT-4 to be an AGI, but you can see how someone might attempt to make a convincing argument.
Personally though, I think this definition of AGI sets the bar too high. Let's say, hypothetically, GPT-5 comes out and it exceeds everyone's expectations. It's practically flawless as a lawyer. It can diagnose medical issues and provide medical advice far better than any doctor. Its coding skills are on par with those of the mythical 10x engineer. And, obviously, it can perform clerical and customer support tasks better than anyone else.
As impressive as that sounds, you could argue that according to OpenAI's charter it isn't actually an AGI until it takes an embodied form, since most US jobs are physical in nature. According to the Bureau of Labor Statistics, roughly 45% of jobs required medium strength when the survey was taken in 2017 (https://www.bls.gov/opub/ted/2018/physically-strenuous-jobs-...).
Hypothetically, you could argue that we might wind up building superintelligence before we get AGI, simply because we haven't developed an intelligence that can be put into a robot body and work in a warehouse with little human supervision. But that's only if you take OpenAI's charter literally.
Worth noting that Sam Altman himself hasn't consistently used that definition of AGI, though. He has argued that an AGI is simply one that's smarter than most humans. In that case, the plaintiffs could point to GPT-4's score on the LSAT and various other tests and benchmarks, and the defendants would have to awkwardly explain to a judge that, contrary to the hype, GPT-4 doesn't really "think" at all; it's just performing next-token prediction over its training data. Also, look at all the ridiculous ways in which it hallucinates.
Personally, I think it would be hilarious if it came down to that. Who knows, maybe Elon is playing some kind of 5D chess and burning all this money just to troll OpenAI into admitting in a courtroom that GPT-4 isn't actually smart at all.