There is a reason there was cult-like behaviour on X amongst the employees in support of bringing Sam back as CEO when he was ousted by the OpenAI board of directors at the time.
"OpenAI is nothing without it's people"
All of "AGI" (which actually was the lamborghinis, penthouses, villas and mansions for the employees) was all on the line and on hold if that equity went to 0 or would be denied selling their equity if they openly criticized OpenAI after they left.
Yes, and the reason for that is that employees at OpenAI believed (reasonably) that they were cruising for Google-scale windfall payouts from their equity over a relatively short time horizon, and that Altman and Brockman leaving OpenAI and landing at a well-funded competitor, coupled with OpenAI corporate management that publicly opposed commercialization of their technology, would torpedo those payouts.
I'd have sounded cult-like too under those conditions (but I also don't believe AGI is a thing, so would not have a countervailing cult belief system to weigh against that behavior).
I also believe that AGI is not a thing, but for different reasons. I notice that almost everybody seems to implicitly assume, without justification, that humans are a GI (general intelligence). I think it's easy to see that if we are not a GI, then we can't see what we're missing, so it will feel like we might be GI when we're really not. People also don't seem interested in justifying why humans would be GI but other animals with 99% of the same DNA aren't.
My main reason for thinking general intelligence is not a thing is similar to how Turing completeness is not a thing in practice. You can conceptualize a Turing machine, but you can't actually build one for real: any physical computer has finite memory, so it's a (very large) finite-state machine rather than a true Turing machine with its unbounded tape. I think actual general intelligence would likewise require an infinite brain.
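To make that concrete, here's a minimal sketch (the states, transition table, and tape bound below are all invented for illustration, not taken from anywhere in this thread): any runnable "Turing machine" has to fix a finite tape up front, at which point it's really just a large finite-state machine.

```python
# Sketch: a Turing-machine-style simulator on a necessarily finite tape.
# TAPE_CELLS, the states, and the transition table are made up for the example.

TAPE_CELLS = 1_000  # every physical machine fixes *some* bound

def run(transitions, start_state, halt_state, input_symbols):
    """transitions maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left) or +1 (right)."""
    tape = list(input_symbols) + ["_"] * (TAPE_CELLS - len(input_symbols))
    head, state = 0, start_state
    while state != halt_state:
        if not 0 <= head < TAPE_CELLS:
            # A true Turing machine never reaches this line; every buildable
            # one does, for some input. That's the whole point of the sketch.
            raise MemoryError("ran off the finite tape")
        state, tape[head], move = transitions[(state, tape[head])]
        head += move
    return "".join(tape).rstrip("_")

# Unary increment: scan right over 1s, write a 1 on the first blank, halt.
incr = {("scan", "1"): ("scan", "1", +1),
        ("scan", "_"): ("halt", "1", +1)}
print(run(incr, "scan", "halt", "111"))  # -> "1111"
```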
> I notice that almost everybody seems to implicitly assume, without justification, that humans are a GI (general intelligence). I think it's easy to see that if we are not a GI, then we can't see what we're missing, so it will feel like we might be GI when we're really not.
That's actually a great point which I'd never heard before. I agree that it's very likely that us humans do not really have GI, but rather only the intelligence that evolved stochastically to better favour our existence and reproduction, with all its positive and negative spandrels[0]. We can call that human intelligence (HI).
However, even if our "general" intelligence is a mirage, surely what most people imagine when they talk about 'AGI' is actually AHI: an artificial intelligence with the same characteristics as human intelligence, which in their hubris they believe to be general. Or are you making a harder argument, that human intelligence may not actually have the ability to create AHI?
Yes, I do think that people usually mean AHI even when they say AGI, although they don't realize it, because when asked to define AGI they talk about generality and not about mimicking humans. (Meanwhile, when they talk about sentience and consciousness, they will usually only afford that to an artificial entity if it is exactly like a human, and often not even then.)
> Or are you making a harder argument, that human intelligence may not actually have the ability to create AHI?
I wasn't, but I've pondered it since you brought it up. No, I don't think it's impossible to create a greater intelligence than oneself; in fact, evolution has already done it by creating animals, including but not limited to humans. I used to think it was impossible when I pondered science-fiction characters like Data from TNG, but modern LLMs show that we can create intelligence without having to understand how it works. Data is depicted as having been engineered, but machine learning is closer to evolution than it is to engineering.
If we believe the embodiment theory of intelligence (far from the only theory out there, but a very influential and convincing one), then building an AGI is a problem equivalent to building an artificial human. Not a puppet, not a mock-up, not "sorta human", but a real, fully embodied human, down to the gut microbiome, because according to the embodiment theory, that affects intelligence too.
In this formulation, it’s pretty much as impossible as time travel, really.
Sure, if we redefine "AGI" to mean "literally cloning a human biologically", then AGI suddenly becomes a very different problem (mainly one of ethics, since creating human clones, then educating, brainwashing, and forcing them to respond to chat messages à la ChatGPT raises a couple of ethical issues along the way).
I don't see how claiming that intelligence is multi-faceted makes AGI (the A is 'artificial' remember) impossible.
Even if _human_ intelligence requires eating yogurt for your gut biome, that doesn't preclude an artificial copy that's good enough.
Like, a dog is very intelligent: a dog can fetch and shake hands because of years of breeding, training, and maybe a certain gut biome. Boston Dynamics did not have to understand a single cell of a dog's stomach lining in order to make dog robots perfectly capable of fetching and shaking hands.
I get that you're saying "yes, we've fully mapped the neurons of a fruit fly, we can accurately simulate and predict how a fruit fly's brain's neurons will activate, and we can build statistical models of fruit-fly behavior that predict its actions far more cheaply even without the brain scan, but human brains are unique in a way that makes it impossible to build any simulation or prediction or facsimile that is 'good enough', because you also need to first take some bacteria from one of Peter Thiel's blood boys and shove it in the computer, and without that we can't even begin to make a facsimile of intelligence". I just don't buy it.
"AGI" isn't a thing and never will be. It fails even really basic scrutiny. The objective function of a human being is to keep its biological body alive and reproduce. There is no similar objective on which an ML algorithm can be trained. It's frankly a stupid idea propagated by people with no meaningful connection to the field and no idea what the fuck they're talking about.
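To be concrete about the contrast (the toy model, data, and learning rate below are made up for illustration): an ML objective is an explicit, computable loss that an optimizer can minimize step by step, whereas "keep this body alive and reproduce" admits no comparable closed-form, differentiable expression you could hand to a trainer.

```python
# Toy example of what a trainable objective actually is: an explicit,
# computable, differentiable loss. Model, data, and learning rate invented.

def loss(w, data):
    # Mean squared error of a one-parameter linear model y = w * x.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w, data):
    # d(loss)/dw, derived by hand for this toy loss.
    return sum(2 * x * (w * x - y) for x, y in data) / len(data)

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # roughly y = 2x
w = 0.0
for _ in range(200):            # plain gradient descent
    w -= 0.05 * grad(w, data)
print(round(w, 2))              # converges near 2.0
```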
We will look back on this, and in a decade's time the early OpenAI employees (who sold) will speak out in documentaries and films, admitting that "AGI" was a period of easy, dumb money.