And now we have our answer. sama said that Ilya was going "to start something that was personally important to him." Since that thing is apparently AI safety, we can infer that AI safety is not a priority for OpenAI.
This only makes sense if OpenAI just doesn't believe AGI is a near-term-enough possibility to merit their laser focus right now, when compared to investing in R&D that will make money from GPT in a shorter time horizon (2-3 years).
I suppose you could say OpenAI is being irresponsible in adopting that position, but...come on guys, it's pretty cynical to think that a company AND THE MAJORITY OF ITS EMPLOYEES would all ignore world-ending potential just to make some cash.
So in the end, this is not necessarily a bad thing. It has just revealed that the boring explanation was true all along: OpenAI is walking the fine line between making rational business decisions in light of the far-off time horizon of AGI, and continuing to claim AGI is coming soon as part of their marketing.
Part of it, I think, is that the definition OpenAI uses for AGI is much more generous than what most people probably imagine for AI. I believe their website once said something like: AGI is a system that is "better" than a human at the economic tasks it's used for. That's a definition so broad that a $1 four-function calculator would arguably meet it, since it can do arithmetic faster and more accurately than almost any human. Another part is that we don't understand very well how consciousness works in our own species or in others, so we can't even define metrics to validate that we've built an AGI in the sense most laypeople would mean by the term.
Companies in the end are predictable!