Assuming good intentions and that nothing of substance is being hidden: is there any way to be transparent here that would have satisfied you, or are you essentially asking them to just keep to themselves, or something else?
A great way to be transparent would be to admit that some enormous egos prevented work that should be open from being open, and to actually open it up. Sure, it may piss off Microsoft, but statistically, things that piss off Microsoft have also been great for the world at large.
Unless the problem in question is Adjusted Gross Income, I don't think you're talking about a well-defined issue in the first place. That's kinda the problem; even half-definitions like "meets or exceeds human performance in all tasks" don't specify what those tasks are or what human performance represents.
In other words, targeting "AGI" is an ill-defined goalpost that you can move anywhere you want, and shareholders will be forced to follow. Today it's about teaching LLMs to speak French; next week they'll need 7 billion dollars to teach them to flip pancakes.
> Working in this space (however you want to say it) costs a lot of money.
Not necessarily? OpenAI deliberately chooses to scale their models vertically, which incumbents like Google and Meta had largely avoided. Their "big innovation" was GPT-4, an LLM of unprecedented scale that brute-forced its way to the top. The open-source 72B+ models of today generally give it a run for its money; the party trick is over.
Nitpicking the definitions is important, because for OpenAI it's unclear what comes next. Consumers don't know what to expect, portions of the board appear to be in open rebellion, and the goal of AGI remains undefined. I simply don't trust OpenAI (or Elon, for that matter) to do the right thing here.
Again, you haven't a clue what the point is here. They were created as a research lab, but after taking a paper created at Google they went closed source and are simply scaling that architecture, instead of working to create new and improved models that can actually run on reasonable hardware.
Small transformers being able to beat the same models scaled up is unrelated to anything being discussed, and you just seem like a fanboy at this point.
You're basically describing what happened to GPT-2 when T5-flan came out. Not to mention, the incumbent model at the time (BERT) was extremely competitive for its smaller size. Pretty much nobody was talking about OpenAI's models back then because they were too large to be realistically useful.
So yeah, I do actually anticipate smaller projects that devalue turtle-tower model scaling.
No, you missed the point: the vagueness of the definition, and its implication of magical tooling, was exactly my point when I asked what that even means. By saying this they can now write off any criticism, and people like you clearly eat it up.
No, it's a scapegoat to justify doing whatever they want, using a meme word so that sci-fi fans will accept it. The fact you're eating it up here is pure cringe; grow up. These people took a non-profit with an egalitarian mission and reversed course once they saw they could make fuck-you money. The "AGI" excuse is one only the immature are buying. Dude.
I don't remember ever reading an exchange like this on HN. Either I wasn't paying enough attention, or the demographic is just changing? I don't want to start an argument with either of you, but it's painful to see the US literally being split in half. Even when you watch a movie, you can reasonably guess which side the filmmaker stands on depending on the story, narrative, and perhaps the ending. Same for any other outlet you can think of, including comments on HN, methinks.
The parent comment isn't entirely wrong, though. There are reasonable, safe and productive degrees of curiosity, and there are unreasonable, unsafe and counterproductive degrees too.
AI itself is not worthless; the goal of advancing machine learning and computer vision has long since proved its worth. Heralding LLMs as the burgeoning messiah of "AGI" (or worse yet, crowning OpenAI) is bald-faced hype. This is not about space exploration or advancing some particular field of physics along a well-defined line of research. It's madness plus marketing, and there's no reasonable defense of their current valuation. At least Tesla has property worth something if they liquidate; OpenAI is worth nothing if it fails. Investing in their success is like playing at a roulette wheel you know is rigged.
That wasn't the point of the question. The question was a hypothetical to test if there was any possible response that would've satisfied the original poster.
They're not suggesting assuming good intentions about the parties forever. They're just asking for that assumption for the purposes of the question that was asked.
There is no satisfying answer if the actions that preceded it are not satisfying. The question implies that the original poster cannot be satisfied, and thus implicitly shifts the blame. The problem is not what the answer is, or how it is worded. The answer only portrays the actions, which are themselves unsatisfying.