
> If the tech really is dangerous then being more closed arguably is in the public's best interest

If that were true, then they shouldn't have started off that way to begin with. You can't have it both ways: either you are pursuing your goal of being open (as the name implies), or the way you set yourself up was ill-suited all along.



Their position evolved. Many people at the time disagreed that having open source AGI - putting it in the hands of many people - was the best way to mitigate the potential danger. Note that this original stance of OpenAI dates from before they started playing with transformers, before they had anything that was beginning to look like AI/AGI. It was around the time of GPT-3 that they said "this might be dangerous, we're going to hold it back".

There's nothing wrong with changing your opinion based on fresh information.


> There's nothing wrong with changing your opinion based on fresh information.

I don't really get that twist. What "fresh" information suddenly arrived here? The structure they gave themselves was chosen explicitly with the risks of future developments in mind. In fact, as outlined in the complaint, that was why they chose that specific structure. How can it now be called new information that there are actually risks involved? That was the whole premise of creating the organization in the form it took to begin with!


I’d agree. And the fact that it evolved in a way that made individuals massive profits suggests that maybe their minds weren’t changed, and profit was the actual intention.


The fresh information was seeing who built an AGI, and what it looks like.

When OpenAI was founded it was expected that AGI would likely come out of Google, with OpenAI doing the world a favor by replicating this wonderful technology and giving it to the masses. One might have imagined AGI would be some Spock-like stone cold super intelligence.

As it turns out, OpenAI themselves were the first to create something AGI-like, so the role they had envisaged for themselves was totally flipped. Not only that, but this AGI wasn't an engineered intelligence but rather a stochastic parrot, trained on the internet, and incredibly toxic; as much a liability as a powerful tool.

OpenAI's founding mission of AI democracy has turned into one of protecting us from this bullshitting psychopath that they themselves created, while at the same time raising the billions of dollars it takes to iterate on something so dumb it needs to be retrained from scratch every time you want to update it.


> fresh information

What fresh information?


They were founded on the premise that some large player (specifically Google) would develop AGI, keep it closed, and maybe not develop it in the best interests (safety) of the public. The founding charter was essentially to try to ensure that AI was developed safely, which at the time they believed would be best done by making it open source and available to everyone (this was contentious from day 1 anyway - a bit like saying the best defense against bio-hackers is to open source the DNA for Ebola).

What goes unsaid, perhaps, is that back then (before the transformer had even been invented, before AlphaGo), what people might have imagined AGI to look like (some kind of sterile super-intelligence) was very different from the LLM-based "AGI" that eventually emerged.

So, what changed? What was the fresh information that warranted the change of opinion that open source was not the safest approach?

I'd say a few things.

1) As it turned out, OpenAI themselves were the first to develop a fledgling AGI, so they were not in the role they envisaged of open sourcing something to counteract an evil closed source competitor.

2) The LLM-based form of AGI that OpenAI developed was really not what anyone imagined it would be. The danger of what OpenAI has developed, so far, isn't some doomsday "AI takes over the world" scenario, but rather that it's inherently a super-toxic chatbot (did you see OpenAI's examples of how it behaved before RLHF?!) that is potentially disruptive and negative to society because of what it is rather than because of its intelligence. The danger (and remedy) is not, so far, what OpenAI originally thought it would be.

3) OpenAI have been quite open about this in the past: Musk leaving - he had been their major source of funds - forced OpenAI to change how they were funded. At the same time as this was happening (around GPT-2), it was becoming evident how extraordinarily expensive this unanticipated path to AGI was going to be to continue developing (Altman has indicated a cost of $100M+ to train GPT-3, perhaps including hardware). They were no longer looking for a benefactor like Musk, willing and able to donate a few tens of millions of dollars, but needed a partner able to put billions into the effort, which necessitated an investor expecting a return on investment - hence the corporate structure change to accommodate that.


> some large player (specifically Google) would develop AGI, keep it closed, and maybe not develop it in the best interests (safety) of the public

https://youtu.be/1LVt49l6aP8


…unless you believe that the world can change and people’s opinions and decisions should change based on changing contexts and evolving understandings.

When I was young I proudly insisted that all I ever wanted to eat was pizza. I am very glad that 1) I was allowed to evolve out of that desire, and 2) I am not constantly harangued as a hypocrite when I enjoy a nice salad.


Sure, but the OpenAI situation feels a bit more like "when I started this charity, all I wanted to do was save the world. Then I decided the best thing to do was use the donor funds to strengthen my friend Satya's products, earn 100x returns for investors, and spin off profit-making ventures to bill the world".

It's not like they've gone closed source as a company, or threatened to run off to Microsoft as individuals, or talked up the need for $7 trillion of investment in semiconductors, because they've evolved the understanding that the technology is too dangerous to turn into a mass-market product they just happen to monopolise, is it?


> …unless you believe that the world can change and people’s opinions and decisions should change based on changing contexts and evolving understandings.

What I believe doesn't matter. As an adult, if you set up contracts and structures based on principles which you bind yourself to, that's your decision. If you then convince people to join or support you based on those principles, you shouldn't be surprised if you get into trouble once you "change your opinion" and no longer fulfill your obligations.

> When I was young I proudly insisted that all I ever wanted to eat was pizza.

It's a good thing that you can't set up a contract as a child, isn't it?


The document says they will open source “when applicable”. If open sourcing wouldn’t benefit the public, then they aren’t obligated to do it.

That gives a lot of leeway for honest or dishonest intent.


> The document says they will open source “when applicable”. If open sourcing wouldn’t benefit the public, then they aren’t obligated to do it.

From their charter: “resulting technology will benefit the public and the corporation will seek to open source technology for the public benefit when applicable. The corporation is not organized for the private gain of any person”

I just thought it might be important to provide more context. See the other comments for a discussion on "when applicable". I think this misses the point here.


Didn’t see anything definitive. Care to explain your point or link to a relevant comment?


> Care to explain your point or link to a relevant comment?

Explanation: Reducing the discussion to the two words "when applicable" (especially when ripped out of context) might be relevant in the legal sense, but it totally misses the bigger picture of the discussion here. I don't like being dragged onto such tangents when they can be expected only to distract from the actual point being discussed, or to degrade the discussion into one about the meaning of words. I could, for instance, argue that it says "when" and not "if", which wouldn't get us anywhere and hence would be a depressing and fruitless endeavor. It isn't as easy as that; the matter needs to be looked at broadly, considering all relevant aspects and not just two words.

For reference, see the top comment, which clearly mentions the "when applicable" in context and then outlines that, in general, OpenAI doesn't seem to do what they have promised.

And here's a sub thread that goes into detail on the two words:

https://news.ycombinator.com/item?id=39568850



