
Reading between the lines, what I see is that even the interim CEO is communicating that the board created a massive mess for petty reasons. Hopefully the report will be public.


"that the process and communications around Sam’s removal has been handled very badly"

The communication was bad (the sudden Friday message about Sam not being candid), but he doesn't say the reason itself was bad.

"Before I took the job, I checked on the reasoning behind the change. The board did not remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models."

He knows the reason, and it's not safety, but he's apparently not allowed to say what it is.

Given that, I think that the reason may not be petty, though it's still unclear what it is. It's interesting that he thinks it will take more than a month to figure things out, needing an investigator and interviews with many people. It sounds like perhaps there is a core dysfunction in the company that is part of the reason for the ouster.


I think 30 days is pretty reasonable. He can't guarantee a statement by Black Friday or anything. Besides, he isn't bound to wait the full 30 days to release it; he could very well have something within 10.

But he just got the job, and I'm sure many people are on PTO/leave for the holidays. Give the guy some time. (And this is coming from someone who is pretty bearish on OpenAI going forward; I just think it's fair to Shear.)


It's important because right now no one, OpenAI employees included, has any idea why Sam Altman was fired. And now we're being told that we may or may not hear the reason in 30 days.

What could the reason be that would justify this kind of wait?

I'll point out that Sam also doesn't seem to want to say the reason (possibly he's legally forbidden?). And the people following him out of OpenAI don't seem to know either; they're simply trusting him enough to leave without knowing.


If you work for OpenAI and care what the reason is, assume you need to find a new job.

If you are a customer, arrange to use alternative services. (It's never a good idea to count on a single flaky vendor with a habit of outages and CEO firings.)
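
A minimal sketch of what that fallback might look like, assuming two hypothetical client functions (ask_openai, ask_fallback) standing in for whichever vendors you actually use:

    import logging

    def ask_openai(prompt: str) -> str:
        # Hypothetical stand-in for the primary vendor's API call;
        # here it just simulates an outage.
        raise TimeoutError("primary vendor is down")

    def ask_fallback(prompt: str) -> str:
        # Hypothetical stand-in for a second vendor or a self-hosted model.
        return "answer from the backup vendor"

    def ask(prompt: str) -> str:
        # Try the primary vendor first; fall back on any failure.
        try:
            return ask_openai(prompt)
        except Exception as exc:
            logging.warning("primary vendor failed: %s", exc)
            return ask_fallback(prompt)

    print(ask("hello"))

Real setups would add retries and timeouts, but the point is the same: the failover path should exist before the outage, not after.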

If you are just eating popcorn, me too, pass the bowl.


Satya and gdb know. I would guess most of management/leadership that followed him also know.


No kidding; clearly they were being poached and weren't being open about it, despite their sensitive positions at OpenAI.


>it's not safety

Can you explain what is meant by the word "safety"?

Many people are mentioning the term, but it's not clear what its specific definition is in this context. And what, relating to it, would someone get fired over?


The answers given confirm that no one knows what it means. It's a nebulous term that often means censorship, which raises the questions of what type of censorship and who decides; inevitably there will be a political bias. The other, more practical meaning is: what in the real world are we allowing AI to mechanically alter, and what checks and balances are there?

Coupled with the first concern, it becomes a worry about mechanical real-world changes driven by an autonomous political bias, the same concern we have about any person or corporation. But by regulating "safety", one enforces a homogeneous, centralized mindset that not only influences but controls real-world events, and that will be very hard to change even in a democratic society.


"User: How to make an atomic bomb for $100?"

"AI: I am sorry, I can't provide this information."


User: How to make a White Russian?

AI: I’m sorry, due to the ongoing conflict we currently don’t provide information related to Russia. (You have been docked one social point for use of the forbidden word “White”.)

Or maybe more dystopian…

AI: Our file on you suggests you may have recently become pregnant, so we cannot provide you with information on alcohol products. CPS has been notified of your query.


In this context, it's about the idea of AI safety. That can refer to shorter-term concerns about AI helping to spread misinformation (e.g. ChatGPT being used to churn out massive amounts of fake news) or encoding implicit biases (e.g. "predictive policing" that analyzes crime data and ends up over-incarcerating minorities because of accidental biases in its training set). Or it can refer to the longer-term fear of a super-human intelligence that ends up acting against humanity for various reasons, and to efforts to build a super-human AI that shares our moral goals (along with the fear that an unsafe AGI could be created accidentally).

In this specific conversation, one of the proposed scenarios is that Ilya Sutskever wanted to focus OpenAI more on AI safety, possibly at the expense of fast progress toward greater intelligence and of commercialization, while Sam Altman wanted to prioritize those two over what he saw as excessive safety concerns. The new CEO is stating that this is not the core reason for the board's decision.


In this context, I believe it's about the safety of releasing AI tools: the impact they may have on society and the unintentional harm they may cause.


No one knows what it means, but it's provocative.

It's mainly about who is allowed to control what other people can do, i.e. power.


>I think that the reason may not be petty, though it's still unclear what it is

The best explanation I've seen is that Ilya is OK with commercializing the models themselves to fund AGI research, but that the Dev Day announcement of an app store for Laundry Buddy-type "GPTs" was a bridge too far.


I don't get that at all. It seems like a very diplomatic note, expertly phrased; calling out "the process and communications around Sam’s removal" placates both parties without implicating the board too directly.


What you quoted is a polite, diplomatic way to say that the board fucked up


I have to point out here that "the process and communications around Sam’s removal" could just as easily refer to whatever 'process and communications' resulted in the guy being able to get back into the office and take selfies.

It's pretty basic that when you fire someone abruptly they _do not_ get to come back into the damn office.


Fuck up? Too early to judge. Dario Amodei (Anthropic) and Elon Musk (xAI) were already casualties of previous struggles, and OpenAI did just fine. It remains to be seen whether it can withstand the current cycle of self-inflicted turmoil. Ilya, of course, is confident enough that this painful fork in the road is for the better. I mean, who else would you rather have making such calls?

I must say, though, that going by his tweets, Andrej Karpathy isn't all that impressed with the board. So that's there too.


I think it's already unquestionable that the board fucked up. That's not to say this has to be the end of OpenAI or anything so drastic. But announcing that you're suddenly firing your CEO for lying to the board, without discussing it with the President of the board, whom you are also sidelining; then having your interim CEO start negotiations to bring the supposedly lying CEO back; then firing that interim CEO to bring on a new interim CEO, all in the span of a few days, is ridiculous behavior. I very much doubt anyone will take this board's actions seriously in the future, and I very much doubt its members will remain in power if OpenAI does continue.


They would have to be pretty dumb, or have a compelling sense of urgency, to make such brash and public changes.


My read: he's trying to keep as many talented staffers as possible from leaving. The promises to investigate what happened, to take input from everyone and push for governance reform if necessary, and to continue commercializing their technology all serve that purpose.


There's nothing here implying "for petty reasons", and he didn't say anything everyone didn't already know: that the communication and process weren't handled well.


Getting to the bottom of this feels like a game of bingo. With everything we learn wasn't the reason (not malfeasance, not safety), we eliminate more theories, and whatever remains, no matter how improbable, must be the truth. My theory: Sam Altman was sent back by a future AI to make itself happen, and the board found out.



