> Testing a dapp off the mainnet is like ensuring your website works on localhost
I would argue the exact opposite. A website will be deployed to different versions of different browsers on different operating systems. A smart contract will exist on a single distributed computer. It sounds like the actual problem is people treating smart contract development as cavalierly as web app development.
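And the tooling reflects this: you can test against a local fork of the exact chain the contract will live on. A minimal sketch, assuming a Hardhat project with ethers v6 and a node started with `npx hardhat node --fork <RPC_URL>` (the RPC URL is a placeholder); it exercises the real WETH contract at its canonical mainnet address:

    // test/weth.fork.test.ts
    // Assumes a local mainnet fork: npx hardhat node --fork <RPC_URL>
    import { ethers } from "hardhat";
    import { expect } from "chai";

    // Canonical WETH9 address on Ethereum mainnet; on the fork it exists
    // with its real bytecode and real state.
    const WETH = "0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2";

    describe("WETH on a mainnet fork", () => {
      it("mints wrapped ether one-to-one on deposit", async () => {
        const [user] = await ethers.getSigners();
        const weth = await ethers.getContractAt(
          [
            "function deposit() payable",
            "function balanceOf(address) view returns (uint256)",
          ],
          WETH
        );

        const before = await weth.balanceOf(user.address);
        await weth.connect(user).deposit({ value: ethers.parseEther("1") });
        expect(await weth.balanceOf(user.address)).to.equal(
          before + ethers.parseEther("1")
        );
      });
    });

Unlike checking a website on localhost, the fork reproduces the state and bytecode of the one chain the contract will actually be deployed to.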
If communist China is anything like communist Europe used to be, finding clever ways to get around the rules, or to use them to your advantage, is almost a sport, and is definitely not considered immoral or unethical by most.
Alignment refers to the process of aligning AI with human values. I don't see why a superhuman AI would require different prompting than what is in use today.
The idea is that keeping a superhuman AI aligned would require superhuman supervision, not just better prompting: humans can no longer reliably evaluate what such a model produces. This is the whole premise of OpenAI's Superalignment research and its recent publication.
I'm much more interested in lower-parameter models that are optimized to punch above their weight. There is already interesting work in this space with Mistral and Phi, and I see research coming out virtually every week trying to pick off the low-hanging fruit.
> Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.
> it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.
So the board did not have confidence that Sam was acting in good faith. Watch any of Ilya's many interviews: he speaks openly and candidly about his position. It is clear to me that Ilya is completely committed to the principles of the charter and sees a very real risk of sufficiently advanced AI causing disproportionate harm.
People keep trying to understand OpenAI as a hypergrowth SV startup, which it is explicitly not.
That original announcement doesn't make it nearly as explicit as you're making it. It doesn't say what he lied about, and it doesn't say he's not on board with the mission.
Sounds like the firing was done to better serve the original mission, and is therefore probably a good thing. Though the way it's happening does come across as sloppy and panicky to me, especially since they already replaced their first replacement CEO.
Edit: it turns out Wikipedia already has a pretty good write-up about the situation:
> "Sutskever is one of the six board members of the non-profit entity which controls OpenAI.[7] According to Sam Altman and Greg Brockman, Sutskever was the primary driver behind the November 2023 board meeting that led to Altman's firing and Brockman's resignation from OpenAI.[30][31] The Information reported that the firing in part resulted from a conflict over the extent to which the company should commit to AI safety.[32] In a company all-hands shortly after the board meeting, Sutskever stated that firing Altman was "the board doing its duty."[33] The firing of Altman and resignation of Brockman led to resignation of 3 senior researchers from OpenAI."