Elon Musk isn't part of OpenAI any more, so I don't think his stance should carry that much weight.
> Sikka argued that "openness" was the fundamental reason he supported the project.
I believe the relevant quote is this one:
> Sam asked me if I would be ok with the fact that such an endeavor would be untethered and would produce results generally in the greater interests of humanity, and he was somewhat surprised by my reaction, that indeed I would only support this venture if such an openness was a fundamental requirement!
https://web.archive.org/web/20151222094518/http://www.infosy...
I'll leave this open to interpretation; I think there are multiple ways of taking it, and I'm not convinced which is accurate.
> OpenAI was heavily criticized by other AI risk researchers for its public approach, and argued strongly that it was doing the right thing. It specifically invoked the threat of AI tools being abused by small groups of people in secret as more pressing than the threats from self-directed or public-use AI.
I'm not sure who you're referring to here. The only critics I've heard claiming OpenAI is too open are those who believe in Bostrom-style AGI risk, i.e. the idea that (far-term) AI is intrinsically dangerous, rather than dangerous predominantly because of malicious use.
> If OpenAI thinks state actors can't replicate their work,
I don't think OpenAI believes this.