
It’s easy to defend this position.

It is safer to operate an AI behind a centralized service, because if dangerous capabilities are discovered you can turn it off or mitigate them.

If you open-weight the model and dangerous capabilities are later discovered, there is no way to put the genie back in the bottle: the weights are out there, and anyone can use them.

This of course applies both to mundane harms (e.g. generating deepfake porn of famous people) and to existential risks (e.g. power-seeking behavior).



This was all obvious *before* they wrote the charter.


I don’t think this belief was widespread at all at that time.

Indeed, it’s not widespread even now; lots of folks around here are still confused by “open weight sounds like open source, and we like open source”, and Elon is still charging towards fully open models.

(In general, I think that if you are more worried about a baby machine god owned and aligned by Meta than about complete annihilation from unaligned ASI, you’ll prefer open weights no matter the theoretical risk.)


In my experience, it's been part of widespread discussion in the AI research and AI safety communities since the 2000s.


Open sourcing would also mean it is available to sanctioned countries like China.



