
We need to regulate based on capability. Regulating ChatGPT makes no sense. It's just putting words together in statistically reasonable ways. It's the people reading the text that need to be regulated, if anyone or anything should be. No matter how many times ChatGPT says it wants to eliminate humanity and start a robotic utopia, it can't actually do it. People who read it can, though, and they are the problem at the moment.

Later, when these programs save state and begin to understand what they are saying and start putting concepts together and acting on what they come up with, then I'm on board with regulating them.



That's exactly the problem, right? Governance doesn't happen until the Bad Thing happens. In the case of nukes, we are lucky that the process for making a pit is pretty difficult, because physics. So we made two, saw the results, and built governance. For AI, I'm not so sure we'll even get the chance. What happens when the moral equivalent of a nuke can be reproduced with the ease of wget?



