
We can constrain their use, as with nuclear materials.



Nuclear materials have the advantages of being rare, dangerous to handle, and hard to copy over the internet.


No, not really. There's no power in the world that can restrain this in its current form even mildly, much less absolutely. Why do you think that would be even slightly possible?


For the same reason we can regulate other things? Encryption is regulated, for example. In the case of AI, there "just" needs to be international co-operation.


How's that going for Tornado Cash?


Despite doing a pretty decent job of containing the risk, we're still on the clock until something terrible happens with nuclear war. Humanity appears to be well on track to killing millions to billions of people, rolling the dice relatively regularly and waiting for a 1% chance to materialize.
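To put rough numbers on the dice-rolling (assuming, purely for illustration, an independent 1% chance of catastrophe per year):

    # Illustration only: an assumed, independent 1% annual risk.
    p_annual = 0.01
    for years in (10, 50, 100):
        p_any = 1 - (1 - p_annual) ** years
        print(f"{years} years: {p_any:.0%}")  # ~10%, ~40%, ~63%

Even a small per-year risk compounds into roughly a coin flip over a human lifetime.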

If we only handle AI that well, doom is probable. It has economic uses, unlike nuclear weapons, so there will be a thriving black market dodging the safety concerns.


At some point, probably in the near future, it will be much simpler to create an autonomous AI agent than a nuclear bomb.


True, so we need to make sure we don't find ourselves in a mess before it happens. Right now I don't see nearly enough concern given to risk management in industry. The safeguards companies put on their models are trivially subverted by hackers. We don't even know how to cope with an AI that would attempt to subvert its own constitution.


So let's avoid that future.


Look at who has access to US nuclear codes now. I don’t believe it’s as constrained as you think.


It is a lot easier to detect illicit nuclear work compared to illicit AI work.


It is hard to hide anything that uses as much electricity as a large training run.
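As a rough, illustrative sketch (the cluster size and per-GPU power figures below are assumptions, not reported numbers):

    # Back-of-the-envelope with assumed figures, not real data.
    gpus = 20_000          # hypothetical frontier-scale cluster
    watts_per_gpu = 1_000  # ~700 W accelerator plus cooling/networking overhead
    print(gpus * watts_per_gpu / 1e6, "MW")  # 20.0 MW, drawn continuously for weeks

A sustained draw on that scale shows up on the grid; it is not something you hide in a garage.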

Also, there are only a few companies that can fab the semiconductors needed for these training runs.


You will run an autonomous AI agent on your own hardware, or you will have your own local AI pass out commands to distributed systems online: other AIs, real people, or just good old-fashioned programming. There is no stopping this.


It is in fact possible to stop training runs that consume billions of dollars in electricity and in GPU rental or depreciation costs. If no one does such a training run, then no one can release the weights of the model that would have been produced by the run, so you won't be able to run the model (which would never come into existence) on your own hardware. I don't care if you run DeepSeek R1 in your basement till the end of time. What my friends and I want to stop is the creation of more capable future models.

It is also quite possible for our society to decide that deep learning is too dangerous and to outlaw teaching and publishing about it, which would not completely stop the discovery of algorithmic deep-learning improvements (because some committed deep-learning enthusiasts would break the law) but would slow the discovery rate way, way down.


But it’s not actually possible for our society to decide that. In the real world, at this moment, when laws and norms are gone and a billionaire obsessed with AI holds power, that will 100% not happen. It won’t happen in the next several years, and that is the time left to do what you are saying. Pretending otherwise is a waste of time.


I prefer to retain some hope that our civilization has a future and that humans or at least human values and preferences have some place in that future civilization.

And most people who think AI "progress" is so dangerous that it must be stopped before it is too late have loose confidence intervals extending for at least a couple of decades (as opposed to just a few years) as to when it definitely becomes too late.



