
These models are already good at criminal activities such as phishing. That bar is rather low, especially once you scale up: hitting 100 people and successfully scamming 1 is still great ROI with cheap small models.

But I don't see what governments can really do about it. Sure, they can ban the models, but enforcing such a ban is another matter - the models are already out there, and a model is just a large file, easy to torrent etc. The code needed to run one is also out there and open source. Cracking down on top-end hardware (which at this point means not just GPUs but high-end PCs and Macs as well!) is easier to enforce, but it will piss off a lot more people.

It's just going to turn into an arms race of AI trying to stop AI.
