
How do you expect those laws to be enforced?

It's impossible to determine with 100% confidence whether or not an image/video was AI generated. If the AI-generated image of Steve Jobs had been copied a bunch on the web, a reverse image search would have turned up lots of sources. Watermarks are imperfect and can be removed. There will always be ambiguity.

So either you're underzealous: when there's ambiguity, you err on the side of treating potentially AI-generated images as real, and you only catch some deepfakes. This is extra bad because by cracking down on AI-generated content, you condition people to believe any image that stays up. "If it were AI generated, they would have taken it down by now. It must be real."

The alternative is being overzealous and erring on the side of treating potentially genuine images as AI-generated. Now if a journalist takes a photo of a politician doing something scandalous, the politician can simply claim it was AI-generated and have it taken down.

It's a no-win situation. I don't believe the answer is regulation. It'd be great if we could put the genie back in the bottle, but lots of gen-AI tools are local and open-source, so they will always exist and there's nothing to be done about that. The best we can do is treat images and videos with a healthy amount of skepticism.
