Good point. Then it's actually an active attempt, right?
Also, I realize my statement was a bit harsh; I know someone probably worked hard on this. I just feel it's easily circumvented, as opposed to some of the watermarks in images (like Google's, which they really should open source).
In all reality, I spent about 30 minutes on this one Sunday afternoon, back when every model failed nearly 100% of the time. Now it's more like 95%, but about half of the models figure out that something is wrong and prompt the user to fix it. This isn't meant to be a permanent fix at all, just a cool idea that will get patched, the same way DANs were back in 2023.
true.
It does put the AI companies in the position, though, of actively building software to circumvent these defenses as they continue to take content.
Which might be looked upon unfavorably if they're ever dragged into court.