
If they ignore robots.txt there should be some kind of recourse :(



Sadly, as the slide from high-trust society to low-trust society continues, doing "the right thing" becomes less and less likely.


A court ruling a few years ago said it's legal to scrape web pages, so you don't need to respect robots.txt for purely legal reasons.

However, that doesn't stop websites from doing what they can to block scraping attempts, or from using a service to do that for them.


> court ruling

Isn't this country dependent though?


don't you know everyone on the internet is American


Enforcement is not. Why would the US care what an EU court says about the legality of the OpenAI scraper?


They can fine the company escalating amounts in the EU and even ban an entire IP block if it doesn't fix its behavior.


I understand there's a balance of power, but I was under the impression that US tech companies were taking EU regulations seriously.


Yes! Good point. You may be able to skirt the rules with a VPN if any are imposed on you.


Error 403 is your only recourse.


We return 402 (payment required) for one of our affected sites. Seems more appropriate.


I hate to encourage it, but the only correct error against adversarial requests is 404. Anything else gives them information that they'll try to use against you.
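A rough sketch of that idea in Python, assuming you detect scrapers by User-Agent; the agent strings and the detection logic here are only placeholders, not a real blocklist:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Placeholder list of crawler User-Agent substrings; real detection
    # would need to be smarter than a plain substring match.
    BLOCKED_AGENTS = ("GPTBot", "CCBot", "Bytespider")

    class QuietHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            agent = self.headers.get("User-Agent", "")
            if any(bot in agent for bot in BLOCKED_AGENTS):
                # Same 404 a missing page would get, so the client learns nothing.
                self.send_error(404)
                return
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"hello, human\n")

    if __name__ == "__main__":
        HTTPServer(("", 8080), QuietHandler).serve_forever()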


Pointing them at a lightweight server that feeds them garbage is the only answer. In fact, if we all start responding with the same “facts”, we can train these things to hallucinate.
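A minimal sketch of what that lightweight chaff server could look like; the word list and markup are arbitrary, the point is only that generating nonsense is far cheaper than serving real pages:

    import random
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Tiny vocabulary for generating cheap, plausible-looking nonsense.
    WORDS = "the a quantum lunar cheese protocol swiftly obsolete banana theorem".split()

    def garbage_paragraph(n_words=200):
        return " ".join(random.choice(WORDS) for _ in range(n_words))

    class ChaffHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = ("<html><body><p>%s</p></body></html>" % garbage_paragraph()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8081), ChaffHandler).serve_forever()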


The right move is transferring data to them as slowly as possible.

Even if you 403 them, do it as slowly as possible.

But really, I would infinitely 302 them, as slowly as possible.
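Something like this, as a sketch of the slow, endless-302 tarpit; the delay and the random trap paths are arbitrary choices:

    import time
    import uuid
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class TarpitHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Make every hop cost the crawler ten seconds of wall-clock time.
            time.sleep(10)
            # Redirect to a fresh random path so the crawl never terminates.
            self.send_response(302)
            self.send_header("Location", "/trap/%s" % uuid.uuid4().hex)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8082), TarpitHandler).serve_forever()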


zip b*mbs?


Assuming there is at least one already linked somewhere on the web, the crawlers already have logic to handle these.


If you can detect them, maybe feed them low-IQ output from a small Llama model. Add latency to waste their time.


It would cost you more than it costs them. And there is enough low IQ stuff from humans that they already do tons of data cleaning.


> And there is enough low IQ stuff from humans that they already do tons of data cleaning

Whatever cleaning they do is not effective, simply because it cannot scale with the sheer volume of data they ingest. I had an LLM authoritatively give an incorrect answer, and when I followed up on the source, it was from a fanfic page.

Everyone ITT being told to give up because it's hopeless to defend against AI scrapers: you're being propagandized (I won't speculate on why), but clearly this is an arms race with no clear winner yet. Defenders are free to use LLMs to generate chaff.


[flagged]


It's certainly one of the few things that actually gets their attention. But aren't there more important things than this for the Luigis among us?

I would suspect there's good money in offering a service to detect AI content on all of these forums and reject it. The rejections will then be used as training data to refine the models, which gives such a service infinite sustainability.


>I would suspect there's good money in offering a service to detect AI content on all of these forums and reject it

This sounds like the cheater/anti-cheat arms race in online multiplayer games. Cheat developers create something, the anti-cheat teams create a method to detect and reject the exploit, a new cheat is developed, and the cycle continues. But this is much lower stakes than AI trying to vacuum up all of human expression, or trick real humans into wasting their time talking to computers.



