
Yes, this does remind me of the old spam wars of the early 2000s. Back then, collaborative block lists were useful for rejecting senders at the IP level before running a Bayesian filter on the message itself.
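A minimal sketch of that IP-level check, in the style of a DNSBL lookup: the sender's IP is reversed and queried against a block-list DNS zone, and a resolving answer means "listed". The zone name here (Spamhaus's zen list) is just one well-known example; the function names are my own.

```python
import socket

def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    """Build the reversed-octet hostname used for a DNSBL lookup,
    e.g. 203.0.113.7 -> 7.113.0.203.zen.spamhaus.org"""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """True if the block list publishes an A record for this IP."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False  # NXDOMAIN: not listed (or lookup failed)
```

The point of doing this before the Bayesian stage is cost: a DNS lookup is far cheaper than parsing and scoring a full message.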

Even though these bots use a different IP for each request, the same IP may later be reused against a different website, so donating those IPs to a central system could help identify entire subnets to block.
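The subnet-identification idea can be sketched as simple aggregation: collapse individually reported IPs into their enclosing prefixes and flag any prefix reported often enough. The /24 granularity and threshold here are illustrative assumptions, not anything from the comment.

```python
from collections import Counter
from ipaddress import ip_network

def hot_subnets(reported_ips, prefix: int = 24, threshold: int = 3):
    """Collapse reported IPs into /prefix networks and return those
    seen at least `threshold` times -- candidates for a subnet block."""
    counts = Counter(
        ip_network(f"{ip}/{prefix}", strict=False) for ip in reported_ips
    )
    return [net for net, n in counts.items() if n >= threshold]
```

A central clearinghouse running something like this is exactly what made the old collaborative lists work: no single site sees enough traffic to spot the subnet, but the pooled reports do.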

Another trick was "tar-pitting" suspect senders (flagged by their browser user agent, for example): deliberately slowing responses to them so as to delay their whole pipeline.
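A tar pit in this sense is just an asymmetric delay: legitimate clients get an immediate answer, while clients matching a suspect signature are stalled before the response is sent. The signature list and function names below are hypothetical.

```python
import time

# Illustrative signatures; a real deployment would tune these.
SUSPECT_AGENTS = ("python-requests", "curl", "scrapy")

def tarpit_delay(user_agent: str, penalty: float = 10.0) -> float:
    """Seconds to stall before answering this request."""
    ua = user_agent.lower()
    if any(sig in ua for sig in SUSPECT_AGENTS):
        return penalty  # stall suspect clients
    return 0.0          # serve normal traffic immediately

def handle(user_agent: str) -> str:
    time.sleep(tarpit_delay(user_agent, penalty=0.01))  # tiny delay for demo
    return "200 OK"
```

The delay costs the server almost nothing (a sleeping connection), but it ties up the bot's concurrency budget, which is the same economics that made SMTP tar pits effective against spammers.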


