Everything on the web is a robot: every client is an agent for someone, somewhere; some are just more automated than others.
Distinguishing them en masse seems like a waste to me. Deal with the actual problems, like resource abuse.
I think part of the issue is that a lot of people are lying to themselves that they "love the public" when in reality they don't and want nothing to do with them. They lack the introspection to untangle that, though, so it gets expressed through various technical solutions instead.
I do think the answer is two-pronged: roll out the red carpet for "good bots", add friction for "bad bots".
I work for Stytch, and for us that looks like:
1) make it easy to provide Connected Apps experiences, like OAuth-style consent screens: "Do you want to grant MyAgent access to your Google Drive files?"
2) make it easy to detect all bots and shift them towards the happy path. For example, "Looks like you're scraping my website for AI training. If you want to see the content easily, just grab it all at /LLMs.txt instead."
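For what it's worth, here's a minimal sketch of what (2) could look like as Express middleware. The user-agent substrings and the /LLMs.txt route are illustrative placeholders, not Stytch's actual detection logic:

  import express from "express";

  const app = express();

  // Rough heuristic: user-agent substrings commonly sent by AI crawlers.
  // A real deployment would rely on a proper bot-detection signal,
  // not a hard-coded list.
  const AI_CRAWLER_HINTS = ["GPTBot", "ClaudeBot", "CCBot", "PerplexityBot"];

  app.use((req, res, next) => {
    const ua = req.headers["user-agent"] ?? "";
    const looksLikeAiCrawler = AI_CRAWLER_HINTS.some((hint) => ua.includes(hint));

    if (looksLikeAiCrawler) {
      // Shift the bot toward the happy path: one consolidated document
      // instead of a page-by-page crawl.
      res.redirect(302, "/LLMs.txt");
      return;
    }
    next();
  });

  app.get("/LLMs.txt", (_req, res) => {
    res.type("text/plain").send("Consolidated site content for LLM consumption goes here.");
  });

  app.listen(3000);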
As other comments mention, bot traffic is overwhelmingly malicious. Being able to cheaply distinguish bots and add friction makes your life as a defending team much easier.
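And a rough sketch of the "add friction" half, assuming the same kind of Express app: a per-IP request budget that ordinary browsing never hits but aggressive scrapers hit quickly. The numbers are arbitrary, and a real deployment would key on better signals than IP and keep counters in a shared store rather than process memory:

  import express from "express";

  const app = express();

  const WINDOW_MS = 60_000;      // budget window: one minute
  const REQUEST_BUDGET = 30;     // max requests per IP per window

  const counters = new Map<string, { count: number; windowStart: number }>();

  app.use((req, res, next) => {
    const key = req.ip ?? "unknown";
    const now = Date.now();
    const entry = counters.get(key);

    // Start a fresh window for new or expired entries.
    if (!entry || now - entry.windowStart > WINDOW_MS) {
      counters.set(key, { count: 1, windowStart: now });
      return next();
    }

    entry.count += 1;
    if (entry.count > REQUEST_BUDGET) {
      // Cheap friction: tell over-budget clients to back off.
      res.set("Retry-After", "60").status(429).send("Over budget, slow down.");
      return;
    }
    next();
  });

  app.listen(3000);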