AI Studio's system prompt doesn't seem to persist across chats, so I'd guess it's just a prompt that the attached chat has limited ability to override?
1. With too much protection, humans might be inconvenienced at least as much as bots?
2. Even before current LLMs, paying (or otherwise incentivizing) humans to solve CAPTCHAs on someone else's behalf (now, perhaps, an AI's) was a thing.
3. It depends on the value of the resource being accessed, regardless of whether generating the CAPTCHAs costs $0: if the resource is "worth" $1 to the AI's operator, then spending $0.95 to get past the CAPTCHA would still be worth it. (Made-up numbers; my point is just whether the value exceeds the cost.)
4. However, maybe services like Cloudflare can solve much (most?) of this, except for the case of incentivizing humans to solve a CAPTCHA posed to an AI.
> an LLM vs human interacting with websites would be fairly easy to spot since the LLM would be more purposeful - it'd be trying to fulfill a task, while a human may be curious, distracted by ads, put off by slow response times, etc, etc.
Even before modern LLMs, some scrape detectors looked for instant clicks, no random mouse movement, etc., and some scrapers countered by adding random delays, randomized mouse movements, and so on.
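To illustrate what I mean, here's a toy sketch of that kind of human-mimicry (my own made-up function names, not from any real scraping library): randomized pauses between actions, and a noisy cursor path instead of teleporting straight to the target.

```python
import random
import time

def human_like_delay(min_s=0.8, max_s=3.5):
    """Sleep for a randomized interval to avoid the 'instant click' tell."""
    delay = random.uniform(min_s, max_s)
    time.sleep(delay)
    return delay

def jittered_mouse_path(start, end, steps=10, jitter=5):
    """Generate a noisy straight-line cursor path between two points,
    rather than jumping directly to the target coordinates."""
    (x0, y0), (x1, y1) = start, end
    path = []
    for i in range(steps + 1):
        t = i / steps
        x = x0 + (x1 - x0) * t + random.uniform(-jitter, jitter)
        y = y0 + (y1 - y0) * t + random.uniform(-jitter, jitter)
        path.append((round(x), round(y)))
    return path
```

Detectors in turn look at the statistics of those delays and paths (real humans aren't uniformly random either), so it's an arms race on both sides.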
It seems like iOS is still fairly aggressive about killing background apps, a dozen years after the Nokia 625? I can rarely be confident that, if I go off to look something up, a half-written comment will still be there when I come back.