
All of these LLMs already have the ability to fetch content themselves; I'd imagine they'd just skim your URLs and then do their own token-efficient fetching. When I use research mode with Claude it sometimes crawls over 600 web pages, so I imagine they've figured out a way to trim down a lot of the actual page content before it reaches the token context.


I made my own browser extension for that. It uses Readability and custom extractors to save content, but also summarizes the content before saving, and it has a blacklist of sites not to record. Then I made it accessible via MCP as a tool, or I can use it to summarize my activity over the last 2 weeks and have it at hand with LLMs.
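The save pipeline described above (blacklist check, readable-text extraction, summarize before persisting) could be sketched roughly like this. All names here are hypothetical, the summarizer is a stand-in for an LLM call, and the real extension would use Readability in the browser rather than receive pre-extracted text:

```python
# Hypothetical sketch of the comment's pipeline: check a blacklist,
# summarize the readable text, and only then store the record
# (which an MCP tool could later query).
from dataclasses import dataclass
from urllib.parse import urlparse

BLACKLIST = {"bank.example.com", "mail.example.com"}  # sites never recorded

@dataclass
class PageRecord:
    url: str
    title: str
    summary: str

def summarize(text: str, max_chars: int = 200) -> str:
    # Placeholder for an LLM summarization call; here we just truncate.
    if len(text) <= max_chars:
        return text
    return text[:max_chars].rsplit(" ", 1)[0] + "…"

def save_page(url: str, title: str, readable_text: str, store: list) -> bool:
    host = urlparse(url).netloc
    if host in BLACKLIST:
        return False  # blacklisted: record nothing
    store.append(PageRecord(url, title, summarize(readable_text)))
    return True

store: list[PageRecord] = []
save_page("https://bank.example.com/login", "Bank", "private stuff", store)
save_page("https://blog.example.com/post", "Post", "A long article " * 50, store)
print(len(store))  # → 1: only the non-blacklisted page was saved
```

Summarizing before saving keeps the stored archive small, which is what makes "summarize my last 2 weeks of activity" cheap to feed to an LLM later.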



