I wonder how effective it is these days, though; I'd say 70% of my RSS feeds are either truncated forms of a full article (with a 'click here to continue reading!' link) or just summaries.
That's what the scraper scripts are for. For each site that does this, I have a bit of code which visits the article URL and pulls out the full content.
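The per-site code usually amounts to fetching the page and applying a site-specific selector. A minimal sketch, assuming Python with the requests and beautifulsoup4 packages; the URL and the CSS selector here are hypothetical and would differ for each site:

    import requests
    from bs4 import BeautifulSoup

    def fetch_full_article(url: str, selector: str) -> str:
        """Fetch the article page and return the text matched by a CSS selector."""
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        body = soup.select_one(selector)
        return body.get_text(separator="\n", strip=True) if body else ""

    # Each truncated feed gets its own selector, e.g.:
    # fetch_full_article("https://example.com/post/123", "div.article-body")

The selector is the only part that really changes per site, which is exactly the customization discussed below.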
This is re-implemented by so many people. (HN user megous, https://news.ycombinator.com/item?id=13226170, Dec 2016.) It would be nice to find a way to share this work, but the variety of tools in use often makes the customizations too site-specific to reuse (or maybe that's just the most common reason creators feel there's no point in sharing?).
Is anyone aware of any repositories where the per-site extraction rules needed to pull out the important content are maintained by a community?