Please, expose your RSS (rknight.me)
302 points by rknightuk on Dec 10, 2023 | 110 comments



Hot tip: YouTube channels do expose an RSS feed; you can subscribe to a channel by just pasting its URL in your news reader.

You might find RSS feeds in other unexpected places! Use an extension to put the "subscribe via RSS" button back in your browser.

<https://addons.mozilla.org/en-US/firefox/addon/awesome-rss/> <https://chromewebstore.google.com/detail/rss-subscription-ex...>


Apparently that also applies to individual playlists if you don't want to subscribe to a whole channel.

https://www.youtube.com/feeds/videos.xml?playlist_id=PLpg6WL...
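
Channel feeds use the same endpoint with a channel_id parameter instead of playlist_id. A small sketch, with placeholder IDs:

    # A sketch with placeholder IDs: both feed flavours hang off the same
    # videos.xml endpoint, only the query parameter differs.
    def youtube_feed_url(channel_id=None, playlist_id=None):
        base = "https://www.youtube.com/feeds/videos.xml"
        if channel_id:
            return f"{base}?channel_id={channel_id}"
        if playlist_id:
            return f"{base}?playlist_id={playlist_id}"
        raise ValueError("need a channel_id or a playlist_id")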


A lot of RSS/Atom is left unexposed for two key reasons:

1) Google Reader and friends disappeared, and 2) Browsers stopped supporting RSS feeds natively, including discovery of a feed.

Put the functionality back in browsers where it belongs; it could at least be used to find feeds, and then you could use newsbeuter or something similar to subscribe.


It was intentional. RSS is a way to sidestep advertising.

Guess who makes the most widely used browser today.


> It was intentional. RSS is a way to sidestep advertising.

I would argue that's only part of the story. After all, RSS quickly morphed into a link with a short blurb and an image, just like any modern link aggregation site, rather than a full page of content.

The big issue with RSS was that it prevents site lock-in like Facebook etc. Even Reddit pushes people to the comments to try to keep you on the site as much as possible, in order to have you see the link aggregator's advertisements and, more recently, to provide more robust data collection for later sale.


I'm willing to advocate in favor of the short blurbs. I follow a lot of 'dev blogs' by RSS and my homegrown feed reader also only shows the short blurbs. I do this because all these people crafted their (personal) sites, made them beautiful and personal, and I want to visit their site to pay homage and look at what they made (whether that's a cool logo, a nice color scheme, etc.) exactly in the way they made it. My ad blocker already blocks the ads I don't want to see (if there are any externally loaded ads).

It also has the benefit that site owners can now see somebody is subscribed to the feed. I'm visiting their website and they might see my traffic in their visitor statistics. To make this even clearer to the site owner, I add a URL parameter called /?rss_ref=heyhomepage.com


I advocated for the exact opposite[0]

There’s no point in serving only excerpts. By serving full content, the people like you who like to read on the original site can still do that.

Serving blurbs forces everyone to do that and that’s wrong imo.

[0] https://manuelmoreale.com/rss-excerpts


Yeah, there's definitely something to be said for giving people a choice, so I'm going to keep this in the back of my head. For my system it feels weird to show the full content. I don't trim the feed description (yet) and sometimes these are very, very long, which looks awful with the rest of the UI.

What I also do is filter out all the markup and even links; I basically only show a few sentences of text. It sits wrong with me to make my reader a sort of browser window and display all the markup.

Beautifully simple site you have, by the way. And I found I'm already subscribed to your feed.

"RSS is not a notification system. It's a distribution system." you say on your site. Was that the purpose of the inventors? I guess I take the freedom to use it slightly different then, we're on Hacker News after all. ;) We can agree to disagree.


> there's definitely something to say for giving people a choice

This is why I took over maintenance of the Feediron plugin for the TT-RSS feed reader. I really like having the flexibility to define what content I feel is relevant on a feed-by-feed basis.


I'm not sure if it was intended to be a distribution system, but it sure is today. Especially because the web and the browser already have built-in notification tools we can leverage if what we want to do is send notifications.

But I can see why not all situations allow for sending full content via rss.

I still believe that IF you can send the full content you should do so.


It’s weird: our publishing clients don’t seem to care about having an RSS feed exposed, but Google for sure doesn’t want it. Honestly surprised they haven’t come up with some BS site-speed reason to remove it from HTML.


Firefox dropped RSS/Atom support too


Many of Mozilla's bad decisions with Firefox boiled down to "we don't have someone to maintain this" or "our analytics show people don't use this".

Shit gets removed, people get upset, and they're told to deal with it. They didn't consider that the people who used the removed features also didn't send telemetry data, so they made a dumb decision based on incomplete and inaccurate data instead of doing something sane like appealing to the community and inviting people to maintain or test it.


That some corporate services stopped existing does not imply one cannot have an "application/rss+xml" link in the HTML page source - in fact, it is not normal to depend on perishable services when "unrevokable" desktop software applications can be available, and in the case of RSS readers a large number still exist (and writing one is pretty trivial).

To discover a feed, just check the html source. The problem is when the feed exists but it is not mentioned at all in the website.
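
For example, a minimal autodiscovery sketch (standard library only; a real reader handles many more edge cases) could look like this:

    # Scan the page for <link rel="alternate" type="application/rss+xml"> (or
    # atom+xml) and resolve the href against the page URL.
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    class FeedLinkFinder(HTMLParser):
        def __init__(self):
            super().__init__()
            self.feeds = []

        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if (tag == "link" and a.get("rel") == "alternate"
                    and (a.get("type") or "").lower() in FEED_TYPES and a.get("href")):
                self.feeds.append(a["href"])

    def discover_feeds(page_url):
        html = urlopen(page_url).read().decode("utf-8", errors="replace")
        finder = FeedLinkFinder()
        finder.feed(html)
        return [urljoin(page_url, href) for href in finder.feeds]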


I once found that my website's RSS button did not appear. I did not understand why, until I checked the ad blocker. At least two lists still have rules to block RSS icons (AdGuard Widgets, EasyList Social Widgets). They are not enabled by default though.


Kind of ironic for the 'social widgets' one considering RSS is pull-only.


Native RSS support never made sense as a core feature for browsers in the first place; extensions are much better suited for this. There are plenty of replacements for Google Reader as well, which also provide much more than just RSS.

In media optimized for engagement/addiction, RSS provides a step off the platform, so of course they don't use it. Even many of the personal blogs the article mostly talks about might want to keep the reader on the site for clicks.

It's a conflict of interest between content providers and consumers (mostly advertisement). It's not an implementation issue.


Please, also expose that you're subscribed to a blog's feed. I sometimes send an email to site owners to compliment them on a nice article I just read. Sometimes I leave a short comment here on HN to let somebody know I subscribed to their feed. I hope it encourages people who put energy and heart into quality content to keep making that content. I try not to overdo that here on HN, because it adds little to the general discussion.

What I also do with my homegrown RSS reader (see https://www.heyhomepage.com if you're interested) is add a parameter to the links I click, like /?rss_ref=heyhomepage.com. Site owners get my traffic and they can easily see where it comes from if they happen to look at their visitor statistics.
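
A rough sketch of that tagging (standard library only; the parameter name is just my own convention, not a standard):

    from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

    def add_rss_ref(url, ref="heyhomepage.com"):
        # Append rss_ref while preserving any query parameters already present.
        parts = urlparse(url)
        query = parse_qsl(parts.query) + [("rss_ref", ref)]
        return urlunparse(parts._replace(query=urlencode(query)))

    # add_rss_ref("https://example.com/post?id=42")
    # -> "https://example.com/post?id=42&rss_ref=heyhomepage.com"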


> Site owners get my traffic and they can easily see where it comes from if they happen to look at their visitor statistics.

I suggest also putting the name of your reader and the total number of subscribers into the HTTP request (as the user agent), if possible.

https://darekkay.com/blog/rss-subscriber-count/
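
A sketch of what that could look like from the reader's side. The reader name, info URL, and count below are made up, and the string format is only in the spirit of what the big hosted readers send, not a formal spec:

    import urllib.request

    def fetch_feed(url, reader="MyHomegrownReader/0.1", subscribers=12):
        # Advertise the reader and its subscriber count in the User-Agent.
        ua = f"{reader} (+https://example.com/reader-info; {subscribers} subscribers)"
        req = urllib.request.Request(url, headers={"User-Agent": ua})
        with urllib.request.urlopen(req) as resp:
            return resp.read()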


If I had more users than just a couple, I would definitely look into doing that. But my feed reader is part of a website system (together with its own feed) that sits nicely separated on a different domain for each client. I would have to centralize things to even get this number.

Somewhere in this same thread I said: "Exposing the number of subscribers from large RSS readers is a nice addition, but it also slightly nudges one toward centralization." On second thought, I think it's better to just show this number if possible. It's better for RSS as a whole; it shows it's still alive.


My understanding is that, because of the way the various aggregators work, it is basically impossible to quantify the audience of RSS feeds, and that’s a good thing.


Getting an exact number is impossible, that's what I wrote in the linked blog post as well. But every RSS reader that requires user accounts knows the exact numbers, and 4 of the main readers expose those numbers via HTTP requests.


From my personal digging [0], RSS is way too noisy. Just a wild guess, but the 4 main readers are not where the majority of RSS users are.

[0] https://manuelmoreale.com/poking-around-my-server-logs


RSS can also be a bit inconsistent. Sometimes people put content in the feed. Sometimes it's just a link. Of course, if it's a link, programmatically scraping the content usually requires specialized code per site, unless you are okay with various random garbage in your data (e.g. generalized scrapers exist, but they might include header text from the site or similar).

I suspect that with the advent of LLMs, there's finally a market for this kind of stuff. People can sell data, including articles, as a data source for people's ML pipelines. It's a possible path for us to detox from ads.

So yes, please start with making RSS better! It's a beacon of light in a sea of darkness.


I don't think RSS needs to "improve" for the benefit of scrapers.


I want to run/build my own clients. Scrapers can get data no matter what.


Clients for what? There are RSS readers; they mostly don't scrape into RSS, beyond rendering the content.


I wrote a program last year that automatically downloaded articles I cared about and allowed me to print them in a booklet I could read when I woke up. Right now, you've got to view content in a web page and it's not very transportable.

Another use case is converting articles into audio for listening on the go.


NewsBlur has a switch to choose between two views: feed or text. The text view is generally able to fetch the full article from the website even if the feed only contains an abstract.


> I suspect with the advent of LLM, there's finally a market for this kind of stuff. People can sell data, including articles, as data source for people's ML pipelines.

You mean RSS will improve copyright laundering?


I've built an opinionated RSS reader/tracker around this idea - it works with links only. This way, you have a consistent experience, and the author gets visits to their website.

You can give it a go here: https://lenns.io. I'd be happy to get some feedback. Thanks.


Seems interesting, but can you provide more information on the pricing or your business model? Investing in an RSS web client means giving up some control vs. a local (self-hosted) client, so it would be good to plan and decide from the beginning.


> Sometimes people put content in the feed. Sometimes it's a link.

This is the reason I gave up on trying to use RSS.

If I'm going to have to follow a link anyway, I'll just save myself the time and go directly to the websites I want to read.


RSS solves the problem that web portals with updating content are web pages that you have to grok in their rendered form. RSS allows such a web page to offer a summary of the updating items in a more structured form that users can consume via a feed reader.

The idea is that checking an Inbox-like reader dashboard with 7 feeds is less time consuming than separately going to 7 websites and scrolling around.

RSS readers keep track of which items are new. They can filter items, mark them "read" and delete them. They offer searches through the items.

Using the original site instead of its feed doesn't guarantee you don't have to follow any links.

Look at this HackerNews: you have to click on links in the front page to get to the submissions, which are in different sites. Gee, how useless; if I have to click, I might as well just go directly to those websites and skip this HackerNews thing.


> Look at this HackerNews: you have to click on links in the front page to get to the submissions, which are in different sites. Gee, how useless; if I have to click, I might as well just go directly to those websites and skip this HackerNews thing.

This analogy breaks down when you take into context what the purpose is for both sites.

HN: I want to discover new content that likeminded people find interesting.

RSS: I want to read what this person has to say specifically.


It's not just an analogy, because you can have an RSS feed of HackerNews. You're no worse off clicking on links in a HackerNews feed, than clicking on links in HackerNews.


> if I have to click, I might as well just go directly to those websites and skip this HackerNews thing.

You accidentally arrived at the root of the anti-RSS rationale for many HN readers.

They don't want to actually go on the web. They want to participate in the comments section, where it's safe and sanitized; no paywalls, no 'sign up for my email list' popups, no analytics scripts, no strange site layouts.

They feel RSS should deliver exactly what they want: full-text article and images, nothing more.


I _don't_ particularly want to participate in the comments section on most sites. Full text and image content of the article and nothing further is absolutely what I want out of an RSS feed - I should not have to leave my reader or deal with the internet at large to read the things that interest me.


With just article text and images, you have no UI to reply to it.

Maybe RSS could be extended to FRSS (Forum RSS). Forums export posts in FRSS format, which includes elements that tell the reader how to submit replies, as well as up/down votes. The RSS reader would render its own UI for doing these things.


Isn't that throwing the baby out with the bathwater? In that case, RSS already tells you whether there is new stuff and what it is. If you follow a dozen websites, not having to visit all of them each time is already useful.

Or do all the websites you follow produce new content so often that there's always something new when you check?


I've been meaning to get around to an RSS pipeline tool that runs each link through a chatbot summarizer and stuffs the summary in the description field.

Might get around to it this Christmas, but I'm throwing it out there in case I can lazyweb it into existence.
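
A rough sketch of the shape of such a pipeline, assuming the third-party feedparser library plus two hypothetical helpers (fetch_text to pull the linked article, summarize to call whatever LLM you prefer):

    import feedparser

    def rewrite_descriptions(feed_url, fetch_text, summarize):
        feed = feedparser.parse(feed_url)
        items = []
        for entry in feed.entries:
            article = fetch_text(entry.get("link", ""))   # grab the linked page
            items.append({
                "title": entry.get("title", ""),
                "link": entry.get("link", ""),
                "description": summarize(article),        # stuff the summary back in
            })
        return items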


RSS is there to inform you of new content being released - it is a "push" notification.


One hackneyed reason that some folks give for not wanting to expose their RSS is wanting some level of legibility into their subscribers. I recently learned that many large RSS readers actually expose high-level analytics numbers [^1] so you can get an estimate of RSS readership that way, too. Would love for more readers to support this functionality: as far as I can tell across all of the RSS feeds my product exposes, the only supporting clients of this faux protocol are NewsBlur, Feedly, Feedbin, and inoreader.

[^1]: ht to Darek Kay's blog post, https://darekkay.com/blog/rss-subscriber-count/, for alerting me to this fact!


Exposing the number of subscribers from large RSS readers is a nice addition, but it also slightly nudges one toward centralization.

I try to let site owners know I'm subscribed to their feed by adding a URL parameter like /?rss_ref=heyhomepage.com. They might or might not see this RSS referrer in their own visitor statistics. I also do not consume all the article's content solely in my reader; I show a short summary and then click the link to the article. This way I can enjoy their (personal) site and they can see my traffic more clearly.


This is also common for podcast clients with server-side crawling, which is basically all of them.


The easiest solution for this would be to just check the server requests. My reader is terminal-based, and adding support for JS analytics kind of defeats the purpose.


To be clear, the approach outlined above _is_ based on just HTTP; there's no javascript involved.


As autodiscovery is often broken or missing on many sites, my feed reader Temboz falls back to testing these suffixes in the forlorn hope there may be an RSS or Atom feed hiding somewhere:

      'feed', 'feed/', 'rss', 'atom', 'feed.xml',
      '/feed', '/feed/', '/rss', '/atom', '/feed.xml',
      'index.atom', 'index.rss', 'index.xml', 'atom.xml', 'rss.xml',
      '/index.atom', '/index.rss', '/index.xml', '/atom.xml', '/rss.xml',
      '.rss', '/.rss', '?rss=1', '?feed=rss2',
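
A sketch of that kind of probing (not Temboz's actual code), with a trimmed suffix list:

    import urllib.request

    SUFFIXES = ['feed', 'feed/', 'rss', 'atom', 'feed.xml', 'index.xml',
                'atom.xml', 'rss.xml', '.rss', '?rss=1', '?feed=rss2']

    def guess_feed(site_url):
        # Try each candidate and keep anything whose Content-Type or leading
        # bytes look feed-ish.
        for suffix in SUFFIXES:
            candidate = site_url.rstrip('/') + '/' + suffix
            try:
                with urllib.request.urlopen(candidate, timeout=10) as resp:
                    ctype = resp.headers.get('Content-Type', '')
                    head = resp.read(512)
                    if 'xml' in ctype or b'<rss' in head or b'<feed' in head:
                        return candidate
            except OSError:
                continue
        return None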


I actually made https://hn-blogs.kronis.dev a while ago, based on RSS/Atom feeds that people shared on HN; it's still running in the background and pulls in the latest posts daily. There are links to the original HN post and a blog post that I made about it all on the site, but here's the post directly: https://blog.kronis.dev/articles/ever-wanted-to-read-thousan...

It was certainly interesting to do and I love that the technology exists; however, a lot of people handle integrating it differently. Sometimes the feed type is misreported, other times you can't quite get all of the metadata about the feed or items you'd like, and sometimes the whole thing just throws a network error, or someone has control sequences in the text that break your attempts at parsing the XML.

I'm convinced that the larger the scale of your dataset, the more potential issues you're going to run into; at some point, if any type of error can occur, you're going to have to deal with it.


I felt like this was aimed at me[1], so I added an RSS icon in my page's footer.

https://github.com/gavinanderegg/gavinanderegg.github.io/com...

I had assumed that an "application/rss+xml" link would be sufficient, but I get that folks would likely not assume that exists on all sites these days. As someone who reads blog posts mostly through RSS, I'm very happy to make this more explicit!

[1] https://mastodon.social/@gavinanderegg/111362850402497489


Thanks, my page was also lacking in exposing my RSS feed. I'm HTML-stupid, so I massaged what you did for your site.

That's two of us now that have exposed our RSS.


I use Miniflux, and if I'm looking for a feed I just toss in the domain. Three out of four times it finds something at /rss, /feed, or something else.

Would be great if it was exposed. I presume it’s not because it’s baked into frameworks and comes along for free, but maybe that’s naive.


The fact that RSS was suppressed will never cease to make me angry.


Suppressed by whom? Google might have killed their newsreader, but RSS never stopped, because it's not dependent on Google (hurray!). I'm very happy WordPress automatically adds a feed to every site it powers.

I see it as a filter. All the stupid clickbait content goes somewhere else, and all the quality content - where the individuals who made it have skin in the game - sits nicely in my feed reader. It's a blessing in disguise, if you ask me.

Yes, you have to put extra energy in curating a nice collection of feeds. But isn't that valid for everything good in life? This energy compounds and pays back in no-time.

Long live RSS, long live the open web!


Tangential question, since I use feeds but am not an expert in this: why do some feeds lose their articles after some time while others don’t?

Say I have two feeds from two different sites. The newsreader shows only the most recent (10 or 20 or whatever) articles for one feed, whereas it shows all the articles from the time I subscribed for the second one. I know that the feed XML itself contains only a limited number of articles in both cases whenever the reader requests a refresh.

How can I make sure that the newsreader preserves (does not purge) the older articles in the first case above (I use NetNewsWire on macOS and Mozilla Thunderbird on Windows)?


That'd be a per-reader, per-feed "Max Entries to Save" setting - which may have default values that differ from reader to reader, from "feed collection" to "feed collection" (if you create tree hierarchies in your reader), and might (I don't recall) take defaults from individual feed values (unless overridden in the reader).

It's all very implementation dependent.

I mainly use FeedBro as a browser extension - everything is a per-feed value that can be set to override FeedBro defaults; in FeedBro I can bundle all the different news sites' (BBC, ABC (AU), Fox, CNN, etc.) RSS feeds under a "News" heading but cannot set a single Max Entries value that is applied to all the sub-feeds - other implementations (different readers) can.


Well, to answer your initial question, RSS is XML, meaning for the most part it relies on serving the entire document every time to create a valid feed file. As such, as the number of articles grows, every request for the feed grows in size. So clipping it to the latest stuff helps relieve load on the server.

RSS is also commonly used to serve podcasts, and I've been told by a number of podcast hosts that they like this because it prevents someone from sticking the RSS feed in their podcatcher and blasting the server with requests for every episode all at once.

I'm guessing that a lot of RSS generators have this as a default, and many of the people with their RSS set up like this simply never change the default.


Problems with exposing RSS feeds:

- People do not feel the need to expose it; as more and more RSS clients are killed, it is not an often-used feature

- I don't think RSS clients are killed because it is a faulty standard. It is a problem with deliverability. There is no state-of-the-art RSS client. Most RSS clients are not feature-rich, not even Thunderbird. People also do not know how to manage RSS feeds, or how to obtain them

- I think that people use social media and link aggregators to see 'what's new', not RSS feeds

- People that self-host do not focus on details, often not even configuring their pages properly. OG fields are not set, the page title is not explanatory, weird redirects are used with invalid HTTP statuses, JavaScript page loading. If there is a problem with these basics, then other things are treated even worse, RSS included

- RSS feeds are often not even named properly. Naming your RSS feed or page "Blog", "Title", or "Home" is erroneous, in the same manner that you would not title a book you wrote "My Book"

RSS readers I am aware of:

- https://newsboat.org/

- https://www.rssbrain.com/

- https://theoldreader.com/

- https://www.freshrss.org/

- https://readwise.io/read

- thunderbird

- https://github.com/rumca-js/Django-link-archive - RSS reader, web scraper, which I wrote


I definitely see the author's points, but I disagree with the proposed solution. Feed discoverability should be a basic browser feature, not something web developers need to actively implement, nor some information that users have to scrape themselves from the DOM.

I made an extension back in the day that parses the <link> tags and puts back the feed icon where it's supposed to be (on the right side of the URL bar), and also renders the feeds properly rather than just spitting out the raw XML: https://addons.mozilla.org/en-US/firefox/addon/rss-viewer/

This is how the web used to work 10 years ago. The <link> tag has a purpose: it instructs browsers that the current page has a feed, so the browser can parse it and show a feed button to the user. Both Firefox and Chrome used to work this way 10 years ago.

Then motherfucking evil Google began its war against feeds. First it killed Google Reader, then it stopped complying with the <link> conventions, removed the feed icon from their browser and stopped rendering RSS/Atom. Firefox quickly complied too.

Let's make it clear once and for all: yes, I can add an RSS URL to my website to make discoverability easier, but it's not my job as a web developer to explicitly make feed discoverability easier by modifying my frontend. That's just a workaround. My job as a web developer should be to provide the right <link> element, and then the browser should know what to do with it. Just like the browser is supposed to know what to do with scrollbars if my divs exceed the window size.

And, as a user, I shouldn't write my custom JavaScript to parse the feed URL from the DOM. Just like I'm not expected to write my JS user script to render the title of the page. If browsers refuse to provide such basic features, then it's a browser problem. I would say "choose a browser that natively supports feeds", but there's none left. So use extensions to mitigate the impact of feed-hostile browser politics.


Thanks for the extension, looks great!

Sadly it doesn't work with the https://news.ycombinator.com/rss feed, as it forces a download, IIUC.


>> if you're going to add an RSS button, please ensure it looks like an RSS button and is in RSS orange

> This is an excellent idea and I have done so here.

I don't know if this was sarcasm or whether it's just a bit silly that literally the next line of text on the page is a link (the link?) to subscribe—with the RSS icon—in white on magenta.


I also chuckled. The orange does something extra for visibility, but I can't stand that color (even though I'm Dutch). The 'Wifi icon on its side' is enough for me.


Yeah, I don't like the colour either. I also can't really justify adding an orange logo on my site when it's almost entirely in black and white otherwise.


FYI there is an orange RSS icon in the header


Fortunately I found an iOS reader that will go and hunt for reclusive RSS/ATOM feeds, but the author's right - why make it difficult - just put a real link up!


RSS is being pushed toward oblivion first of all because it allows you to build personal aggregators to follow news, instead of relying on third-party big tech ones. Secondly, because feeds are less interesting for ad clicks. Last but not least, because you can easily archive feeds, which makes it hard for the source of the feed to change the content, unpublish it, etc.


I have been using RSS to get all my tech news (other than here, perhaps) for over a decade.

However, for at least the past 5 years, many tech sites have implemented "dark patterns" to hinder the experience and to nudge you to open their website. The "click here to read more" approach, or just giving the summary outright, does not help with the goal of having a feed.

I get that in today's age, where everyone wants to gatekeep their "content", it is an easy way to scrape data (I truly hope this does not become a trend). At the same time, it is so cheap to maintain that even a static site running on solar can manage it (https://solar.lowtechmagazine.com/feeds)


Shoutout for The Old Reader (https://theoldreader.com/) as a great online RSS reader ($20 per year) which sprung up in response to Google Reader shutting down and is still going strong.


Check out FreeRSS; it's an iGoogle clone. Just Add Widget, paste in the URL of the website, and it auto-detects any RSS feeds on the site.

https://freerss.robdelacruz.xyz/


When I was building this https://github.com/outcoldman/hackernews-personal-blogs it was crazy how inconsistent people are with their links to RSS feeds.


Are there any guides on how to do RSS correctly?

I just dump all of our blog content in there (250 posts) and it's a lot of content, haha. I don't know how you're supposed to set it up and there are so few guides on it.


I wrote one a while ago and have been maintaining it since: https://kevincox.ca/2022/05/06/rss-feed-best-practices/

It was fairly popular on HN when it first went live: https://news.ycombinator.com/item?id=31293488

It has a section on discovery: https://kevincox.ca/2022/05/06/rss-feed-best-practices/#disc...


I thought this might be an interesting read but was super put off to be greeted with a long screed moralizing over my choice of browser. I have 3 different browsers installed that I use for various situations and reasons. Getting shamed by an internet stranger is not one of them.


Thanks for sharing


Other reply has good advice. But beyond that I'd advise 1) specifically using the Atom format, which is more likely to display correctly, and 2) just including the latest 10 posts.


Why only 10? Why limit it at all?


It's a good idea to limit it somewhere so you don't end up sending 10 MB every time it's fetched. The feed will be re-fetched to check for updates so the cost isn't paid just once.


It is paid only once, since that 10 MiB gets cached by the reader. In future fetches the reader asks for entries newer than the date it last checked, which means that the items already requested won't be sent again.


My understanding was that most feed endpoints just sent the whole thing back each time, or nothing (304 Not Modified, when conditional headers are included). This allows server and CDN caching of the response.
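
A sketch of that conditional fetch using only the standard library; the server answers 304 with an empty body when the validators still match:

    import urllib.request
    from urllib.error import HTTPError

    def poll(url, etag=None, last_modified=None):
        # Send back the validators from the previous poll, if we have them.
        headers = {}
        if etag:
            headers["If-None-Match"] = etag
        if last_modified:
            headers["If-Modified-Since"] = last_modified
        req = urllib.request.Request(url, headers=headers)
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.read(), resp.headers.get("ETag"), resp.headers.get("Last-Modified")
        except HTTPError as e:
            if e.code == 304:
                return None, etag, last_modified   # unchanged; keep the cached copy
            raise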


RSS works in the framework of users periodically checking for new content (maybe daily, maybe weekly...): you have to decide on a window that gives users a reasonable number of new and old posts.


Anyone out there generating fediverse feeds for their static site?


I don't think this would be practical for a static site. You still need to maintain a list of followers of your account somewhere and that needs to be dynamic if you want it to work the way people expect it to where they follow you from other instances.

Assuming you kept the @ list of accounts through some other means, and had your WebFinger set up with your public key, you could, after creating new content, sign the publish events and push them to those followers.

I don't know of anyone doing this though.


I think it was Mark Johnson of the Linux Matters podcast who was talking about how he's experimenting with integrating his blog with the fediverse, using AWS Lambda for the dynamic parts: https://linuxmatters.sh/16/

I also read another blog about it a while back, but essentially you have the same issue as comment systems, where it can't be static; there needs to be some server component handling the back and forth.


Reddit recently nerfed their RSS feeds. They no longer contain the post content, so all you can do now is click the link.


Please, stop emailing me to ask about my blog's RSS URL when these tags are already in every page of my site.


I used to generate RSS feeds for all the sites I build for clients. Those were good times - 2003.


Is there any service that scrapes arbitrary sites into an RSS feed?


Noice...

Check out my RSS feed of AI characters and chatbots getting created: netwrck.com/rss.xml


I developed a new web RSS reader (PWA): https://www.qireader.com You're welcome to try it.


That looks great! Is it open source?


It does have a GitHub repo, but without source code, so probably not.

https://github.com/oxyry/qireader


This looks really nice


RSS doesn’t move the needle. Only a super tiny, insignificant number of people care about RSS. Social aggregators like HN and Reddit won. RSS lost.

I have a blog and try my best to maintain the RSS feed. But if we’re being honest it’s a waste of time.

I stopped using RSS readers when half my subs required me to open a browser tab to go to the actual website to view the actual content. Webcomics were particularly annoying about this.

I mean I get it. The ideal RSS feed has no ads which means no revenue for people working hard to create interesting things I want to read! Alas.


I never actually expected my rss reader to provide a place for viewing the content. I treat it as a notification service. There is usually enough info in the subject line for me to decide whether I want to open the notification in a browser or dismiss it as read.

In addition to a few blogs, I use rss for news pages (including hacker news front page) and to get notified about new package releases in pypi and rubygems. For the package releases, I just mark them as read as soon as I upgrade my virtualenvs.


> Social aggregators like HN and Reddit won. RSS lost.

Social aggregators are great for discovery, they can let the community surface good items, but they suck for subscription. Once I have found authors that I like I want a way to reliably see their content, not hoping that it happens to be popular on the site at around the time that I happen to visit. Unless you are refreshing your feed hourly you are going to miss a lot of content from your favourite authors, and even the most addicted users won't see it all. As an author I also appreciate loyalty of subscribers more than viral surges, but I understand that the money is probably better with the viral surges.


"Expose" was the key word here. A ton of sites already have feeds and just don't make them easy to get to. It's honestly not hard.

It's also a category error to compare RSS and social aggregators.


I'm saying that social aggregators killed RSS. Not that they're equal in use case. Where "killed" means "some minuscule percentage that is larger than 0 but less than an arbitrary threshold of my choosing".


Where do you think we find the posts we submit to HN?


If the target audience for a blog is HN readers, then RSS is probably worth it (since HN readers use RSS, myself included).

The biggest issue I have with RSS is that by the time I find interesting blogs, they’ve stopped posting, so I never see updates in their feeds. But that’s an issue with blogs, not RSS.


I don’t agree — RSS was only ever really meant to show the title and a few paragraphs, maybe a photo — exactly like Reddit does. I for one would hate to scroll through that much content.


Talking about RSS in terms of 'winning and losing' is exactly why I use RSS over the mainstream social media. Not everything is a competition.


> I stopped using RSS readers when half my subs required me to open a browser tab to go to the actual website to view the actual content.

The RSS client that I use, InoReader, has an option to fetch full content if it is just a blurb. So I never have to deal with such annoyances.


FreshRSS and Miniflux both have the option. The former even lets you customize some CSS filters to clean up the stuff it retrieves.


> The ideal RSS feed has no ads which means no revenue for people working hard to create interesting things I want to read!

I don't see a problem with this at all; the best things on the internet are free and open source.

Everyone gets content for free, the author gets more views, and everybody wins.


As an independent content creator I just called my rental agency and asked them if they would take ‘views’ as partial payment and I can confirm that they will not.

Bummer!


And what quality content are you creating to contribute to this?


> The ideal RSS feed has no ad

Isn't that like the ideal anything-else? What's specific about RSS on ads?


RSS users are the Linux gamers of the podcast & blog world - tiny amount of people but all the problems in the world.


Ok, as both of these… this made me laugh


> Reddit won. RSS lost.

Reddit supports RSS.



