Note that a relay is a perf optimization and doesn’t have to be a single shared chokepoint.
These days running a relay is fairly cheap (~$30/mo?), there’s maybe a dozen of them, and some apps don’t use one at all (instead relying on services like https://constellation.microcosm.blue/ for querying backlinks).
Atproto isn’t “many servers sending messages to each other”. It’s structured more like RSS:
1) there’s an app-agnostic hosting layer (and anyone can run a host, a bit like personal site with RSS)
2) then there’s apps, which aggregate over data from all hosts (a bit like Google Reader or Feedly)
So there’s no such thing as “defederating”. You don’t have many copies of Tangled beefing with each other. It’s more like you can run your own hosting for your own data (if you want), and anyone can build an app that aggregates from everyone’s data (Tangled is one such app).
> Atproto isn’t “many servers sending messages to each other”. It’s structured more like RSS
Except that, crucially, RSS/Atom plays well with static nodes (e.g. personal websites generated with Jekyll/Hugo/whatever—or even written by hand[1]), and Atproto does not. (Nor does Mastodon; previously: <https://news.ycombinator.com/item?id=30862612>.)
It'd be great if the complexities needed to support the "Atmosphere" were widely recognized/acknowledged as overkill and soon enough went the way of things like CORBA and WSDL, while in their place a resurgence of interest in the Atomsphere emerged.
Atom was designed for news, before social media existed, where 15+ minute polling times were (borderline) acceptable. Atproto was designed for social media, in an age of Twitter users getting their news in seconds, to the point of being able to comment on live events play-by-play. There's no coming back from that world.
With that said, I wish both Mastodon and Atproto supported opt-in pull-based, static sources.
> Atproto was designed for social media, in an age of Twitter users getting their news in seconds, to the point of being able to comment on live events play-by-play.
And this is widely recognized by now to have been a very bad thing, even/especially those most susceptible to its draw. It's strange that you're framing it as a strength and not a lament.
> There's no coming back from that world.
You can't say that when everyone just begs the question and shoves application-server-needed-here protocol designs to the fore.
There's always some Gemini protocol faction that shows up to yell that everything is wrong and we have to keep assembling our packets by hand or it'll never work.
Atproto's PDS is the root idea that everything extends off of; it's the "social filesystem" that you control. There's a protocol objective to be able to spread your data around widely and for folks to be able to cryptographically check that that data came from you (even if you have to change hosts, or even if someone sneakernets your data around). That's going to have some complexity! But it allows aggregation, and it's essential to how we are able to syndicate data so widely in atproto. It's so important it's in the name: Authenticated Transfer protocol.
And that in turn enables systems like Tangled here to be built, which layer atop the personal data servers and relays. These work because there is identity.
If you need your static site to be on atproto (yay!), you can just have one of the various PDS hosts (such as Bluesky or eurosky or Blacksky or npmx) host the PDS for you. Since it is authenticated and user sovereign, you can permissionlessly move to a different host whenever you please, should that go awry. It's unclear to me why static site needs are an interesting or useful target that social networking ought to conform to.
If you want to make a simpler network where we don't have those guarantees, please go right ahead. It feels to me like a snap reaction, though, that doesn't bother weighing what we have gotten or why things are this way, and that is reflexively demanding.
> If you need your static site to be on atproto (yay!), you can just have one of the various PDS hosts (such as Bluesky or eurosky or Blacksky or npmx) host the PDS for you. Since it is authenticated and user sovereign, you can permissionlessly move to a different host whenever you please, should that go awry.
This seems to defeat the purpose of the relative sovereignty that hosting a static site gives you compared to depending on a PDS.
> It's unclear to me why static site needs are an interesting or useful target that social networking ought conform to.
Your data is still signed by you, and you still have the keys to move your PDS no matter what happens to your host. Do you have an actual threat model or reason why you are so afraid / unwilling to accept any compromise?
Your lack of a reply at the end, refusing to support basically your entire ask with even a modicum of supporting cause, feels a bit vindicating: it suggests you are indeed a hostile agent, not here to engage or discuss but to throw bombs.
The web is already structured like this. You can poll a URL for updates. You can host your own data. Anyone can build an app that aggregates from everyone's data.
Yes, all of those things are possible. Now imagine a protocol built from the ground up for those purposes: not just making them possible, but having the entire community and ecosystem embrace them.
We've tried that, multiple times: the Semantic Web, "everyone has an API", and more before and after. None of them gained sufficient traction to stick around and be built on top of.
There’s no such thing as “running a domain” or “atproto provider” in atproto. You’re approaching it with a Mastodon/AP mindset and it doesn’t match that.
In atproto, there’s two axes.
One is hosting. Bluesky offers hosting but some people host on their own (it’s just a Docker container with sqlite), some on Cloudflare, some on community-hosted nodes like https://npmx.dev and https://selfhosted.social. From app perspective it looks exactly the same way (unlike in Mastodon where “hosting” = “choosing a community”) and you can switch hosting anytime.
Another axis is apps. Apps aggregate data from all hosts. Bluesky is an app, Tangled is an app, Leaflet is an app, Wisp is an app, Semble is an app, and so on. Those can all aggregate over the same data (which enables cross-app interop) but they don’t have to (eg Bluesky doesn’t overlap with Tangled much, except that Tangled can reuse your Bluesky avatar on login). Generally you don’t have people running copies of the same app (as in Mastodon), which is why there aren’t many “blueskys”. But when someone has an incentive, they can. (Eg Blacksky is a complete fork including server and DB, allowing their own moderation decisions over the same data.) Similarly, you can build your own app on top of distributed Tangled data.
Hope that helps clarify why “atproto provider” as a concept doesn’t make sense. You have hosting, which is as distributed as you want, and you have apps, which anyone can make.
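One concrete consequence of the hosting/apps split described above: every host exposes the same XRPC interface, so an app reads a user's records the same way no matter which PDS holds them. A minimal sketch of building the `com.atproto.repo.listRecords` URL an app would fetch (the hostname, DID, and collection below are illustrative placeholders, not real accounts):

```python
from urllib.parse import urlencode

def list_records_url(pds_host: str, repo_did: str, collection: str, limit: int = 50) -> str:
    """Build a com.atproto.repo.listRecords XRPC URL for a user's repo.

    The same URL shape works against any PDS host, which is what lets
    apps stay agnostic of where a user's data actually lives.
    """
    params = urlencode({"repo": repo_did, "collection": collection, "limit": limit})
    return f"https://{pds_host}/xrpc/com.atproto.repo.listRecords?{params}"

# Illustrative values; swapping the host changes nothing about how the app reads.
url = list_records_url("pds.example.com", "did:plc:abc123", "app.bsky.feed.post")
print(url)
```

Switching hosting providers only changes the hostname in the first argument; the app-side logic is untouched.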
So does the Bluesky app have control over what data it aggregates, and can it decide (without checking with a user) not to aggregate data from a host? I am trying to understand the implications for a user, and a bad scenario where one would disagree with an action of the app.
And if the answer is "yes", then at least when someone "makes their own app", can they easily use the "Bluesky hosts list" and add special extra hosts (or remove specific hosts), so that the app relies on the platform except on the point of disagreement?
An app can choose to ignore/ban some users (or even entire hosting servers if they’re specifically created for network abuse). This is similar to how any web app may choose to ignore POST requests from spammers.
And yes, someone can decide to aggregate data themselves and provide an alternative app over same data with different moderation policies. In fact that’s already the case (Blacksky runs their own application server that mostly piggybacks on Bluesky moderation decisions but overrides some of them. There are also clients that ignore moderation altogether and show you the raw data from hosting.)
Not really. From my understanding, in AP, your account belongs to an instance and your data is then synced to other servers. If the instance goes down, your account is gone.
In ATP, your data is stored in the "Atmosphere", hosted on decentralized "Personal Data Servers" (PDS). The app then simply parses and filters that data. They can apply moderation actions by choosing not to display or read certain posts, but your data still exists and another app could choose to display it. Similarly, if the app goes down, your data is still perfectly intact in the Atmosphere.
It might then seem like the PDS is equivalent to an AP instance, but as mentioned, they are decentralized. Identity is verified through signatures, so if your PDS goes down, you can migrate to a new one as long as you have your signing keys. Therefore, the account belongs to you and not any specific server.
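The migration story above rests on the DID document: a handle resolves to a DID, and the DID document lists both the signing key and the account's current PDS endpoint, so moving hosts just means updating that entry. A sketch of pulling the PDS endpoint out of a DID document (the document below mirrors the shape returned by the PLC directory, but its values are illustrative, not a real account):

```python
def pds_endpoint(did_doc):
    """Return the PDS service endpoint listed in a DID document, or None."""
    for service in did_doc.get("service", []):
        # PLC DID documents label the hosting entry with this service id.
        if service.get("id") == "#atproto_pds":
            return service.get("serviceEndpoint")
    return None

# Illustrative DID document; a real one comes from https://plc.directory/<did>.
example_doc = {
    "id": "did:plc:examplexamplexample",
    "alsoKnownAs": ["at://alice.example.com"],
    "service": [
        {
            "id": "#atproto_pds",
            "type": "AtprotoPersonalDataServer",
            "serviceEndpoint": "https://pds.example.com",
        }
    ],
}

print(pds_endpoint(example_doc))  # -> https://pds.example.com
```

Apps resolve this indirection on each lookup, which is why a host migration doesn't disturb identity: the DID stays the same and only the `serviceEndpoint` changes.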
You're interpreting my post with the assumption that I don't know what I'm talking about. You don't need to explain the protocol to me.
Domain here referred to the area of influence or control, like what the provider of a relay effectively has. The fact that other groups can run any element of the infra themselves doesn't change the fact that the drift towards centralization is much greater with ATP than with AP.
ATP has its own uses (quick aggregation), but it doesn't even attempt to solve the fundamental issues of the current ecosystem of social networking. AP, on the other hand, offers the foundation for further development in the right direction.
A new hosting provider can preemptively request known relays to crawl it. Or relays (or apps) can lazily discover it when the user hosted there tries to log in for the first time, or when their data is linked to by a known user. It’s similar to the relationship between websites and search engines.
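The "preemptively request known relays to crawl it" step maps to a single XRPC call, `com.atproto.sync.requestCrawl`, which a new host sends to each relay it wants to be indexed by. A sketch using only the standard library (relay and PDS hostnames are illustrative; the request is built but not sent, to keep the example side-effect free):

```python
import json
import urllib.request

def request_crawl(relay_host, pds_hostname):
    """Prepare a com.atproto.sync.requestCrawl call asking a relay to index a PDS.

    Returned as a Request object; call urllib.request.urlopen(req)
    to actually send it.
    """
    body = json.dumps({"hostname": pds_hostname}).encode()
    return urllib.request.Request(
        f"https://{relay_host}/xrpc/com.atproto.sync.requestCrawl",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = request_crawl("relay.example.com", "pds.example.com")
print(req.full_url)
```

A host would typically send this once per relay when it comes online, much like submitting a sitemap to a search engine.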
Hosting providers don’t need to discover other hosting providers. Data only flows between hosting and apps; not between hosting and hosting or apps and apps.
Understatement, probably. Your blog posts are so far the best introduction I've seen to ATProto. Is there any tagging I missed that collects them all in one place?
ActivityPub and atproto are differently shaped. Pitting them against each other is like asking “why need web when we have email”.
ActivityPub is email-shaped. Servers are inboxes sending messages to each other.
atproto is web-shaped. User repositories host data (like personal sites or git/RSS), while apps aggregate from repositories (like Google Reader).
Different topologies lead to different properties. Eg atproto lets user change hosting with no disruption in app experience. atproto also lets anyone build new apps aggregating over existing data.
ActivityPub doesn’t allow either of those things. It’s literally a bunch of small centralized coupled hosting+app services messaging each other.
Calling AP services a bunch of small "centralized" services in this context removes all the meaning from that term. You might as well call any web server centralized while comparing them to clouds.
Proper federation is exactly such a bunch of small services messaging each other. On the other hand, what ATProto leads to is at most a handful of large-scale providers, each running their own portion of the network.
There’s a clear difference in architecture between
1) a layer of app-agnostic hosting providers + a separate independent layer of apps aggregating over data from those (like personal sites with RSS + aggregators like Google Reader)
2) a circle of flat instances where each node couples app+hosting (like many little Twitters)
One doesn’t couple hosting with apps; the other does.
Mastodon/AP model is (2), atproto model is (1). You should be able to see the outcomes from different network shapes.
In atproto, you can build a new app that works with existing data, but in AP you can’t. In atproto you can move hosting with zero effect on your identity or how you show up in apps, in AP you can’t.
Is it really so hard to write your articles by yourself? The blandest tone imaginable, all the usual LLM tells in the sentence structure. You are polluting HN and the broader internet by posting this publicly.
Honestly... I think I've been reading too much AI content, because I 100% wrote that. I do use AI to help make outlines and gather thoughts. But the article was written by me.
Did you do any kind of AI assisted proofreading or grammar? So much of the structure of the article screams AI.
Stuff like this:
> Each one of these, on its own, is just a bug. Together, they’re a culture.
And the headings starting with "The"
AI seems to have adopted a style reminiscent of startup marketers circa 2020 - really simple, lots of one liner quips and far too much incredulity about minor things. Now we've come full circle!
I do usually do a pass through Grammarly, and there are some times when I ask AI for help getting my point across. But I always try to change what they write into something I feel I would say.
If you're being honest, I apologize. I find it a bit difficult to believe but maybe it's really that the style is everywhere now. Partially it's sentence pattern cliches:
>Not just sell. Not just ship. Use.
>The honest read ... The hopeful read ...
>The grumbling isn’t about features. It’s about the texture of using the products.
>Yes, Apple Silicon is incredible. Yes, the Watch saved lives. Yes, the iPhone got better cameras
There's also bizarre not-quite-landing uncanny metaphors that LLMs love to do:
>Today's Apple ships friction and treats it like background radiation.
>The texture changed.
>And the rot follows that exact line.
If you're surrounded by this kind of writing, it may be good to get other inspirations. It's bad!
English is not my first language, although I am OK with it. Whenever I write, I always pump it through an AI first with a prompt of "Make it better English", especially if it's a business email with English-speaking clients.
I enjoyed your article and shared it on my family-geek-whatsapp group
The good old "this is clearly photoshopped" of the 2020s era.
LLMs got their inspiration from popular sources written by humans. Now humans are exposed to LLM output on a daily basis. It seems only natural that writing done with or without LLMs tends to converge to the same style.
I don’t think there are AI slop tells anymore. Humans have been reading AI slop for a couple of years now, so it’s plausible enough that any one person will have picked up a couple of AI-like phrases.
This moaning is so fucking boring. Yeah, so what. For all we know, this moan could just be some random openclaw instance digging for karma. I come here for discussion, not to read 50% “AI slop” comments.
Does Google Reader help you make sense of it? It’s more like each app is like its own Google Reader. And indeed you were able to access the same posts via other apps at that time of outage.
I’ll just say that as someone who was on the React team throughout these years, the drive to expand React to the server and the design iteration around it always came from within the team. Some folks went to Vercel to finish what they started with more solid backing than at Meta (Meta wasn’t investing heavily into JS on the server), but the “Vercel takeover” stories that you and others are telling are lies.
Gosh, Dan, in seeing your response here - I'm truly sorry I wrote this. While I still find opt-out telemetry distasteful and dangerous, I over-generalized to React in a hurtful way. You've been an incredible influence on me and I have the utmost respect for everything you've done. I've shown quite the opposite of respect in my writing, here.
For whatever it's worth on the RSC front: I, and many others accustomed to "if there's a wire protocol and it's meant to be open, the bytes that make up those messages should be documented", were presented with a system, at the release time of RSC, that was incredibly opaque from that perspective. There's still minimal documentation about each bundler's wire protocol. And we're all aware of companies that have done this as an intentional form of obfuscation since the dawn of networked computing; it's our open standards that have made the Internet as beautiful as it is.
But I was wrong to pin that on your team at Vercel, and I see that in the strength of your response. Intention is important, and you wanted to bring something brilliant to the world as rapidly as possible. And it is, truly, brilliant.
I should rethink how I approached all of this, and I hope that my harshness doesn't discourage you from continuing, through your writing, to be the beacon that you've been to me and countless others.
Hey, appreciate the reply! I’m sorry for lashing out as well.
Re: protocol, I see where you’re coming from although I can also see the team perspective and I kind of like it the way it is. The team’s perspective is that this isn’t a “protocol” in the sense that HTTP or such is. It isn’t designed to have many implementations emitting it. It is an implementation detail of React itself for which React provides both the “writer” and the “reader”. Those are completely open source — none of the RSC frameworks need to know the actual wire format. They just use the packages provided by React to read and write. Keeping the protocol an implementation detail of React (rather than an “open format”) lets React evolve it pretty substantially between versions with zero concerns about backwards compat. Which is still quite useful at this stage.
When you’re concerned about wire format not being an open protocol, it’s because you’re imagining it would be useful for others to read and write. But this doesn’t really buy you anything. If you’re making an RSC framework, you should just use the React packages for reading and writing. And if you’re not, there’s no reason to use this format. Eg if you make an RSC-like framework in another (non-JS) language, it’s better for you to use your own wire format that’s more targeted to your use case.
Does this help clarify it?
(Note I do think it would be beneficial to document the current version for educational reasons, which is part of why I made https://rscexplorer.dev, but that’s separate from wanting it to be fixed in stone as protocols must be. I think the desire for it to be fixed is rooted in a misconception that frameworks like Next.js somehow “rely” on the protocol and thus have “secret knowledge”, but that’s false — they just use the React packages for it and stay agnostic of the actual protocol.)
Coincidentally I just watched OBAA yesterday and found it very lacking. I’m so surprised by the positive reception. Great visual, acting and music, but I found almost no emotion in it because none of the conflicts it sets up actually resolve on screen. Characters don’t confront consequences of their choices and don’t grow.