
I find the nostr[1] protocol to be a nice alternative here. The core idea is the separation of message content from the relays which transport them. So you sign a message and publish it to several relays, and then anyone can retrieve your message from any of the relays where it is published.
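As I understand NIP-01, an event's id is just the sha256 of a canonical JSON serialization of its fields, which is what makes a signed message independent of any particular relay. A rough sketch, where the pubkey is a dummy value and the actual BIP-340 Schnorr signature (which needs a secp256k1 library) is omitted:

```python
import hashlib
import json

def nostr_event_id(pubkey, created_at, kind, tags, content):
    """Compute an event id per NIP-01: sha256 of the canonical
    JSON array [0, pubkey, created_at, kind, tags, content]."""
    payload = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"), ensure_ascii=False,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

# A hypothetical text note (kind 1). "sig" would be a Schnorr
# signature over the id; left out here.
event = {
    "pubkey": "a" * 64,        # placeholder, not a real key
    "created_at": 1700000000,
    "kind": 1,
    "tags": [],
    "content": "hello nostr",
}
event["id"] = nostr_event_id(event["pubkey"], event["created_at"],
                             event["kind"], event["tags"], event["content"])
# Once signed, the identical event can be sent to any number of relays
# as ["EVENT", event] over a websocket; every relay is interchangeable.
```

Because the id and signature cover the whole event, any relay (or peer) can verify the message without trusting the relay it came from.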

As opposed to Mastodon, your identity is tied to your pubkey rather than the instance you created your account on. The difference from Scuttlebutt is the split between relays and clients, and the lack of chaining messages together in a "blockchain"-like structure.

The result is a fairly simple protocol that is low latency and seamless for many applications compared to other decentralized solutions.

[1] https://nostr.com/



> Learn about Nostr: A simple, open protocol that enables a truly censorship-resistant and global social network.

> Relays are like the backend servers for Nostr. They allow Nostr clients to send them messages, and they may (or may not) store those messages and broadcast those messages to all other connected clients.

I still haven't wrapped my head around these two quotes from the nostr site. How can it be truly censorship-resistant when users are entirely dependent on what their relays decide to store and share?


I'm not particularly familiar with the project, but:

The claim is that it enables social network/s that are resistant to censorship. It doesn't say that it is a social network that's impossible to censor.

It's a protocol that everyone is free to implement. Valid implementations of the protocol should be compatible with each other. Meaning you don't have to go to soshl.com or use the Soshl app. Relays can choose whether or not to store/broadcast your messages, yes. But they can't stop you from sending them and they can't fake or modify your messages. They also can't revoke your identity. I assume a relay can't force other relays to delete things, but I'm not sure of the details.

A relay choosing to retransmit your message or not could be considered censorship, but it is not the same censorship as a court ordering Youtube to take a video down, which is not the same as Twitter silently burying your posts, which is not the same as the police arresting protestors, etc...


Users publish their messages to multiple relays, and they can be followed by others on multiple relays. The idea is that if a relay starts censoring, then users and their followers can choose different relays, so the system as a whole “routes around censorship”. At least in theory.


I interpreted it in a similar yet different way. It's not just that users can choose a non-censoring relay, but that users are connected to many different relays, so it doesn't matter if a few relays censor stuff.


This is an interesting detail to me and may just depend on how the community grows.

In Mastodon each server makes its own decision on who to censor. Functionally it's grown into a bit of a herd mentality, with entire servers being censored if they don't go along with the larger group.


Messages are signed locally with an entirely local private key, so relays can censor or ban all they like following local laws, without harm to your identity or social graph.

Hopefully someone, like a Raspberry Pi cluster on a derelict Soviet military space station on an international orbit, will take your message and relay it, :shrug:, and those messages can still be authenticated by the key.

That’s how I understand that part. Nostr uses DNS for blue badges and relay connections, so “truly” does sound a bit like crypto talk, though.


Do you know if nostr supports peer-to-peer messages, or does everything have to go through a relay?

If it's all through a relay it probably isn't very censorship resistant at all. My messages may be signed locally but if the network trends towards mob-based bans and censorship like Mastodon as soon as I get on the wrong side of the network I'll be the only one seeing my signed posts


I believe it's all through relays; there's no peer discovery or even inter-relay meshing. So that could very well happen. Currently it's just each client multi-posting to a dozen relays and receiving a dozen duplicates.

What I believe "censorship resistance" actually means, in un-cryptocurrencified talk, is that it lets relay operators filter out locally illegal but not globally unethical content (e.g. political speech, pornography, certain URLs and strings) without banning users and/or fragmenting the network. And that is an improvement over the Mastodon/ActivityPub architecture.

But boy, those crypto guys know how to hype it up...


>If it's all through a relay it probably isn't very censorship resistant at all. My messages may be signed locally but if the network trends towards mob-based bans and censorship like Mastodon as soon as I get on the wrong side of the network I'll be the only one seeing my signed posts

Well, on their page they say: "To publish something, you write a post, sign it with your key and send it to multiple relays (servers hosted by someone else, or yourself)."

If you really want to prevent that, the answer is to also self-host.


Good question. Not sure about nostr, but that is how WebRTC does it. If there are only 2 people then it is peer-to-peer; more parties require some sort of 'relay' to facilitate the chats.

However, the main issue with peer-to-peer is that not everyone is going to be online all the time; a relay serves as temporary storage until the user/client comes online and gets the message.


I tried nostr but found it hard to find anything except people sending micro transactions to each other and posting memes about Bitcoin.


That's because bitcoiners created it and Jack Dorsey boosted it and abandoned Twitter for it. He's been extremely active there, https://primal.net/jack

And the idea of using nano transactions for upvotes has long been attractive in this community as a way to keep bot swarms from influencing conversations. In fact, the proof-of-work system that bitcoin was based on was originally proposed in the 90s as a solution to fight email spam. A micro cost in computation power (or a nano transaction) is negligible to most legitimate users, but adds up quickly for spammers who depend on sending out thousands of emails for a single hit.
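The hashcash idea is simple to sketch: grind a nonce until the hash of the message falls below a target, so each message costs a tunable amount of compute to send but is trivial to verify. A minimal illustration (not nostr's actual NIP-13 encoding; the email address is just an example):

```python
import hashlib
from itertools import count

def mine(message: str, difficulty_bits: int) -> int:
    """Hashcash-style proof of work: find a nonce such that
    sha256(message + nonce) has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

# ~2**12 hashes on average: cheap for one message, ruinous at spam volume.
nonce = mine("hello@example.com", 12)
digest = hashlib.sha256(f"hello@example.com:{nonce}".encode()).digest()
```

Verification is a single hash, which is the asymmetry the anti-spam argument rests on.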


Isn't nostr still an append-only solution?


Yes, you delete by appending a delete event:

https://github.com/nostr-protocol/nips/blob/7d792055372ce1af...

Then clients and relays SHOULD act as if the referenced event is deleted. This is SHOULD rather than MUST, probably because there’s no way to guarantee it happens. It’s also possible the delete event doesn’t make it everywhere the initial event was received.
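As I read NIP-09, the deletion request is itself an ordinary signed event of kind 5 whose "e" tags point at the events to delete; roughly (the field values here are placeholders, and "id"/"sig" would be computed and signed as for any other event):

```python
# A hypothetical NIP-09 deletion request. Relays that honor it SHOULD
# stop serving the referenced events, but nothing forces copies held
# elsewhere to disappear.
delete_event = {
    "kind": 5,
    "pubkey": "<author pubkey>",          # must match the deleted events' author
    "tags": [
        ["e", "<id of event to delete>"], # one "e" tag per event to delete
    ],
    "content": "posted by mistake",       # optional human-readable reason
}
```

So "delete" is really "append a retraction and hope everyone cooperates".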


DELETE is never a MUST when it's sent to someone else's computer.


I demand you delete your comment!


Social media sites are being siphoned up by various intelligence services, data brokers, quant trading firms. Everything you delete always has a window where it can be permanently recorded by someone. Even on HN. Anything you delete/edit isn't necessarily gone forever, even if it is no longer on the server you posted to.


If I were one of these creepy organizations, “deleted, edited, or not posted” would have to be a pretty strong signal that I might be interested in it.


It's a key reason why I don't attempt to delete social media posts. I've said a lot of stupid shit over the years, and whether I like it or not, that stupid shit is permanent; any attempt to take it down would only serve to make it more interesting and more worthy of preservation. Better to let that stupid shit linger unnoticeably - or even better, embrace that I'm (hopefully) a better person than I was $N years ago.

See also: the Streisand effect


> It's a key reason why I don't attempt to delete social media posts. I've said a lot of stupid shit over the years...any attempt to take it down would only serve to make it more interesting and more worthy of preservation.

Only if someone cares enough about your social media to be watching for deletions, which I'd imagine is pretty rare. If no one's watching now, now is the right time to delete something.

> See also: the Streisand effect

I think it's important not to overstate that effect, as there are lots of scenarios where deletion will not draw more attention to something. For instance: you're a rando deleting something that's not interesting, not currently being observed, or something no one will notice when it's gone.

The trope namer got hit by it because 1) she's a celebrity, and 2) removal would result in a noticeable gap.


> removal would result in a noticeable gap

It would, if someone were looking in detail, but that isn't what the Streisand Effect is. That effect is the demand to stop/remove something that draws attention to it, not the actual removal (if it is indeed removed at all).

If the photo of her property had been quietly removed by the photographer I and most others would likely never know anything about it, much like you and most of the rest of the world wouldn't know I deleted a post on my blog that was so full of embarrassing typos that I couldn't be bothered to correct it. I know about it because of the heavy-handed demand that it be removed from that collection.


Right, but how do I know that nobody's watching? Indeed, the idea that nobody's watching seems far-fetched; even if that somebody is some Internet archival robot or NSA monitoring tool or advertising algorithm, it seems more likely than not that all but the most recent posts would draw more attention if deleted than if left alone, whether I'm a nobody or a celebrity or somewhere in between.


> Right, but how do I know that nobody's watching?

A better way to think about it is: do you have good reason to think someone is actively watching you or will go through the effort to collate your social media posts against some archive (that will be incomplete unless you're an active target)? If you don't, delete away.

If you're still concerned, delete some old stuff randomly to distract from the stuff you really want to delete.

> even if that somebody is some Internet archival robot

Those can't even scrape everything people actually want to keep, let alone comprehensively scrape every social media post from everyone. There are huge gaps.

I actually was peripherally involved in some volunteer efforts to archive a bunch of stuff around the Hong Kong protests and the Afghanistan withdrawal. IIRC, they didn't even try to scrape Facebook posts (because FB was doing so much to thwart scraping, and there may have even been problems with Twitter). There are well-developed tools for YouTube, though.


I don't get why there is no inter-relay communication. From what I read it would just work, but it is not allowed for some reason.


How is it different from Secure Scuttlebutt?


Seems neat. Has anyone written an ActivityPub bridge for nostr?


Yes, there’s one called Mostr


A very apt name for a bridge


Users don't use protocols; they use websites. Or more specifically, apps.


Exactly! It's amazing how few people understand this. Signal is the perfect example: there's a lot of technical magic wrapped up in it, but my Dad can use it.


WhatsApp rolling out end-to-end encryption to its 1b+ users was the most significant move in digital privacy ever, without most users knowing that it even happened.


This is 100% true. And all apps go through an App Store approval process, which makes all the censorship talk kind of moot.

Sideloading has to become a default, somehow without us returning to antivirus era. Problem is that no one knows how to make that happen.


It's funny how common jailbreaking iPhones used to be. I did mine immediately whenever a new jailbreak came out, and everyone else I knew did too. Now I don't know a single person who does.


You know what, that’s a great insight. I got the iPod touch precisely because it could be unlocked into a pocket Unix with a shell. 15 years later, it still is and isn’t. Like, lol, not at all, ha ha. It was a giant, near-irrecoverable blow to computing freedom.



