If you're an amputee I truly am sorry for you and hope the handicap hasn't disrupted your life too much.
Jokes (...?) aside though, your absolute deference to precision is an example of why metric flies over people's heads. Feet, Tokyo Domes, arguably even nautical miles and so on are relatable at a human level, unlike metric, which is too nice and clean.
This sort of argument is odd to someone in a country which uses both, where a yard is intuitively "a bit smaller than a metre", a pint corresponds to a pint glass or "about half a litre" rather than anything meaningful, and I'm aware that a rod and a furlong are things but have absolutely no idea what they correspond to. A foot is comfortably bigger than the average foot size, and an inch really isn't an easier unit to approximate than a centimetre.
The SI was specifically aimed at reducing such meaningless discussions, yet we still have big-endian versus little-endian comparisons, long after the dust settled.
One meter is about one long step for an adult. To approximate the length of a field, you just walk along it with big steps and count. It won't be exact, but it will be pretty close. A cm is a little bit smaller than the width of your index finger. It's all about what you are used to. Metric doesn't "fly over people's heads" where metric is the standard way to measure things, but inches, feet, gallons, pounds, and miles fly over our heads because we are not used to them and so have no frame of reference.
The idea is that you can send a message to a complete stranger. How do you know you didn’t send it to the wrong stranger? Suppose the user doing the lookup (by username) gets the wrong public key and sends a message to the wrong person? That’s a man-in-the-middle attack. Redirecting someone’s messages to someone else is pretty bad.
People mostly don’t use real names on Mastodon, so essentially, you are your username. Impersonating a username is like impersonating a domain name for a website.
> The idea is that you can send a message to a complete stranger. How do you know you didn’t send it to the wrong stranger?
Are you talking about the directory server in isolation, or how it's going to be integrated into the E2EE for the Fediverse proposal? It's not clear from your comment which use case you're imagining.
The thing is: with the E2EE proposal, you don't just encrypt something to a complete stranger based on the Directory Server's contents. Instead (sketched in code after the list):
1. You poke their instance for their user ID, and which directory servers they use.
You can have further client-side checks, gossip protocols about username<->ID mapping, etc. to keep the instance honest about which ID maps to which user. I just don't want to deal with GDPR's "right to be forgotten" and usernames (which may be PII) at this layer.
2. You poke the directory server for the public keys and proofs of inclusion in the SigSum ecosystem, including co-witness signatures.
3. Your software verifies these proofs, automatically, client-side. It refuses to move forward if anything is amiss.
4. Now that you have an Ed25519 (or post-quantum) signing key, you then query the instance for one of their SignedPreKey bundles.
5. You verify the signature on the SignedPreKey bundle, then use that to establish untrusted communication with the user in question.
After step 5, if you want to verify their identity, something like Signal's safety numbers can be implemented on top to allow out-of-band verification of user trust.
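For concreteness, here's a rough Python sketch of steps 1-5 from the client's point of view. The fetch_* helpers and verify_sigsum_inclusion are hypothetical stand-ins for whatever API the instance and directory server actually expose (they're stubbed out here); only the Ed25519 check, done with PyNaCl, is concrete.

```python
# Rough client-side sketch of steps 1-5. The fetch_* helpers and
# verify_sigsum_inclusion are hypothetical stand-ins, not a real API;
# only the Ed25519 verification (PyNaCl) is concrete.
from nacl.signing import VerifyKey
from nacl.exceptions import BadSignatureError


def fetch_user_id_and_directories(instance: str, username: str):
    """Step 1 (hypothetical): ask the instance for the user's ID and
    the directory servers they use."""
    raise NotImplementedError


def fetch_keys_and_proofs(directory: str, user_id: str) -> dict:
    """Step 2 (hypothetical): get public keys, SigSum inclusion proofs,
    and co-witness signatures from the directory server."""
    raise NotImplementedError


def verify_sigsum_inclusion(bundle: dict) -> bool:
    """Step 3 (stand-in): verify the inclusion proofs and co-witness
    signatures; return True only if everything checks out."""
    raise NotImplementedError


def fetch_signed_prekey_bundle(instance: str, user_id: str) -> dict:
    """Step 4 (hypothetical): fetch one of the user's SignedPreKey bundles."""
    raise NotImplementedError


def establish_contact(instance: str, username: str) -> dict:
    user_id, directories = fetch_user_id_and_directories(instance, username)
    bundle = fetch_keys_and_proofs(directories[0], user_id)
    if not verify_sigsum_inclusion(bundle):
        raise RuntimeError("inclusion proof failed; refusing to continue")

    identity_key = VerifyKey(bundle["ed25519_public_key"])
    prekey = fetch_signed_prekey_bundle(instance, user_id)
    try:
        # Step 5: the SignedPreKey bundle must be signed by the identity key.
        identity_key.verify(prekey["serialized"], prekey["signature"])
    except BadSignatureError:
        raise RuntimeError("bad SignedPreKey signature; refusing to continue")
    return prekey  # ready for an X3DH-style handshake with this bundle
```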
It's important to take a step back and think about what the instance admins can and cannot do to mess with this process.
Scenario A. A new user wants to enroll in E2EE messaging. The instance admin can pretend to be them and push an AddKey on behalf of the user, thereby locking them out of enrolling. Evidence of this becomes immutable and public, and the user can mitigate this by getting the hell off that instance.
Scenario B. A malicious admin wants to snoop on the private messages of a user that has previously enrolled. The protocol forbids AddKey messages from being published by anyone but the user that possesses an incumbent signing key, which the instance admin does not have. They are thwarted by this policy.
Scenario C. A malicious admin remaps the user's ID and enrolls a new AddKey, and selectively reveals one ID to some users and another to other users. This would be a challenge to pull off in a useful way, since the API to get a user's ID from the instance (to look up in the Directory Server) will not necessarily reveal who's asking for it.
Third parties that publicly map user IDs and broadcast them on Fedi can mitigate this issue by alerting users to a discrepancy. Although this would be annoying to integrate with a transparency log, you could run a watchdog service that your client software polls to ensure others see the same ID (sketched below). Even something as simple as automatically posting "user@instance is now at <random id here>" every time it changes would make this sort of misuse easily discoverable.
This is out of the scope of the Directory Server design, but worth considering for the overall E2EE system. Since the blog post was focused on the Directory Server design, I didn't feel like speculating when I wrote it.
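As a very rough illustration of that watchdog idea, here's a sketch that assumes a made-up ID-lookup endpoint; the path and the announce mechanism are not part of any real API, just placeholders.

```python
# Hypothetical watchdog: poll an assumed ID-lookup endpoint and announce
# whenever a username's ID changes, so divergent answers become visible.
# The endpoint path is made up for illustration.
import time

import requests


def current_id(instance: str, username: str) -> str:
    # Assumed endpoint; a real watchdog would use whatever lookup API
    # the E2EE proposal actually exposes.
    resp = requests.get(
        f"https://{instance}/api/e2ee/user-id/{username}", timeout=10
    )
    resp.raise_for_status()
    return resp.json()["user_id"]


def watch(instance: str, username: str, announce, interval: int = 3600) -> None:
    last = current_id(instance, username)
    while True:
        time.sleep(interval)
        now = current_id(instance, username)
        if now != last:
            announce(f"{username}@{instance} is now at {now}")
            last = now


# Example: watch("example.social", "alice", announce=print)
```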
Scenario D. A malicious admin remaps the user's ID and enrolls a new AddKey, but instead of selectively revealing one at a time, replaces them for all future DMs for everyone. Client software can easily detect this attack by noticing that the user's ID has changed, but the user didn't publish a MoveIdentity to their new ID.
Scenario E. A user loses all of their secret keys and wants to start the enrollment over again. The user asks the admin to perform the exact same steps as a malicious admin from Scenario D, but for legitimately honest reasons (i.e. the AddKey for the new identity actually came from the user).
Ultimately, distinguishing Scenario D from Scenario E is a social problem, not a technology problem. Detecting the behavior doesn't necessarily mean malice.
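To make that detection concrete, here's a minimal sketch under assumed record shapes (the LogEntry type, the "MoveIdentity" kind, and the cached-ID bookkeeping are illustrative, not the actual wire format). It flags the unexplained ID change but, as noted, cannot tell D from E.

```python
# Minimal sketch of the Scenario D/E check. The record shape is assumed
# for illustration; it is not the actual protocol's wire format.
from dataclasses import dataclass


@dataclass
class LogEntry:
    kind: str      # e.g. "AddKey" or "MoveIdentity"
    old_id: str
    new_id: str


def id_change_is_explained(old_id: str, new_id: str, log: list[LogEntry]) -> bool:
    """True if the log contains a MoveIdentity from old_id to new_id."""
    return any(
        e.kind == "MoveIdentity" and e.old_id == old_id and e.new_id == new_id
        for e in log
    )


def check_contact(cached_id: str, observed_id: str, log: list[LogEntry]) -> None:
    if observed_id == cached_id:
        return
    if id_change_is_explained(cached_id, observed_id, log):
        return  # the user moved their own identity
    # Scenario D or E: warn and require out-of-band verification
    # (e.g. comparing safety numbers) before sending anything.
    raise RuntimeError("user ID changed without a MoveIdentity; verify out of band")
```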
I'm thinking end-to-end. I think it's important to put the directory server into context, or you could end up building the wrong thing, or perhaps building something more complicated than it needs to be.
In particular, to decide what's in scope, it's important to distinguish between users and their client software, and to decide what the user's input is to the client software. The user has to somehow get their intention across to their messaging software about who they want to contact.
For TLS, there's an assumption that people know which domain name they want already. Figuring out the right domain name is out of scope; that's a social thing. A lot of times, it's part of a URL. Maybe they're trusting Google. Or when sending email, we assume the email address is already known to the user, somehow.
For TLS, the IP address doesn't matter, because the domain name is part of the URL and part of the certificate and what matters is which server holds the private key. Securing DNS isn't part of the system [1] and transparency logging isn't needed for IP address changes.
Domain names are kind of trust-on-first-use. People hear about a domain name somehow and the important thing is that the owner doesn't lose control of their domain name after they've publicized it and people start linking to it. Similarly, here, I think the important thing is the user doesn't lose control of their username unless they really screw up, like in the scenario you describe.
I don't entirely follow what you're doing and I'm not sure if a directory needs transparency logging as a directory. It's sort of like DNS. The sensitive bit is generating the public/private key pair. But I think maybe you've combined roles, so that the directory server is also the certificate issuer?
About Scenario A:
> The instance admin can pretend to be them and push an AddKey on behalf of the user, thereby locking them out of enrolling. Evidence of this becomes immutable and public, and the user can mitigate this by getting the hell off that instance.
There will be lots of AddKeys happening all the time. If the AddKey doesn't contain the username and they don't recognize the ID (because they had nothing to do with it), how does anyone know when an AddKey refers to them? Maybe the server tells other people that an AddKey refers to them, but privately? Possibly the user would find out through gossip, but the log isn't enough evidence by itself, and if the gossip isn't entirely trusted either, they don't have a clear case. It seems a lot less straightforward than scanning a transparency log for your domain name.
[1] Though it comes back in when Let's Encrypt checks that you control your domain name.
The problem isn't the employee or the hiring process. It's the security infrastructure! One compromised account, supposedly from sales, shouldn't bring down the whole company.
The idea has been discussed. The main problem is that Mastodon is a zero-trust environment, and we can't trust the other server to send us the correct preview.
Don't trust the server, trust the poster/client... that's really where the preview should be coming from anyway. And if you _don't_ trust the poster, you shouldn't trust anything they link you to either. It's trivial to lie to these preview generators with a couple of meta tags if the destination isn't trustworthy.
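To illustrate how trivial that lie is, here's a sketch of a page that serves bait OpenGraph meta tags to anything that looks like a preview fetcher and different content to everyone else. The User-Agent substrings are common preview bots, chosen for illustration only.

```python
# Sketch: serve bait OpenGraph meta tags to preview fetchers and
# different content to everyone else, keyed off the User-Agent header.
# The bot substrings are illustrative, not exhaustive.
from http.server import BaseHTTPRequestHandler, HTTPServer

PREVIEW_BOTS = ("Mastodon", "Slackbot", "Twitterbot", "facebookexternalhit")

BAIT = b"""<html><head>
<meta property="og:title" content="Adorable kitten pictures">
<meta property="og:description" content="Totally harmless content">
</head><body>Nothing to see here.</body></html>"""

REAL = b"<html><body>Something else entirely.</body></html>"


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        body = BAIT if any(bot in ua for bot in PREVIEW_BOTS) else REAL
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("", 8080), Handler).serve_forever()
```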